I suspect that is the core of all understanding: discover a mapping from a problem to something that you already know how to compute. This mapping is a computation itself. You need to start out with at least one universal automaton and a suitable search function for new mappings.
https://twitter.com/bradtheilman/status/1000048502498508800
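A toy sketch of this idea (every name and function here is my own illustrative assumption, nothing from the thread): treat "things you already know how to compute" as a fixed library, and let the "search function for new mappings" be a brute-force search over pre- and post-processing steps that reduce a new problem, given as input/output examples, to one of the known computations.

```python
# Toy sketch, not a serious proposal: "understanding" a new problem as
# searching for a mapping that reduces it to a known computation.
from itertools import product

# Computations we already know how to perform.
known = {
    "sort": sorted,
    "sum": sum,
    "max": max,
}

# Candidate input mappings (hypothetical primitives to search over).
pre = {
    "identity": lambda xs: xs,
    "abs_all": lambda xs: [abs(v) for v in xs],
}

# Candidate output mappings.
post = {
    "identity": lambda y: y,
    "last": lambda y: y[-1] if isinstance(y, list) else y,
}

def find_mapping(examples):
    """Search for a (pre, known, post) pipeline fitting all examples."""
    for (p_name, p), (k_name, k), (q_name, q) in product(
        pre.items(), known.items(), post.items()
    ):
        try:
            if all(q(k(p(x))) == y for x, y in examples):
                return p_name, k_name, q_name
        except Exception:
            continue  # a candidate pipeline may simply not type-check
    return None

# New problem: "largest magnitude" — the search discovers a reduction
# to computations it already has (one of several valid pipelines).
examples = [([3, -7, 2], 7), ([-1, -4], 4)]
print(find_mapping(examples))  # → ('abs_all', 'sort', 'last')
```

The returned pipeline (take absolute values, sort, take the last element) is a valid reduction even though `max` would also work; a brute-force search returns whichever mapping it finds first, which is part of why a richer search function matters.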
I suspect that transfer learning benefits from monolithic models, i.e., interpret everything as a particular part of a single world (a global maximum), and find mappings for all domains. If you go straight for compression, I'd think that you are more likely to end up in local maxima.
-
Humans are good at transfer learning because we first build exhaustive models of perception, spatial navigation, goal-directed action, syllogistic reasoning etc., and then try to tackle new problems first with the tools we already have.