I suspect that is the core of all understanding: discover a mapping of a problem to something that you already know how to compute. That mapping is itself a computation. You need to start out with at least one universal automaton and a suitable search function for new mappings.
https://twitter.com/bradtheilman/status/1000048502498508800
Replying to @Plinz
It's similar to how you prove that a given problem has a solution or not, except in that case we use reductions rather than mappings.
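The "map a problem onto a computation you already know" idea can be made concrete with a toy reduction (an illustrative sketch, not from the thread; the function name and inputs are made up):

```python
# Toy illustration: solve a new problem by mapping it onto a
# computation we already have (sorting), instead of solving it directly.

def closest_pair(xs):
    """Find the two values in xs with the smallest absolute difference.

    Rather than checking all O(n^2) pairs, map the problem onto sorting:
    in sorted order, the closest pair must be adjacent.
    """
    ys = sorted(xs)  # the known computation we reduce to
    return min(zip(ys, ys[1:]), key=lambda p: p[1] - p[0])

print(closest_pair([9, 1, 30, 4, 25]))  # (1, 4)
```

The reduction does all the work: once the problem is restated in sorted order, the remaining step is a single linear scan.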
Replying to @FieryPhoenix7 @Plinz
What is the relationship between Transfer Learning and Compression/Unsupervised Learning? Does finding a compressed representation of a problem allow for transfer learning to take place? Or can mappings from one domain to another be made without compression?
@bradtheilman
I suspect that transfer learning benefits from monolithic models, i.e. interpret everything as a particular part of a single world (a global maximum), and find mappings for all domains. If you go straight for compression, I'd think you are more likely to end up in local maxima.
Humans are good at transfer learning because we first build exhaustive models of perception, spatial navigation, goal-directed action, syllogistic reasoning, etc., and then try to tackle new problems first with the tools we already have.