New working notes: "Reduced MNIST: how well can machines learn from small data?" http://cognitivemedium.com/rmnist
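For concreteness, a minimal sketch of the reduced-data setup the notes describe: train on only n examples per digit class and watch how test accuracy degrades. This assumes scikit-learn; the OpenML fetch and logistic-regression baseline are illustrative stand-ins, not the code from the working notes.

```python
import numpy as np
from sklearn.datasets import fetch_openml
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def reduce_per_class(X, y, n_per_class, rng):
    """Keep only n_per_class randomly chosen examples of each label."""
    keep = []
    for label in np.unique(y):
        idx = np.flatnonzero(y == label)
        keep.extend(rng.choice(idx, size=n_per_class, replace=False))
    keep = np.array(keep)
    return X[keep], y[keep]

# Illustrative loader/classifier choices, not the notes' actual pipeline.
X, y = fetch_openml("mnist_784", version=1, return_X_y=True, as_frame=False)
X = X / 255.0
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=10000, random_state=0, stratify=y)

rng = np.random.default_rng(0)
for n in (1, 5, 10, 100):
    X_small, y_small = reduce_per_class(X_train, y_train, n, rng)
    clf = LogisticRegression(max_iter=200).fit(X_small, y_small)
    print(f"{n:>4} examples/class: test accuracy {clf.score(X_test, y_test):.3f}")
```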
Replying to @michael_nielsen
I think there are some benchmarks of human performance on 'alien' alphabets somewhere in the zero-shot transfer learning (e.g. Omniglot) and Bayesian program synthesis literature.
Replying to @gwern
Any keywords that might help me find them? Tried "human learning alien alphabets bayesian program [/ zero-shot]" and a couple of variants in Scholar, got nothing promising.
Replying to @michael_nielsen
The only one that comes to mind was a Bayesian program synthesis paper on digit recognition in, I think, Nature sometime in the past few years, claiming human-level performance (based on a comparison with human performance, obvs).
Replying to @gwern @michael_nielsen
See e.g. https://arxiv.org/pdf/1605.06065.pdf and references to/from it.
If I've understood the question correctly, Brenden Lake, Josh Tenenbaum, et al. have been working on this for a while; they proposed MNIST Transpose (a few examples for 1000s of categories :) back in 2011, if not earlier: http://web.mit.edu/jgross/Public/lake_etal_cogsci2011.pdf https://cims.nyu.edu/~brenden/LakeEtAlNips2013.pdf https://arxiv.org/abs/1604.00289
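A toy illustration of that "few examples for 1000s of categories" regime: one-shot, N-way episodes scored with a 1-nearest-neighbour baseline in raw pixel space. The data array here is a random stand-in (any array of shape (num_classes, num_examples, dim) works); this is just the evaluation setup, not Lake et al.'s model.

```python
import numpy as np

def one_shot_episode(data, n_way, rng):
    """Sample an N-way episode: one support and one query image per class."""
    classes = rng.choice(data.shape[0], size=n_way, replace=False)
    support, query = [], []
    for c in classes:
        i, j = rng.choice(data.shape[1], size=2, replace=False)
        support.append(data[c, i])
        query.append(data[c, j])
    return np.stack(support), np.stack(query)

def nn_accuracy(support, query):
    """Classify each query by its nearest support example (Euclidean)."""
    d = np.linalg.norm(query[:, None, :] - support[None, :, :], axis=-1)
    return float(np.mean(d.argmin(axis=1) == np.arange(len(query))))

rng = np.random.default_rng(0)
fake = rng.normal(size=(1000, 20, 784))  # 1000 classes, 20 examples each
support, query = one_shot_episode(fake, n_way=20, rng=rng)
print("20-way one-shot NN accuracy:", nn_accuracy(support, query))
```

On random data this scores near chance (1/20); the point is only the episode structure that few-shot benchmarks like Omniglot evaluate.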
Oh, there's no claim at all that the problem is original; it's meant as a good playground. Pretty sure people were working on this (well, not this specific variant) as far back as the 70s, if not earlier.
That's a nice paper by Tenenbaum et al., thanks!