A new benchmark for human-level concept learning and reasoning. Humans beat #AI hands down! It reveals gaps in current #DeepLearning meta/few-shot learning approaches.
@NeurIPSConf @NVIDIAAI @wn8_nie @yukez @ZhidingYu @abp4_ankit
Blog: https://developer.nvidia.com/blog/building-a-benchmark-for-human-level-concept-learning-and-reasoning/
Paper: https://papers.nips.cc/paper/2020/file/bf15e9bbff22c7719020f9df4badc20a-Paper.pdf
Analogy making and compositionality: simple shapes compose together. For example, the small circles in the highlighted figure are arranged to form a "meta" shape. Humans are amazing at creating such abstractions, and we want to test this ability in #AI.
Infinite vocabulary: previous benchmarks are limited to a finite set of categories, which is easy for #DeepLearning to memorize. To prevent this, we programmatically generate new concepts, organized into three subcategories: free-form, basic, and abstract.
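The idea of programmatically generating an open-ended vocabulary of shape concepts can be sketched as sampling small LOGO-style "action programs" of strokes and rendering them into 2-D geometry. This is a minimal, hypothetical illustration: the function names, stroke parameters, and the straight-segment approximation of arcs are assumptions for exposition, not the paper's actual generator.

```python
import math
import random

def random_action_program(num_strokes=4, seed=0):
    """Sample a hypothetical LOGO-style action program: a list of
    (stroke_type, length, turn_angle) tuples. Names and ranges are
    illustrative, not the Bongard-LOGO API."""
    rng = random.Random(seed)
    program = []
    for _ in range(num_strokes):
        stroke = rng.choice(["line", "arc"])
        length = rng.uniform(0.2, 1.0)
        turn = rng.choice([30, 45, 60, 90, 120])  # degrees
        program.append((stroke, round(length, 2), turn))
    return program

def render_vertices(program):
    """Trace the program like a turtle, returning the 2-D vertices.
    Arcs are approximated here as straight segments for simplicity."""
    x = y = 0.0
    heading = 0.0  # degrees
    pts = [(x, y)]
    for _, length, turn in program:
        x += length * math.cos(math.radians(heading))
        y += length * math.sin(math.radians(heading))
        pts.append((round(x, 3), round(y, 3)))
        heading += turn
    return pts
```

Because the program space is combinatorial (stroke types × lengths × angles × program length), every sampled concept can be novel, which is what makes simple memorization ineffective.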
Our dataset looks simple and toy-like, yet it is deeply challenging for #DeepLearning due to (1) context dependence, (2) abstraction, and (3) infinite vocabulary. We resorted to synthetic data because real datasets for few-shot learning suffer from severe data imbalance and scarcity.
On our benchmark, a basic neuro-symbolic method beat all neural approaches consistently and significantly. The neural approaches include the latest meta-learning, few-shot, and self-supervised methods. This shows that symbol grounding is fundamental to concept learning.
Project page for our @NeurIPSConf Bongard-LOGO paper, with all resources including code for dataset generation: https://research.nvidia.com/publication/2020-12_Bongard-LOGO
@yukez @ZhidingYu @wn8_nie @abp4_ankit @NVIDIAAI