Interesting that DeepMind is working on an IQ-like test to measure abstract reasoning capabilities. I've been working on a very similar benchmark for the past 6 months (taking a more formal approach). Good to see more action in that space (cc @ChrSzegedy). https://deepmind.com/blog/measuring-abstract-reasoning/
-
For more context about why we need such a benchmark, see my talk at RAAIS 2018 (e.g. at 13:18): https://www.youtube.com/watch?v=2L2u303FAs8&list=PLht6tyws1YpSOGz2k6bUC1PibVG7ZiRFB&index=6&t=0s
-
Love this idea. Perhaps a small report would be nice to update us on your ideas, even if the dataset is not ready. I sense there is something important to discuss here, and we may have different perspectives.
-
Excited to see this! Will this be able to measure human performance as well, or is it meant for NNs only?
-
Both: all problems are meant to be solvable by humans (that's how we know they're relevant problems!), and the percentage of problems a human can solve should correlate with other measures of intelligence (such as IQ). A human-machine generalization quotient.