Interesting that DeepMind is working on an IQ-like test to measure abstract reasoning capabilities. I've been working on a very similar benchmark for the past 6 months (taking a more formal approach). Good to see more action in that space (cc @ChrSzegedy) https://deepmind.com/blog/measuring-abstract-reasoning/
-
The format of the problems looks extremely similar, but the content not so much. The very fact that NNs seem to do well on DeepMind's benchmark seems to indicate that it does not achieve its goal of measuring abstract reasoning, and can instead be solved through simple pattern recognition.
-
My tests are based on colorful little matrices like these. I expect to release the full dataset and associated paper by the end of 2018. Should have happened much earlier, but Keras development is taking up a lot of my time. pic.twitter.com/mPjxgB1TFW
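(For illustration: a minimal sketch of how such a colorful-matrix task could be encoded, assuming each grid is a small matrix of integers where each integer maps to a color. The field names, the demonstration/test split, and the toy rule below are all hypothetical, not the actual dataset format.)

from typing import List

Grid = List[List[int]]  # e.g. 0 = black, 1 = blue, 2 = red, ...

# One task: a few demonstration input/output pairs, plus a test input
# whose output the solver must infer from the demonstrations alone.
task = {
    "train": [
        {"input": [[1, 0], [0, 1]], "output": [[0, 1], [1, 0]]},
        {"input": [[2, 2], [0, 0]], "output": [[0, 0], [2, 2]]},
    ],
    "test": [
        {"input": [[3, 0], [0, 3]]},  # expected output: [[0, 3], [3, 0]]
    ],
}

def solve(grid: Grid) -> Grid:
    # The (hypothetical) rule behind this toy task: flip the grid vertically.
    return grid[::-1]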
-
Replying to @fchollet
Excited to see this! Will this be able to measure human performance as well, or is it meant for NNs only?
Both: all problems are meant to be solvable by humans (that's how we know they're relevant problems!), and the percentage of problems a human can solve should correlate with other measures of intelligence (such as IQ). A human-machine generalization quotient.
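(The thread doesn't define the quotient precisely; a hedged sketch of one way it could be computed, with the human-baseline normalization being an assumption of this illustration.)

def generalization_quotient(solved: int, total: int,
                            human_baseline: float = 0.8) -> float:
    """Fraction of problems solved, normalized by a (hypothetical) average
    human solve rate, so that 1.0 means human-level performance."""
    if total <= 0:
        raise ValueError("total must be positive")
    return (solved / total) / human_baseline

# e.g. a system solving 40 of 100 problems against an 80% human baseline:
print(generalization_quotient(40, 100))  # 0.5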