I just published "What does it mean for a machine to 'understand'?" https://link.medium.com/vvNZ4pks80 1/
Replying to @tdietterich
Re: your example, "Who did IBM's Deep Blue system defeat?", I would argue (à la
@GaryMarcus) that this is not true understanding. Just try rephrasing the question: "Who defeated Deep Blue?" -> I get the answer "IBM". If it understood the previous question, it shouldn't make that mistake. 1/
Current QA systems are just picking up statistical correlations without understanding the question or context. This has been shown in prior work on the robustness of current QA systems (@percyliang & @robinomial). 2/
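To make the brittleness concrete, here is a hypothetical toy sketch (not any deployed QA system): a "model" that matches a question to a stored fact by bag-of-words overlap and returns whatever fact words the question did not mention. Because a bag of words discards word order, it gives the same answer whether Deep Blue is the subject or the object of "defeated", exactly the kind of correlation-without-comprehension behavior described above.

```python
def bag(text):
    """Lowercase word set with punctuation stripped."""
    return {w.strip("?.,'").lower() for w in text.split()}

# A single stored fact; a real system would have a large corpus.
FACTS = ["Deep Blue defeated Garry Kasparov in 1997"]

def answer(question):
    """Pick the best-overlapping fact; 'answer' = its unmatched words."""
    q = bag(question)
    fact = max(FACTS, key=lambda f: len(bag(f) & q))
    return [w for w in fact.split() if w.lower() not in q]

print(answer("Who defeated Deep Blue in 1997?"))
print(answer("Who has Deep Blue defeated in 1997?"))
# Both calls print ['Garry', 'Kasparov']: with word order discarded,
# the model cannot tell who defeated whom.
```

The two questions reverse subject and object, yet the overlap score is identical for both, so the sketch cannot even represent the distinction the rephrased question probes for.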
Replying to @sandyasm @tdietterich
IMHO, the fact that they end up getting correct answers to many questions is NOT equivalent to "understanding" the question/context. It is more of a "Clever Hans" effect. True, a Clever Hans effect suffices for many tasks, under limited conditions. 3/
Replying to @sandyasm @GaryMarcus
A poor implementation (e.g., statistical correlations, memorized rules) will lead to poor understanding, but it is still understanding in some cases. "Understanding" is a functional notion, whereas pattern matching and ML are implementation methods. 1/
Replying to @tdietterich @sandyasm
Your intuition is correct that existing correlation-based ML methods are not going to get you deep understanding. But they can get you something usable, and perhaps they are part of a larger, more comprehensive approach.
What about GPT-2's responses to the math word problem prompts I shared? They are often nearly random, yet very occasionally correct. Those cases are a bit like a broken clock that is right twice a day. Is that shallow understanding?