Contrary to popular belief, training a gigantic model on a humongous dataset of human text will not lead to AGI. 
Probing Neural Network Comprehension of Natural Language Arguments: https://arxiv.org/abs/1907.07355 https://twitter.com/slashML/status/1152905615888359425
Failure?
Well, they were right. We are not general intelligences.
That's because we ran before we could walk. We're still refining models of rodent learning & memory networks, and we're probably about 50 years from understanding how memory works. And that's just memory.
Who are "they" in this case? What's the A in AGI here? And is the G really appropriate then?
1) You assume AGI is a well-defined term, when we don't even properly know what the 'I' stands for. 2) While you could call humans smart, we are also one of the most inefficient species alive; certainly something an 'AGI' should know.