Here we go again. Sutskever to @NewYorker: "If you train a system which predicts the next word well enough, then it ought to understand"
ML fans when I show GPT-2 is clueless: GPT-2 is just a system for predicting next words; not fair to ask for more. https://twitter.com/Plinz/status/1194391612735942656
Without knowing what Sutskever had in mind, I think that depends on what you mean by "well enough". The word would have to be predicted based on a unified understanding. Current AI is not focused on producing unified models of the universe. Every sentient being has such a model.
Agreed that current AI lacks models of the universe; the whole problem is that the field has lost sight of this and thinks that success on some benchmarks means much more than it really does.
No, imho it is the opposite. The field is genuinely happy about the ability to do something like credible style transfer in text, and struggles to make progress on AGI. As always, almost nobody in AI works on AGI, not because they are stupid, but because it seems too hard.
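[Editor's note: to make "a system for predicting next words" concrete, here is a minimal illustrative sketch using the Hugging Face transformers library. The prompt and the printed completion are assumptions for illustration, not a claim about what GPT-2 actually outputs.]

```python
# Illustrative: GPT-2 assigns a score to every vocabulary token as the
# candidate next token; generation is repeated next-token prediction.
# Requires: pip install torch transformers
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = ("If you train a system which predicts the next word "
          "well enough, then it ought to")
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, seq_len, vocab_size)

# The distribution over the *next* token sits at the last position.
next_token_id = logits[0, -1].argmax().item()
print(tokenizer.decode(next_token_id))  # e.g. " understand" (not guaranteed)
```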
New conversation
Could you please define the term "sentient being"?
I use it in the sense of a system that maintains a dynamic model of its environment, itself, and the interaction between them. In other words, a system that knows (in the sense of having a working model) what it is doing.
End of conversation
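[Editor's note: a minimal sketch, not from the thread, of what that definition could look like as an agent loop: a system that keeps a dynamic model of its environment, of itself, and of the expected interaction between the two. All class and method names here are hypothetical.]

```python
from dataclasses import dataclass, field


@dataclass
class WorldModel:
    """Running estimate of the environment's state."""
    state: dict = field(default_factory=dict)

    def update(self, observation: dict) -> None:
        # Fold the latest observation into the environment estimate.
        self.state.update(observation)


@dataclass
class SelfModel:
    """Running record of the agent's own activity."""
    last_action: str | None = None

    def update(self, action: str) -> None:
        self.last_action = action


@dataclass
class Agent:
    world: WorldModel = field(default_factory=WorldModel)
    self_model: SelfModel = field(default_factory=SelfModel)
    # Interaction model: what the agent expects its actions to do to the world.
    expectations: dict = field(default_factory=dict)

    def act(self, observation: dict) -> str:
        self.world.update(observation)
        # Compare what happened with what was expected: this comparison is
        # the "knows what it is doing" part of the definition above.
        surprise = {k: v for k, v in observation.items()
                    if self.expectations.get(k) != v}
        action = "explore" if surprise else "exploit"
        # Naive interaction model: expect the world to stay as observed.
        self.expectations = dict(self.world.state)
        self.self_model.update(action)
        return action


agent = Agent()
print(agent.act({"light": "on"}))  # explore (observation was unexpected)
print(agent.act({"light": "on"}))  # exploit (matched expectations)
```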