Sutskever to @NewYorker: "If you train a system which predicts the next word well enough, then it ought to understand"
ML fans when I show GPT-2 has no idea what happens as events unfold over time: "GPT-2 is just a system for predicting next words; it's not fair to ask it to understand."
-
Dredging statistics would be fine if they were the right statistics. But GPT-2 & co. are not capturing statistics about agents, goals, intentions, actions, etc. So, unsurprisingly, they don't learn about them.
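The distinction above can be made concrete with a toy sketch. The bigram model below (a hypothetical, deliberately minimal stand-in for "dredging statistics", with an invented toy corpus) predicts the next word purely from co-occurrence counts; it encodes nothing about agents, goals, or intentions, yet still produces plausible-looking continuations.

```python
from collections import Counter, defaultdict

# Hypothetical toy corpus, for illustration only.
corpus = "obama is a human . obama is a president . a human is mortal".split()

# Count bigram successors: pure surface co-occurrence,
# with no representation of agents, goals, or intentions.
successors = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    successors[prev][nxt] += 1

def predict_next(word):
    """Return the word most frequently seen after `word` in the corpus."""
    counts = successors[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("obama"))  # "is" -- learned from co-occurrence alone
```

The point of the sketch: such a model can emit "obama is a human" without any notion of what a human is, which is the sense in which the statistics it captures are not "the right statistics".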
-
I agree that "silly mistakes" understates it. It has no clue about many things that are obvious to humans. But it captures some meaningful statistics and some limited model of the world (e.g., that Obama is a human). Again, I would refrain from using "not real" as a criticism.