.@ylecun are you arguing that your team has a robust solution to the problem of getting deep nets to understand the causal consequences of events as they unfold over time, or just pointing me to a toy model? Have you tried it on the examples in this thread? https://twitter.com/ylecun/status/1188902027495006208
Replying to @GaryMarcus @ylecun
This all-or-nothing approach to progress is pretty silly. I'm sure it's a toy model, and I'm sure you'll find flaws with it, but that doesn't mean it's not progress.
Replying to @Zergylord @ylecun
Hey, I'm all for taking steps, but Dr. @ylecun told me the problem was solved, period, and that's completely ridiculous.
Not only did @ylecun claim the problem of building models of causal consequences from discourse was solved, he said it's been solved for 3 years! Spoiler: It hasn't been.
Replying to @GaryMarcus @ylecun
Did he really? My read: You claimed no robust event representations in GPT-2, he claimed (albeit rudely) that older work exhibited this property.
That said, combining some properties of memory nets with GPT-2 would be interesting. I still don't think it would solve the particular cases that I describe, but you are welcome to try.
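For concreteness, here is a minimal sketch, purely illustrative and not a proposal by either party in the thread, of one way "combining memory nets with GPT-2" could look: a GPT-2-style causal self-attention layer augmented with a read step over an external key-value memory, in the spirit of memory networks. All names here (MemoryAugmentedDecoderLayer, n_slots, etc.) are hypothetical.

```python
# Hypothetical sketch: a causal transformer layer with a memory-network-style
# read over learned external memory slots. Illustrative only.
import torch
import torch.nn as nn

class MemoryAugmentedDecoderLayer(nn.Module):
    """Causal self-attention (GPT-2-style) plus attention over an
    external memory that could, e.g., hold per-entity state vectors."""
    def __init__(self, d_model: int, n_heads: int, n_slots: int):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.mem_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        # Learned memory slots (hypothetical stand-in for tracked entities/events).
        self.memory = nn.Parameter(torch.randn(n_slots, d_model) * 0.02)
        self.ff = nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.GELU(),
                                nn.Linear(4 * d_model, d_model))
        self.ln1 = nn.LayerNorm(d_model)
        self.ln2 = nn.LayerNorm(d_model)
        self.ln3 = nn.LayerNorm(d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, d_model)
        seq_len = x.size(1)
        # Boolean causal mask: True = position may NOT be attended to.
        causal = torch.triu(torch.ones(seq_len, seq_len, dtype=torch.bool,
                                       device=x.device), diagonal=1)
        h, _ = self.self_attn(x, x, x, attn_mask=causal)
        x = self.ln1(x + h)
        # Memory read: queries come from the token stream,
        # keys/values from the (batch-shared) memory slots.
        mem = self.memory.unsqueeze(0).expand(x.size(0), -1, -1)
        m, _ = self.mem_attn(x, mem, mem)
        x = self.ln2(x + m)
        return self.ln3(x + self.ff(x))

# Smoke test on random data.
layer = MemoryAugmentedDecoderLayer(d_model=64, n_heads=4, n_slots=8)
out = layer(torch.randn(2, 16, 64))
print(out.shape)  # torch.Size([2, 16, 64])
```

Whether a read-only memory of this kind would yield robust causal-event tracking is exactly what the thread disputes; it only shows the wiring is straightforward to prototype.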