. @ylecun are you arguing that your team has a robust solution to the problem of getting deep nets to understand the causal consequences of events as they unfold over time, or just pointing me to a toy model? Have you tried it on the examples in this thread? https://twitter.com/ylecun/status/1188902027495006208
-
Replying to @GaryMarcus @ylecun
This all-or-nothing approach to progress is pretty silly. I'm sure it's a toy model, and I'm sure you'll find flaws with it, but that doesn't mean it's not progress.
-
Replying to @Zergylord @ylecun
Hey, I'm all for taking steps, but Dr. @ylecun told me the problem was solved, period, and that's completely ridiculous.
-
Not only did @ylecun claim the problem of building models of causal consequences from discourse was solved, he said it's been solved for 3 years! Spoiler: it hasn't been.
-
Replying to @GaryMarcus @ylecun
Did he really? My read: you claimed no robust event representations in GPT-2; he claimed (albeit rudely) that older work exhibited this property.
-
Replying to @Zergylord @ylecun
Right, which makes his reply a non sequitur. I said GPT-2 doesn't develop robust representations of *how events unfold over time*; he pointed to a different architecture w/ memory etc. Bait-and-switch, and overstated.
-
Replying to @GaryMarcus @ylecun
"models like GPT-2" is a very subjective thing. He probably thinks models /w external memory are in the same family (as would I), whereas you probably count them as a "hybrid model".
-
Correct, I would. I have been saying since 2001 that memory would be vital in moving forward.