Looking for pretrained language models that I can readily test a draft world-state change benchmark on.
GPT is scoring essentially zero.
What's out there that is better that I could try?
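A minimal sketch of what "readily testing" a pretrained model could look like: score GPT-2 on one made-up world-state item by comparing the likelihood it assigns to a correct vs. an incorrect continuation. This assumes the HuggingFace transformers library; the item and scoring rule are illustrative, not the actual benchmark.

```python
# Sketch: compare the likelihood a pretrained LM assigns to a correct vs. an
# incorrect statement about the updated world state. The item is made up.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

context = "Mary put the apple in the box. Then she moved the apple to the drawer."
candidates = {
    "correct": " The apple is now in the drawer.",
    "incorrect": " The apple is now in the box.",
}

def continuation_logprob(context: str, continuation: str) -> float:
    """Sum of log-probabilities the model assigns to the continuation tokens."""
    ctx_ids = tokenizer.encode(context)
    cont_ids = tokenizer.encode(continuation)
    input_ids = torch.tensor([ctx_ids + cont_ids])
    with torch.no_grad():
        log_probs = torch.log_softmax(model(input_ids).logits, dim=-1)
    # the token at position i is predicted by the logits at position i - 1
    return sum(
        log_probs[0, i - 1, tok].item()
        for i, tok in enumerate(cont_ids, start=len(ctx_ids))
    )

scores = {name: continuation_logprob(context, c) for name, c in candidates.items()}
print(scores)
print("model prefers:", max(scores, key=scores.get))  # credit only if "correct" wins
```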
@ylecun @AntoineBordes @jaseweston still would like to test your recurrent entity networks
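For context, the Recurrent Entity Network (Henaff et al., 2017) keeps a fixed set of memory slots, each with a key, and gates an additive update to every slot as each sentence is read. Below is a sketch of that update step written from the published equations; the dimensions, initialization, and parameter names are illustrative, not the authors' code.

```python
# Sketch of the Recurrent Entity Network memory update (Henaff et al., 2017),
# written from the equations in the paper; sizes and names are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class EntNetMemory(nn.Module):
    def __init__(self, num_slots: int = 20, dim: int = 100):
        super().__init__()
        self.keys = nn.Parameter(torch.randn(num_slots, dim) * 0.1)  # slot keys w_j
        self.U = nn.Linear(dim, dim, bias=False)
        self.V = nn.Linear(dim, dim, bias=False)
        self.W = nn.Linear(dim, dim, bias=False)
        self.phi = nn.PReLU(dim)

    def forward(self, memory: torch.Tensor, s_t: torch.Tensor) -> torch.Tensor:
        """memory: (num_slots, dim) slot contents h_j; s_t: (dim,) sentence encoding."""
        # Gate: is this sentence relevant to the slot's content or its key?
        gate = torch.sigmoid(memory @ s_t + self.keys @ s_t)           # (num_slots,)
        # Candidate new content for every slot.
        candidate = self.phi(self.U(memory) + self.V(self.keys) + self.W(s_t))
        # Gated additive update, then renormalize each slot.
        memory = memory + gate.unsqueeze(-1) * candidate
        return F.normalize(memory, dim=-1)

# Toy usage: 20 slots of size 100, one random "sentence" encoding.
net = EntNetMemory()
mem = F.normalize(torch.randn(20, 100), dim=-1)
mem = net(mem, torch.randn(100))
print(mem.shape)  # torch.Size([20, 100])
```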
Replying to @GaryMarcus, @ylecun, and others
For someone not deeply involved with NLP, what is a world state change benchmark about? Or is it a more general "changing the language but keeping the underlying laws" idea, for zero or few shot learning?
When we read or hear a story, or just look around, we build an internal cognitive model of what's going on; we update that model as we learn more. E.g., we track who has done what to whom, when, where, and why. Language neural nets like GPT-2 don't; see http://rebooting.ai
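As a toy illustration of that kind of tracking, here is an explicit, hand-coded world state that gets updated event by event; the story, schema, and update rule are invented purely for illustration.

```python
# Toy illustration of "tracking the world state": keep an explicit record of
# where each object is and update it as the story unfolds. Everything here
# (story, schema, update rule) is made up for illustration.
locations = {}  # object -> current location

def update(event: dict) -> None:
    """Apply one event to the world state."""
    if event["action"] == "move":
        locations[event["object"]] = event["to"]

story = [
    {"action": "move", "object": "apple", "to": "box"},
    {"action": "move", "object": "key", "to": "drawer"},
    {"action": "move", "object": "apple", "to": "kitchen"},
]

for event in story:
    update(event)

# Answering "where is the apple?" requires the *updated* state, not a
# surface-level association with earlier mentions like "box".
print(locations["apple"])  # kitchen
```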
Replying to @GaryMarcus, @DrLukeOR, and others
To be fair, none of the statistical language models are trained on tracking or understanding tasks. They are trained to be statistical language models. Is someone claiming otherwise?
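For concreteness, "trained to be a statistical language model" means the only training signal is next-token prediction; a minimal sketch of that objective is below, with a tiny stand-in model rather than any real architecture. Nothing in the loss asks for entity or world-state tracking.

```python
# Sketch of the standard language-modeling objective: predict the next token,
# score with cross-entropy. The toy model is only a stand-in to show the loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

vocab_size, dim = 1000, 32
toy_lm = nn.Sequential(nn.Embedding(vocab_size, dim), nn.Linear(dim, vocab_size))

def lm_loss(token_ids: torch.Tensor) -> torch.Tensor:
    """token_ids: (batch, seq_len) integer ids; predict position t+1 from t."""
    logits = toy_lm(token_ids[:, :-1])             # (batch, seq_len - 1, vocab)
    targets = token_ids[:, 1:]                     # shifted-by-one targets
    return F.cross_entropy(logits.reshape(-1, vocab_size), targets.reshape(-1))

batch = torch.randint(0, vocab_size, (4, 16))
print(lm_loss(batch))
```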
1. Yes: Hinton has dismissed my claim that the "talent of feature detectors .. doesn't translate into understanding novel sentences, in which each sentence has its own unique meaning" on the basis of the "success" of statistical systems (e.g., Google Translate).