New @OpenAI system has some of the flavor of Searle's Chinese room, but Searle presupposed "answers to the English questions are ... indistinguishable from those of other native English speakers." @OpenAI is SoTA on some QA tests, but no match for humans on story understanding. https://twitter.com/GaneshNatesh/status/1096874784330342400
-
It's easy to see that it doesn't understand when you query it specifically. Most people give it vague prompts, so anything goes. With more specific prompts the limitations are clearer. https://metarecursive.neocities.org/Samples-from-Open-AIs-conditional-text-model.html pic.twitter.com/DBgP3TJyla
-
Transformers learn across sentences too, so the model is basically really good at knowing what sentence or phrase fits next. And because of its fine working memory, these sort of act as an attractor to keep it "on topic," so to speak. When large enough and given enough data, that's enough to keep it coherent. I suspect the full model is easy to break, but
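The "working memory" being pointed at here is the Transformer's self-attention: each token's next-step prediction is conditioned on every earlier token, which is what lets context from previous sentences pull the generation back on topic. A minimal numpy sketch of causal self-attention (shapes and the identity query/key/value projections are illustrative simplifications, not OpenAI's actual model):

```python
import numpy as np

def causal_self_attention(x):
    """x: (seq_len, d) token embeddings; returns (attended values, weights).

    Toy sketch: a real Transformer uses learned query/key/value projection
    matrices and multiple heads; here the embeddings stand in for all three.
    """
    d = x.shape[-1]
    scores = x @ x.T / np.sqrt(d)              # (seq_len, seq_len) similarity
    # Causal mask: token i may only attend to tokens 0..i (its left context).
    mask = np.triu(np.ones_like(scores, dtype=bool), k=1)
    scores = np.where(mask, -np.inf, scores)
    # Row-wise softmax: each token's attention over its visible context.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ x, weights

rng = np.random.default_rng(0)
x = rng.normal(size=(6, 8))                    # 6 tokens, 8-dim embeddings
out, w = causal_self_attention(x)
```

Each row of `w` sums to 1 and places zero weight on future tokens, so every position's output mixes in the whole preceding context — the mechanism the tweet credits with keeping long samples coherent.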