I’d like to see GPT-3 do interpolations. As in, I supply first and last lines of a story, plus maybe a few waypoints, and it fills in the rest plausibly. What I’ve seen so far is a tradeoff between directedness and coherence.
-
Replying to @vgr
This is what everyone wants and it's very, very hard. I have some design concepts for it but they're untested
-
Replying to @liminal_warmth
I don’t think it’s solvable within the deep learning class of techniques tbh. Needs an injection of both GOFAI and data on sensory-verbal correlations beyond narrow application-tagged sets like cars or driverless car road images. Needs open-tagged broad image sets to start.
-
Replying to @vgr @liminal_warmth
Like at some level it needs to know that “car” refers to objects which prototypically look like [image], and that the little circly things are “wheels,” beyond the self-driving app context.
-
Replying to @vgr
Hmm why are images important if you’re talking about text? I feel like it picks up descriptive pattern detail well enough to emulate from what I’ve seen
-
Replying to @liminal_warmth
Because verbalized data from human culture is a very reductive, low-dimensional slice of all human cognition. Trillions of training words sounds impressive until you think about i/o bit rate of just a minute of a single human life. It’s the equivalent of billions of words.
-
I think human writing comes from such “full-stack” cognition; it’s not mere wordplay.
-
Replying to @vgr
This is my solution (roughly) which I'll explain in a thread later pic.twitter.com/yzfwDhK5pA