I’d like to see GPT-3 do interpolations. As in, I supply first and last lines of a story, plus maybe a few waypoints, and it fills in the rest plausibly. What I’ve seen so far is a tradeoff between directedness and coherence.
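A minimal sketch of how that kind of story interpolation might be attempted with a GPT-3-era completions API. The prompt layout, the example lines, and the waypoints are all illustrative assumptions, not a verified recipe.

```python
# Sketch: asking a completion-style model to fill in a story between fixed
# endpoints and waypoints. Prompt format and model choice are assumptions.
import openai  # assumes an API key is configured in the environment

first_line = "The lighthouse keeper found the bottle on a Tuesday."
waypoints = [
    "She almost threw the note away unread.",
    "The handwriting matched her grandmother's.",
]
last_line = "By spring, the light was burning for someone new."

prompt = (
    "Write a short story that begins and ends with the given lines and "
    "passes through each waypoint in order.\n\n"
    f"First line: {first_line}\n"
    + "".join(f"Waypoint: {w}\n" for w in waypoints)
    + f"Last line: {last_line}\n\nStory:\n{first_line}\n"
)

response = openai.Completion.create(
    engine="davinci",   # GPT-3 base model of that era
    prompt=prompt,
    max_tokens=400,
    temperature=0.8,
)
print(first_line + response.choices[0].text)
```

Whether the output actually respects the waypoints, rather than drifting, is exactly the directedness-versus-coherence tradeoff described above.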
This Tweet was deleted by the Tweet author.
I don’t think it’s solvable within the deep learning class of techniques tbh. Needs an injection of both GOFAI and data on sensory-verbal correlations beyond narrow application-tagged sets like cars or driverless car road images. Needs open-tagged broad image sets to start.
Like at some level it needs to know that “car” refers to objects which prototypically look like 🚗, and that the little circly things are “wheels,” beyond just the self-driving app context.
This Tweet was deleted by the Tweet author.
Because verbalized data from human culture is a very reductive, low-dimensional slice of all human cognition. Trillions of training words sounds impressive until you think about the I/O bit rate of just a minute of a single human life. It’s the equivalent of billions of words.
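A rough back-of-envelope version of that comparison. The sensory bit-rate figure here is an assumption chosen for illustration (raw input on the order of 10^9 bits/s); plausible estimates vary by orders of magnitude depending on where you measure, so treat the numbers as a sketch of the argument, not a measurement.

```python
# Back-of-envelope: one minute of raw human sensory input expressed in "words".
# The 1e9 bits/s raw-input rate and ~5 bytes/word are assumptions, not data.
raw_sensory_bits_per_sec = 1e9   # assumed raw sensory throughput
seconds = 60                     # one minute of experience
bits_per_word = 5 * 8            # ~5 bytes per English word (assumption)

total_bits = raw_sensory_bits_per_sec * seconds
word_equivalent = total_bits / bits_per_word
print(f"{word_equivalent:.1e} word-equivalents per minute")  # ~1.5e9, i.e. billions
```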