
If you’re in an industry based on commodity text production driven by loose, non-deterministic goals (like spam blogs, content farming, non-news-cycle-driven op-eds) and only pre-existing textual sources, you should get out now. GPT-3 can already eat your lunch.
At the higher end, based on examples I’ve seen recently, I think GPT-3 can already produce op-eds in the David Brooks or Taleb style — derps of baked philosophy. Tom Friedman would be harder since he writes the same column repeatedly but does hook into current news.
Paul Krugman would be among the harder ones to imitate since he makes precise, near-quantitative logic-based arguments that are structurally unlike the pattern-like tropey ones the rest make. Whether or not you agree with him, it’ll take more to imitate him.
Things that would make it hard for GPT-3: 1. The reader is likely to *pay attention* because a key subset of details actually matters and it’s not a casual scan 2. There is some logic involved 3. You’re reacting to non-symbolic data like a photo 4. You’re reacting to news
In general deep learning seems to do well when there are either clear, legible rules in a closed world (Go, protein folding), or an open world with illegible probabilistic rules (what I’d call “background” text that has to produce a type of effect but not a specific effect).
It’s in the intermediate domains that it has so far failed to even look plausible: an open world with illegible rules as the background, but patches where things have to make more precise sense, like simple arithmetic having to add up, or trivial logic.
5 years ago I’d have guessed you’d need to bolt on some GOFAI and traditional rules engines etc. to make it work; now I’m not so sure. You could possibly deal with this hard middle regime by mashing up multiple small-world-trained networks with weak links.
Like maybe hook up AlphaGoZero and GPT-3 the right way and you’d get good, cogent commentary on Go games? No GOFAI needed? Brave new world!
IMO Tom Friedman would be trivial with a bit of pre-training. Just suck in all his columns in pre-training. Then in production, feed a few news stories about the subject, then the prompt is simply "This is the Tom Friedman column about GME:". The thing really does write itself.
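The recipe in that last tweet — fine-tune on the columnist's archive, feed in a few news stories, then cue with a column-header prompt — can be sketched as below. This is a minimal sketch: `generate` is a hypothetical stand-in for whatever completion API you'd actually use (after tuning it on the columns), and the example stories are invented.

```python
def build_prompt(news_stories, columnist="Tom Friedman", topic="GME"):
    """Concatenate a few news stories as context, then cue the model
    with the column-header prompt from the tweet above."""
    context = "\n\n".join(news_stories)
    return f"{context}\n\nThis is the {columnist} column about {topic}:"


def generate(prompt, max_tokens=800):
    # Hypothetical stand-in for a real completion API call against a
    # model pre-fed with the columnist's archive.
    raise NotImplementedError("plug in your completion API here")


if __name__ == "__main__":
    stories = [
        "GameStop shares swung wildly this week as retail traders piled in...",
        "Several brokerages restricted trading in volatile meme stocks...",
    ]
    prompt = build_prompt(stories)
    # generate(prompt) would then "write the column itself"
    print(prompt.splitlines()[-1])
```

The point of the sketch is that all the engineering lives in prompt assembly; the column really does write itself from there.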