(For folks who don't know me, I've been around for a while. Before Less Wrong existed, I found Overcoming Bias around 2007. To be fair, many of the core people were in SL4, the Extropian list, etc., so I've thought about this a *lot*, but not as much as some others in the space) 2/n
I am composing these tweets as I go so I will likely add more, but here are some propositions:
- I think we will have another AI winter
- I think slow takeoff is much more likely
- I think there are weird social incentives at play
- I think panic isn't helpful
3/n
(Also, there is a better version of this thread with links, citations, and nicely presented evidence, but given the insane amount of discourse happening right now, I'm opting for the quick thread I can write now over the essay I won't write for a while) 4/n
AI winter? At this hour?? Perhaps I'm just terminally contrarian, since it seems like no one agrees with me, but I'm seeing some things that make me think we are in a classic bubble scenario, and lots of trends that clearly can't continue 5/n
One of the main factions right now thinks that scaling is everything: we have the basic tools, and with more compute we hit superhuman performance. If we take that as true, do we even have enough compute? 6/n
Moore's law has been looking weaker and weaker. Clock speeds plateaued a long time ago. Die sizes are hitting physical limits. Cost per unit of compute is still falling, but it has dropped off the exponential it was on. Without some major change (a new architecture or paradigm?) this looks played out 7/n
Existing AI applications already consume a significant fraction of global compute power. I find it implausible that global AI efforts could scale more than another 2 orders of magnitude, and doing so would crowd out a ton of other compute. Unlikely IMO 8/n
Does another 2 OOM increase in compute with current techniques get us to AGI? Personally I am skeptical, but this is an empirical question, and we should keep an eye on how these scaling curves continue to develop in the next iteration of models 9/n
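To make that empirical question concrete, here is a toy power-law sketch. The exponent below is a purely hypothetical placeholder (real scaling exponents are measured per model family), but it shows why even 100x more compute can buy only a modest loss improvement:

```python
# Illustrative only: how much a power-law loss curve improves with 100x compute.
# alpha = 0.05 is a made-up exponent for illustration, not a measured value.
def loss(compute, alpha=0.05, k=1.0):
    """Toy scaling law: loss = k * compute**(-alpha)."""
    return k * compute ** (-alpha)

base = loss(1.0)
scaled = loss(100.0)          # two orders of magnitude more compute
improvement = base / scaled   # 100**0.05, roughly a 1.26x loss reduction
```

Under this (assumed) curve, 100x the compute cuts loss by only ~26%; whether that kind of improvement crosses any capability threshold is exactly the open question.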
In addition to the physical limits on Moore's Law, I will note that the semiconductor industry is highly centralized: ASML is the only company making cutting-edge EUV lithography machines, and only 1-3 companies are variously at the cutting edge of chip fabrication, the major one being in Taiwan... 10/n
The semi industry faces extremely high geopolitical and supply-chain risk, and in the event of a great power conflict it will almost certainly be one of the major targets. It's unclear to me that further breakthroughs, or even current chip production, will persist 11/n
Setting aside the hardware required, there's the question of economics. Are current models cost-effective enough to replace other methods, and will they continue to improve enough? 12/n
The All-In podcast folks estimated a ChatGPT query at about 10x the cost of a Google search. I've talked to analysts whose careful estimates are more like 3-5x. In a business like search, a 10% cost improvement is a killer app. A 3-5x cost increase is not in the running! 13/n
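The unit economics behind that claim can be sketched with made-up numbers. The revenue and cost figures below are hypothetical, chosen only to show how quickly a cost multiple eats a per-query margin:

```python
# Hypothetical unit economics, not real figures: suppose a search query earns
# $0.03 in ad revenue and costs $0.003 to serve.
revenue_per_query = 0.03
search_cost = 0.003

def margin(cost_multiple):
    """Profit per query if serving costs `cost_multiple` times a search's cost."""
    return revenue_per_query - search_cost * cost_multiple

margin(1)   # classic search: $0.027 profit per query
margin(5)   # 5x serving cost: $0.015, margin nearly halved
margin(10)  # 10x serving cost: $0.000, break-even at these made-up numbers
```

With these assumed figures, a 10x cost multiple wipes out the margin entirely; even at 3-5x, half the profit is gone before the model adds any extra value.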
To be fair, these models don't have to dominate in every area; maybe search is just a tough example. Copilot seems to be very helpful, for example. This is another empirical question: what profit centers can be created with current and next-gen models? 14/n
I think this will be THE single key to whether we get another AI winter: will models be profitable? Tech demonstrations are flashy, but the people building the best systems are all trying to make money, and they will pull the plug if that doesn't happen. 15/n
My current read is that AI models are great at automating low-cost/risk/importance work and at expanding the extensive margin (creating new work that otherwise wouldn't have been done) - this alone won't quite get us there 16/n
Example: someone writes a blog post and wants an accompanying illustration. DALL-E can provide one basically for free and do a passable job. Would they have hired an artist to make one? Possibly, but unlikely. So it's new, real value - but low value, and hard to capture 17/n
Models like LLMs suffer from low reliability and low accountability, both of which are absolutely critical in major sectors of the economy. Would you let an LLM drive your car if the error rate were 1%, or even 0.1%? 18/n
Self-driving cars are a great example. The first test demonstrations were decades ago now, and full autonomy has always seemed just over the horizon. It turns out you need *extreme* levels of reliability! One Weird Trick doesn't get you there 19/n
So the question is, which areas of the economy can deal with 99% correct solutions? My answer is: ones that don't create/capture most of the value. You can use an LLM to translate with a friend across the world, but you need a professional to write an airtight legal contract 20/n
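The reliability point compounds fast: if each step of a long task must succeed independently, the whole-task success rate is the per-step accuracy raised to the number of steps. A minimal sketch (the step counts are illustrative):

```python
# Why 99% per-step accuracy isn't enough for long real-world tasks:
# the probability of completing n independent steps with zero errors.
def success_rate(per_step_accuracy, n_steps):
    return per_step_accuracy ** n_steps

success_rate(0.99, 100)    # ~0.366: a 100-step task fails ~2 times in 3
success_rate(0.999, 1000)  # ~0.368: even 99.9% fails most 1000-step tasks
```

This is the gap between "99% correct" demos and the airtight-legal-contract / busy-city-street class of problems: per-step accuracy has to climb orders of magnitude before end-to-end reliability is acceptable.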
To go back to self-driving cars: it turns out that the world is exceptionally complex. Toy problems like solving Go are cool, and producing humanlike speech via LLMs is very cool, but this is far from grappling with the open world that is reality 21/n
It's hard to overstate the difference between solving toy problems, like keeping a car between some cones in an open desert, and having a car deal with unspecified situations involving many other agents and uncertain information while navigating a busy city street 22/n
This is IMO one of the most profoundly important things most people overlook. Getting Things Done IRL is Very Hard, Harder Than You Think. AI is going to struggle a lot more before it gets there. 23/n
Now take all of the above and compare it to the insane fever pitch of the hype around AI, and the exponentially increasing dollars and compute being thrown at the problem. I could turn out to be very wrong, but this looks like a classic bubble 24/n
We see the typical explosive moves in equity markets for anything AI-related, earnings calls devolving into asking unrelated companies what their plan is for AI, AI startups being the only area of VC still thriving, etc. etc. 25/n
All of this hype and investment may well turn out to be correct - some technologies really are world-changing forever - though even those accrue value in places people often don't expect 26/n
So all of the above leads me to my prediction: AI is currently overhyped. There will be massive overinvestment, which will not be met with profitability, and it will only take a few years for corporate types and investors to get burned and pull back on compute spend 27/n
My second major topic: I believe that slow takeoff is much more likely than hard takeoff, and that slow takeoff will be significantly less risky, with more chances for win scenarios before the future is out of our hands 29/n
For one thing, this world looks exactly like a slow takeoff world. AI models are powerful but flawed, piecemeal, and not generalizable outside their domains. None of them have "closed the loop" on self-improvement, and IMO none have that potential yet 30/n
The world being highly complex and difficult is part of the reason here. Humans are a bundle of many different capabilities, all of which matter, and we don't function well in the world if we're missing even a single piece of the puzzle 31/n