eliezer has IMO done more to accelerate AGI than anyone else.
certainly he got many of us interested in AGI, helped deepmind get funded at a time when AGI was extremely outside the overton window, was critical in the decision to start openai, etc.
it is possible at some point he will deserve the nobel peace prize for this--I continue to think short timelines and slow takeoff is likely the safest quadrant of the short/long timelines and slow/fast takeoff matrix.
If the language model boom happened 10 years from now, China might be a bigger player. Domestic cooperation seems much easier than international cooperation, so I tentatively agree with Sam here.
Because AGI could save us from other existential risks? But this doesn't work if you think the existential risk from AGI is much greater over the next 50 years than the existential risk from all other sources combined.
Maybe he sees the status quo as dangerous (disease, poverty, etc) and AI as the necessary savior.
Why assume that longer timelines to developing AGI would result in our ability to make AGI long-term safe?
Longer timelines would mean more cheap compute available to "random" actors, who could be more careless. Tbf, too short and everyone's careless, but that might have fewer negative consequences than long timelines because of compute availability 🤔
Is "short" synonymous with an MVP mindset here? Wondering if he is partially using his YC experience as a guide.
Hardware overhang would be one answer. More time to learn from/about general agents while computers are too slow for them to easily scale up.
I'm guessing it's because their current plan is to use early AI for safety research?
The world's unusually stable right now (no WW3-induced arms-race-to-Skynet type of situation), and the leading labs care about safety. These things aren't necessarily still true 20 years from now.