Conversation

I find it hard to think rationally about the potential of existential AI risk without the clear epistemological distinction between AI and AGI given by David Deutsch, who isn’t listed in your references.
This is excellent, thank you. I would be interested to see this model extended to include population growth due to AI innovations.
Interesting. Lots of qualitative description of ways AI could lead to better growth, but less about the mechanisms that might lead to extinction. An improvement might be to consider milestones like emergence of goals, sentience, resource access, capture by bad actors, etc.
Thanks for sharing this work in progress. In the paper, you point out that there can be compensation for risk — even existential risk. If a one-off option made things 2% better forever, that would be worth a 1% risk.
A very interesting paper! The thing that worries me is that there is probably a large gain from being the first company or country to develop superintelligent AI. I simply do not see how we will be able to stop or slow down AI progress if that would be optimal.