And yet, within the last few years, a large proportion of AI researchers have been convinced that they are working on a potential X-risk, and they're taking it seriously. Is it really prudent to dismiss that consensus? Would love to chat about this sometime.
-
To be clear: climate change is not an existential risk. It is a global catastrophic risk. We don't all die due to climate change. Focus on the AGI alignment problem is an attempt to ensure survival of our future light cone. Climate change is also serious, but let's be specific.
-
Absolutely. Global warming won't extinguish all human life. AGI could wipe out our entire cosmic endowment - which this article failed to emphasize. The stakes are quadrillions of potential sentient lives in the future -- not just the 8 billion humans now.
-
Yes! That's precisely the emphasis largely missing from public perception and from articles broaching these topics. And we at least have substantial knowledge of how to mitigate climate change. We haven't even stepped onto the welcome mat of the labyrinth that is AGI safety.
End of conversation
New conversation -
Considering future risks is not the denial of current threats.
-
Not sure why it would be problematic for a *tiny* number of researchers to think hard about the ways that things could go wrong. Indeed, that seems extremely prudent to me -- if only we'd had "futurists" thinking about the lasting effects of slavery, colonialism, etc.
-
I'm looking forward to @SamHarrisOrg setting you straight on this point.
-
The day robots dream of electric sheep... we are gone.
-
Bruh, we ARE the sheep ~
-
You know, you might be right...
End of conversation
New conversation -
I think it's important to consider both. I don't see why considering existential risk would prevent us from also considering immediate risk.
-
Smug and wrong-headed. This isn’t an either-or situation: looking at less obviously immediate concerns doesn’t keep you from working on other concerns. Unless you have some secret scientific knowledge that these concerns are baseless, they’re legitimate areas to worry about.
-
Mr Pinker, what's your stance on climate change? I've read from some well-informed people that it has been greatly exaggerated. There are some skeptical scientists too.
-
It's a shame that branding things hypothetical and smugly closing your eyes does not, in fact, protect you.
New conversation -
I'd rather we be vigilant about potential doomsday scenarios we could create than use your naive optimism to brush off threats.
-
Is there any plausible scenario in which climate change kills more than half of humanity? AI alignment aside, it seems worth further mitigating the very real risk that nuclear war or bioengineered pathogens could kill enough people to collapse civilization.
@GernotWagner:
Good questions. No easy answers. FWIW, chapter 4 of Climate Shock has Marty Weitzman's & my musings on it.
-
Thanks. I’ll bump that up in my to-read queue.
End of conversation
New conversation -