When you tell people a specific AI takeover scenario, they say "nooo way, that only happens in movies."
When you tell people how the Fukushima accident happened, they also say "wtf, is this a movie?" (essentially because there were MANY conjunctive failures).
Reality hits in very unpredictable ways, and that's the challenge of safety: we need to understand X-risk BEFORE it first hits us.
Hmm, the one I've seen was this one (in French unfortunately, but maybe automated translation is possible, or maybe there's an English version?): dailymotion.com/video/x1idftc
Otherwise I suspect chatting with GPT-4 can get you very far.
And finally, the Wikipedia page is probably worth a look.
Coincidentally, right after seeing this post I'm watching "The Days", the Netflix mini-series on the Fukushima nuclear disaster.
Fair enough. AI takeover may happen. If the AI is also not completely nuts, it will likely make a deal with us: we give it enough time to relocate to the outer solar system (lots of hydrogen for fusion), and it leaves us alone. Win-win.
Nothing happens without a historical context, and AI has its own: concentration of wealth, financial techno-power, inequality, the climate crisis, nuclear weapons, surveillance capitalism, pandemics … is it so hard to frame things outside of your own siloed understanding?
In reality, multiple conjunctive failures are usually the only way things can fail; there are usually fail-safes against single failures.
I am old enough to remember both the Chernobyl and Fukushima incidents. They were different, but I can't dismiss the contrast in the level of individual commitment required of the emergency response teams.
Our culture has become risk-averse even in the face of immediate risk.