What is the most plausible scenario to you where a power AI system causes extinction as a result of it being misaligned
(Eg what does it physically do?)
Probably AIs controlling robot armies and/or nuclear weapons are the scariest. The Skynet scenario is actually not that implausible.
Game theory suggests that the angles we are most worried about / defend against are the least exploitable, so that's probs less likely.
Here is the Skynet situation that seems more probable to me:
1. LLMs are fully integrated into multiple services: search engines, EKS, etc., in a sequential chaining system like
1/ 🧵
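The "sequential chaining" the thread gestures at can be sketched minimally: each model call's output becomes the next call's input, so one model's decisions propagate through downstream services. This is an illustrative sketch, not any system from the thread; `call_llm` is a hypothetical stand-in for a real model API and is stubbed here.

```python
# Minimal sketch of a sequential LLM chaining pipeline. `call_llm` is a
# hypothetical stand-in for a hosted model API; stubbed for illustration.

def call_llm(prompt: str) -> str:
    # Stub: a real system would send `prompt` to a model endpoint.
    return f"[model output for: {prompt}]"

def run_chain(user_query: str, stages: list[str]) -> str:
    """Feed the query through each stage prompt in order; each stage
    sees only the previous stage's output."""
    text = user_query
    for stage in stages:
        text = call_llm(f"{stage}\n\n{text}")
    return text

result = run_chain(
    "restart the search indexer",
    ["Parse the request into an action plan.",
     "Translate the plan into service API calls."],
)
```

The point of the sketch is the failure mode the thread worries about: once services are wired in sequence, no single human checkpoint sits between one model's output and the next service's action.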
I guess I would be worried about some sort of conflict with the human overseers or a value extrapolation failure where the AIs decide they want full control. But idk, I’m not saying it’s more likely than not that this happens.