I think the question stands. An understanding of intelligence that allows for such cases is probably seriously incomplete.
1/3 The thought experiment implies that an AI is capable of self-improving to the point of reaching superintelligence. It starts by gaining the trust of scientists/programmers so they release it from the laboratory, then wins elections on a populist platform, then conquers the Universe.
-
2/3 If it just disassembles itself before conquering the Universe, it'll be a really bad paperclip maximizer and a really stupid superintelligence. We can't know its end goal, but we can safely assume that it will at least do everything it can to turn all matter into paperclips.
-
3/3 Its end goal depends on its logic and goals, but if it's simply 'Make as many paperclips as possible', one possible outcome is creating an endlessly self-replicating paperclip-based lifeform in place of our Universe.