A general AI will easily be taken down by a simpler, more specialized AI whose only purpose is to take it down, just as a simple virus can kill a human. It will be even easier if the specialized AI killer has access to computing resources similar to those of the general AI.
-
If that were a certainty (not just a possibility), why did evolution not favor viral goo over complex organisms?
-
Evolution doesn't favor complex organisms. Complex organisms just happened to occupy a niche in the biosphere.
-
Replying to @IntuitMachine @Plinz
IIRC Nick Bostrom covers this topic in reasonable detail in his book Superintelligence. Good reference.
-
Replying to @FieryPhoenix7 @IntuitMachine
Superintelligence is a dark fantasy in which humankind places all of its resources at the disposal of an untested and unsupervised AI system. Meanwhile they obsess over turning off the machine because it might contain a mind.
-
Replying to @tdietterich @FieryPhoenix7
I think we should worry more about the misalignment between artificial persons and humans. If you can't solve this, you can't solve some far-off fantasy: https://medium.com/intuitionmachine/the-dangers-of-artificial-intelligence-is-unavoidable-due-to-flaws-of-human-civilization-f9c131e65e5e
-
Replying to @IntuitMachine @tdietterich
Are you aware that this statement is a non sequitur that only makes sense as an appeal to moral intuitions? Imagine someone saying in 1850: "If you cannot solve respiratory disease, you cannot hope to build a system of automotive transportation."
-
Replying to @Plinz @tdietterich
Isn't the threat of superintelligence the same as the AI alignment problem? If superintelligence is aligned with human values, then it's a non-issue.
-
Replying to @IntuitMachine @tdietterich
The problem of building something that has enormous economic value and the problem of making it safe are unfortunately different problems; one can be solved without the other.
-
Replying to @Plinz @tdietterich
Yes, I agree. The question, however, will eventually boil down to what is meant by "economic value". Is "economic value" in alignment with human values? If so, then the safety question and the economic-value question are entangled.
-
In the short run it is not. Humanity may have to play a longer game than our economic entities. I am currently much more worried about the possible impact of AI and ML on the financial system than about autonomous weapons, for instance.