I wrote a kind of exasperated explainer about the high-IQ idiocy surrounding AI risk. This is what Musk is on about: http://idlewords.com/talks/superintelligence.htm
Replying to @Pinboard
Excellent read. Interested to hear more about your skepticism regarding galactic expansion; you sort of glossed over it.
What I've never understood about the Bostrom paper clip scenario is that in practice it should be fairly trivial to prevent.
Hard-code the AI to improve itself in specific increments, e.g. 5x your paperclip production efficiency, then stop improving.
Replying to @SHL0MS @shlomoklahr
the trick is how to code it to recursively rewrite itself while keeping those types of limits in place, and *wanting* to
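The proposal and the objection above can be sketched together. Below is a minimal, hypothetical illustration (the function name, cap, and step size are invented for this example, not from the thread): a loop that improves "efficiency" until a hard-coded multiplier is reached, which is easy to write — the unsolved part the reply points to is that a system able to rewrite its own source has no built-in reason to preserve this cap in its successor.

```python
# Hypothetical sketch of the "hard-coded increment" idea from the thread.
# The cap itself is trivial to enforce in a fixed program; the open problem
# is making a *self-rewriting* system keep (and want to keep) such a cap.

def improve_until_cap(efficiency: float, cap_multiplier: float = 5.0,
                      step: float = 1.1) -> float:
    """Multiply efficiency by `step` each round; stop before exceeding
    cap_multiplier times the starting baseline."""
    baseline = efficiency
    while efficiency * step <= baseline * cap_multiplier:
        efficiency *= step
    return efficiency

result = improve_until_cap(1.0)
print(result)  # stops just under the 5x ceiling (~4.59 with these numbers)
```

The cap holds only as long as this exact code keeps running; nothing in the loop constrains a rewritten version of itself, which is the gap the last tweet highlights.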
7:04 PM - 11 Aug 2017