As Mencken famously said, “One horse laugh is worth ten thousand syllogisms,” and this is doubly true of rationalists.
-
-
Despite what I just said, they spend a lot of time trying to predict it and talking as if they can control it (even though none of them actually work on designing real AIs or supercomputers). They take a lot of donations from their fans to fund these discussions.
-
One of the posters on Less Wrong, a guy named Roko, came up with this complicated idea that, because the future AI God will have perfect knowledge of the past before it was created, it will know whether or not you did everything you could to help create it.
-
-
The singularity in particular is a hilarious idea. Exponential growth ends in one of two ways: 1) a sigmoid, when damping factors eventually kick in, or 2) the machine explodes, if damping factors do not exist or are not sufficiently robust. 1/
-
This is basic undergrad mathematics. Anyone who has taken an introductory course in differential equations and then thought about exponential growth for a couple of hours knows this. 2/
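To make the 1/–2/ argument concrete: on the usual reading that “damping” means a saturation term, the two outcomes fall out of two toy ODEs, dx/dt = rx (explosion) versus dx/dt = rx(1 − x/K) (growth that levels off into a sigmoid). Here is a minimal numerical sketch, with the rate r and capacity K chosen purely for illustration:

```python
# Sketch of the 1/-2/ claim: undamped exponential growth blows up,
# while the same growth with a logistic damping term saturates into
# a sigmoid. r and K are illustrative assumptions, not real numbers.

r, K = 0.5, 100.0     # assumed growth rate and carrying capacity
dt, steps = 0.1, 200  # plain Euler integration, no libraries needed

x_exp, x_log = 1.0, 1.0
for i in range(steps):
    x_exp += dt * r * x_exp                    # dx/dt = r*x        -> explodes
    x_log += dt * r * x_log * (1 - x_log / K)  # dx/dt = r*x*(1-x/K) -> sigmoid
    if i % 50 == 0:
        print(f"t={i*dt:5.1f}  exponential={x_exp:12.2f}  logistic={x_log:8.2f}")
```

Running it shows the exponential column growing without bound while the logistic column flattens out near K, which is the whole point of the two tweets above.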
-
-
But the LW crowd has predicted Roko’s Basilisk! So if the Basilisk ever comes into being, it won’t be part of the Singularity, which is, by definition, unpredictable.
-
I like to call it Rocky’s Basilisk and tell them how cool it is to imagine an AI running up and down the steps in front of the Philadelphia Art Museum.