Important thread explaining why AI Safety isn't a concern: https://twitter.com/SimonDeDeo/status/1011255347095523328
Replying to @simpolism
does anyone who's taken seriously by anyone believe that AI must be explicitly Bayesian in order to be an existential threat?
Replying to @regretmaximizer
IME most people who think AI is an existential threat also seem to take Yudkowsky seriously, but there could surely be those out there who don't...
Replying to @simpolism
my point is that i don't think he actually believes that. remember when he was wiggin' out about alphago?
Replying to @regretmaximizer @simpolism
closest i could find on the subject sounded more like he thinks AI is more likely to be an existential risk if it's *not* explicitly Bayesian, but that was also pretty old. ah, who am i kidding, punching out strawmen is too fun
Replying to @regretmaximizer
simpolism Retweeted Simon DeDeo
Bayesian or not, this is closest to my feelings: https://twitter.com/SimonDeDeo/status/1011266703223816192 IMO: we can't predict how GAI will look and what it will do. So how can we "prepare" for it? Focusing on preparing for a Paperclip Maximizer DOES assume Bayesianism.
simpolism added,
Simon DeDeo @SimonDeDeo, replying to @simpolism @TetraspaceAdmn: We can imagine a Strong AI. We can also imagine a super-intelligent alien race hiding under the surface of the Moon, and spend lots of money studying the Outside Context Problem. The question is if there are good arguments to worry, and the best one we have doesn't work.
Replying to @simpolism
why does preparing for a paperclip maximizer assume Bayesianism?
Replying to @regretmaximizer
You are correct -- it doesn't assume Bayesianism -- but this doesn't invalidate my central point, which is that we don't know what it will look like.
thinking about ai safety doesn't require knowing what it looks like, just that it intelligently pursues goals and isn't human
Replying to @VesselOfSpirit @simpolism
i'd be interested if you have a specific example of an ai safety argument that depends on knowing the ai's architecture
Replying to @VesselOfSpirit @simpolism
as for the aliens below the moon thing: artificial general intelligence seems possible because humans are possible
(17 more replies)