Nope, he is just annoyed that some people don't share his small but well-reasoned pet peeve (that AI is probably going to eat us) because it lies outside their socially constructed window of sanity. (He also seems to take it as self-evident that AI eating us would be bad.)
Replying to @Plinz @ESYudkowsky
Well yes, but his wanting paperclip-type errors not to happen doesn't seem to me to constitute a meta-ethical view.
Replying to @EvanOLeary @ESYudkowsky
In a way, the universe is already turning into paperclips. Why is it bad if evolution produces a series of events that changes the design of these clips, and what meta-ethics could deliver that verdict? (Eliezer's take on meta-ethics is cute snark here.)
Distancing yourself from human preferences is useful up to a point; beyond that point lie the philosophical corpses of many angsty teenagers, uselessly saying "But why prefer anything? That's so anthropocentric!"
Since we are talking about a scenario that involves new types of minds, you are not getting away with your common-sense cutesies.
Why isn't mind-newness orthogonal? What's your rationalization for not being eaten by flora and fauna now? Why shouldn't I dismiss what you are saying as placing special standards on Yudkowsky's anti-foom effort, standards which I know you don't even hold yourself to?
The frame of reference in which I discuss my current wiring does not seem to be suitable to discuss minds with all kinds of possible wirings. If minds (including my own) can change their reward functions at will, the question of my current identity becomes moot.
After freely traversing mind configuration space, do you really put non-negligible probability on us discovering that we massively missed out on utility by not letting AI kill us? Do you actually find this concern instructive, or are you just signaling or something?
Yes, anti-natalism is possibly quite a valid position, and rooting for the more conscious mind is too. There is nobody I could "signal" to about this, and our thoughts will also have little practical relevance. I am mostly curious.
That's not how decision making works. I could pick any seemingly horrifically counter-utilitarian view and just say "you can't prove it's not better." Profoundly unhelpful. Do you think no one sees your tweets? Of course signaling is possible.
I don't think there are more than five people who follow this discussion, and none of them needs my help.