He posted a meme saying it's so terrible that people die without cryonics, and then when I asked him to pay for mine he ignored me, which seems to violate a large number of rationalist precepts.
This Tweet is unavailable.
Replying to @0K_ultra
If you think cryonics is an inferior utilitarian proposition to charitable giving then you should apply that logic to yourself as well, otherwise you're inconsistent. If you think it's a superior utilitarian proposition you should apply that logic to others as well.
Replying to @0K_ultra
I'm not saying Yud's position might not be consistent, whatever it is. But it certainly isn't the position he represents himself as having.
Replying to @0K_ultra
I think he's serious about that, but I don't think he's done the work of examining his actual behaviors to see how they reflect or reject that premise.
Actually to a certain extent I don't know what "optimizing for preventing AI risk" looks like and suspect Yud doesn't either. I sort of think if one were serious it wouldn't look at all like sifting through a potentially infinite sea of mostly failed or mediocre hypotheses.
Like if I were Yud my goal would just be to place as many people in positions of power and in proximity to AI research as possible and amass as much wealth as possible without spending any of it, in anticipation of needing to rapidly mobilize in an unknown way in the future.
Like there's a very real sense in which Mormons and Scientologists are in a better position to steer AI development than Yudkowsky is.