It just occurred to me that there is a nonzero chance that Yudkowsky's work with MIRI/AI safety is intentional sabotage due to having taken Roko's Basilisk too seriously
Someone needs to comb through everything he says _wouldn't_ work
1 reply · 0 retweets · 14 likes
Replying to @Laconoclasm
this is hilarious and good, don't listen to the other guy
3:47 PM - 30 Dec 2020