I'm not basing this on rationalists as they empirically exist, who seem attracted to rationalism because they are not rational but want to be. I'm basing it on Yud's writing and the aspirations it tries to instill in people.
Actually, to a certain extent I don't know what "optimizing for preventing AI risk" looks like, and I suspect Yud doesn't either. I sort of think that if one were serious, it wouldn't look at all like sifting through a potentially infinite sea of mostly failed or mediocre hypotheses.
-
Like, if I were Yud, my goal would just be to place as many people as possible in positions of power and in proximity to AI research, and to amass as much wealth as possible without spending any of it, in anticipation of needing to rapidly mobilize in some unknown way in the future.
-
Like there's a very real sense in which Mormons and Scientologists are in a better position to steer AI development than Yudkowsky is.