I'm not basing this on the empirical rationalist, the person who seems attracted to rationalism because they aren't rational but want to be. I'm basing it on Yud's writing and the aspirations it tries to instill in people.
I think he's serious about that, but I don't think he's done the work of examining his actual behaviors to see how they reflect or reject that premise.
-
Actually, to a certain extent I don't know what "optimizing for preventing AI risk" looks like, and I suspect Yud doesn't either. I sort of think that if one were serious, it wouldn't look anything like sifting through a potentially infinite sea of mostly failed or mediocre hypotheses.
-
Like, if I were Yud, my goal would just be to place as many people as possible in positions of power and in proximity to AI research, and to amass as much wealth as possible without spending any of it, in anticipation of needing to rapidly mobilize in some unknown way in the future.