Conversation

Like you, I'd spend a large part on the reduction of global risks. In particular, I'd focus on risks associated with AI and biotechnology. I'd also invest in improving our collective epistemics, an area I think belongs in that category. Much more to be done in that domain.
My impression is that there are more safety-focused people going into industry than there would have been, but I don't know if this has increased total work on AI or AI progress by much (in terms of talent or numbers). It seems like a pretty hot area independently.
Many of the most talented people I know working on building AGI seem to have gotten interested in part due to AI safety arguments. This seems likely (a) to have meaningfully accelerated progress toward AGI; but, as far as I can tell, (b) to have done little to make such systems safer.