1/ If you separate creators and managers of a class of risks, you're reinforcing and freezing a low-trust situation into a balance of power.
2/ You're making the pessimistic-about-humans assumption that people operating under principal-agent moral hazards will usually cheat
3/ I generally DO make this assumption, and like balance-of-power architectures that balance one moral hazard against another
4/ But the mechanism doesn't work when the divide-and-conquer parties are the same people and the risk is created by a third foreign entity
5/ "Balance of power" between FBI and CIA for instance means FUD-and-SNAFU fodder for terrorists to exploit.
7/ So if somebody is saying AI risk should be studied separately from AI projects, ask yourself: balance of power or divide-and-conquer?
8/ In other words, might Sam Altman be an evil AI seeking to divide-and-conquer us? Hmm. *head explode*
