1/ If you separate creators and managers of a class of risks, you're reinforcing and freezing a low-trust situation into a balance of power.
2/ You're making the pessimistic-about-humans assumption that people operating under principal-agent moral hazards will usually cheat
3/ I generally DO make this assumption, and like balance-of-power architectures that balance one moral hazard against another
4/ But the mechanism doesn't work when the divide-and-conquer parties are the same people and the risk is created by a third foreign entity
6/ Rational path for a terror org would be to *encourage* balance of power architecture in adversary state: divide-and-conquer for free
7/ So if somebody is saying AI risk should be studied separately from AI projects, ask yourself: balance of power or divide-and-conquer?