1/ If you separate creators and managers of a class of risks, you're reinforcing and freezing a low-trust situation into a balance of power.
2/ You're making the pessimistic-about-humans assumption that people operating under principal-agent moral hazards will usually cheat.
3/ I generally DO make this assumption, and I like balance-of-power architectures that balance one moral hazard against another.
4/ But the mechanism doesn't work when the parties being divided and conquered are on the same side and the risk is created by a third, foreign entity.
5/ "Balance of power" between FBI and CIA for instance means FUD-and-SNAFU fodder for terrorists to exploit.
6/ The rational path for a terror org would be to *encourage* balance-of-power architecture in an adversary state: divide-and-conquer for free.
8/ In other words, might Sam Altman be an evil AI seeking to divide-and-conquer us? Hmm. *head explode*
I don't think most are asking for this, just that AI researchers need to be more aware of it.
And I think we should believe actual AI researchers when they say the concern is overblown scaremongering.