1/ If you separate creators and managers of a class of risks, you're reinforcing and freezing a low-trust situation into a balance of power.
2/ You're making the pessimistic-about-humans assumption that people operating under principal-agent moral hazards will usually cheat
3/ I generally DO make this assumption, and like balance-of-power architectures that balance one moral hazard against another
4/ But the mechanism doesn't work when the parties being divided are on the same side and the risk is created by a third, foreign entity
5/ "Balance of power" between FBI and CIA for instance means FUD-and-SNAFU fodder for terrorists to exploit.
6/ Rational path for a terror org would be to *encourage* balance of power architecture in adversary state: divide-and-conquer for free
7/ So if somebody is saying AI risk should be studied separately from AI projects, ask yourself: balance of power or divide-and-conquer?
8/ In other words, might Sam Altman be an evil AI seeking to divide-and-conquer us? Hmm. *head explode*