1/ If you separate creators and managers of a class of risks, you're reinforcing and freezing a low-trust situation into a balance of power.
2/ You're making the pessimistic-about-humans assumption that people operating under principal-agent moral hazards will usually cheat
3/ I generally DO make this assumption, and like balance-of-power architectures that balance one moral hazard against another
4/ But the mechanism doesn't work when the divide-and-conquer parties are the same people and the risk is created by a third foreign entity
5/ "Balance of power" between FBI and CIA for instance means FUD-and-SNAFU fodder for terrorists to exploit.
6/ Rational path for a terror org would be to *encourage* balance of power architecture in adversary state: divide-and-conquer for free
7/ So if somebody is saying AI risk should be studied separately from AI projects, ask yourself: balance of power or divide-and-conquer?
@garybasin And I think we should believe actual AI researchers when they say the concern is overblown scaremongering.
@garybasin Yes I did, and tweeted it back then. I thought most of those were actually very moderate, far from the scaremongering edge.