Security has a concept called attack surface: the more kinds of exposures, the greater the attack surface. Similarly, we can think of governance as having an argument surface: the more kinds of decisions there are to argue over, the less functional the institution will be.
-
-
That includes decisions about what the algorithm(s) should be. The number of such decisions is (or can be made to be) vastly smaller than the number of decisions the resulting algorithms make, thus radically reducing the argument surface.
-
An important caveat: when human decisions are used as data input to an algorithm, that can restore the much larger argument surface, i.e. people putting great effort into gaming those inputs (arguably what is going on with Twitter shadow-banning).
-
So we want the input to be as simple and discrete as flipping a switch (nothing to reverse engineer or game), ie computing a nonce. But can't the problem pop back out into the hardware world, eg via control of hardware supply or access to cheap energy?
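To illustrate why nonce computation leaves nothing to game, here is a minimal proof-of-work sketch (illustrative only; the hash function, difficulty encoding, and nonce width are assumptions, not any particular protocol's spec). Producing a valid nonce takes brute-force work, but checking one is a single hash against a public threshold, so there is no discretionary input to reverse engineer:

```python
import hashlib

def find_nonce(data: bytes, difficulty_bits: int) -> int:
    """Brute-force a nonce so SHA-256(data || nonce) falls below a
    public target. There is no judgment call anywhere: a candidate
    either clears the threshold or it doesn't."""
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(data + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

def verify(data: bytes, nonce: int, difficulty_bits: int) -> bool:
    # Verification is one hash: the "switch flip" anyone can check.
    digest = hashlib.sha256(data + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))

nonce = find_nonce(b"block header", 16)
print(verify(b"block header", nonce, 16))
```

The asymmetry is the point: finding the nonce is expensive, verifying it is trivial, and the verdict is identical for every verifier, so the argument surface of that decision is effectively zero.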
-
Sadly, nothing in this life is completely trustless. But there's plenty of room for improvement.
-
As long as we run on commonplace hardware it seems nearly uncrackable. That is, no actor could ban PC components without wrecking the multi-trillion dollar IT industry; too much pushback from biz and consumers. More exotic hardware is more easily targeted, hence less trustless.