I think impersonal institutions are toast. If you want an alternative to neopatrimonialism, start thinking of mechanisms for enabling transitive p2p trust. Technology that makes it such that if A trusts B and B trusts C, A trusts C in certain predictable ways/defined scopes.
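A minimal Python sketch of what "predictable ways/defined scopes" could mean — all names (`TrustGraph`, `effective_trust`) are hypothetical, not from any real library. The idea: transitive trust only carries the scopes shared by every hop, so trust can narrow along a chain but never widen.

```python
class TrustGraph:
    """Hypothetical scoped-trust graph: edges carry explicit scopes."""

    def __init__(self):
        self.edges = {}  # truster -> {trustee: set of scopes}

    def trust(self, a, b, scopes):
        self.edges.setdefault(a, {}).setdefault(b, set()).update(scopes)

    def effective_trust(self, a, c):
        """Scopes in which a trusts c, directly or transitively.

        Union over all paths a..c of the intersection of scopes
        along each path: each hop can only narrow trust.
        """
        result = set()
        stack = [(a, None)]  # (node, scopes accumulated along this path)
        seen = set()
        while stack:
            node, scopes = stack.pop()
            for nxt, edge_scopes in self.edges.get(node, {}).items():
                new = set(edge_scopes) if scopes is None else scopes & edge_scopes
                if nxt == c:
                    result |= new
                elif new and (nxt, frozenset(new)) not in seen:
                    seen.add((nxt, frozenset(new)))
                    stack.append((nxt, new))
        return result

g = TrustGraph()
g.trust("A", "B", {"code-review", "payments"})
g.trust("B", "C", {"code-review"})
# A trusts C only in the scope both hops share.
print(g.effective_trust("A", "C"))
```

So "A trusts C" is derived, not asserted, and its scope is mechanically bounded by the weakest link in the chain.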
Then (and this is the clever bit) sign a few over to new people whose trust you need while you’re working with them. They return them when the relationship is done. If they like the work, they issue you their own recommendation letters citing the ones they held.
So old letters get more valuable as new letters blockchain on. You use them for more important things. It’s a secure currency because they are only useful in relation to working with you. You can stake a few letters or many, leaf-letters or roots of long chains.
If they DON’T like the work, they have the option of destroying the letters they hold. You’d have to go back and get new letters from people for the entire destroyed chain. So it’s serious. You’re operating your own ransomware.
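A sketch of the letter lifecycle described so far, in hypothetical Python (the `Letter` class and its fields are illustrative assumptions, not a spec): a new letter cites the letters staked for that engagement, and destroying any staked letter invalidates everything downstream of it.

```python
class Letter:
    """Hypothetical recommendation letter in the chain."""

    def __init__(self, issuer, subject, cites=()):
        self.issuer = issuer
        self.subject = subject
        self.cites = list(cites)   # letters that were staked for this engagement
        self.destroyed = False

    def valid(self):
        # A letter stands only if it and every letter it cites still stand —
        # destruction anywhere upstream collapses the chain below it.
        return not self.destroyed and all(l.valid() for l in self.cites)

# Alice holds a root letter from an old client.
root = Letter("OldClient", "Alice")
# She stakes it with Bob; Bob likes the work and issues a letter citing it.
bob_letter = Letter("Bob", "Alice", cites=[root])
# She stakes Bob's letter with Carol in turn.
carol_letter = Letter("Carol", "Alice", cites=[bob_letter])

# If the root is destroyed, everything built on it fails too:
root.destroyed = True
print(carol_letter.valid())
```

This is what makes destruction "serious": one destroyed letter forces you to rebuild the whole chain that cited it.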
How would you use this? Instead of saying “I really want this job!” you’d say, “I want this job so much I’ll stake my entire TWIMC forest* on this application!” * Set of To Whom It May Concern DAG trees
You’d literally be staking your reputation. Or at least signing up for a very expensive reputation reconstruction failure mode. Including recommendations from dead people that may not be recoverable.
You could robustify your reputation forest. If Bob thinks Alice may in future turn out to be a criminal, he stakes his gig with Charlie using both letters from Alice and Dan. That way if Alice goes to jail, her letter can be removed without orphaning the Charlie letter.
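The robustification move can be sketched with a different validity rule (hypothetical names again): a letter with redundant citations stays valid as long as *any* cited parent still stands, so revoking one parent doesn't orphan the chain.

```python
class Letter:
    """Hypothetical letter with redundant (multi-parent) citations."""

    def __init__(self, issuer, subject, cites=()):
        self.issuer = issuer
        self.subject = subject
        self.cites = list(cites)
        self.revoked = False

    def valid(self):
        if self.revoked:
            return False
        # A root letter stands on its own; a cited letter needs
        # at least ONE of its parents to still be valid.
        return not self.cites or any(l.valid() for l in self.cites)

alice = Letter("Alice", "Bob")
dan = Letter("Dan", "Bob")
# Bob stakes his gig with Charlie on BOTH letters.
charlie = Letter("Charlie", "Bob", cites=[alice, dan])

alice.revoked = True   # Alice goes to jail
print(charlie.valid())  # Dan's letter keeps Charlie's from being orphaned
```

The design choice is an OR over parents instead of an AND, trading some stake "seriousness" for fault tolerance.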
Interestingly, I recently looked at the work I've been doing on denotational and referential semantics in compsci & realized "all well and good, but what's the compelling IRL use case?" needed a good answer. What fell out of trying to answer that was a spec for almost exactly this.
the trick is you can cook up a computable formalism for the semantics of the trust relation to supervene on, and ensure that if "A trusts B" holds then by induction "A trusts ... trusts Z" also holds. But Z can't decide whether it thinks A's assertion "I trust B" is *warranted* >
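The gap being pointed at can be made concrete with a toy sketch (illustrative, not the actual spec mentioned above): the transitive closure of "trusts" is trivially computable, but nothing in the closure tells Z whether A's original assertion was warranted.

```python
def transitive_closure(edges):
    """Naive fixpoint computation of the transitive closure of a relation."""
    closure = set(edges)
    changed = True
    while changed:
        changed = False
        for (a, b) in list(closure):
            for (c, d) in list(closure):
                if b == c and (a, d) not in closure:
                    closure.add((a, d))
                    changed = True
    return closure

asserted = {("A", "B"), ("B", "C"), ("C", "Z")}
derived = transitive_closure(asserted)
print(("A", "Z") in derived)  # follows by induction from the assertions
# But whether ("A", "B") was *warranted* is not recoverable
# from the relation itself — that lives outside the formalism.
```

The formalism guarantees the inductive step; warrant is exactly the part it can't see.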
When Alice says, “I trust Bob” to Charlie, the only way to capture the promise being made is to include a model of Alice in any new Bob-Charlie relationship definition.
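One way to read "include a model of Alice" in code — a hypothetical sketch, with all names (`Assertion`, `Relationship`, `via`) invented for illustration: the Bob-Charlie relationship created on Alice's word carries Alice's assertion inside it, so the promise can be re-examined later.

```python
from dataclasses import dataclass, field


@dataclass
class Assertion:
    """A record of who asserted trust, in whom, and over what."""
    asserter: str
    trustee: str
    scopes: frozenset


@dataclass
class Relationship:
    truster: str
    trustee: str
    # Models of everyone whose word this relationship rests on.
    via: list = field(default_factory=list)


alice_says = Assertion("Alice", "Bob", frozenset({"payments"}))
# Charlie trusts Bob only via Alice, so Alice travels with the relationship
# rather than being discarded once the edge is created.
rel = Relationship("Charlie", "Bob", via=[alice_says])
print(rel.via[0].asserter)
```

The point of the design: the derived edge is never free-standing — its justification (Alice) is part of its definition.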