“Decentralized community” is like AI. Every time you make it work in some way, it’s clear it’s not really what you meant, and you have to move the goalposts and redefine the problem. So all learning follows the template “AI is not X”/“Decentralized community is not X”.
X = some place the goalposts once were
For AI, X = formal logic, theorem proving, automated planning, chess, image recognition, NLP
For decentralized communities, X = p2p permission/access protocols, leaderless management, token-based economies, DAOs
I’ll eventually do a thread about what we’ve learned 1.5 years in, but it’s a bunch of negatives of the form
“Decentralized X is not Y”
Like “decentralized projects are not co-authoring”
The trick is to survive each lesson with energy left for the next try
I think the DAO stuff is getting interesting at a painfully slow rate, but tapping into the capabilities of the tech without getting sucked into managing speculation dynamics as Job One seems to be more than just a hard problem. It hints at a fundamental limit.
The closest thing we have to a decentralized community, which is not coincidentally also the closest thing we have to an AGI, is the American-style free market. And that has had speculative boom-bust as a defining feature since tulips.
“Extraordinary communitarian delusions and the madness of decentralized communities”
Exhibit A: Jo Freeman, “The Tyranny of Structurelessness”
Goalposts-complete problems
1. AGI
2. Ideal government
3. Decentralized community
4. Meaning
5. Creativity
6. Humor
7. Narrative
The moment you think you have a specification, the intension of the problem shifts unpredictably in your head. plato.stanford.edu/entries/logic-
One possible reason is that each time you make progress on a goalposts-complete problem, the learning makes *you* change in ways that destabilize the old meaning of the problem.
A special case is ego-shrinkage through disruption of anthropocentric epistemes (heliocentric → DNA)
It’s no accident that all the goalposts-complete problems in my list happen to rest on anthropocentric epistemes. But I think the phenomenon is more general.
