To upgrade society, you have to upgrade its members first.
Replying to @pwlot
Not true, IMHO. People tend to act on their incentives, within their knowledge and abilities. Society is the architecture of incentives of a scalable group of people.
Replying to @Plinz
Strictly speaking, not true, yes. I left out the nuance in favor of punch. It is true for the sort of radical change people like to talk about, change that in my opinion requires changing basic human drives. For my point, I also assume it's desirable not to take away agency.
Replying to @pwlot
It seems that most people function well under a wide variety of social orders. It also seems that giving up a degree of agency is not a bug but an evolved feature of the cognitive architecture of most humans. What change do you have in mind?
Replying to @Plinz
Let's say that the human cognitive architecture is more than just ill-equipped to handle the transition to posthumanism. Adapt or perish. :) But I also think some old school ideals will never work with homo sapiens.
Replying to @pwlot
Ah! I thought the good news is that once we have AI we won't need humans or posthumans any more?
Replying to @Plinz
By the way, slightly related and I'm curious what you think about this: the idea that we're not as generally intelligent as we like to think we are. We want to create AGI, and of course we want cross-domain learning transfer and such, but just how much of an NGI are we?
Replying to @pwlot
I thought for a while that we are maybe not generally intelligent and now I think we are, once we fix our epistemology. Using pen and paper for storage, we can execute any algorithm and approximate any function that can be approximated by a resource bounded Turing Machine.
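(A minimal sketch of the pen-and-paper claim above: with a finite rule table, external storage, and a step budget, a resource-bounded Turing machine can be executed mechanically. The simulator below, its helper name run_tm, and the unary-incrementer machine are illustrative assumptions, not anything from the thread.)

from collections import defaultdict

def run_tm(rules, tape, state="start", max_steps=1000):
    # Run a Turing machine given as {(state, symbol): (new_state, write, move)}.
    # Halts on the state "halt" or when the step budget (the resource bound) runs out.
    cells = defaultdict(lambda: "_", enumerate(tape))
    head, steps = 0, 0
    while state != "halt" and steps < max_steps:
        state, write, move = rules[(state, cells[head])]
        cells[head] = write
        head += 1 if move == "R" else -1
        steps += 1
    return "".join(cells[i] for i in sorted(cells)), state

# Hypothetical example machine: a unary incrementer.
# Scan right past the 1s, write one more 1 on the first blank, then halt.
rules = {
    ("start", "1"): ("start", "1", "R"),
    ("start", "_"): ("halt", "1", "R"),
}
print(run_tm(rules, "111"))  # -> ('1111', 'halt')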
Replying to @Plinz
I think we are, but "general intelligence" might not be as clear a concept as we might want, even if you try to abstract the hell out of it and have it boil down to computational models and complexity.
Learning is function approximation. Searching for function approximators is meta-learning. One level above that lies the optimality theory of search for meta-learning. I don't see how any intelligent system needs to go higher than that; beyond that point, it is all just deeper.
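(One way to picture that hierarchy concretely: fitting a fixed model is the function approximation, and searching over candidate approximators is the meta-learning layer above it. The sketch below uses NumPy polynomial fitting; the target function, candidate degrees, and helper names learn and meta_learn are assumptions chosen only for illustration.)

import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-1.0, 1.0, 200)
y = np.sin(3.0 * x) + 0.1 * rng.standard_normal(x.shape)  # hypothetical target

# Learning: approximate the target within one fixed family (a degree-d polynomial).
def learn(x_train, y_train, degree):
    return np.poly1d(np.polyfit(x_train, y_train, degree))

# Meta-learning: search over function approximators (here, polynomial degrees)
# and keep the one with the lowest held-out error.
def meta_learn(candidate_degrees):
    idx = rng.permutation(len(x))
    train, val = idx[:150], idx[150:]
    best_degree, best_err = None, float("inf")
    for d in candidate_degrees:
        model = learn(x[train], y[train], d)
        err = np.mean((model(x[val]) - y[val]) ** 2)
        if err < best_err:
            best_degree, best_err = d, err
    return best_degree, best_err

degree, err = meta_learn(range(1, 10))
print(f"meta-learning picked degree {degree} (validation MSE {err:.4f})")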