All 3 work on Type B problems as defined here to enhance the reward rather than modify the solution. https://twitter.com/vgr/status/1147900223714316289
-
One of the reasons paperwork/any dealings with any sort of impersonal API are so dreadful is that there is no way to add meaning. The game is finite (form filling has less material depth to it, especially when digital) and any counterparty is figuratively or literally robotic.
-
We probably overuse the infinite game/finite game model around here, but "adding meaning" is nearly synonymous with "find the infinite game dimension of a seemingly finite activity and develop it." Call this operation "infinitizing". It can be done in material or social ways.
-
The material way is basically mindfulness++: any of a category of behavioral augmentations, ranging from a simple changed perception of the behavior to active exploration of an infinitizing dimension (which typically feels like play because it is almost decoupled from the finite, functional aspect).
-
The social way is managing to find a counterparty able and willing to engage in a sort of deepening mutuality. So far this means a willing and able human. AIs haven't yet reached the capability of being an infinitizing counterparty, and may be definitionally incapable of it.
-
Replying to @vgr
Just cause it’s never been done doesn’t mean it can’t be done
-
Replying to @fspacef
It's not a "doing" problem I suspect. It's a definitional problem. A square can't become a circle by trying harder. It has to go on a Square's Journey.
-
Replying to @vgr
Sure. Thought experiment: you write an essay, choosing words with the central assumption that the reader is intelligent. What if you could capture that "intelligence" in a bottle? An MMOG with metaphors and advice instead of powers and spells.
-
Replying to @fspacef
Intelligence is easy. The problem I'm flagging here is counterparty being human in a different way... capable of feeling pain etc. Much harder, and more closely coupled to hard problem of consciousness than to AI. If you are a strong AI type, you won't see this as a problem.
-
Replying to @vgr
Are you saying intelligence will come along the way to being a higher-order civilization? Currently lost inside Ribbonfarm btw :) Inspired lately by thinking we're in a simulation and attempting to break out of it. First step would be to make people self-aware. pic.twitter.com/GXdS0trbYa
-
I don't think "intelligence" is either an interesting or well-posed problem at all :). To the extent it is, it's fairly trivial philosophically (moar deep learning!) even if difficult technically. I also don't think in terms of "higher order" or levels of civilization.