Key diff between AlphaGo Zero and GPT-2 worth mulling: AGZ discarded human training data and conquered Go by playing against itself. That can't happen with GPT-2, because there are no competition rules or goals for language outside of closed-world subsets like crosswords.
Now if Boston Dynamics robots evolved an internal language in the process of learning how to survive in an open environment, that would at least be comparable to how I use language (in a survival closed loop in a social milieu).
But that example suggests the link between survival, intelligence, and computation is much subtler. If you wanted to build tech that simply solved for survival in an open environment, you'd be more likely to draw inspiration from bacteria than from dogs or apes.
The only reason to cast silicon-based computation into human-like form is a) to replace ourselves cheaply in legacy infrastructure, or b) to scare ourselves for no good reason.
This is easy to see with arms and legs. Harder to see with mental limbs like human language. Asimov had this all figured out 50 years ago. The only reason AI is a "threat" is that the benefits of anthropomorphic computation to some will outweigh the costs to many, which is fine.
Non-anthropomorphic computation otoh is not usefully illuminated by unmotivated comparisons to human capability. AlphaGo can beat us at Go. A steam engine can outrun us. Same story. More gee-whiz essentialism, that's it.
Same steady assault on human egocentricity that's been going on since Copernicus. Not being the center is not the same as being "replaced." Risks are not the same as malevolence. Apathetic harm from a complex system is not the same as intelligence at work.
“Intelligence” is a geocentrism kind of idea: the belief that “our” thoughts “revolve around” us, the way the skies seem to revolve around the earth. “AI” is merely the ego-assaulting discovery that intelligence is just an illusion caused by low-entropy computation flows passing through us.
What annoys me about “AI” understandings of statistical algorithms is that they obscure genuinely fascinating questions about computation. For example, it appears any Universal Turing Machine (UTM) can recover the state of any other UTM given enough sample output and memory.
This strikes me as more analogous to a heat engine locally reversing entropy than to “intelligence.” But nobody studies things like GPT-2 in such terms. Can we draw a Carnot-cycle-type diagram for it? What efficiency is possible?
The tedious anthropocentric lens (technically the aspie-hedgehog-rationalist projective lens) stifles other creative perspectives because of the appeal of angels-on-a-pinhead bs thought experiments like simulationism. Heat engines, swarms, black holes, fluid flows...
Most AI watchers recognize that the economy and complex bureaucratic orgs are also AIs in the same ontological sense as the silicon-based ones, but we don’t see the same moral panic there. When in fact both have even gone through paperclip-maximizer-type phases. Why?
I’ll tell you why. Because they don’t lend themselves as easily to anthropomorphic projection, or to being recognizably deployed into contests like beating humans at Go. Markets beat humans at Go via prizes. Bureaucracies do it via medals and training.