Anthropomorphic design has good justifications (Asimov saw this in 1950). Designing swap-in functional replacements for ourselves is a cheap way to evolve legacy infrastructure. Driverless cars are valuable because the infrastructure built around human drivers is a big sunk cost. Without it, we'd automate differently.
This strikes me as more analogous to a heat engine locally reversing entropy than to "intelligence". But nobody studies things like gpt2 in such terms. Can we draw a Carnot-cycle-type diagram for it? What efficiency is possible?
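(For the heat-engine analogy above: the classical bound being alluded to is Carnot efficiency, which depends only on the reservoir temperatures. A minimal sketch, with the 800 K / 300 K temperatures chosen purely as illustrative values:)

```python
# Carnot efficiency: the thermodynamic upper bound on any heat engine,
# set only by the hot and cold reservoir temperatures (in kelvin).
def carnot_efficiency(t_hot: float, t_cold: float) -> float:
    if not 0 < t_cold <= t_hot:
        raise ValueError("require 0 < t_cold <= t_hot (kelvin)")
    return 1.0 - t_cold / t_hot

# e.g. an engine running between 800 K and 300 K:
print(carnot_efficiency(800.0, 300.0))  # 0.625
```

What the analogous "reservoirs" would be for a model like gpt2 is exactly the open question the tweet poses.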
The tedious anthropocentric lens (technically the aspie-hedgehog-rationalist projective lens) stifles other creative perspectives because of the appeal of angels-on-a-pinhead bs thought experiments like simulationism. Heat engines, swarms, black holes, fluid flows...
Most AI watchers recognize that the economy and complex bureaucratic orgs are also AIs in the same ontological sense as the silicon-based ones, but we don't see the same moral panic there. When in fact both have even gone through paperclip-maximizer-type phases. Why?
I’ll tell you why. Because they don’t lend themselves as easily to anthropomorphic projection, or to being recognizably deployed in contests like beating humans at Go. Markets beat humans at Go via prizes. Bureaucracies do it via medals and training.