My own initial reaction to gpt2 helps me understand why pre-modern tribes reacted to being filmed and shown the footage as though the camera were stealing their souls. Except I *like* the idea instead of being horrified by it. Unfortunately, it's as untrue for AIs as it was for film cameras.
-
In fact, gpt2 has helped me clarify exactly why I think the moral/metaphysical panic around AI in general, and AGI in particular, is easily the silliest thing I’ll see in my lifetime. It’s the angels-on-a-pinhead concern of our time. Not even wrong.
-
AI won’t steal our souls. AI won’t terk ehr jerbs. AI won’t create new kinds of risk. AI isn’t A or I. I’d call it “cognitive optics”... it’s a bunch of lenses and mirrors that reflect, refract, and amplify human cognition. AIs think in the same way telescopes see. I.e., they don’t.
-
“AIs reflect/reproduce our biases” misses the point by suggesting you could prevent that. That’s ALL they (deep learning algos) do. Biases are the building blocks. Take that out and there’s nothing left. E.g., gpt2 has picked up on my bias towards words like “embody” and “archetype”.
-
Taking the bias out of AI is like taking the economy out of a planned economy. You’re left with regulators and planners with nothing left to regulate and plan. Or the lenses and mirrors out of a telescope. You’re left with a bunch of tubes with distortion-free 1x magnification.
-
Replying to @vgr
I'm not sure what you're onto here. AI tends to optimize for the average (mean squared error, to be technical). So it'll pick up biases that are statistically consistent. People are racist idiots, so models trained on people act equally shitty, for instance.
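A toy sketch of the "optimize for the average" point above (illustrative numbers only, not any real model): the constant prediction that minimizes mean squared error is exactly the mean of the training data, so whatever consistent pattern (or bias) sits in that data is what the model converges to.

```python
# Toy sketch (assumption: a constant least-squares predictor stands in
# for "AI"): minimizing mean squared error picks the average of the
# training data, so any statistically consistent pattern -- including
# a bias -- is exactly what the model learns.
data = [1.0, 2.0, 3.0, 10.0]  # 10.0 is a consistent "biased" signal

def mse(pred):
    """Mean squared error of a constant prediction against the data."""
    return sum((y - pred) ** 2 for y in data) / len(data)

# search constant predictions on a fine grid; the minimizer is the mean
candidates = [i / 100 for i in range(1001)]
best = min(candidates, key=mse)
# best == 4.0 == sum(data)/len(data): the MSE-optimal constant is the average
```

The "biased" point at 10.0 pulls the optimum up with it: the model isn't malfunctioning, it's faithfully averaging what it was shown.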
-
Replying to @vhranger
Biases are just emulated patterns with political significance. Picking up on my word frequencies is the same mechanism, but classified as “function” in emulating me.
-
Replying to @vgr
Disagree. Bias is consistently taking an action that's unrelated to output. Race doesn't correlate with job performance (controlling for sociodemographics). But people still hire on race because the human brain is deeply tribal. https://www.nber.org/papers/w9873
-
Models picking up on your language pattern is just models getting conditional probabilities right. But your language pattern is related to output, so that's fine. Bias and politics are joined through human tribalism but bias is an apolitical concept.
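A minimal sketch of the mechanism described above (assumption: a unigram word-frequency model stands in for gpt2): fitting conditional probabilities to an author's text reproduces that author's word frequencies, whether we later label the result "style" or "bias" — the mechanism is identical.

```python
from collections import Counter

# Tiny corpus standing in for an author's writing (invented example text
# overusing "embody"/"archetype", echoing the thread's example).
corpus = ("the archetype we embody reflects the archetype "
          "we embody in practice").split()

# A unigram "language model" is just normalized word counts: the model
# "getting conditional probabilities right" means matching the corpus.
counts = Counter(corpus)
total = sum(counts.values())
probs = {word: count / total for word, count in counts.items()}

# The author's overuse of "embody" becomes the model's estimate directly:
# probs["embody"] == 2/11, twice the weight of a word used once.
```

Whether that learned frequency counts as faithful emulation or as bias is a judgment applied from outside the model; the arithmetic is the same either way.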
-
The problem with models picking up bias is that: 1) it scales up the bias, 2) most models are retrained on past data related to model output, so it builds feedback loops related to the bias (bias silently goes from unrelated to output to related, through past models).
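The feedback loop in point 2 can be sketched with a toy recurrence (illustrative numbers and update rule, not from any real system): each retrained model sees data partially shaped by the previous model's biased decisions, so a small initial bias compounds across retraining rounds.

```python
def retrain(bias, rounds, feedback=0.5):
    """Toy model of bias amplification across retraining rounds.

    Each round, the new model inherits the old bias plus a fraction
    ('feedback') of it that leaked into the training data via the
    previous model's decisions. Purely illustrative dynamics.
    """
    history = [bias]
    for _ in range(rounds):
        # next model trains on data partly produced under the last
        # model's bias, so the bias drifts upward (bounded below 1.0)
        bias = bias + feedback * bias * (1 - bias)
        history.append(bias)
    return history

trajectory = retrain(bias=0.05, rounds=5)
# trajectory is strictly increasing: a bias that began "unrelated to
# output" is now baked into the data every later model sees
```

The point of the sketch is the silent transition: no single retraining step looks wrong, yet the correlation between bias and output grows each round.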
-
My point is that bias is an applied legalistic notion, not a phenomenological one. With a limited training set that overuses word X, the model's output will overuse X regardless of intent. Bias lies in intent, which must be extrinsically assessed.