Conversation

My own initial reaction to gpt2 helps me understand why pre-modern tribes reacted to being filmed and shown the footage as if the camera were stealing their souls. Except I *like* the idea instead of being horrified by it. Unfortunately, it's as untrue for AIs as for film cameras.
In fact, gpt2 has helped me clarify exactly why I think the moral/metaphysical panic around AI in general, and AGI in particular, is easily the silliest thing I’ll see in my lifetime. It’s the angels-on-a-pinhead concern of our time. Not even wrong.
AI won’t steal our souls.
AI won’t terk ehr jerbs.
AI won’t create new kinds of risk.
AI isn’t A or I.
I’d call it “cognitive optics”... it’s a bunch of lenses and mirrors that reflect, refract, and amplify human cognition. AIs think in the same way telescopes see. I.e., they don’t.
“AIs reflect/reproduce our biases” misses the point by suggesting you could prevent that. That’s ALL they (deep learning algos) do. Biases are the building blocks. Take that out and there’s nothing left. E.g., gpt2 has picked up on my bias towards words like “embody” and “archetype.”
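A toy sketch of that last point: a language model as simple as a unigram sampler is nothing but the word frequencies of its corpus, so an author's pet words surface in its output by construction. The corpus below is invented for illustration.

```python
import random
from collections import Counter

# Hypothetical fine-tuning corpus that leans on words like
# "embody" and "archetype" far more than ordinary English does.
corpus = (
    "every archetype tends to embody a pattern "
    "and every pattern tends to embody an archetype"
).split()

# A unigram "model" is just the corpus's relative word frequencies;
# take those biases out and nothing is left to sample from.
counts = Counter(corpus)
total = sum(counts.values())
words = list(counts)
weights = [counts[w] / total for w in words]

# Sampling from the model reproduces the author's lexical bias.
print(" ".join(random.choices(words, weights=weights, k=20)))
```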
Taking the bias out of AI is like taking the economy out of a planned economy. You’re left with regulators and planners with nothing left to regulate and plan. Or like taking the lenses and mirrors out of a telescope. You’re left with a bunch of tubes with distortion-free 1x magnification.
Replying to
I'm not sure what you're onto here. AI tends to optimize for the average (minimizing mean squared error, to be technical). So it'll pick up biases that are statistically consistent. People are racist idiots, so models trained on people act equally shitty, for instance.
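To make "optimizes for the average" concrete: under mean squared error, the best constant prediction for a set of targets is exactly their mean, so whatever the statistically typical example does dominates the fit. A minimal sketch with made-up numbers:

```python
import numpy as np

# Made-up targets; imagine labels aggregated from many people.
y = np.array([1.0, 2.0, 2.0, 3.0, 10.0])

def mse(c):
    """Mean squared error of predicting the constant c everywhere."""
    return np.mean((y - c) ** 2)

# Scan candidate constants; the minimizer lands on the mean.
candidates = np.linspace(y.min(), y.max(), 1001)
best = candidates[np.argmin([mse(c) for c in candidates])]
print(best, y.mean())  # both ~3.6: MSE optimizes for the average
```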
Replying to
Biases are just emulated patterns with political significance. Picking up on my word frequencies is the same mechanism, but classified as “function” in emulating me.
Replying to
Models picking up on your language pattern is just models getting conditional probabilities right. But your language pattern is related to the desired output, so that's fine. Bias and politics are joined through human tribalism, but bias is an apolitical concept.
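The "getting conditional probabilities right" mechanism, in miniature: a bigram model emulates an author's word transitions simply by counting them. The corpus is again invented.

```python
from collections import Counter, defaultdict

# Hypothetical author corpus with a distinctive vocabulary.
corpus = "ideas embody archetypes and archetypes embody ideas".split()

# Estimate P(next word | current word) from raw bigram counts --
# the model "emulates" the author by getting these right.
bigrams = Counter(zip(corpus, corpus[1:]))
totals = Counter(corpus[:-1])
cond = defaultdict(dict)
for (w1, w2), n in bigrams.items():
    cond[w1][w2] = n / totals[w1]

print(cond["embody"])  # {'archetypes': 0.5, 'ideas': 0.5}
```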
Replying to
The problem with models picking up bias is that: 1) it scales up the bias, and 2) most models are retrained on past data related to model output, so it builds feedback loops around the bias (bias silently goes from being unrelated to the output to related, through past models).
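The feedback loop in point 2 can be simulated: retrain each generation on a sample of the previous model's output, add a slight selection pressure toward the majority label, and a small initial skew compounds. All numbers below are invented.

```python
import random

random.seed(0)
p_a = 0.55  # initial share of label A: a slight, consistent skew

for gen in range(10):
    # Next generation's training data is sampled from the current
    # model's output distribution...
    sample = [random.random() < p_a for _ in range(500)]
    p_a = sum(sample) / len(sample)
    # ...plus a mild selection effect favoring the majority label
    # (e.g. majority-consistent outputs are kept more often).
    p_a = min(1.0, p_a * 1.05)
    print(f"generation {gen}: P(A) = {p_a:.3f}")
# P(A) drifts toward 1.0: the bias amplifies itself through retraining.
```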