I don't think "real" AI is likely in the near term, but people underestimate how the tools developed toward that goal can be used against humans
-
-
see how the priestly class went absolutely insane during the election, totally isolated from reality. and that was only people doing that to each other
-
new cyberpunk aesthetic: a massive but uncreative government automated propaganda wing waging war against rogue neural nets poisoning the well
-
the sentient computer meme is at present pure fantasy, computers are best understood as force multipliers, the longest lever yet produced
-
how long until hackers start popping boxes not to get their hands on password hashes but on training data?
-
the fundamental constraint on applications is the bias/variance tradeoff: whether you underfit or overfit your inputs
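A minimal sketch of that tradeoff, assuming numpy and made-up noisy data (the degrees, noise level, and sample sizes are arbitrary): a degree-1 polynomial underfits (high bias), a degree-15 polynomial typically overfits (high variance).

```python
import numpy as np

# Toy data: noisy samples of a sine curve (all numbers here are assumptions).
rng = np.random.default_rng(0)
x_train = np.sort(rng.uniform(0, 1, 30))
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0, 0.2, 30)
x_test = np.sort(rng.uniform(0, 1, 30))
y_test = np.sin(2 * np.pi * x_test) + rng.normal(0, 0.2, 30)

for degree in (1, 3, 15):
    coeffs = np.polyfit(x_train, y_train, degree)
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    # degree 1 underfits: both errors stay high (bias).
    # degree 15 overfits: train error drops while test error typically blows up (variance).
    print(f"degree {degree:2d}  train MSE {train_mse:.3f}  test MSE {test_mse:.3f}")
```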
-
imagine a hypothetical in which computer vision is perfected and the main challenge of self-driving cars is decision-making
-
reasonable to think they would outperform the worst drivers in all cases, and the best drivers in typical cases
-
failing in weird (ie, rare) edge cases that humans above a certain skill threshold could handle competently
-
from a utilitarian view this is strictly better as they are safer in aggregate even though they may doom the occasional outlier
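A toy expected-value calculation with made-up risk numbers (nothing here is real data) shows how the aggregate can favor the machine even when it is far worse in the rare edge case:

```python
# Hypothetical fatality risk per million trips; shares and risks are assumptions.
typical_share, edge_share = 0.999, 0.001       # how often each kind of situation occurs

human_risk   = {"typical": 0.5, "edge": 5.0}   # averaged over good and bad drivers
machine_risk = {"typical": 0.1, "edge": 50.0}  # better in the typical case, much worse in the rare one

def expected(risk):
    # Risk weighted by how often each situation actually occurs.
    return typical_share * risk["typical"] + edge_share * risk["edge"]

print("human aggregate  :", expected(human_risk))    # 0.5045
print("machine aggregate:", expected(machine_risk))  # 0.1499, safer overall despite the edge-case failures
```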
-
now instead of drivers imagine a comparable system replacing juries
-
the only fantastical part of this is the system working well for the given definition of "well"; inferior programs are already used for bail/parole decisions
-
woke types accuse these systems of amplifying human biases, hbd types say they reveal biases as truths. I don't think either is quite right
-
assuming a reasonably well-designed system, it would likely handle the vast majority of cases better than the average jury and fail spectacularly on outliers
-
this would strike most as a shocking abrogation of (the ideal of) justice, as well as put a more material fear in said outliers
-
but the way the question will be posed is, do the aggregate effects justify the collateral damage?
-
and, given the track record, I expect most will answer in the affirmative
-
New conversation -
-
-
My son (13) was recently involved in a formal debate on AI. Afterward, we had a discussion about it. It might not be AI itself, but...
-
...human reaction/response to AI-generated information that is worth fearing.
End of conversation
New conversation -
-
-
Do you mean that we aren't seeing what's happening, or that it could be a lot worse?