I don't think "real" AI is likely in the near term, but people underestimate how the tools developed toward that goal can be used against humans
-
-
reasonable to think self-driving systems would outperform the worst drivers in all cases, and the best drivers in typical cases
-
while failing in weird (i.e., rare) edge cases that humans above a certain skill threshold could handle competently
-
from a utilitarian view this is strictly better as they are safer in aggregate even though they may doom the occasional outlier
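a minimal sketch of that aggregate argument, with made-up numbers purely for illustration: even if the automated system is catastrophic inside the rare edge cases it mishandles, the expected harm per trip can still come out lower than the human baseline

```python
# Toy expected-harm comparison; all numbers are hypothetical, not real data.
# Human drivers: a modest crash risk spread over every trip.
# Automated drivers: near-zero risk on typical trips, but a rare class of
# edge cases they handle far worse than a competent human would.

human_crash_rate = 2e-6          # crashes per trip, hypothetical
human_crash_severity = 1.0       # arbitrary harm units per crash

edge_case_rate = 1e-4            # fraction of trips that are "weird" edge cases
auto_typical_crash_rate = 2e-7   # much safer than humans on typical trips
auto_edge_crash_rate = 5e-3      # much worse than humans on the weird ones
auto_crash_severity = 1.0

human_expected_harm = human_crash_rate * human_crash_severity
auto_expected_harm = (
    (1 - edge_case_rate) * auto_typical_crash_rate
    + edge_case_rate * auto_edge_crash_rate
) * auto_crash_severity

print(f"human per-trip expected harm: {human_expected_harm:.2e}")
print(f"auto  per-trip expected harm: {auto_expected_harm:.2e}")
# With these numbers the automated system wins in aggregate (~7e-7 vs 2e-6)
# even though it is orders of magnitude worse than humans inside the edge cases.
```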
-
now instead of drivers imagine a comparable system replacing juries
-
only fantastical part of this is the system working well for the given definition of "well"; inferior programs are already used for bail/parole decisions
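those bail/parole tools are, at bottom, statistical risk scores with a decision threshold. this toy sketch (hypothetical features, weights, and threshold; not COMPAS or any real product) shows the general shape: the fitted weights encode regularities in past cases, so typical defendants are scored sensibly while atypical ones get whatever the weights happen to say

```python
import math

# Hypothetical risk-score sketch with made-up features and weights:
# a logistic score over a few case attributes, thresholded into a recommendation.

WEIGHTS = {"prior_convictions": 0.6, "age": -0.03, "failed_appearances": 0.8}
BIAS = -1.5
THRESHOLD = 0.5  # above this, recommend detention / deny parole

def risk_score(case: dict) -> float:
    # Weighted sum of case features, squashed to [0, 1] by the logistic function.
    z = BIAS + sum(WEIGHTS[k] * case[k] for k in WEIGHTS)
    return 1 / (1 + math.exp(-z))

def recommend(case: dict) -> str:
    return "detain" if risk_score(case) >= THRESHOLD else "release"

typical = {"prior_convictions": 0, "age": 35, "failed_appearances": 0}
outlier = {"prior_convictions": 4, "age": 22, "failed_appearances": 2}  # rare profile

for name, case in [("typical", typical), ("outlier", outlier)]:
    print(name, round(risk_score(case), 2), recommend(case))
```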
-
woke types accuse these systems of amplifying human biases; hbd types say they reveal biases as truths. I don't think either is quite right
-
assuming a reasonably well-designed system, it would likely handle the vast majority of cases better than the human average and fail spectacularly on outliers
-
this would strike most as a shocking abrogation of (the ideal of) justice, as well as instill a more material fear in said outliers
-
but the way the question will be posed is: do the aggregate effects justify the collateral damage?
-
and, given the track record, I expect most will answer in the affirmative
-
-
Don't need perfection. Current algorithms are already better drivers than humans in practice, because they never lose attention.