I don't think "real" AI is likely in the near term, but people underestimate how the tools developed toward that goal can be used against humans
-
from a utilitarian view, replacing human drivers with such systems is strictly better: they are safer in aggregate, even though they may doom the occasional outlier
-
now instead of drivers imagine a comparable system replacing juries
-
the only fantastical part of this is the system working well, for the given definition of "well"; inferior programs are already used for bail/parole decisions
-
woke types accuse these systems of amplifying human biases; hbd types say they reveal biases as truths. I don't think either is quite right
-
assuming a reasonably well-designed system, it would likely handle the vast majority of cases better than the average human jury and fail spectacularly on outliers
-
this would strike most as a shocking abrogation of (the ideal of) justice, as well as instill a more material fear in said outliers
-
but the way the question will be posed is, do the aggregate effects justify the collateral damage?
-
and, given the track record, I expect most will answer in the affirmative