Humans can be held accountable for their choices behind the wheel; robots cannot.
"Held accountable" means punished. Would you be okay with AI drivers if the AI could suffer, and therefore be punished?
New conversation
I think the issue is whether you are programming/training the 'AI' (it's not even what I think of as a real AI, with any holistic aspect) to make an 'informed' choice in that situation. Not applicable today, given the current kludgy state of how the technology interprets its environment...
There are still reprehensible programming choices, like deciding not to categorize anything in the road as human if no crosswalk is seen (to cut down on false positives); that (plus inattention of the minder) was how the woman walking a bicycle in AZ was hit.
End of conversation
New conversation
I think most arguments I've heard are about who assumes liability: 'driver' or manufacturer? Human-caused traffic accidents have a fairly obvious answer. If self-driven vehicles must make such decisions, the moral calculus is obviously the same as for people.

But then, people usually have accidents because they're shit drivers and react slower than we're told these vehicles will (eventually). The moral question isn't very potent unless there is in fact a decision to be made.
End of conversation