When software starts doing what a human used to do, it creates two issues. First, we have to accept probabilistic control failures from a logical system that is a black box, one we have no "theory of mind" for.
Second, it's more difficult for us to predict how a non-deterministic software system will behave in edge cases, and when it fails, the builder is liable rather than the operator.
Replying to @jstogdill @vgr
We do have an ethics framework for making that distinction, though. The responsibilities of the builder are to deliver a product that behaves according to design spec and applicable safety regulations. Operator errors within those bounds are not the fault of the builder.
Our bigger problem is that we don't have an ethics framework for considering the legal status of AI systems. We also don't have an ethics framework for considering the legal status of non-humans. We elide this deficiency in our ethics by considering all non-humans property.
Replying to @danlistensto @vgr
I think we can safely ignore this unless and until software systems develop consciousness / self-awareness. In the meantime we need to prepare to test, insure, and accept non-deterministic control systems.
Replying to @jstogdill @vgr
Can you tell me what the relevant distinction is between "consciousness" and a "non-deterministic control system"?
Replying to @danlistensto @vgr
Let me put it this way: if you want to think hard about when machines qualitatively deserve personhood, read Do Androids Dream of Electric Sheep again and enjoy the mental exercise. But in the meantime, I know a 2020 Volvo isn't that.
What it’s doing is making decisions we can’t fully predict during testing. But it’s not a person.
Replying to @jstogdill @vgr
Not suggesting it has personhood. I'm suggesting personhood is irrelevant when considering moral agency at the level of regulations for public safety. We don't need to know what its qualia are like to know that it is making morally relevant decisions.
Replying to @danlistensto @vgr
I sort of agree. But I think it’s still a designed product and the onus is on the designer. They don’t have to know every risk, but practically speaking the risk envelope needs to be understood, conveyed to buyers, and insured.
Exactly. We do agree on this. The regulatory policy ought to be based on harm potential and cost of harm mitigation.