it seems like an interesting failure mode for systems with embedded AIs. no mechanical failure, just "the ship-mind made a bad choice", which is not particularly different from the bad choices humans with executive control of systems make.
not suggesting it has personhood. I'm suggesting personhood is irrelevant when considering moral agency at the level of public-safety regulation. we don't need to know what its qualia are like to know that it is making morally relevant decisions.
-
I sort of agree. But I think it’s still a designed product, and the onus is on the designer. They don’t have to know every risk, but practically speaking, the risk envelope needs to be understood, conveyed to buyers, and insured.
-
exactly. we do agree on this. regulatory policy ought to be based on harm potential and the cost of harm mitigation.