Top-level takeaway from tonight's catch-up reading on ML: I still find it hard to take AI Risk seriously as a special problem. There are definite risks, but they don't seem qualitatively different from other kinds of engineering risk.
it seems like an interesting failure mode for systems with embedded AIs: no mechanical failure, just "the ship-mind made a bad choice," which is not particularly different from the bad choices humans with executive control of systems make.
-
-
I think this nails it. We expect well-understood risk in our engineered systems: once every million miles, a tire will blow. We accept probabilistic risk from humans, but we don't test it as part of the system; those risks emerge with use.
-
When software starts doing what a human used to do, it creates two issues. First, we have to accept probabilistic control failures from a logical system that's a black box we have no "theory of mind" for.
-
Second, it's more difficult for us to predict how a non-deterministic software system will behave in edge cases, and when it fails, the builder is liable rather than the operator.
-
we do have an ethics framework for making that distinction, though. the builder's responsibility is to deliver a product that behaves according to its design spec and applicable safety regulations; operator errors within those bounds are not the builder's fault.
-
our bigger problem is that we don't have an ethics framework for considering the legal status of AI systems, or the legal status of non-humans generally. we elide this deficiency in our ethics by treating all non-humans as property.
-
I think we can safely ignore this unless and until software systems develop consciousness / self-awareness. In the meantime we need to prepare to test, insure, and accept non-deterministic control systems.
-
can you tell me what the relevant distinction is between "consciousness" and "non-deterministic control system"?
-
Let me put it this way: if you want to think hard about when machines qualitatively deserve personhood, read Do Androids Dream of Electric Sheep again and enjoy the mental exercise. But in the meantime, I know a 2020 Volvo isn't that.
-
discussion of AIs that have mysteriously far-reaching powers and metastasize like cancer across the internet still looks like pulpy sci-fi nightmare fuel to me, though.