Top-level takeaway of tonight's catch-up reading on ML... I still find it hard to take AI Risk seriously as a special problem here. There are definite risks but they don't seem qualitatively different from other kinds of engineering risk.
That's my point. We actually spend a lot more if you classify right. All infosec research is also "AI risk" research, for example, if you conceptualize it correctly without anthropomorphizing. If you buy an arbitrary religious boundary, there are few people inside it.
-
-
Personally I look at AI risk research as consisting of robustness and/or alignment work (infosec is robustness) that can scale to ML-powered systems that are stronger, faster, and harder to understand than today’s (a lot of infosec work won’t).
-
there's a bunch of work that would be useful to mitigate the risk of AI related accidents (e.g. formal verification, infosec), but much less happening that's directly aimed at the big, less understood problems (e.g. value alignment)