Top-level takeaway of tonight's catch-up reading on ML... I still find it hard to take AI Risk seriously as a special problem here. There are definite risks but they don't seem qualitatively different from other kinds of engineering risk.
-
-
That's why I'm also fairly concerned about those: very complex systems with many cases to check. But 1) AI is this, only more so, and 2) we spend billions every year on those two, yet could count all AI safety researchers on two hands.
-
That's my point. We actually spend a lot more if you classify it right. All infosec research is also "AI risk" research, for example, if you conceptualize it correctly without anthropomorphizing. If you buy into an arbitrary, quasi-religious boundary, then yes, there are few people inside it.