Awareness of human consequences is a necessity in all scientific and engineering disciplines. It's even more important in fields that are "high leverage", where a very small team consisting entirely of engineers can make a big impact. Fields like CS, and AI in particular.
If your work has "impact", then by definition it is changing the world. You must then ask *how* the world is changing -- in which direction does your impact point? Who benefits and who loses out? Technological impact always has a moral direction.
I should add, the need for ethics awareness arises from the *applications* of AI. If your work is purely theoretical, it generally has no materialized impact yet, and its potential impact could go in any direction.
Likewise with regulations -- they should target *applications*, not research or technology in an abstract sense.
Isn't it better to be proactive than to wait for a Chernobyl-like incident to happen because of AI?
This Tweet is unavailable.
Except perhaps the harm those AI practitioners may themselves cause to others. Pedantic, maybe, but that's how my scale is weighted.
Here is the problem: individuals, if even slightly responsible, will always think about the implications. The problem starts when the implementation is done for an organization that sees it as a huge money-making opportunity. Profit will always be the priority in that case.