The lack of awareness of AI ethics issues among AI practitioners has been an ongoing source of very real problems. On the other hand, I have yet to hear of any harm caused by making AI practitioners think about the implications of their work.
Likewise with regulations -- they should target *applications*, not research or technology in an abstract sense.
It's pretty sweet how we've fielded a bunch of information systems that have already reshaped our society without bothering to ask if it was a good idea.
That's a bit too easy. Any theoretical breakthrough in a field is as likely to have positive applications as negative ones. When you empower others, you have a responsibility for what is done with your work.
There's no way to define the term "very theoretical" precisely. And I have yet to see a positive impact of AI applications in general. The community boasts of breakthroughs in medicine, but society gets microtargeting, social credit scoring, and surveillance.
As 7th-graders, we used to write boilerplate essays on "Is Science a Boon or a Bane?" We went over disease eradication, atom bombs, the lengthening of lives, chemical warfare, etc. But all of those boilerplate essays concluded by saying: "Science in itself is not a boon or a bane." 1/3
"Only the applications of science can be blamed or praised." Simple stuff. 2/3