Thanks for the voice. Yet there are folks who really don’t understand how model and data bias distort our decision making in the real world. Meanwhile, models built with such biases will be deployed in the real world without liability, accountability, or validation.

Yes, the technology is flawed. Zooming out a bit, law enforcement organizations deploying face recognition lack the robust processes required to apply the technology responsibly. False arrests reveal insufficient human follow-up on face matches. https://arxiv.org/abs/1811.10840
Even if the technology were unbiased, careful human oversight and robust processes would be required, because these systems will always make mistakes.
I am not sure if it's an #AI problem or a data problem. I wouldn't say the technology isn't ready; rather, it needs stricter supervision (human in the loop, that kind of thing). Maybe we can say a stricter audit of results is needed rather than blind trust?
It's first of all a societal problem: the bias is already built into the current law enforcement process, and just adding an algorithm to that will do absolutely nothing to remove the underlying bias.
