Amusingly, this is actually a problem with the ML model. (Probably? It's not totally clear what percentage of under-paid human annotators would classify image 1 as not a gun.)
Nope, that's not how this works. It's not like the training data contains tons of handheld thermometer pics that are getting mislabeled; it's far more likely that, due to training data biases, the model has built-in associations between dark skin and guns. https://twitter.com/bjnagel/status/1245300089226174465
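[A toy sketch to make that mechanism concrete. Nothing below is from the thread or from the actual model: the two features (holds_metal_object, dark_skin_tone) and the sampling probabilities are hypothetical, and scikit-learn's LogisticRegression stands in for the real classifier. The point it illustrates is the one in the reply above: a bias in which photos get collected and labeled, not mislabeled thermometers, can make a model attach "gun" to a skin-tone signal.]

# Toy sketch (hypothetical features, not the real system): a sampling bias in
# the training photos teaches a classifier a spurious skin-tone -> "gun" link.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def sample(n, p_dark_given_gun, p_dark_given_other):
    # Labels: 1 = gun, 0 = other handheld object.
    y = rng.integers(0, 2, n)
    # Guns are metal objects; some non-guns (e.g. thermometers) are too.
    holds_metal = (y == 1) | (rng.random(n) < 0.3)
    # Skin tone is correlated with the label only via how photos were collected.
    p_dark = np.where(y == 1, p_dark_given_gun, p_dark_given_other)
    dark_skin = rng.random(n) < p_dark
    X = np.column_stack([holds_metal, dark_skin]).astype(float)
    return X, y

# Biased corpus: photos labeled "gun" disproportionately feature dark skin.
X_train, y_train = sample(10_000, p_dark_given_gun=0.8, p_dark_given_other=0.2)
clf = LogisticRegression().fit(X_train, y_train)

# Same thermometer (metal object, not a gun), held by a dark- vs light-skinned hand.
thermometer_dark = np.array([[1.0, 1.0]])
thermometer_light = np.array([[1.0, 0.0]])
print("P(gun | dark hand) =", clf.predict_proba(thermometer_dark)[0, 1])
print("P(gun | light hand) =", clf.predict_proba(thermometer_light)[0, 1])
# The gun probability is higher for the dark-skinned hand purely because the
# model learned to lean on the skin-tone feature from the biased training set.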
yeah not a great look for that guy
An ML “engineer”
Training data comes from human biases. This is also the only really valid point in the “robots will take over” trope, because the only personalities we know of that want to take over the world are actually just shitty humans.
It’s not my domain, but I’ve been reading this and it’s good: https://www.amazon.com/Ethical-Algorithm-Science-Socially-Design-ebook/dp/B07XLTXBXV The authors have also been giving a related talk; for the gist: https://youtu.be/tmC9JdKc3sA
This picture doesn't disprove bad training data as the problem.


