I disagree. It's difficult to understand and fix bias through #AI algorithms for #DeepLearning. These are highly non-linear black-box models, and they amplify biases. Many works show that trying to fix them superficially is like putting lipstick on a pig. We need fresh thinking. https://twitter.com/ylecun/status/1203211859366576128
-
Prof. Anima Anandkumar Retweeted
In a recent work we report an intriguing finding for detecting bias: assess the hardness of different samples for a #DeepLearning model. It's a first step; using it to fix bias is much harder. https://twitter.com/animaanandkumar/status/1203090855097057280

Prof. Anima Anandkumar added,
Prof. Anima Anandkumar @AnimaAnandkumar: Intriguing measure of #bias in #DeepLearning. We find angular distance is a robust and universal measure of the hardness of a training example, and it corresponds with human ambiguity. @beidichen @animesh_garg @jankautz @NvidiaAI https://twitter.com/Deep__AI/status/1203028047651205120
-
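The thread does not spell out how the angular measure is computed, so here is a minimal sketch of one plausible reading: hardness as the angle between a sample's feature embedding and its true class's weight vector, normalized by the sum of angles to all class weight vectors. Every name here (`angular_hardness`, the toy vectors) is an illustrative assumption, not the paper's actual code.

```python
import math
import random

def angle(u, v):
    """Angle in radians between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    cos = max(-1.0, min(1.0, dot / (nu * nv)))  # clamp for float safety
    return math.acos(cos)

def angular_hardness(feature, class_weights, label):
    """Angle to the true class's weight vector, normalized by the
    sum of angles to all classes; larger values = harder example.
    This normalization is an assumption for the sketch."""
    angles = [angle(feature, w) for w in class_weights]
    return angles[label] / sum(angles)

# Toy setup: 3 classes, 4-dim features.
random.seed(0)
W = [[random.gauss(0, 1) for _ in range(4)] for _ in range(3)]
x_easy = [w + 0.01 * random.gauss(0, 1) for w in W[1]]  # nearly aligned with class 1
x_hard = [random.gauss(0, 1) for _ in range(4)]         # random direction

print(angular_hardness(x_easy, W, 1), angular_hardness(x_hard, W, 1))
```

Under this reading, an example well aligned with its class prototype scores near 0, while an ambiguous one sitting between classes scores higher.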
More importantly, our work shows that the #DeepLearning model is deciding which samples are hard for it to classify, thereby introducing bias. Here I use bias to mean disparate treatment, not statistical bias. The model leaves harder examples with worse accuracy and more vulnerability to noise.
-
We believe it is aleatoric uncertainty, since it has a statistically significant correlation with human selection frequency; the latter is a good surrogate for aleatoric uncertainty. But we also find that the angular measure improves with better models (e.g., ResNet vs. AlexNet).