In a recent work we report an intriguing finding for detecting bias: assess the hardness of different samples for a #DeepLearning model. It's a first step; using it to fix bias is much harder. https://twitter.com/animaanandkumar/status/1203090855097057280
More importantly, our work shows that the #DeepLearning model decides which samples are hard for it to classify, and in doing so introduces bias. Here I use bias to mean disparate treatment, not statistical bias. The model leaves harder examples with worse accuracy and more vulnerability to noise.
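One common way to operationalize the "hardness" of a sample, sketched here as my own illustration rather than the paper's exact method, is to score each example by the model's per-sample loss: examples the model assigns low probability to their true class are the "hard" ones that, per the thread, end up with worse accuracy and more noise sensitivity. The function and toy numbers below are hypothetical.

```python
import numpy as np

def per_sample_hardness(probs, labels):
    """Cross-entropy loss per example; higher value = harder sample.

    probs:  (n, k) array of predicted class probabilities
    labels: (n,)   array of integer class labels
    """
    eps = 1e-12  # avoid log(0)
    p_true = probs[np.arange(len(labels)), labels]
    return -np.log(p_true + eps)

# Toy predictions from a hypothetical binary classifier
probs = np.array([[0.9, 0.1],   # confident and correct -> easy
                  [0.6, 0.4],   # less confident        -> harder
                  [0.3, 0.7]])  # leaning wrong         -> hardest
labels = np.array([0, 0, 0])

hardness = per_sample_hardness(probs, labels)
ranking = np.argsort(-hardness)  # indices of samples, hardest first
```

Auditing which subgroups are over-represented among the high-hardness samples is then one way to surface the disparate treatment the thread describes.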
So which #DeepLearning model you choose changes the bias introduced. We show that older models like AlexNet are worse than newer models like ResNet, so that's a good thing. @BeidiChen @Anshumali_ @animesh_garg @jankautz @NvidiaAI
It's important for our #AI leaders to acknowledge that #DeepLearning makes it harder to deal with bias: the models themselves introduce bias. It's important for our leaders not to be so dismissive of the deep work happening in this area. They should listen and learn from others.
End of conversation
New conversation
Given that bias itself is essentially a decision of good/bad toward someone or something (with an implied opposite for others), whether an algorithm decides or a human decides, the decision remains biased as long as it is based on the exclusion of someone or something. Agree?
I think ML models work because they learn based on biases. The problem arises when we use ML to decide for humans, like how much credit one should get. A fat tumor will not complain that ML is biased against it for tagging it as malignant. However ... (1/2)
New conversation
That's a very fair point on DL and bias. But isn't it still harder to fix bias in people, comparatively?
Just answered your question as an addition to my thread.
End of conversation
New conversation
I agree with you. We encountered that bias at Amazon in certain datasets and projects. I remember having the same arguments on the PE forums there at the time, with some people claiming bias would somehow just vanish from the data because of ML magic. I never understood their arguments.
I will most remember being an L8 #Technologist getting yelled at by an L8 #AppliedScientist for trying to build what they'd need in the next year instead of making myself a glorified #codemonkey for the #AppliedScientist clique. In the end, they needed exactly what we built.
End of conversation