AI bias: word2vec data encodes racism & sexism https://arxiv.org/abs/1606.06121 need more diversity http://www.bloomberg.com/news/articles/2016-06-23/artificial-intelligence-has-a-sea-of-dudes-problem pic.twitter.com/odbhY3CwCp
We should tweak them not to, as this paper describes, I feel?
No, ML algorithms should tell the truth, and we should decide what to do with it, not tamper with the evidence.
this is the entire point of decision-theoretic classification: you weight predictions to avoid high-cost mistakes.
applying a raw probability model without considering utilities of outcomes is asking to make expensive mistakes.
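The decision-theoretic point above can be sketched in a few lines: instead of thresholding a raw probability at 0.5, you pick the action with the lowest expected cost. A minimal toy example (the cost matrix values here are illustrative assumptions, not anything from the thread):

```python
import numpy as np

# Rows: true class, columns: chosen action.
# Illustrative asymmetric costs: missing class 1 (a false negative)
# is 10x more expensive than a false positive.
cost = np.array([[0.0, 1.0],    # true class 0: act 0 costs 0, act 1 costs 1
                 [10.0, 0.0]])  # true class 1: act 0 costs 10, act 1 costs 0

def decide(p1, cost):
    """Pick the action with the lowest expected cost, given P(class=1) = p1."""
    p = np.array([1.0 - p1, p1])  # probability distribution over true classes
    expected = p @ cost           # expected cost of each action
    return int(np.argmin(expected))

# A raw 0.5 threshold would predict class 0 at p1 = 0.2, but the
# asymmetric costs flip the decision well below 0.5.
print(decide(0.2, cost))   # -> 1
print(decide(0.05, cost))  # -> 0
```

With these costs the decision boundary sits at p1 = 1/11 rather than 0.5, which is exactly the "weight predictions to avoid high-cost mistakes" idea.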
Or a reflection of a biased training set, and removing it might be a legal requirement. ML can help: http://arxiv.org/abs/1511.05897
Not sure if it's worse that you think black_male:assault white male:entitled_to is reality or that a news corpus is reality.
Forgive me @pmddomingos, but this notion of "truth" seems simplistic. Is it "true" that professor - associate = man - woman? @jackclarkSF
It is true only in the sense that these are the embeddings that best predict skipgrams, not true meanings of words
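Analogies like "professor − associate ≈ man − woman" come from vector-offset arithmetic over the learned embeddings: the answer to "a is to b as c is to ?" is the word whose vector lies closest to v(b) − v(a) + v(c). A toy sketch with hand-set 3-d vectors (contrived for illustration; real word2vec vectors are learned from skip-gram co-occurrence, not set by hand):

```python
import numpy as np

# Contrived toy "embeddings" chosen so the classic analogy works.
emb = {
    "man":   np.array([1.0, 0.0, 0.2]),
    "woman": np.array([0.0, 1.0, 0.2]),
    "king":  np.array([1.0, 0.0, 0.9]),
    "queen": np.array([0.0, 1.0, 0.9]),
    "apple": np.array([0.5, 0.5, 0.1]),  # distractor word
}

def analogy(a, b, c):
    """Answer 'a is to b as c is to ?' via the offset v(b) - v(a) + v(c)."""
    target = emb[b] - emb[a] + emb[c]
    def cos(u, v):
        return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
    # Exclude the query words themselves, as is conventional.
    candidates = {w: v for w, v in emb.items() if w not in {a, b, c}}
    return max(candidates, key=lambda w: cos(candidates[w], target))

print(analogy("man", "woman", "king"))  # -> queen
```

The embeddings are fit to predict skip-gram contexts, so these offsets reflect co-occurrence regularities in the corpus, which is the sense of "true" being debated above.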