AI bias: word2vec data encodes racism & sexism https://arxiv.org/abs/1606.06121 need more diversity http://www.bloomberg.com/news/articles/2016-06-23/artificial-intelligence-has-a-sea-of-dudes-problem pic.twitter.com/odbhY3CwCp
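To make the linked claim concrete, here is a minimal sketch of the kind of analogy query that surfaces gender bias in pretrained embeddings, assuming gensim and a local copy of the public GoogleNews word2vec binary; the query words are illustrative, not the paper's exact evaluation:

```python
# Minimal sketch: probing pretrained word2vec vectors for biased analogies.
# Assumes gensim is installed and the public GoogleNews vectors (~3.5 GB
# binary) have been downloaded to the working directory.
from gensim.models import KeyedVectors

vecs = KeyedVectors.load_word2vec_format(
    "GoogleNews-vectors-negative300.bin", binary=True)

# Analogy query "man : computer_programmer :: woman : ?".
# On these vectors, stereotyped completions such as "homemaker" tend to
# rank highly, which is the kind of encoded bias the paper quantifies.
print(vecs.most_similar(positive=["woman", "computer_programmer"],
                        negative=["man"], topn=5))
```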
The model should depend on our objective function and background knowledge, as in any ML application.
Societal norms are not an objective "truth". ML models furthering racist mindsets (pun intended) should be "fixed".
Depends: do you want the model to learn language as it's used, or language as you wish it were used?
That's why I like the approach in the paper: develop tools to post-process the data, so you can then generate an additional, debiased dataset.
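A minimal sketch of that post-processing idea, in the spirit of the paper's "neutralize" step; the choice of bias direction and the word lists you would apply it to are assumptions here, not the authors' exact pipeline:

```python
import numpy as np

def neutralize(vec, bias_dir):
    # Remove the component of `vec` along the normalized bias direction,
    # e.g. bias_dir = vecs["he"] - vecs["she"]. The result is orthogonal
    # to that direction, so the bias pair is equidistant from it.
    b = bias_dir / np.linalg.norm(bias_dir)
    return vec - np.dot(vec, b) * b

# Applying this post-hoc to, say, occupation words yields an additional,
# debiased set of vectors without retraining the embedding model.
```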
Better to have choices for appropriate applications? E.g. Tay being able to scan its own corpus and squelch perceived racism.
Here I follow your argument, but I'd say this use of the word "truth" should be replaced or defined more narrowly.
Concur, since the underlying data itself may be biased.