@schock says: algorithms should not be "color blind" (an equality model that works best for people in power). Algorithms should be just (an equity model that takes history & intersectionality into account). #datajustice18 pic.twitter.com/JweD79JhNi
Problem is, algorithms are mathematical opinions. It's not only about who's writing them but who's using them, for what purpose, and with what inputs. Just like AI & ML learn from us via a multiplicity of massive datasets (God help us), algorithms, once in the users' hands, can be used to exponentially turn up the UGLY.
I may be wrong. But that's what I worry about most. It's already happening on smaller scales, but it's about to explode (see the story a few days back about the LAPD using predictive tech for policing).
Yeah, predictive stuff in and of itself is obviously something we as a society have to agree on, and I feel we haven't yet had that mature argument/discussion, let alone about ushering in predictive racist stuff.
What I mean by mature is: we already have laws that say "if you threaten to kill somebody and it's a real threat" it's a crime (at least here in the UK), and that is obviously on the spectrum of "predictions" and "punishing before the act has taken place".
Same with terrorism. People in the UK get caught before they have done anything, when they are in the planning stage. This is on the spectrum of predictions.
We need to have a non-hyperbolic and mature discussion about how richer data than just "we saw him buying acid" is used to stop crime and/or define a new category of crime. And yes, of course, as with all laws and everything everywhere, we need to be less racist about it.
Many of the most telling predictive crime variables are specific to the community in which crimes occur, and sometimes that difference is manifestly evident on a block-by-block basis. What about inputs based on what people in that community know?
I guarantee they know more than even the local police about what's really a predictive cue/variable in that context. On the larger issue: science/scientists have for so long been enamoured with creating stuff that we gave little if any thought to the ethical issues that would arise.
I think we can define two different failure modes based on how complex a working solution would need to be. Stuff like a soap dispenser not recognizing dark skin, or facial recognition not working for non-majority ethnicities, can be fixed by using more diverse training data.
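For that first failure mode, here's a minimal sketch of what "more diverse training data" can look like in practice: oversampling under-represented groups up to parity before training. The dataset, column names, and group labels below are hypothetical placeholders, not anything from this thread.

```python
# Minimal sketch: rebalance a training set so under-represented groups
# are not drowned out. All names and data here are made up for illustration.
import pandas as pd
from sklearn.utils import resample

def rebalance_by_group(df: pd.DataFrame, group_col: str = "group",
                       random_state: int = 0) -> pd.DataFrame:
    """Oversample every demographic group up to the size of the largest one."""
    target = df[group_col].value_counts().max()
    parts = [
        resample(part, replace=True, n_samples=target, random_state=random_state)
        for _, part in df.groupby(group_col)
    ]
    # Shuffle so the groups aren't blocked together in the training set.
    return pd.concat(parts).sample(frac=1, random_state=random_state)

# Toy example: group "B" is badly under-represented before rebalancing.
df = pd.DataFrame({
    "feature": range(12),
    "group":   ["A"] * 10 + ["B"] * 2,
    "label":   [1, 0] * 6,
})
balanced = rebalance_by_group(df)
print(balanced["group"].value_counts())  # A: 10, B: 10
```

Rebalancing (or collecting genuinely more diverse data) only addresses representation; it doesn't touch the harder, second failure mode below.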
Problems like a loan default prediction algo having an implicit bias are harder to solve, because the biases the algo would develop accurately reflect the real world (i.e., low-SES minorities are more likely to default). But the algo ethically needs to have these biases corrected for.
Making the algo "raceblind" isn't enough, because it would end up tracking race as a hidden variable (e.g., through home address or ethnic-sounding names). Interestingly enough, it seems like these problems would end up mirroring systemic racism in the real world.
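A quick illustration of that proxy effect, as a sketch with purely synthetic numbers (nothing here is a real default rate): drop the protected attribute entirely, train only on correlated features like a made-up zip-code flag, and the predictions still split along group lines.

```python
# Sketch of the "hidden variable" point: a model never shown `group`
# still leaks it through correlated proxy features. Synthetic data only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

group = rng.integers(0, 2, size=n)                            # protected attribute (0 or 1)
zip_code = (rng.random(n) < 0.2 + 0.6 * group).astype(int)    # proxy: correlates with group
income = rng.normal(50 - 10 * group, 15, size=n)              # another correlated feature
default = (rng.random(n) < 0.1 + 0.15 * group).astype(int)    # historical outcome gap

X_blind = np.column_stack([zip_code, income])  # note: `group` is NOT a feature
model = LogisticRegression().fit(X_blind, default)

pred = model.predict_proba(X_blind)[:, 1]
print("mean predicted default, group 0:", pred[group == 0].mean())
print("mean predicted default, group 1:", pred[group == 1].mean())
# The gap persists even though the model never saw `group` directly.
```

So "blindness" just hides the variable from the column list, not from the model; any correction has to be made deliberately, with the protected attribute in view.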
Yes, they would reflect the real world descriptively, but not with a real understanding of the world like a human has (or can have! Because of course some racists really are descriptive bigots).
I completely agree with you.