@schock says: algorithms should not be “color blind” (an equality model that works best for people in power). Algorithms should be just (an equity model that takes history & intersectionality into account). #datajustice18 pic.twitter.com/JweD79JhNi
What I mean by mature is: we already have laws that say if you threaten to kill somebody and it's a credible threat, it's a crime (at least here in the UK), and that is obviously on the spectrum of "predictions" and "punishing before the act has taken place".
Same with terrorism. People in the UK get caught before they have done anything, while they are still in the planning stage. This is on the spectrum of predictions.
We need to have a non-hyperbolic and mature discussion about how data richer than just "we saw him buying acid" is used to stop crime and/or define a new category of crime. And yes, of course, as with all laws and everything everywhere, we need to be less racist about it.
Many of the most telling predictive crime variables are specific to the community in which crimes occur, and sometimes that difference is manifestly evident on a block-by-block basis. What about inputs based on what people in that community know?
I guarantee they know more than even the local police about what is really a predictive cue or variable in that context. On the larger issue: science and scientists have for so long been enamoured with creating stuff that we gave little if any thought to the ethical issues that would arise
once our "whatever it is" was gifted to civil society. Issues related to policy and the use of that "gift" didn't occupy our thinking much. And civil society, government, and business, seeing a capitalist, medical, or military market for said gift, rushed in. By the time we realize
that discussion is needed, it's all over, e.g. the issues with wired medical implants: easily hackable, and the grid: vulnerable to the max, so far. I'm just wondering if there's a more temperate way to develop what society then uses. We could start by presenting ethics and policy
recommendations with each developed product, because let's face it, no one knows the product and its potential for abuse better than the scientist who developed it. The reach and impact of ML, AI, and predictive modeling is global, and so is the impact of who has it and how it is used.