Expanding the analytic paradigm here could be helpful: suppose we could (approximately) define what bias in ML looks like from first principles. Then we could filter candidate ML decisions through this analytic filter, which is at least transparent, though perhaps limiting.
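To make the "analytic filter" idea concrete, here is a minimal sketch assuming one fixed, first-principles definition of bias: demographic parity, i.e. positive-decision rates across groups should differ by at most a tolerance. All function names and the tolerance value are hypothetical illustrations, not an established API or a claim about what the right definition is.

```python
# Hypothetical sketch: a transparent parity check used as a filter
# over a candidate set of ML decisions. Bias definition assumed here:
# demographic parity with a fixed tolerance.

def demographic_parity_gap(decisions, groups):
    """Largest difference in positive-decision rate between any two groups."""
    counts = {}
    for d, g in zip(decisions, groups):
        pos, total = counts.get(g, (0, 0))
        counts[g] = (pos + (1 if d else 0), total + 1)
    rates = [pos / total for pos, total in counts.values()]
    return max(rates) - min(rates)

def passes_filter(decisions, groups, tolerance=0.1):
    """Accept the candidate decisions only if the parity gap is small.

    The check is auditable: the definition and threshold are explicit,
    which is the transparency being argued for above -- though fixing
    one definition is exactly the limitation being debated.
    """
    return demographic_parity_gap(decisions, groups) <= tolerance
```

For example, decisions `[1, 1, 0, 1]` for group "a" versus `[1, 0, 0, 0]` for group "b" give rates 0.75 and 0.25, a gap of 0.5, so the filter rejects; equal rates pass.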
-
Unfairness is a construct left to us to define. Fix a definition. Do you think there is no first-principles case against bias toward, e.g., women in workplaces? Or are we just talking semantics? ML algorithms can be broken apart, their innards exposed as needed.
-
Gender and race are mostly agreed-upon categories, at least on the surface; that's the easier case. ML creates all sorts of detected categories we have no way of dealing with, because we could never detect them at scale before, and deciding which of them are relevant, to be protected, to be used, etc. is very thorny.