Ever wondered when you shouldn't use a fair ML model? Pleased to share our new paper at #FAT2020, "Fairness Warnings and Fair-MAML: Learning Fairly with Minimal Data" (w/ @kdphd, twitterless Emile) https://arxiv.org/abs/1908.09092 where we investigate such questions.
We (1) introduce Fairness Warnings, a method that suggests interpretable boundary conditions under which a fairly trained model may behave unfairly, and (2) propose Fair-MAML, a meta-learning approach to training fair models from only a few fine-tuning instances.