Yesterday, I ended up in a debate where the position was "algorithmic bias is a data problem". I thought this had already been well refuted within our research community but clearly not. So, to say it yet again -- it is not just the data. The model matters. 1/n
No. ML problems are in general high-dimensional and non-convex, which means the optimization process is path-dependent and depends critically on the architectural choices and the algorithm used. If you fail at "edge cases" (i.e. minorities), that's incompetence, not objectivity.
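A toy sketch of the path-dependence point above (my own example, not from the thread): plain gradient descent on a non-convex one-dimensional loss ends up in different minima depending only on where it starts, even though the "training procedure" is identical.

```python
# Non-convex loss f(w) = w**4 - 3*w**2 + w: a double well with two
# distinct local minima. The same optimizer, run from two different
# initializations, converges to two different solutions.

def grad(w):
    # derivative of f(w) = w**4 - 3*w**2 + w
    return 4 * w**3 - 6 * w + 1

def descend(w, lr=0.01, steps=2000):
    for _ in range(steps):
        w -= lr * grad(w)
    return w

left = descend(-2.0)   # settles in the left basin (w < 0)
right = descend(2.0)   # settles in the right basin (w > 0)
print(left, right)
```

Both end points are valid minima of the same objective; which one you get is decided by the optimization path, not by the data.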
That's true when the goal is training a model then evaluating it against a fixed test set. When models are deployed to production, two different models with the same test performance can have very different outputs for new examples from mis- or under-represented categories and contexts.
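A minimal illustration of that claim (my own toy numbers, not from the thread): two linear classifiers that score identically on a fixed test set yet disagree on a point outside the region the test set covers.

```python
# Two linear classifiers with different weight vectors. Both get 100%
# accuracy on the test set, but they carve up the input space
# differently, so they diverge on inputs the test set never probes.

test_set = [((1, 0), 1), ((2, 1), 1), ((-1, 0), 0), ((-2, -1), 0)]

def predict(w, x):
    return 1 if w[0] * x[0] + w[1] * x[1] > 0 else 0

w_a = (1.0, 0.0)   # decision boundary: x1 = 0
w_b = (2.0, -1.0)  # different boundary, same test accuracy

acc_a = sum(predict(w_a, x) == y for x, y in test_set) / len(test_set)
acc_b = sum(predict(w_b, x) == y for x, y in test_set) / len(test_set)
# acc_a == acc_b == 1.0: indistinguishable on the test set

new_x = (0.5, 2.0)  # a region with no nearby test examples
print(predict(w_a, new_x), predict(w_b, new_x))  # the models disagree here
```

Equal test performance is a statement about the test distribution only; it says nothing about how the two models behave where the test set is silent.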
Various models achieve the same score on the objective but have different biases. So no, optimizing the objective is not a good reason for disparate impact.
1) "Objective" objectives don't exist. 2) An objective means minimizing or maximizing some metric. Depending on your chosen metric space (e.g. L1 vs. L2 norm), you decide a sample's relative gradient weight, and thus how much your optimization will alleviate or magnify data bias.
This is illuminating with regards to your over-confidence in the incorrect perspective you keep peddling. Even with this narrow definition, it’s still also wrong in subtle ways. But at least there’s a clear line of argument.