Expanding the analytic paradigm here could be helpful: suppose we could (approximately) define what bias in ML looks like from first principles. Then we could filter candidate ML decisions through this analytic filter, which would at least be transparent, though perhaps limiting.
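A minimal sketch of what one such analytic filter might look like, using demographic parity on model outputs as the stand-in definition of bias. Everything here is an assumption for illustration: the choice of parity gap as the criterion, the `max_gap` tolerance, and the shape of the input data are not from the thread.

```python
# Hypothetical "analytic filter": flag a batch of model decisions if the
# positive-outcome rate differs too much across a protected attribute
# (a demographic parity gap). The 0.1 tolerance is an arbitrary assumption.
from collections import defaultdict

def parity_gap(decisions):
    """decisions: iterable of (group_label, outcome) pairs, outcome in {0, 1}.
    Returns the largest difference in positive-outcome rate between groups."""
    pos = defaultdict(int)
    tot = defaultdict(int)
    for group, outcome in decisions:
        tot[group] += 1
        pos[group] += outcome
    rates = [pos[g] / tot[g] for g in tot]
    return max(rates) - min(rates)

def passes_filter(decisions, max_gap=0.1):
    # Accept the batch only if the parity gap is within tolerance.
    return parity_gap(decisions) <= max_gap
```

The transparency claimed above is real in this sketch (the criterion is a one-line formula anyone can audit), and so is the limitation: it encodes exactly one notion of fairness, and decisions that fail other definitions would sail through.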
-
-
If we had a fixed definition that interfaced with either the outputs or various stages of operation in ML algorithms, why couldn't we detect bias at scale? Surely you're not referring to any technical constraints. This hand-wringing feels similar to past arguments on the same topics...