Learning algorithms *are* biased.
I hope @ylecun's biases (which make him think otherwise) are easy to fix... https://twitter.com/ylecun/status/1203211859366576128
-
This raises an important issue. When research groups release pre-trained networks, they are insulated from the stakeholders of the downstream applications. The goal is to make it easier to build applications on top of the pre-trained representations. 1/
-
It seems to me the group releasing the pre-trained system has an obligation to create additional tools for enabling application engineers to identify and mitigate biases (and other errors) in the pre-trained system. 2/
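A minimal sketch of what one such audit tool could look like: a WEAT-style association check over a released embedding. The embedding dict, word lists, and vectors below are hypothetical placeholders; a real tool would load the released model instead.

```python
import numpy as np

# Hypothetical pre-trained embedding: a real audit tool would load this
# from the released checkpoint rather than hard-coding toy vectors.
embedding = {
    "doctor": np.array([0.9, 0.1, 0.3]),
    "nurse":  np.array([0.2, 0.8, 0.4]),
    "he":     np.array([0.8, 0.2, 0.1]),
    "she":    np.array([0.1, 0.9, 0.2]),
}

def cosine(u, v):
    # Cosine similarity between two embedding vectors.
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def association(word, attr_a, attr_b):
    """Mean similarity to attribute set A minus attribute set B.

    A strongly positive or negative score flags that the pre-trained
    vector for `word` leans toward one attribute group, a WEAT-style
    signal an application engineer can inspect before building on
    the representation.
    """
    sim_a = np.mean([cosine(embedding[word], embedding[a]) for a in attr_a])
    sim_b = np.mean([cosine(embedding[word], embedding[b]) for b in attr_b])
    return sim_a - sim_b

# Example audit: how strongly do occupation words associate with
# gendered pronouns in this (toy) embedding?
for occupation in ("doctor", "nurse"):
    score = association(occupation, attr_a=["he"], attr_b=["she"])
    print(f"{occupation}: association(he - she) = {score:+.3f}")
```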