.@chancancode pointed out the following AI paradox to me: "in order to deal with cognitive bias, let's model AI after the human brain" 1/
-
Also, this is not a panicked "AI is taking over the world" thing. It's a more boring, probably (hopefully?) controllable problem.
-
Imagine putting machine learning in charge of hiring. What if the training data is racially biased? We already know what happens...
-
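The hiring worry above can be made concrete with a small synthetic sketch (everything here — the data, the "bar" per group, the toy learner — is invented for illustration, not from any real system). If historical decisions applied a higher bar to one group, a model that simply learns the historical outcome rates inherits the bias verbatim:

```python
import random

random.seed(0)

# Hypothetical synthetic "historical hiring" data: each candidate has a
# skill score and a group label. The historical decisions were biased:
# group B faced a higher bar than group A at identical skill.
def biased_history(n=10000):
    data = []
    for _ in range(n):
        group = random.choice(["A", "B"])
        skill = random.random()
        threshold = 0.5 if group == "A" else 0.7  # biased bar for group B
        data.append((group, skill, skill > threshold))
    return data

# A naive "learner" that memorizes the historical hire rate per group and
# uses it as its policy -- the bias in the data transfers directly.
def train(data):
    rates = {}
    for g in ("A", "B"):
        hired = [h for (grp, _, h) in data if grp == g]
        rates[g] = sum(hired) / len(hired)
    return rates

model = train(biased_history())
print(model)  # group B's "learned" hire rate is lower, despite identical skills
```

Nothing in the model's output flags this as bias; it looks like an ordinary learned pattern, which is exactly the point of the thread.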
We'll just feel a lot better about trusting "the computer", but we can't subpoena the computer and ask it why it did what it did.
-
Anyway, just something to think about before we treat "deep learning" as a panacea.
@wycats Fortunately no one knows how the brain works, so we can't actually have this problem. ;)
-
@wycats @Runspired Only if the computer brain matches patterns with the exact same algorithm as the human brain.
@WebWhizJim @wycats It doesn't need to be exact; the problem with ML is that it doesn't necessarily understand that biases in the data are biases.
@WebWhizJim @wycats It's a correlation vs. causation problem. Bias often lives in the correlation column, which is harder for a machine to understand.
@WebWhizJim @wycats For instance, I've used a few different ML and other AI patterns to predict when I'm going to be injured as a runner
@WebWhizJim @wycats vs. when I'm in peak form. It's difficult to tell the algorithm that a missed or short workout was due to exhaustion vs. travel or laziness.
@WebWhizJim @wycats It's easier for humans to spot these things, because we catch spurious correlations quickly and understand bias and context.
@WebWhizJim @wycats Probably the biggest problem is not knowing what context the machine lacks.
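The runner example above can be sketched the same way (a hypothetical simulation, not real training logs): suppose a missed workout is sometimes caused by fatigue, which predicts injury, and sometimes by travel, which doesn't — but the log only records the missed workout, not the reason. The model then sees a correlation it cannot decompose:

```python
import random

random.seed(1)

rows = []
for _ in range(10000):
    fatigued = random.random() < 0.3
    traveling = random.random() < 0.3
    missed = fatigued or traveling                 # same observable either way
    injured = fatigued and random.random() < 0.5   # only fatigue causes injury
    rows.append((missed, injured))

# Injury rate among missed-workout runs vs. overall: the conditional rate is
# inflated because travel-misses (harmless) are mixed in with fatigue-misses.
missed_rows = [inj for (m, inj) in rows if m]
p_injury_given_missed = sum(missed_rows) / len(missed_rows)
p_injury_overall = sum(inj for (_, inj) in rows) / len(rows)
print(p_injury_given_missed, p_injury_overall)
```

The correlation is real, but acting on it ("never miss a workout") would be wrong for the travel case — the context that separates the two causes simply isn't in the data, which is the "context the machine lacks" point.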