.@chancancode pointed out the following AI paradox to me: "in order to deal with cognitive bias, let's model AI after the human brain" 1/
I'm seeing it everywhere now. http://www.vox.com/2016/3/12/11211614/why-false-rumors-on-twitter-are-such-a-headache 2/
"The researchers proposed at least one solution in the paper: developing machine learning tools that can flag reports as rumors..." 3/
If the problem is that the human brain's pattern matching is failing us, wouldn't an AI brain have a faster version of the same problem? 4/4
Also, this is not a panicked "AI is taking over the world" thing. It's a more boring, probably (hopefully?) controllable problem.
Imagine putting machine learning in charge of hiring. What if the training data is racially biased? We already know what happens...
We'll just feel a lot better about trusting "the computer", but we can't subpoena the computer to ask it why it did what it did.
@raganwald right, exactly. it can encode forward-prejudice in exactly the same way humans encode it, using exactly the same flaw!
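[Editor's note: the hiring example above can be made concrete with a minimal sketch. All names, data, and the toy "model" here are invented for illustration; a real ML system is far more complex, but the failure mode is the same: group membership becomes a proxy learned from biased historical decisions.]

```python
# Hypothetical sketch: a trivial "model" trained on biased historical
# hiring decisions reproduces the bias it was trained on.
# All data here is invented for illustration.

from collections import defaultdict

# Historical decisions as (group, qualified, hired). In this invented
# training data, qualified candidates in group "B" were systematically
# rejected by the human decision-makers.
history = [
    ("A", True, True), ("A", True, True), ("A", False, False),
    ("B", True, False), ("B", True, False), ("B", False, False),
]

def train(data):
    # Learn the observed hire rate per (group, qualified) pair -- a
    # stand-in for the statistical regularities a real model extracts.
    counts = defaultdict(lambda: [0, 0])  # key -> [hires, total]
    for group, qualified, hired in data:
        counts[(group, qualified)][0] += int(hired)
        counts[(group, qualified)][1] += 1
    return {k: hires / total for k, (hires, total) in counts.items()}

model = train(history)

# Two equally qualified candidates get different "predictions", because
# the model faithfully learned the prejudice encoded in its training set.
print(model[("A", True)])  # 1.0 -- always hired
print(model[("B", True)])  # 0.0 -- never hired, despite being qualified
```

The point of the thread in one line: the model didn't invent the prejudice, it compressed and automated it, and unlike a human decision-maker it can't be cross-examined about why.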