What if you WANT to create a biased machine? What if, based on everything I've just said, you've realised, correctly, that a biased machine is a highly effective, highly deniable way to deliver your bias?
-
In Mickens' "Tragedy of Darth Tay the Bigoted" the causes and effects are incredibly stark and easy to unthread. If an attacker has control over the A.I.'s training data, of *course* the resulting A.I. will misbehave
-
So what happens if any of the humans listed below is an attacker? And the system they're attacking is the very important real-world system which they are all nominally working together to automate? The criminal justice system or the healthcare system? https://twitter.com/qntm/status/1030837348630708224
Quoting @qntm: Humans build the training data sets. Humans pick which data sets to use in the training. Humans implement the training algorithms. Humans decide *when* the machine is adequately trained. Humans install the machine. Humans decide whether to honour the machine's decisions
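[Purely for illustration, a minimal sketch of that list as code. Every name here is hypothetical and stands in for one of the human decisions above; each parameter is therefore one more place where an attacker who is also one of the builders can act.]

```python
def build_pipeline(all_records, select_record, training_algorithm,
                   adequately_trained, honour_decision):
    # Humans build the training data sets, and pick which records to use.
    training_set = [r for r in all_records if select_record(r)]

    # Humans implement the training algorithm...
    model = training_algorithm(training_set)

    # ...and humans decide *when* the machine is adequately trained.
    if not adequately_trained(model):   # a human judgement call, not a metric
        raise RuntimeError("model rejected by a human reviewer")

    # Humans decide whether to honour the machine's decisions at all.
    def decide(case):
        verdict = model(case)
        return verdict if honour_decision(case, verdict) else None

    return decide
```
-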
Mickens gives a high-level goal of computer security as "ensuring that systems do the right thing, even in the presence of malicious inputs". In a machine learning scenario, inputs come from a whole lot of places which are not the user
-
Or, of course, you can secretly replace the "A.I."/"machine decision-maker" with a highly biased human or collection of humans. "Yeah, you were randomly selected by this button which I press when I want the machine to randomly select you"
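[A minimal sketch of that "button", for illustration only; all names are invented. A selector that is presented as random, but defers to a human-maintained queue first.]

```python
import random

class RandomSelector:
    """Presented to the outside world as 'the machine randomly selected you'."""

    def __init__(self, operator_queue=None):
        # The deniable part: a human operator can pre-load whoever they like.
        self._queue = list(operator_queue or [])

    def select(self, candidates):
        while self._queue:
            target = self._queue.pop(0)
            if target in candidates:
                return target                 # the operator's pick, reported as random
        return random.choice(candidates)      # honest only when nobody pressed the button


# "You were randomly selected":
selector = RandomSelector(operator_queue=["you"])
print(selector.select(["you", "someone else", "a third person"]))   # always "you"
```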
-
* Machine learning is money laundering for bias
* Machine learning is money laundering for responsibility
* People do not launder their money by ACCIDENT
* Assign responsibility to humans
-
Machines are going to be given control of the world so that they can more effectively and deniably implement the biases of the people who create and install them
-
So to sum up, generally I agree with James Mickens but I prefer to read less incompetence and more malice into the machine learning situation. I think this is a sound and necessary defensive strategy, regardless of the actual amount of malice involved
-
"No, I'm not paranoid! I'm just rigorously going through the *motions* of paranoia in case your machine learning algorithm *inadvertently* goes through the *motions* of drilling a hole in the keel of human civilisation"
-
Passive-aggressively assume good faith. "How did you ensure that this system does not reflect your own biases? How did you ensure that this system can't be used for abuse? ...You're looking uncomfortable. Don't worry, we can extend this meeting for as long as necessary"
-
This is also a good one, neglected to add it to the thread. Reason 2.5 for not replacing humans with machines in important decision-making roles: https://twitter.com/nyanotech/status/1030898183369457664