The EFF has a piece out on how client-side scanning ‘breaks’ end-to-end encryption. They take a pretty strong position here (one I happen to agree with). But I thought it would be helpful to explain my specific technical concerns. Thread: 1/ https://www.eff.org/deeplinks/2019/11/why-adding-client-side-scanning-breaks-end-end-encryption
Just to explain what we’re talking about: many current unencrypted messaging systems scan every photo sent through the service, in order to detect abusive content (child sexual abuse imagery). Encrypted messaging systems can’t do this. Hence proposals to do scanning on the client side. 2/
The service would send down some kind of list of content hashes that are problematic, and your app would check for matches before encrypting the message/photo/whatever. 3/
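In rough pseudocode, the flow looks something like the sketch below. This is a minimal illustration of the idea, not any real deployment: the names, the SHA-256 stand-in, and the reporting hook are all made up for the example, and production systems use proprietary perceptual hashes rather than exact cryptographic ones.

```python
import hashlib

# Hypothetical client-side check, run before encryption. Everything
# here (BLOCKLIST contents, SHA-256, report_to_provider) is assumed
# for illustration; real systems use proprietary perceptual hashes.
BLOCKLIST = {
    # opaque digests pushed down by the service provider
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def report_to_provider(digest: str) -> None:
    # stand-in for whatever reporting channel the provider mandates
    print(f"match reported: {digest}")

def scan_before_encrypt(photo_bytes: bytes) -> bool:
    """Return True if the photo may be encrypted and sent."""
    digest = hashlib.sha256(photo_bytes).hexdigest()
    if digest in BLOCKLIST:
        report_to_provider(digest)
        return False
    return True
```

The key point is that the *client* enforces the check before encryption; the ciphertext itself still reveals nothing to the server.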
The problem with this approach is that it’s subject to abuse in two ways. 1. The system is designed to filter “bad” content, and “bad” means different things to different people. 2. Even if the service provider is decent, bad actors can slip inappropriate content into the DB. 4/
People tend to discount the first concern because we live in a society of laws, etc. But it’s helpful to imagine how an authoritarian government will use this system. In fact, you don’t have to imagine. Just use WeChat. 5/
But even if you live in a healthy democracy (good for you) and you basically trust that this system will be used for good, there’s still the possibility of abuse. To prevent that, someone needs to audit the database to make sure everything in it is supposed to be there. 6/
And this is where existing systems largely fall down. Today’s “sexual abuse imagery” systems rely fundamentally on keeping a *secret* database of image hashes, which are computed using a *secret* algorithm. This creates a lot of potential for undetectable abuse. 7/
A malicious provider can insert the hash of any data they want — say political content — and they’re guaranteed to get a report from any client that sends it. Normal clients can’t audit the database, since it’s kept deliberately opaque. 8/
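To make the auditing problem concrete, here’s a toy sketch (with placeholder byte strings standing in for real images): the database entries are just opaque digests, so a client holding the list has no way to tell a legitimate entry from a planted one.

```python
import hashlib

# Two database entries. To the client both are 64-hex-char digests;
# the preimages (the placeholder strings below) are never shipped,
# so the list cannot be audited from the client side.
abuse_hash = hashlib.sha256(b"<genuine abuse image bytes>").hexdigest()
political_hash = hashlib.sha256(b"<banned political leaflet bytes>").hexdigest()

database = {abuse_hash, political_hash}
# A client whose outgoing photo hashes into `database` triggers a
# report either way; it cannot distinguish the two cases.
```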
Even worse, if the hashing system has collisions (much more likely for ‘fuzzy’ image hashing systems), it may be possible to find genuine abuse imagery that just happens to collide with non-abuse content. A hash that legitimately belongs in the database can then double as a trigger for innocuous material. 9/
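As a toy illustration of why fuzzy hashes collide so easily, here is a miniature ‘average hash’ over a four-pixel grayscale image. Real perceptual hashes (PhotoDNA, pHash, etc.) are far more sophisticated, but they share the property that many visually distinct inputs map to the same digest; the function and images below are invented for the example.

```python
# Toy 'fuzzy' hash: 1 bit per pixel, set if the pixel is brighter
# than the image's mean brightness. Invented for illustration only.
def average_hash(pixels):
    mean = sum(pixels) / len(pixels)
    return "".join("1" if p > mean else "0" for p in pixels)

img_a = [10, 10, 200, 200]   # dark left half, bright right half
img_b = [40, 55, 130, 120]   # a visually different image
assert average_hash(img_a) == average_hash(img_b)  # both hash to "0011"
```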
In short, using today’s SAI (sexual abuse imagery) detection systems on the client side (assuming we can even solve the ‘secret algorithm’ problem) basically means “the system is secure as long as you fully trust the provider.” That’s a pretty important security downgrade from normal e2e. /Fin
It's the same security guarantee as using Facebook today, with an upgrade for the textual part of the communication (from readable by the service provider to e2e).