Not sure I agree; there were good questions about why technical solutions that didn't require sharing the original pictures weren't explored, and the answers weren't very satisfying.
Replying to @taviso
Have you or your employer made any headway on this issue in the last couple of years? Since you so helpfully shared your concerns on Twitter, I would have hoped you could have tried one of those options in the meantime.
Replying to @alexstamos
This is also the answer we got at the time: "Why don't you do it this way, so that people don't have to send your team their nudes?" "Why don't you do it for us?" ... I think that's not a very satisfying answer.
Replying to @taviso
There are a bunch of complications that arise when you actually try to implement this kind of mass image blocking at scale. It is easy to throw stones from the sidelines; if you spent time actually working on the problem you would realize the compromises weren’t arbitrary.
Replying to @alexstamos
Let's hear them, then; the only problem I've heard you talk about is that people don't want to publish their ImageDNA-like algorithm. There are solutions to that: use SGX, or send tamper-proof machines to trusted victim advocates to generate the hashes.
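(Aside: to make the "generate the hashes without handing over the image" idea concrete, here is a minimal Python sketch. The average hash is a toy stand-in for a real perceptual hash like the ones discussed below, and submit_hash is a hypothetical endpoint; the point is the data flow: only the fingerprint ever leaves the victim's machine.)

    from PIL import Image

    def average_hash(path, size=8):
        # Shrink to a tiny grayscale grid, then record which pixels are
        # brighter than the mean. Small edits barely move these bits.
        img = Image.open(path).convert("L").resize((size, size))
        pixels = list(img.getdata())
        mean = sum(pixels) / len(pixels)
        bits = "".join("1" if p > mean else "0" for p in pixels)
        return int(bits, 2)  # 64-bit fingerprint

    # The only thing ever uploaded (submit_hash is hypothetical):
    # submit_hash(average_hash("private_photo.jpg"))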
Replying to @taviso
There has been some movement on the perceptual hashing front, as FB recently published new algorithms based upon more modern techniques that should be a bit more robust. The biggest problem is adversarial reporting to trigger image censorship. https://www.google.com/amp/s/about.fb.com/news/2019/08/open-source-photo-video-matching/amp/
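(For anyone unfamiliar with how "more robust" cashes out: perceptual hashes are compared by Hamming distance under a threshold, not exact equality. A sketch, reusing the toy 64-bit hash above:)

    def hamming(a, b):
        # Number of bit positions where the two 64-bit hashes differ.
        return bin(a ^ b).count("1")

    def is_match(candidate, known_hashes, threshold=10):
        # Re-encoding, light cropping, or brightness tweaks should flip
        # only a few bits, so "match" means close, not identical. The
        # threshold is the knob trading false negatives for false positives.
        return any(hamming(candidate, h) <= threshold for h in known_hashes)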
Replying to @alexstamos
You already have the image at that point, so no additional sharing has happened. Using your solution, I can send you a picture that isn't a nude, and someone looks at it and sees that it's not a nude. Using this system, you wait for a match and then see it's not a nude. Right?
Replying to @taviso @alexstamos
I'm really trying very hard to imagine how your system is harder to abuse, but I can't see it. With your system, a human has to look at every nude to verify it. With my system, they only have to look on a match, when Facebook already has the image and abuse is 100% happening.
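(Spelling out the two workflows being argued about, as a sketch reusing the toy helpers above; the reviewer step is a placeholder, not Facebook's actual pipeline:)

    blocklist = set()  # hashes of confirmed intimate images

    def human_confirms_intimate(image_path):
        # Placeholder for the human-review step both sides are arguing
        # about; in reality this is the costly, abuse-sensitive part.
        return True

    # Proactive model: a reviewer sees every submitted image up front.
    def register_proactively(image_path):
        if human_confirms_intimate(image_path):
            blocklist.add(average_hash(image_path))

    # Hash-first model: a reviewer only looks after an upload matches,
    # i.e. once the platform already has the image anyway.
    def screen_upload(image_path):
        h = average_hash(image_path)
        if is_match(h, blocklist) and human_confirms_intimate(image_path):
            return "blocked"
        return "allowed"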
Replying to @taviso @alexstamos
One difference I can see: If people send you a hash and it matches a non-nude, you don't know whether it was an accidental hit or a hostile attempt at censoring the non-nude (in which case you can penalize them). If they send you the actual nude, you know it was accidental.
I'm sure Facebook doesn't like to talk about retaliation, but the ability to retaliate changes the whole dynamic of their interaction with users.
Of course, this assumes the "hash" matching is not only inexact in a technical sense (which it has to be, to match images with minor alterations) but also produces lots of false negatives and false positives. I bet that's the case, though; it seems like a very hard problem to solve well.
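Rough numbers support that bet. Even if unrelated images produced effectively random 64-bit hashes (real images cluster, which makes it worse), the chance that a random pair lands within Hamming distance t is the sum over k <= t of C(64, k) / 2^64. Tiny per pair, but multiplied by uploads times blocklist size it stops being tiny; the volume figures below are made up for illustration:

    from math import comb

    def random_collision_prob(bits=64, threshold=10):
        # P(two independent random hashes differ in <= threshold bits)
        return sum(comb(bits, k) for k in range(threshold + 1)) / 2**bits

    p = random_collision_prob()
    daily_pairs = 350e6 * 10_000  # uploads/day x blocklist size (illustrative)
    print(f"per-pair: {p:.1e}, expected false hits/day: {p * daily_pairs:,.0f}")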