This whole incident really soured my view of the tech press, which struck me as neither educated about nor interested in the real problems, especially compared to the people I knew at FB who dedicated their careers to helping real people. Maybe my stereotype isn't fair, but neither was the coverage.
-
Replying to @alexstamos
Not sure I agree; there were good questions about why technical solutions that didn't require original pictures weren't explored, and the answers weren't very satisfying.
-
Replying to @taviso
Have you or your employer made any headway on this issue in the last couple of years? Since you so helpfully shared your concerns on Twitter, I would have hoped you could have tried one of those options in the meantime.
-
Replying to @alexstamos
This is also the answer we got at the time: "Why don't you do it this way, so that people don't have to send your team their nudes?" "Why don't you do it for us?" ... I think that's not a very satisfying answer.
-
Replying to @taviso
There are a bunch of complications that arise when you actually try to implement this kind of mass image blocking at scale. It is easy to throw stones from the sidelines; if you spent time actually working on the problem you would realize the compromises weren’t arbitrary.
-
Replying to @alexstamos
Let's hear them, then; the only problem I've heard you talk about is that people don't want to publish their PhotoDNA-like algorithm. There are solutions to that: use SGX, or send tamper-proof machines to trusted victim advocates to generate the hashes.
-
Replying to @taviso
There has been some movement on the perceptual hashing front, as FB recently published new algorithms based upon more modern techniques that should be a bit more robust. The biggest problem is adversarial reporting to trigger image censorship. https://www.google.com/amp/s/about.fb.com/news/2019/08/open-source-photo-video-matching/amp/
-
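An aside for readers who haven't worked with these systems: perceptual hashes like the PDQ algorithm linked above are compared by Hamming distance, not exact equality, so a "match" is a similarity judgment. Here is a minimal Python sketch of that matching model, using a simple difference hash (dHash) as a stand-in; dHash, the helper names, and the threshold are illustrative assumptions, not FB's production system.

    from PIL import Image

    def dhash(path, size=8):
        # Grayscale, shrink to (size+1) x size, then compare horizontally
        # adjacent pixels; each comparison contributes one bit.
        img = Image.open(path).convert("L").resize((size + 1, size))
        px = list(img.getdata())
        bits = 0
        for row in range(size):
            for col in range(size):
                left = px[row * (size + 1) + col]
                right = px[row * (size + 1) + col + 1]
                bits = (bits << 1) | (left > right)
        return bits  # 64-bit hash; near-duplicate images get nearby hashes

    def hamming(a, b):
        return bin(a ^ b).count("1")

    def is_blocked(candidate, blocklist, threshold=10):
        # A "match" means close enough, not identical; that fuzziness is
        # what the collision argument later in the thread turns on.
        return any(hamming(candidate, h) <= threshold for h in blocklist)
-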
Replying to @alexstamos
You already have the image at that point, so no additional sharing has happened. Using your solution, I can send you a picture that isn't a nude, and someone looks at it and sees that it's not a nude. Using this system, you wait for a match and then see it's not a nude. Right?
-
Replying to @taviso
Except you have now blocked that image in every private chat on the platform during the content moderation latency. There is effectively an infinite space of perceptual hashes that will probabilistically match a single photo; how do you think this holds up against 8ch*n?
-
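To put rough numbers on "effectively infinite": the n-bit hashes within Hamming distance t of a target form a Hamming ball of volume C(n,0) + C(n,1) + ... + C(n,t). A quick sketch, with illustrative parameters rather than FB's real ones:

    from math import comb

    def hamming_ball(bits, t):
        # Count of bit strings within Hamming distance t of a fixed hash.
        return sum(comb(bits, i) for i in range(t + 1))

    print(hamming_ball(64, 10))   # on the order of 10**11 for a 64-bit hash
    print(hamming_ball(256, 31))  # astronomically more for a 256-bit hash

In other words, a reported image can stand in for a huge family of near-matches, which is the substance of the adversarial-reporting worry about blocking during the moderation window.
-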
Replying to @alexstamos
That doesn't make any sense. They can already submit infinite photos to your human team; do you block them as soon as the images arrive at nudes@fb.com, or do you wait for a moderator to confirm them first? If it's the second, then this is a nonsense excuse.
-
Think about it, Alex. If you have to block before moderation, then I can send any image I want, right? It seems like either you don't really care about that latency, or you're already vulnerable to that attack anyway; either way, my system means people don't have to share nudes, right?
-
Replying to @taviso
FB isn’t going to block until an image has been confirmed. I expect that those images will be used to train classifiers to make proactive detection more likely next time.
-
Replying to @alexstamos @taviso
I think it would be great if most of this could be pushed client side. I’m actually putting together a whole workshop on client-side abuse detection after RWC.
(5 more replies)
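For concreteness, one way "pushed client side" could work, reusing dhash() and is_blocked() from the sketch above: the device hashes the image locally and checks a blocklist synced to it, so the picture itself is never uploaded. This is a conceptual illustration, not any platform's actual protocol.

    def maybe_send(image_path, blocklist, send_fn):
        # The hash is computed on-device; the image never leaves it.
        h = dhash(image_path)
        if is_blocked(h, blocklist):  # hash blocklist synced to the device
            return False              # refuse to transmit the matched image
        send_fn(image_path)
        return True

The obvious tradeoff is that a blocklist (and matcher) shipped to clients can be extracted and probed, which is exactly the "people don't want to publish their PhotoDNA-like algorithm" concern from earlier in the thread.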