My incredible colleague, Antigone Davis, Head of Global Safety at Facebook, has posted about our test in Australia to combat Non-Consensual Intimate Imagery (NCII): https://newsroom.fb.com/news/h/non-consensual-intimate-image-pilot-the-facts/
-
-
It's an open research problem, but so is post-quantum cryptography.
-
You are lucky to work at one of the few companies with an excellent safety engineering team. Maybe let your fuzzers run for a couple of months while you try your hand at this issue? More infosec-safety exchanges are great for both sides.
-
Hah, are you running low on headcount over there?

-
Plenty of headcount! Standing desk and all the turkey jerky you can eat just waiting for you, say the word. At a minimum I can get you a counter-offer. GSUaaS
New conversation -
-
-
I'm one of those people who still thinks local hashing might be the better answer, and I stand by it. But two things first: 1. I applaud
@alexstamos and Antigone Davis for stating FB's side so well in public. -
2. I have long been on the record supporting the efforts of amazing scholar-advocates like
@daniellecitron and @ma_franks to come up with creative solutions to deal with the scourge of revenge porn and other harms that disproportionately hurt women. #ILove280 -
3. Antigone's post confirms that FB is not keeping the images around (e.g. for ML training purposes). If the images aren't kept, then the primary reason to require the file upload is, to use
@alexstamos' phrase, to "prevent adversarial reporting" -
4. Echoing
@blakereid https://twitter.com/blakereid/status/928827883979653120, if you do the check *after* somebody else attempts to upload an image matching a hash rather than *before*, you can still tamp down on those who would game your system. -
5. & to build on @blakereid's model: when somebody uploads a hash, you check to see if it matches any image already on the FB system (I'm assuming you have that capability?). If it does, that also merits human review, but without requiring an upload.
-
6. Having built in those two layers of assurance, what's the remaining threat model: Person A uploads a hash for an image that has never before been on FB because he/she is hoping to prevent Person B from uploading that exact image.
-
7. Is that possibility really so significant and worrisome that it justifies the risk to privacy you are taking on?
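The flow proposed in points 4-6 can be sketched roughly as follows. This is a minimal, hypothetical illustration, not FB's actual pipeline: it assumes exact-match hashing (real deployments use perceptual hashes such as PhotoDNA that survive resizing and re-encoding), and all names here (`ReportedHashRegistry`, `report_hash`, `handle_upload`) are invented for the sketch.

```python
import hashlib


class ReportedHashRegistry:
    """Hypothetical sketch of the check-on-later-upload model:
    the reporter submits only a hash, and the image itself is
    examined lazily, when someone else tries to upload a match."""

    def __init__(self):
        self.reported = set()    # hashes submitted by reporters
        self.review_queue = []   # flagged uploads awaiting human review

    def report_hash(self, image_hash: str) -> None:
        # Point 5/6: no file upload is required from the reporter.
        self.reported.add(image_hash)

    def handle_upload(self, image_bytes: bytes) -> bool:
        """Return True if the upload goes through immediately,
        False if it is held for human review."""
        h = hashlib.sha256(image_bytes).hexdigest()
        if h in self.reported:
            # Point 4: the *uploader's* copy, not the reporter's,
            # is what reaches human review, which still tamps down
            # on adversarial reporting.
            self.review_queue.append(h)
            return False
        return True
```

Under this model, a hash that matches nothing simply sits dormant, which is why the residual threat in point 6 is limited to blocking one exact file that was never on FB to begin with.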
End of conversation
New conversation -
-
-
Do you have a plan for all the underage material you'll end up collecting?
-
Especially considering that in many jurisdictions even sending the image to FB is technically criminal, let alone reviewing it.
-
Exactly!!!!
End of conversation
New conversation -
-
-
Trying to identify these photos with hashing can't scale quickly anyway. A privacy issue needs a privacy-preserving system, not a band-aid patch. If FB's facial recognition could be used to ask the people found in photos for permission, it would partially solve this and make FR less creepy.
-
-
-
Couldn't humans review the photos that match the hash, rather than the original photo? That seems like a trivial matter to fix.
-