Should be possible to do a parental controls 2.0 app that uses machine learning to determine what’s on screen, generates a summary for parents afterward, and blocks out bad content in real time. A parent can’t watch hours of cartoons, but they could watch a 30-second summary. https://twitter.com/josephflaherty/status/1253683522780434432
-
I think this is technically feasible. It would be a pain but is totally doable. Dumb approach: label a bunch of children’s content as well as unacceptable content, then train a real-time binary classifier; if a frame isn’t likely to be in the acceptable set, block it. https://cloud.google.com/video-intelligence pic.twitter.com/wP6DvER7X8
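A minimal sketch of that dumb approach. Everything here is made up for illustration: the 3-dimensional "frame features" and the nearest-centroid training stand in for whatever a real system would get from a video-intelligence API; the point is just the block-unless-clearly-acceptable decision rule.

```python
# Toy acceptable/unacceptable classifier over per-frame feature vectors.
# Feature extraction is stubbed out; frames and labels are invented toy data.
from math import dist

def train_centroids(frames, labels):
    """Average the feature vectors of each class (nearest-centroid training)."""
    sums, counts = {}, {}
    for vec, label in zip(frames, labels):
        acc = sums.setdefault(label, [0.0] * len(vec))
        for i, x in enumerate(vec):
            acc[i] += x
        counts[label] = counts.get(label, 0) + 1
    return {label: [x / counts[label] for x in acc] for label, acc in sums.items()}

def should_block(frame, centroids, margin=0.0):
    """Block unless the frame is clearly closer to the 'acceptable' centroid."""
    d_ok = dist(frame, centroids["acceptable"])
    d_bad = dist(frame, centroids["unacceptable"])
    return not (d_ok + margin < d_bad)

# Hypothetical training set: two cartoon-like frames, two flagged frames.
frames = [[0.9, 0.1, 0.0], [0.8, 0.2, 0.1], [0.1, 0.9, 0.8], [0.0, 0.8, 0.9]]
labels = ["acceptable", "acceptable", "unacceptable", "unacceptable"]
centroids = train_centroids(frames, labels)

print(should_block([0.85, 0.15, 0.05], centroids))  # cartoon-like frame -> False
print(should_block([0.05, 0.90, 0.85], centroids))  # flagged frame -> True
```

The `margin` parameter is the knob for "if not likely to be in acceptable set, block": raising it makes the filter block anything ambiguous rather than let it through.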
-
This could be the start of a great Black Mirror episode.
End of conversation
New conversation
-
Agree. "Inappropriate" is an Anna Karenina problem: different for every family. Difficult to scale, given diverse needs & fail/attack modes. Personal solution: in interstitial moments, I screen audiobooks, music, & videos that meet my criteria @ 1.5X and youtube-dl them to a dumb box.
-
Appears the real-time + sparse-example aspects of the shooting were barriers for Facebook’s AI. So both are true: A) real-time general content filtration is intractable; and B) an offline specialty filter for kids is tractable (large train set, false negs OK, etc.) https://www.theverge.com/2019/5/20/18632260/facebook-ai-spot-terrorist-content-live-stream-far-from-solved-yann-lecun
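A toy illustration of why "false negs OK" makes the kids' case easier. The scores below are invented; the point is that when over-blocking is acceptable, the pass threshold can simply be set above the worst-scoring known-bad clip, so nothing unacceptable slips through.

```python
# Invented "kid-safe" scores from some offline classifier (higher = safer).
acceptable_scores   = [0.92, 0.85, 0.78, 0.60, 0.55]
unacceptable_scores = [0.40, 0.33, 0.58, 0.12]

# Pass a clip only if score > threshold. Set the threshold at the worst
# offender so every unacceptable clip is blocked, accepting collateral
# over-blocking of some fine content.
threshold = max(unacceptable_scores)

passed       = [s for s in acceptable_scores if s > threshold]
blocked_good = [s for s in acceptable_scores if s <= threshold]

print(threshold)          # 0.58
print(len(passed))        # 4 acceptable clips pass
print(len(blocked_good))  # 1 wrongly blocked -- a tolerable cost here
```

A live general-purpose filter can't do this: blocking legitimate streams at that rate would be unacceptable, and there is no large labeled set of rare events like shootings to calibrate against.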
-
Seems very possible for a large, non-real-time dataset like children’s videos. The shooter is a one-off and hard to get signal for — a different problem technically, I think.