Driverless cars just got a whole lot harder. This technical paper by @amirrosenfeld raises some profound questions about the robustness of #DeepLearning as a perceptual mechanism. https://arxiv.org/abs/1808.03305
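For context, the paper's probe works by transplanting a cut-out object (e.g., an elephant) into an unrelated scene and re-running an off-the-shelf detector. A minimal sketch of that kind of manipulation, assuming PIL is available; the filenames, paste position, and the detector call are hypothetical placeholders, not the paper's actual code:

```python
from PIL import Image

def transplant_object(scene: Image.Image, obj: Image.Image, position: tuple) -> Image.Image:
    """Paste a cut-out object (with alpha mask) into a scene image,
    producing the kind of out-of-context probe the paper describes."""
    probed = scene.copy()
    probed.paste(obj, position, mask=obj)  # RGBA alpha channel used as paste mask
    return probed

# Hypothetical filenames; 'detector' stands in for any off-the-shelf object detector.
scene = Image.open("living_room.jpg").convert("RGB")
elephant = Image.open("elephant_cutout.png").convert("RGBA")
probe = transplant_object(scene, elephant, (200, 150))
# detections_before = detector(scene); detections_after = detector(probe)
```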
-
By your example, even if there is a parade, the driverless car's use case is to identify some object and slow down to avoid a collision, be it human or otherwise. Also, a person falling off a truck is still a normal frame; the person and the truck should be detected as distinct objects.
-
I anticipate lots of autonomous vehicles being rear-ended if they are released at scale with current tech. See, e.g., the article on Waymo in The Information.
End of conversation
New conversation
-
It isn't clear, because the study didn't try that. The images used aren't merely unlikely; they could never exist in real life. I'm not arguing that these systems are robust to all natural events, of course not. But they needn't be robust to unnatural events to be safe.
-
Author here - there are two aspects to consider. One is the "real life" behavior of such systems, where the likelihood of the stimulus is to be factored in along with the cost of an error (i.e., risk). The other is the ease with which the systems are fooled whereas humans are not.
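A toy illustration of that risk framing, with made-up numbers that come from neither the paper nor this thread: expected risk weighs how likely a stimulus is against the cost of getting it wrong, so a never-occurring stimulus contributes nothing even if the detector fails on it.

```python
# Illustrative only: toy probabilities and costs, purely hypothetical.
# Expected risk = P(stimulus) * cost(detector error on that stimulus).
scenarios = {
    # stimulus: (probability per mile, cost of a missed detection)
    "pedestrian at crosswalk":  (1e-2, 1e6),
    "person falling off truck": (1e-7, 1e6),
    "transplanted elephant":    (0.0,  1e6),  # never occurs naturally
}

for name, (p, cost) in scenarios.items():
    print(f"{name}: expected risk = {p * cost:.4g}")
```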
- 2 more replies
New conversation
-
The only way this same image-object degradation could happen in practice is when the camera starts skipping frames and splices two different frames together; that would create the image overlap, and in that scenario it could occur. To avoid this, multiple cameras could be used to reconstruct the scene.
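A minimal sketch of the frame-splicing scenario described above, assuming frames arrive as same-sized NumPy arrays; the simple blend stands in for whatever corruption a frame-skipping camera would actually produce:

```python
import numpy as np

def splice_frames(frame_a: np.ndarray, frame_b: np.ndarray, alpha: float = 0.5) -> np.ndarray:
    """Blend two frames of identical shape, mimicking a camera that
    interleaves two different moments into a single overlapping image."""
    assert frame_a.shape == frame_b.shape, "frames must match in size"
    blended = alpha * frame_a.astype(np.float32) + (1 - alpha) * frame_b.astype(np.float32)
    return blended.astype(np.uint8)

# Hypothetical usage with random 'frames'; real input would be consecutive camera captures.
frame_t0 = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
frame_t1 = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
overlap = splice_frames(frame_t0, frame_t1)
```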