Very cool work showing the feasibility of an adversarial-example-based attack on self-driving cars.
I've been working on a similar hobby project and love how thorough this write-up is. I have some comments on the real-world feasibility of these attacks: https://twitter.com/keen_lab/status/1111469579102912512
Supposing every Tesla shows up on a tool like Shodan, a network vuln means somebody can inject a noisy image affecting 400,000+ cars. Even if the noisy image fools only 0.01% of the cars, the potential impact is still massive, because every Tesla except the Roadster has Autopilot...
They target the autowipers with a Worley noise image on an electronic display. However, they DON'T say how effectiveness changes with screen size or relative orientation -> big difference between the feasibility of a billboard-ad attack vs. a display that must be right in front of the windshield
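For anyone unfamiliar with the noise pattern they used: Worley (cellular) noise assigns each pixel its distance to the nearest of a set of random feature points. A minimal sketch (my own illustration; the function name, parameters, and resolution are assumptions, not from the Keen Lab write-up):

```python
import numpy as np

def worley_noise(height, width, n_points=20, seed=0):
    """Worley (cellular) noise: each pixel's value is the distance
    to the nearest of n_points random feature points."""
    rng = np.random.default_rng(seed)
    # Random feature points scattered over the image
    pts = rng.uniform(0.0, 1.0, size=(n_points, 2)) * [height, width]
    ys, xs = np.mgrid[0:height, 0:width]
    coords = np.stack([ys, xs], axis=-1).astype(float)       # (H, W, 2)
    # Distance from every pixel to every feature point
    d = np.linalg.norm(coords[:, :, None, :] - pts[None, None, :, :], axis=-1)
    img = d.min(axis=2)                                      # nearest-point distance
    return img / img.max()                                   # normalize to [0, 1]

noise = worley_noise(128, 128)
```

Displaying an image like this was enough to trip the rain detector in their tests; the open question above is how that transfers to larger, off-axis screens.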
They target lane following by adding small white dots to an intersection to cause the car to go into the wrong lane. They acknowledge this attack is human-detectable in clear road conditions, but I guarantee it can be obfuscated in icy conditions with carefully placed sand/salt :P pic.twitter.com/9GSAMaA4YB
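To make the scale of this perturbation concrete, here's a toy sketch of painting a few small bright dots onto a road image (entirely my own illustration; positions, radius, and intensity are assumptions, not values from the write-up):

```python
import numpy as np

def add_road_dots(image, positions, radius=2, value=1.0):
    """Paint small bright circular dots onto a grayscale image at the
    given (row, col) positions, mimicking tiny road-marking perturbations."""
    out = image.copy()
    h, w = out.shape[:2]
    ys, xs = np.ogrid[0:h, 0:w]
    for (r, c) in positions:
        mask = (ys - r) ** 2 + (xs - c) ** 2 <= radius ** 2
        out[mask] = value
    return out

road = np.zeros((64, 64))            # stand-in for a road image
patched = add_road_dots(road, [(10, 20), (30, 40)])
```

A handful of dots a few pixels wide is a tiny change to the scene, which is exactly why sand/salt in messy conditions could plausibly hide it from human eyes.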
The downside is that bad weather makes the road messier and harder to orchestrate specific behavior in without a lot of planning and intervention, increasing the adversary's operational risk. To me, weather robustness is the bigger safety issue here
Bad weather might inadvertently cause the system to perceive an adversarial-example-like input, leading to similarly harmful outcomes. This needs to be explored more fully by autonomous-car manufacturers as a point of general robustness rather than purely for security
Keen Lab's write-up offers a concrete example of security risks in AI/ML systems - ML researchers, please take note that the threat model focuses more on system architecture than on adversarial examples themselves!
This also fits nicely with the excellent paper by @jmgilmer, @ryan_p_adams, @goodfellow_ian et al. about practical adversarial example threat models: https://arxiv.org/pdf/1807.06732.pdf
Also check out the excellent blog post by @catherineols from earlier this week about conflating unsolved research problems with real-world threat models: https://medium.com/@catherio/unsolved-research-problems-vs-real-world-threat-models-e270e256bc9e
This thread keeps on giving - just remembered relevant work presented at NeurIPS SecML 2018 by @jhasomesh et al. about needing to consider system specifications and semantics when developing robust ML for self-driving cars and other cyber-physical systems: https://m.youtube.com/watch?v=_k2PBVZYLjE&feature=youtu.be