The paper demonstrates that in certain artificial scenarios neural nets can be rather easily tricked. This has in fact long been known. But intellectual opponents of deep learning (like Gary Marcus) are seizing on it to suggest that the problems can't be fixed. That's premature. https://twitter.com/GaryMarcus/status/1034075062674935808
This Tweet is unavailable.
Yes, neural nets are just approximating functions, so of course they can't reason and can always be spoofed. But that's different from saying driverless cars will never work in narrow scenarios. Radar can be spoofed too, but that doesn't mean today's airline autopilots are useless.
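The spoofing this tweet alludes to can be shown in a few lines. As a minimal sketch (not the paper's method), here is an FGSM-style attack on a toy linear classifier: because the model's gradient with respect to the input is just its weight vector, a small step against that gradient flips the prediction. The weights and inputs below are illustrative, not from any real model.

```python
import numpy as np

# Toy linear classifier: predicts class 1 when w.x + b > 0.
# Weights, bias, and input are made up for illustration.
w = np.array([1.0, -2.0, 0.5])
b = 0.1

def predict(x):
    return int(np.dot(w, x) + b > 0)

x = np.array([0.4, 0.1, 0.2])   # original input: score 0.4, class 1

# FGSM-style perturbation: for a linear model, the gradient of the
# score w.r.t. x is w itself, so stepping epsilon * sign(w) in the
# opposite direction lowers the score as fast as possible per unit
# of max-norm perturbation.
epsilon = 0.3
x_adv = x - epsilon * np.sign(w)  # adversarial input: score -0.65, class 0

print(predict(x), predict(x_adv))  # prints "1 0"
```

The point of the toy example is the same one made in the thread: the perturbation is small (max-norm 0.3) yet flips the output, which is a property of function approximators rather than a bug unique to deep nets.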
I like "intellectual opponents of deep learning" - is there another kind? Or is 'intellectual' pejorative?
#Trumpiantimes
"Intellectual" was intended as an honorific to make clear I wasn't objecting to his motives, only to his conclusion that these problems are a major setback for deep learning. I consider being an intellectual opponent of DL perfectly legitimate. Sorry you didn't pick up on that.