It's not that deep learning is intrinsically incapable of driving -- it's that the situation space is extremely high-dimensional (due to edge cases), and that a deep learning system needs to be trained on a *dense sampling* of the same space it will operate in.
Because such a representative, dense sampling is impossible to obtain, even when heavily leveraging simulated environments, the symbolic approach will prevail (specifically, an approach that is mostly symbolic but blends human abstractions with learned perceptual primitives).
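For concreteness, here is a minimal sketch of what "mostly symbolic, with learned perceptual primitives" could look like. Every module name, class, and threshold below is an invented illustration, not a description of any real self-driving stack:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Detection:
    kind: str          # e.g. "pedestrian", "car", "red_light" (illustrative labels)
    distance_m: float  # distance from the ego vehicle, in meters

def perceive(frame) -> List[Detection]:
    """Stand-in for a learned perceptual module: in a real system this would be
    a trained network mapping raw sensor data to symbolic detections."""
    return frame  # placeholder: the "frame" here is already a list of detections

def plan(detections: List[Detection], speed_limit_kmh: float) -> str:
    """Symbolic layer: explicit, human-written rules over the abstractions
    produced by perception, rather than end-to-end learned control."""
    for d in detections:
        if d.kind == "pedestrian" and d.distance_m < 20:
            return "brake"
        if d.kind == "red_light" and d.distance_m < 50:
            return "stop"
    return f"cruise at {speed_limit_kmh:.0f} km/h"

if __name__ == "__main__":
    frame = [Detection("car", 40.0), Detection("pedestrian", 12.0)]
    print(plan(perceive(frame), speed_limit_kmh=50))  # -> "brake"
```

The division of labor is the point: the learned part handles the messy perceptual mapping, while the symbolic part encodes human abstractions and rules that do not require a dense sampling of every driving situation.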
End of conversation
New conversation
This Tweet is unavailable.
One day you may or may not have a choice to do otherwise.
End of conversation
@GaryMarcus Tho hesitating to comment w/ two eminences, isn't the pragmatic answer likely to lie in a combination of (i) symbolic & ML approaches as part of a single SDV system & (ii) constraining the domain - in the case of roads & driving, limiting to certain routes, pre-mapped, etc.?
Yeah. I was thinking in the same vein. I guess “amalgams” will prevail for quite some time.
End of conversation
New conversation
There already exists a neural network that has achieved L5 autonomy: the one found in human heads. Why is it impossible for this functionality to be replicated in silicon or any other artificial computation machine?
Similarly to planes: they have fixed wings and engines, not birdlike flapping wings. Humanity found a solution more compatible with our technical and engineering capabilities.
End of conversation