Good luck trying to do deep learning without convolutions, LSTMs, ReLUs, batch normalisation, etc. Good luck trying to solve Go without the prior knowledge that the problem is stationary, zero sum, and fully observable.
-
So the history of AI is not the story of the failure to incorporate human knowledge. On the contrary, it is the story of the success of doing so, achieved through an entirely conventional research strategy: try many things and discard the 99% that fail.
-
The 1% that remain are as crucial to the success of modern AI as the massive computational resources on which it also relies.
-
Sutton says that the intrinsic complexity of the world means we shouldn’t build prior knowledge into our systems. But I conclude the exact opposite: that complexity leads to crippling intractability for the search and learning approaches on which Sutton proposes to rely.
-
Only with the right prior knowledge, the right inductive biases, can we ever get a handle on that complexity.
-
He says “Modern deep-learning neural networks use only the notions of convolution and certain kinds of invariances, and perform much better”. The use of the word ‘only’ highlights the arbitrariness of the claim.
-
Deep learning wouldn’t succeed without those convolutions and invariances, but these are deemed minimal and general enough to be acceptable.
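One way to make the "invariances" claim concrete: a convolution's weight sharing builds in translation equivariance, so a pattern shifted in the input produces the same response, shifted in the output. A minimal NumPy sketch (my own illustration, not from the thread; `conv1d` is a hypothetical helper implementing valid-mode cross-correlation, the core op of a conv layer):

```python
import numpy as np

def conv1d(x, w):
    """Valid-mode 1D cross-correlation, as in a convolutional layer."""
    n = len(x) - len(w) + 1
    return np.array([np.dot(x[i:i + len(w)], w) for i in range(n)])

# A signal, and the same signal shifted right by 2 positions.
x = np.array([0., 0., 1., 2., 3., 0., 0., 0., 0.])
x_shifted = np.roll(x, 2)

w = np.array([1., -1.])  # an arbitrary filter

# Translation equivariance: convolving the shifted input gives the
# shifted output (exactly here, because the edges are zero-padded).
y = conv1d(x, w)
y_shifted = conv1d(x_shifted, w)
print(np.allclose(np.roll(y, 2), y_shifted))  # True
```

This built-in assumption — that the same local pattern means the same thing wherever it occurs — is exactly the kind of prior knowledge the thread argues is doing much of the work.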
-
In this way, “The Bitter Lesson” avoids the main question, which is not WHETHER to incorporate human knowledge (because the answer is trivially yes) but WHAT that knowledge should be and WHEN and HOW to use it.
-
Sutton says “We want AI agents that can discover like we can, not which contain what we have discovered.” Sure, but we are so good at discovering precisely because we are hardwired with the right inductive biases.
-
The Sweet Lesson of the history of AI is that, while finding the right inductive biases is hard, doing so enables massive progress on otherwise intractable problems.
-
michael_nielsen, quote-tweeting this thread:
“Great thread. You may enjoy my thread https://twitter.com/michael_nielsen/status/1106405855635791872 making a related argument.”