1/ Clickbait. The "wall" is: "Not every area has reached the limit of scaling, but in most places, we're getting to a point where we really need to think in terms of optimization, in terms of cost benefit, and we really need to look at how we get most out of the compute we have." https://twitter.com/GaryMarcus/status/1202366200811884545
2/ I'm a big believer in optimization and intelligent use of limited resources, so I'm in agreement! I really have to embrace this, given my resource-limited position. But "Hit the Wall" in context means a lot less than I think this headline conveys.
3/ As for this apparent broader lack of civility between deep learning advocates and their discontents: I see a whole lot of arguing over nothing. I think what Jerome Pesenti says in the article is a pretty normal sentiment among most researchers and practitioners:
4/ "Deep learning and current AI, if you are really honest, has a lot of limitations. We are very very far from human intelligence, and there are some criticisms that are valid: It can propagate human biases, it’s not easy to explain, it doesn't have common sense, it’s more on
5/ the level of pattern matching than robust semantic understanding. But we’re making progress in addressing some of these, and the field is still progressing pretty fast. You can apply deep learning to mathematics, to understanding proteins,
6/ there are so many things you can do with it." Yes. Exactly. It's a tool. The vast majority of us using it aren't parading it around calling it one step from AGI. We know better, especially after dealing with the realities of daily trial/error and hyperparameter nudging.
7/ Those who are overselling deep learning seem to be those who write books/articles that need views, or do PR, or are trying to sell snake oil. Yes, that's bad. But creating drama out of thin air is just a waste of time and emotion (lots of Twitter fights over this lately!).
Replying to @citnaj
I think, you know, things are maybe a little less ideal than how you've painted them. First, on hitting a wall on compute: many results have depended on a large investment in compute to be achievable (top pretrained LMs, StyleGAN, RL game bots, etc.). Unless something changes, and pic.twitter.com/CoHcIVwlUE
Replying to @sir_deenicus @citnaj
No matter how efficient the methods become, someone will always attempt something on the largest hardware they can find. It's the laziest way to get something published.
Yes, that's exactly it. We've heard the same general lament about software for decades: when the hardware glass gets bigger, the first thing developers do is fill that glass to the top. Just think of browsers, IDEs, etc. Not all of it is bad usage of memory/compute, but still.
Replying to @citnaj @sir_deenicus
But you've got Sutton's bitter lesson: http://www.incompleteideas.net/IncIdeas/BitterLesson.html . In short, don't bother optimizing, just wait for bigger hardware!!
Replying to @IntuitMachine @citnaj
That's fine! Right now, the most useful things (outside image-related tasks) require very powerful hardware for low latency. And if you want to learn from scratch, forget about it. Having something that can already be useful on smol hardware is of great value, regardless of the top end.