1/ Clickbait. The "wall" is: "Not every area has reached the limit of scaling, but in most places, we're getting to a point where we really need to think in terms of optimization, in terms of cost benefit, and we really need to look at how we get most out of the compute we have." https://twitter.com/GaryMarcus/status/1202366200811884545
I think, you know, things are maybe a little less ideal than how you've painted them. First, on hitting a wall on compute: many results have depended on a large investment in compute to be achievable (top pretrained LMs, StyleGAN, RL game bots, etc.). Unless something changes, and pic.twitter.com/CoHcIVwlUE

barring some hardware innovation, this dependence on compute will result in a stall for top-end performance. In Warstadt et al., it's interesting that their tasks did not find a meaningful separation between Transformer-XL and LSTMs. It was GPT-2, with much more training data, that pic.twitter.com/RjV5S2unJk