Recent AI progress isn't sustainable because it's driven by rapid increases in the expenditure for compute. Anticipate another three years or so of accelerating progress, then a gradual dissipation of the results of that progress into the economy over a generation. https://twitter.com/nosilverv/status/1194203923957071872
Replying to @HiFromMichaelV
> Recent AI progress isn't sustainable because it's driven by rapid increases in the expenditure for compute. Anticipate another three years or so of accelerating progress
Bullshit. It is both false that we only have another 3 years of growth in available compute for AI, and ...
Replying to @RokoMijicUK @HiFromMichaelV
... it is false that only increases in compute matter at the moment.
Replying to @RokoMijicUK @HiFromMichaelV
seems to me unsustainability of compute growth is important, see e.g. https://aiimpacts.org/interpreting-ai-compute-trends/ but more of a reason to expect a one-off downward step change in growth rate than s-curve saturation
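(A back-of-envelope sketch of why that step change might come, under assumptions of mine rather than anything from the thread: frontier training compute doubling every ~3.4 months, per the OpenAI "AI and Compute" trend the linked AI Impacts post discusses; hardware price-performance doubling every ~24 months; and a hypothetical $10M frontier run today.)

```python
# Back-of-envelope extrapolation of frontier training-run cost.
# All three constants are assumptions, not figures from the thread.
COMPUTE_DOUBLING_MONTHS = 3.4      # OpenAI "AI and Compute" trend
PRICE_PERF_DOUBLING_MONTHS = 24.0  # assumed hardware improvement rate
START_COST_USD = 10e6              # hypothetical frontier run today

for years in (1, 2, 3, 5):
    months = 12 * years
    compute_factor = 2 ** (months / COMPUTE_DOUBLING_MONTHS)
    cheapness_factor = 2 ** (months / PRICE_PERF_DOUBLING_MONTHS)
    cost = START_COST_USD * compute_factor / cheapness_factor
    print(f"{years} yr: ~${cost:,.0f}")
```

On those assumed numbers, cost reaches the billions within about three years, which fits a one-off downward step in the growth rate once budgets bind, rather than smooth s-curve saturation.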
Replying to @VesselOfSpirit @HiFromMichaelV
We have to be careful about curve extrapolation. The size and cost of experiments are increasing, but that doesn't mean that the only way to ever do more compute is to pay more $. The economy takes the easiest route to solving a problem first.
If you are currently spending $1M on hardware for an experiment, then it almost certainly *is* cheaper to just spend $10M than to develop specialized hardware (which would cost $billions). Until experiments are pushing the limits of tech-company budgets, we won't really know ...
... whether it is possible to squeeze large factors out of specialized hardware, etc.
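(A minimal sketch of that break-even, with hypothetical numbers: if an ASIC program costs dev_cost to develop and cuts cost per unit of compute by a factor cost_advantage, it only pays once the experiment's hardware budget S satisfies S - S/cost_advantage > dev_cost.)

```python
# Break-even budget for building specialized hardware vs. just buying
# more off-the-shelf compute. Numbers are hypothetical.

def breakeven_spend(dev_cost: float, cost_advantage: float) -> float:
    """Budget S at which specialized hardware pays for itself.

    Off the shelf, the experiment costs S; on the ASIC it costs
    S / cost_advantage plus the one-off dev_cost. Break-even when
    S - S / cost_advantage == dev_cost.
    """
    return dev_cost / (1 - 1 / cost_advantage)

# Hypothetical: $2B ASIC program, 10x cost-per-FLOP advantage.
print(f"~${breakeven_spend(2e9, 10):,.0f}")  # ~$2.2B per experiment
```

On those made-up numbers, break-even sits above $2B of hardware spend per experiment, so at $1M or even $10M scale, just paying more dominates.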
The same thing applies to the distribution of performance gains between hardware and theory advances. If you are only spending $1M on hardware, then it is *by far* cheaper to just spend another $9M than to make a new theory breakthrough.
Only once we have maxed out both the money option and the ASICs will we really know whether big theory advances are (im)possible. Having oodles of compute makes it easier to *quickly* test new theories. We have seen this effect in the past: low compute stunts theory.
seems to me it's harder to turn money into theory than into hardware, theory being more talent-limited