AI and Compute: Our analysis showing that the amount of compute used in the largest AI training runs has doubled every 3.5 months since 2012 (a net increase of 300,000x): https://blog.openai.com/ai-and-compute/
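A quick sanity check on those two figures, assuming the 300,000x increase accumulated between roughly 2012 and 2018 (the span covered by the blog post):

```python
import math

# A 3.5-month doubling period and a 300,000x net increase should
# imply a time span of roughly six years -- verify the arithmetic.
increase = 300_000
doubling_period_months = 3.5

doublings = math.log2(increase)                     # ~18.2 doublings
elapsed_months = doublings * doubling_period_months  # ~64 months
print(f"{doublings:.1f} doublings -> {elapsed_months / 12:.1f} years")
# -> 18.2 doublings -> 5.3 years
```

About 5.3 years of growth at that rate yields 300,000x, which is consistent with the 2012-onward window the post describes.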
Where did the 11 GF/sec number come from? IIRC, the paper says they could evaluate ~200 million positions per second and were calculating ~8,000 features per position. That makes it at least 1 TF/sec, and probably at least an order of magnitude higher.
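The back-of-envelope estimate in that reply works out as follows, assuming (loosely) one operation per feature evaluation — a rough equivalence, since Deep Blue's evaluation ran on specialized chess chips rather than a general-purpose FPU:

```python
# ~200 million positions/sec, ~8,000 features evaluated per position;
# treating each feature as roughly one operation gives a lower bound.
positions_per_sec = 200e6
features_per_position = 8_000

ops_per_sec = positions_per_sec * features_per_position
print(f"{ops_per_sec / 1e12:.1f} TF/sec (lower bound)")
# -> 1.6 TF/sec (lower bound)
```

That ~1.6 TF/sec figure is why the reply argues the 11 GF/sec number understates Deep Blue by at least two orders of magnitude.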
I checked the numbers. Deep Blue (1997) appears comparable to the 2012 systems: https://twitter.com/michael_nielsen/status/1192667888840142848
New conversation
It comes from the LINPACK benchmark in 1997: https://www.top500.org/site/48052
The quoted calculation does ignore the specialized chips used for position evaluation, though.