I agree with the original ACM article more than I agree with Rob here. There is an obvious, extremely successful counterexample to the "single-threaded performance is king" meme, and that's GPUs.
The vicious cycle of "processors optimize for popular languages" <-> "popular languages are bad at parallelism" is a real phenomenon. GPUs are evidence that if you have a model rooted in parallelism from the start, things can turn out differently. (See the sketch below.)
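A minimal CUDA sketch (not from the thread itself) of the model being described: a SAXPY kernel in which every thread independently computes one array element, with no shared memory or message passing between threads. The kernel name and sizes are illustrative.

    // Each thread owns exactly one index: no shared state, no locks.
    #include <cstdio>
    #include <cuda_runtime.h>

    __global__ void saxpy(int n, float a, const float *x, float *y) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) y[i] = a * x[i] + y[i];
    }

    int main() {
        const int n = 1 << 20;
        float *x, *y;
        // Unified memory keeps the sketch short; real code would likely
        // manage host/device copies explicitly.
        cudaMallocManaged(&x, n * sizeof(float));
        cudaMallocManaged(&y, n * sizeof(float));
        for (int i = 0; i < n; i++) { x[i] = 1.0f; y[i] = 2.0f; }

        // Launch one thread per element, 256 threads per block.
        saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, x, y);
        cudaDeviceSynchronize();

        printf("y[0] = %f\n", y[0]);  // expect 4.0
        cudaFree(x);
        cudaFree(y);
        return 0;
    }

Because no thread ever waits on another, the hardware can scale this across thousands of cores; that independence is exactly what serial-first languages make hard to express.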
Replying to @pcwalton @johnregehr
Obviously not. GPUs are bad at any problem that isn't parallel. Mining cryptocurrency requires zero shared memory (or message passing), so it runs well on GPUs. Running memcached or a webserver would be horrible on a GPU.
GPUs are bad at any problem that isn't parallel, but there are lots of parallel problems that should be done on GPUs and are not (such as the ones I work on). :)