The vicious cycle of “processors optimize for popular languages” ↔ “popular languages are bad at parallelism” is a real phenomenon. GPUs are evidence that if you start with a model rooted in parallelism, things can turn out differently.
To what extent are GPUs actually massively parallel rather than glorified SIMD?
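As context for the SIMT-vs-SIMD question, here is a minimal CUDA sketch; the saxpy kernel, array sizes, and launch parameters are illustrative and not taken from the thread. Each thread is written as ordinary scalar code, while the hardware executes warps in lockstep much like SIMD lanes, which is where both readings of "glorified SIMD" and "massive parallelism" come from.

#include <cuda_runtime.h>
#include <cstdio>

// Each thread is written as an ordinary scalar program: no vector types,
// no lane masks. The hardware groups threads into warps that execute in
// lockstep (SIMD-like), but the programming model exposes independent threads.
__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {              // per-thread branch; divergence is handled by the hardware
        y[i] = a * x[i] + y[i];
    }
}

int main() {
    const int n = 1 << 20;
    float *x, *y;
    cudaMallocManaged(&x, n * sizeof(float));
    cudaMallocManaged(&y, n * sizeof(float));
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    // Launch far more threads than there are cores; the GPU hides memory
    // latency by switching warps rather than by out-of-order execution.
    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, x, y);
    cudaDeviceSynchronize();

    printf("y[0] = %f\n", y[0]);  // expect 4.0
    cudaFree(x);
    cudaFree(y);
    return 0;
}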
New conversation
Which is why I mentioned the "Xeon Phi" GPU, where Intel found out-of-order execution beneficial for increasing single-thread performance.
You need to look at *successful* GPUs. :)
End of conversation
New conversation
GPUs are more like the ultimate expression of single-threaded perf being king.
Fine, *scalar* single-threaded performance then. :)
End of conversation
New conversation
It's almost as if they both work from different ends of a continuous problem domain, and aren't so much a reflection of a programming model as they are a reflection of different kinds of problems :)