I agree with the original ACM article more than I agree with Rob here. There is an obvious, extremely successful counterexample to the "single-threaded performance is king" meme, and that's GPUs.
To what extent are GPUs actually massive parallelism rather than glorified SIMD?
They're basically SIMD, but they usually don't do out-of-order execution (OoO) and instead eagerly context-switch to other threads to hide stalls.
New conversation
Also, for Go at least, I don't think there's really any better way to implement it than shared memory, because Go's model is shared memory.
Obviously not. GPUs are bad at any problem that isn't parallel. Mining cryptocurrency requires zero shared memory (or message passing), so it runs well on GPUs. Running memcached or a webserver would be horrible on a GPU.
GPUs are bad at any problem that isn't parallel, but there are lots of parallel problems that should be done on GPUs and are not (such as the ones I work on). :)
End of conversation
New conversation
What features would you like to see in a conventional CPU that would make it better suited to languages that can express parallelism well? Presumably said features must not slow down single-threaded code.
Bigger on-die GPUs.
End of conversation