this is my #1 pet peeve aaarrgh *the key problem with linked lists has nothing to do with cache misses!* (really.)

better predictors, and better at discarding mispredicted results without needing to flush everything, i.e. finer-grained tracking of everything in flight. The power/area/complexity cost is worth it.

Why can't we just use that area for 100x as many super-dumb cores?

Short version is that, even if you have a workload that scales to 100 cores, that's not necessarily a very power-efficient thing to do either. There are structural reasons why communication within a core is more efficient than between cores. Having lots of small cores work on disjoint data gives good power/perf, if you have that kind of workload. But if there's any potential for sharing or need to communicate, things change. Memory access (and more memory bandwidth) is *crazy* expensive in terms of power. Hence, caches. But caches are only good…
(13 more replies)
New conversation

I wonder; in my uses, unpredictable "if()" branches can be very expensive. There's a lot of "branch-free" logic used in perf-sensitive code to avoid "eating it"...
End of conversation