Thread time! Why can't they just quickly patch #meltdown or #spectre and push out another cpu? Why could it possibly take years? Why don't they use AGILE or x/y/z? Lots of reasons:
(note: my goal is not to criticize chip manufacturers - it's to defend the constraints they have)
-
-
I like the optimism, but we've got decades of compilers optimizing to hint speculative execution instead of optimizing the code itself. It'd be nowhere near as bad as the P4 since the pipeline is shorter, but you're talking about a full stall, flush, & waiting for _every_single_branch_
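(Aside, not part of the thread: a back-of-envelope sketch of what "a full stall on every branch" could cost. Every number here is an illustrative assumption, not a measurement of any real core.)

```python
# Toy model of stalling on every branch (all figures are assumptions).
BASE_CPI = 0.5          # assume ~2 instructions/cycle when not stalled
BRANCH_FRACTION = 0.2   # assume roughly 1 in 5 instructions is a branch
STALL_CYCLES = 15       # assume a stall + flush + refill costs ~15 cycles

# Average cycles per instruction once every branch eats the full penalty:
stalled_cpi = BASE_CPI + BRANCH_FRACTION * STALL_CYCLES
slowdown = stalled_cpi / BASE_CPI
print(f"effective CPI: {stalled_cpi:.1f}, slowdown: {slowdown:.0f}x")
```

Even with generous assumptions, paying the flush penalty on every branch dominates the baseline cost by a large multiple.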
-
Execution stall, yes, but decode/pipeline-fill can still happen speculatively. No idea if the chip has a way to disable execution w/o disabling decode/fill tho.
-
Wouldn’t an execution stall with hyperthreading still hide a new TLB miss? After all, speculative execution isn’t much help if you’re stalled anyway. So increasing the miss rate may not affect performance that dramatically in that case.
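(Aside: a toy utilization model of the SMT point above. The cycle counts are illustrative assumptions, not measurements.)

```python
# Toy SMT model: two hardware threads share one core; while one
# thread is stalled on a miss, the sibling can issue instructions,
# hiding some of the stall. All numbers are assumptions.
MISS_STALL = 100   # assume a TLB-miss page walk stalls ~100 cycles
WORK = 1000        # cycles of useful work per thread per window
MISSES = 2         # misses per window

stall = MISSES * MISS_STALL                 # cycles exposed to one thread
util_1t = WORK / (WORK + stall)             # single thread: stalls hurt
util_2t = min(1.0, 2 * WORK / (WORK + stall))  # sibling fills stall cycles
print(f"1 thread: {util_1t:.0%} busy, 2 threads: {util_2t:.0%} busy")
```

In this sketch the second thread soaks up the stall cycles entirely, which is the "the miss is hidden anyway" argument in miniature.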
-
Whereas if I wrote “perfect” code with no stalls, the instruction fetch would be dramatically more noticeable, and this is what scares me. Network hardware, system level processes, etc... are usually optimized this way and they are the most critical.
-
These processes slowing down by even 10% is an internet level outage in the making.
-
The latency would propagate exponentially throughout the internet going up the levels of abstraction. Higher dns latency, slower tls/crypto, drop in throughput, then the latency of high-level services would increase by an order of magnitude, then the applications depending on them
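(Aside: a toy model of the compounding claim above; note the next reply disputes that layers actually stack this way. The layer list and the uniform 10% hit are assumptions for illustration.)

```python
# Toy model: assume each layer of the stack takes its own 10% latency
# hit AND inherits the slowdown of the layers beneath it.
layers = ["dns", "tls", "transport", "service", "application"]
slowdown = 1.0
for layer in layers:
    slowdown *= 1.10   # assumed uniform 10% per-layer hit
print(f"end-to-end slowdown after {len(layers)} layers: {slowdown:.2f}x")
```

Whether real stacks compound like this, rather than absorbing latency in queues and timeouts, is exactly what the next reply pushes back on.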
-
This is blown way out of proportion. Many of those levels aren't remotely cpu-bound, and if one increases latency, it _lightens_ load on ones that depend on it rather than increasing load.
-
Networking hardware is certainly cpu bound in many cases. Even then it’s not just cpu load, it’s latency. If it takes me 50 microseconds to execute a branch when it used to be sub-microsecond, then that adds to the overall latency of the system.
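(Aside: to put the 50-microsecond figure from the tweet in context, here's the per-packet time budget at line rate. The link speed and frame size are illustrative assumptions.)

```python
# Per-packet time budget at line rate (illustrative numbers).
LINK_BPS = 10e9     # assume a 10 Gbps link
PKT_BYTES = 1500    # assume full-size Ethernet frames

budget_us = PKT_BYTES * 8 / LINK_BPS * 1e6   # microseconds per packet
branch_us = 50.0    # the stalled-branch cost cited in the tweet
print(f"budget: {budget_us:.1f} us/packet; "
      f"one such branch is {branch_us / budget_us:.0f}x the budget")
```

A single 50 µs stall blowing through dozens of per-packet budgets is why forwarding-path code is so sensitive to this.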
New conversation -
It’s the former that concerns me, because those are the latency sensitive applications and they’ll be impacted the most.
-
There are two scenarios... one where I spent a year architecting the perfect service that maximizes pipelining, and then most scenarios, which look like a Java web service. The latter never performed well (comparatively speaking), so a 30% hit to a pipeline that was never optimized makes sense.
-