After Meltdown, kernel calls are REALLY slow. I had the HPET timer accidentally active on my Threadripper (which made every QPC a kernel call). Profiling tools such as iTune practically froze the computer (so many timer calls), and a UE4 editor build (internal timings) got 3x slower too.
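A minimal sketch of measuring the per-call cost of QueryPerformanceCounter (the loop count and output format are arbitrary choices, not from the thread): with a TSC-backed QPC each call is typically tens of nanoseconds, while an HPET-backed QPC is a kernel transition on every call, which the Meltdown mitigations make far more expensive.

    #include <windows.h>
    #include <cstdio>

    int main() {
        LARGE_INTEGER freq, start, end, dummy;
        QueryPerformanceFrequency(&freq);

        const int kCalls = 1000000;  // arbitrary sample size
        QueryPerformanceCounter(&start);
        for (int i = 0; i < kCalls; ++i)
            QueryPerformanceCounter(&dummy);  // the call being measured
        QueryPerformanceCounter(&end);

        double seconds = double(end.QuadPart - start.QuadPart) / double(freq.QuadPart);
        // Tens of ns per call with the invariant TSC; orders of
        // magnitude more when QPC falls back to HPET (a kernel call).
        printf("%.1f ns per QPC call\n", seconds * 1e9 / kCalls);
        return 0;
    }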
@BruceDawson0xB What would happen if the default Windows thread timeslice were reduced to sub-millisecond? Is the status quo just legacy from the slow-CPU days, or would this significantly reduce efficiency?
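As far as I know there is no public API to set the quantum itself (the Win32PrioritySeparation registry value is the nearest coarse knob), but user code can raise the global timer interrupt rate, a related though distinct setting. A minimal sketch, assuming the winmm multimedia timer API:

    #include <windows.h>
    #include <timeapi.h>
    #pragma comment(lib, "winmm.lib")

    int main() {
        // Request ~1 ms timer interrupts; historically this is a
        // system-wide setting and it affects Sleep()/timer granularity,
        // not the scheduler quantum directly.
        timeBeginPeriod(1);

        Sleep(1);  // now wakes after ~1 ms instead of ~15.6 ms

        timeEndPeriod(1);  // always pair with timeBeginPeriod
        return 0;
    }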
-
Meant VTune obviously, not iTune :D
- 2 more replies
New conversation -
IIRC, servers use a much coarser quantum than home machines to improve efficiency. Whether that's down to the actual kernel overhead, or to the cost of paging apps' working sets in and out of the cache, is a good question; it would need benchmarking.
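For anyone who wants to check their own machine: the workstation-vs-server quantum policy is controlled by the Win32PrioritySeparation registry value. A minimal sketch that reads the raw DWORD (decoding the individual bit fields is omitted):

    #include <windows.h>
    #include <cstdio>
    #pragma comment(lib, "advapi32.lib")

    int main() {
        DWORD value = 0, size = sizeof(value);
        // The low bits of this DWORD select long vs short and fixed vs
        // variable quanta; server SKUs default to long, fixed quanta.
        LSTATUS s = RegGetValueW(
            HKEY_LOCAL_MACHINE,
            L"SYSTEM\\CurrentControlSet\\Control\\PriorityControl",
            L"Win32PrioritySeparation",
            RRF_RT_REG_DWORD, nullptr, &value, &size);
        if (s == ERROR_SUCCESS)
            printf("Win32PrioritySeparation = 0x%lx\n", value);
        return 0;
    }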
-
I do remember Oculus had the IMU poke the CPU every time it had data, and that was a significant perf hit. I don't remember what the rate was, but it was dropped to once every millisecond (batching up the samples), and that was a reasonable tradeoff.
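The batching tradeoff described there might look something like the hypothetical sketch below (ImuSample, the field names, and the once-per-millisecond flush are invented for illustration, not Oculus code): the producer just appends, and the consumer wakes once per millisecond to take the whole batch instead of being poked per sample.

    #include <chrono>
    #include <thread>
    #include <vector>
    #include <mutex>

    struct ImuSample { float gx, gy, gz, ax, ay, az; };  // hypothetical

    std::mutex mtx;
    std::vector<ImuSample> pending;  // samples since last flush

    // Device side: cheap append, no consumer wakeup per sample.
    void on_imu_sample(const ImuSample& s) {
        std::lock_guard<std::mutex> lock(mtx);
        pending.push_back(s);
    }

    // Consumer side: one wakeup per millisecond, whole batch at once.
    void consumer_loop() {
        for (;;) {
            std::this_thread::sleep_for(std::chrono::milliseconds(1));
            std::vector<ImuSample> batch;
            {
                std::lock_guard<std::mutex> lock(mtx);
                batch.swap(pending);
            }
            // process batch here (hypothetical downstream handler)
        }
    }

    int main() {
        std::thread consumer(consumer_loop);
        // a producer would call on_imu_sample() at the device's native rate
        consumer.join();
    }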
- 8 more replies
New conversation -
Frustratingly, that is an increasingly valid answer.
-
Imagine if we get to the point of asymmetric cores just for spinlocks. lol.
- 1 more reply
New conversation -
I think it would reduce efficiency a lot. Frequently firing interrupts cost cycles, switching between threads reduces cache efficiency, and switching between processes is even worse. What gains would you expect?
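That thread-switch cost can be ballparked with a classic ping-pong microbenchmark (the event names and iteration count here are arbitrary): two threads alternately signal each other, so each round trip forces two context switches plus the cache disruption mentioned above.

    #include <windows.h>
    #include <cstdio>

    HANDLE ping, pong;
    const int kRounds = 100000;  // arbitrary

    DWORD WINAPI partner(LPVOID) {
        for (int i = 0; i < kRounds; ++i) {
            WaitForSingleObject(ping, INFINITE);
            SetEvent(pong);
        }
        return 0;
    }

    int main() {
        ping = CreateEvent(nullptr, FALSE, FALSE, nullptr);  // auto-reset
        pong = CreateEvent(nullptr, FALSE, FALSE, nullptr);
        HANDLE t = CreateThread(nullptr, 0, partner, nullptr, 0, nullptr);

        LARGE_INTEGER f, s, e;
        QueryPerformanceFrequency(&f);
        QueryPerformanceCounter(&s);
        for (int i = 0; i < kRounds; ++i) {
            SetEvent(ping);
            WaitForSingleObject(pong, INFINITE);
        }
        QueryPerformanceCounter(&e);
        WaitForSingleObject(t, INFINITE);

        double us = 1e6 * double(e.QuadPart - s.QuadPart) / double(f.QuadPart);
        // Each round trip is two context switches (plus cache effects).
        printf("%.2f us per round trip\n", us / kRounds);
        return 0;
    }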
-
I think input latency (which is often shockingly bad even on modern devices: https://danluu.com/input-lag/)?
- 1 more reply
New conversation -
An ex-colleague set his quantum to 1 ms and claimed it made things more responsive, but it also occasionally caused a bluescreen...