Most exciting bullet point: "- Eliminate all UEFI/ME post-boot activity." https://twitter.com/qrs/status/924704712896712704
Yeah, that's basically it: dynamically adjusting delays because the timing margins are very small. Note that DRAM training on x86 is...
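A minimal sketch of what "training" means here, assuming a hypothetical delay-tap hook and a scratch buffer we can safely write test patterns to; real controllers repeat this per byte lane, per rank, for reads and writes, through vendor-specific registers:

```c
#include <stdint.h>
#include <stdbool.h>

#define DELAY_STEPS 64  /* hypothetical number of delay-line taps */

extern void set_dq_delay(unsigned tap);           /* assumed HW hook */
extern bool pattern_test(volatile uint32_t *buf); /* write/readback test */

/* Sweep every delay tap, record which ones pass, and park the sample
 * point at the centre of the widest passing window, i.e. the middle
 * of the data eye, to maximise timing margin. */
unsigned train_dq_delay(volatile uint32_t *scratch)
{
    unsigned best_start = 0, best_len = 0;
    unsigned run_start = 0, run_len = 0;

    for (unsigned tap = 0; tap < DELAY_STEPS; tap++) {
        set_dq_delay(tap);
        if (pattern_test(scratch)) {
            if (run_len == 0)
                run_start = tap;
            run_len++;
            if (run_len > best_len) {
                best_len = run_len;
                best_start = run_start;
            }
        } else {
            run_len = 0;
        }
    }

    unsigned centre = best_start + best_len / 2;
    set_dq_delay(centre);
    return centre;
}
```

The sweep exists because the passing window drifts with voltage, temperature, and board routing, so the firmware has to find it at boot rather than hard-code it.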
Is there a reason you can't just set static delays a safe margin above any measurement you'd get, rather than training?
That indeed used to be possible, and I've done it for DDR1. But we eventually made trade-offs that make this too difficult now.
e.g. it turns out that we can get better signal integrity with "fly-by" signal routing vs. a tree. But this means that...
the data from chip 0 on a DIMM comes out before the data from chip N. Now you need to align the data, and this depends on propagation delays.
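In other words, a single global delay no longer works: each byte lane sees its own skew depending on where its chip sits along the fly-by trace. A hedged sketch, reusing a hypothetical per-lane window sweep like the one above:

```c
#include <stdint.h>

#define NUM_LANES 8  /* one byte lane per DRAM chip on a plain DIMM */

extern unsigned train_lane_delay(unsigned lane); /* per-lane window sweep */

/* Per-lane read leveling: chip 0 sits first on the fly-by trace and
 * chip N last, so each lane's data arrives with a different skew and
 * gets its own delay tap. The trained taps grow roughly with the
 * chip's position, tracking the trace's propagation delay. */
void read_level(unsigned taps[NUM_LANES])
{
    for (unsigned lane = 0; lane < NUM_LANES; lane++)
        taps[lane] = train_lane_delay(lane);
}
```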
Is there no way to just turn down the DRAM clock to the point where the propagation delay is negligible?
I'd be happy with circa-2000 DRAM performance if it meant you could get by with near-zero per-chipset/SoC-revision firmware logic.
So e.g. J-core doesn't need to do any DRAM training, but Intel isn't going to bother with this. Hashtag economics.
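For contrast, a sketch of the training-free path being argued for: clock the interface slowly enough that worst-case skew is a small fraction of the bit period, then program fixed, datasheet-derived timings once. Register names and values here are hypothetical illustrations, not J-core's actual interface:

```c
#include <stdint.h>

#define DDRC_BASE 0xF0000000u  /* hypothetical controller base address */
#define REG(off)  (*(volatile uint32_t *)(DDRC_BASE + (off)))

/* Static DRAM init, no training: at a low clock, conservative fixed
 * timings always leave margin over any propagation delay, so the same
 * values work on every board revision. */
void dram_init_static(void)
{
    REG(0x00) = 0x00000001;  /* enable controller, low clock divider */
    REG(0x04) = 0x00000322;  /* fixed CAS/RCD/RP from the datasheet  */
    REG(0x08) = 0x00000000;  /* all delay taps at zero: no leveling  */
    /* No sweep, no pattern test: this is the per-SoC firmware logic
     * going to near zero, at the cost of bandwidth.                  */
}
```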