It's interesting how the same hardware gets such different performance on different operating systems when running the same benchmark: https://www.phoronix.com/scan.php?page=article&item=2990wx-linux-windows&num=2
I don't like having so many variables in these benchmarks, though: three different compiler versions, two kernel versions, three performance governors, and who knows what else. And there's not even a comparison with a different CPU.
It is useful for some people to know what you get out of the box, much like comparing the color reproduction of an uncalibrated display. But an article detailing the compiler versions, compiler flags, and kernel configuration needed for the best results would be very interesting!
I'd like to see the sample size and a histogram here; it's hard to tell what's happening without knowing the statistical method and the actual distribution of the data.
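To illustrate the point, here is a minimal sketch of the kind of breakdown that would help. The per-run timings below are made up for illustration; real numbers would come from the article's raw results:

```python
import statistics

# Hypothetical per-run wall-clock times (seconds) for one benchmark.
runs = [41.2, 40.8, 41.5, 55.3, 41.0, 41.3, 40.9, 41.1, 54.8, 41.4]

mean = statistics.mean(runs)
median = statistics.median(runs)
stdev = statistics.stdev(runs)
print(f"n={len(runs)} mean={mean:.2f}s median={median:.2f}s stdev={stdev:.2f}s")

# Crude text histogram: a bimodal shape (e.g. a few runs hit an unlucky
# scheduling decision) is invisible in a single averaged bar chart.
lo, hi = min(runs), max(runs)
bins = 5
width = (hi - lo) / bins
for i in range(bins):
    a = lo + i * width
    b = a + width
    count = sum(1 for r in runs if a <= r < b or (i == bins - 1 and r == hi))
    print(f"{a:5.1f}-{b:5.1f}s | {'#' * count}")
```

With this fake data the mean (~43.9 s) sits nowhere near the typical run (~41.2 s), which is exactly the kind of thing a single summary number hides.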
NUMA. The kernel scheduler plays a huge role in performance.
(I wouldn't want to benchmark a NUMA machine without a team of engineers given several months to tune every part of the stack, with similar tuning work for the control machine.)
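One of the simplest controls against NUMA scheduling effects is pinning the benchmark to CPUs on a single node. A minimal sketch using Linux CPU affinity; the CPU set here is an assumption, and a real run would take the node's actual CPU list from `lscpu` or `numactl --hardware`:

```python
import os

# Restrict this process (and its children) to a hypothetical set of CPUs,
# so the scheduler cannot migrate the benchmark across NUMA nodes mid-run.
# Linux-only API; full NUMA control would also need memory binding
# (e.g. via numactl --membind), which this sketch does not cover.
node0_cpus = {0}  # assumption: CPU 0 belongs to node 0
os.sched_setaffinity(0, node0_cpus)
print("running on CPUs:", sorted(os.sched_getaffinity(0)))
```

This only removes one variable; memory placement, IRQ routing, and the governor still need their own tuning.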
At least this isn't comparing via the Debian language shootout (now the Computer Language Benchmarks Game), which suffers from plenty of damning fallacies of its own, chiefly: 1. non-determinism from inconsistent JVM configurations and versions, and 2. reliance on FFI instead of native feature implementations in the base language.