So it's clear they're not "10x better perf per watt", a claim I've seen thrown around over the past few days. But they're good. They are now competing with AMD; as for Intel, well, haha, good luck. (They totally had it coming.)
But the question is how, and why. Presumably their bus system is tighter than typical x86 designs? I'm looking forward to a deeper dive, and to seeing whether AMD/Intel care to improve this in the future.
Also, remember that Apple cheated with their control over the CPU for Rosetta 2. Getting Rosetta 2 levels of x86 performance on any other ARM chip is impossible, due to the memory model mismatch: x86 guarantees strong (TSO) ordering, while ARM's memory model is much weaker, so you have to massively slow down all loads and stores to compensate.
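To make that cost concrete, here is a rough sketch in C (not anything Rosetta 2 actually emits; the helper names are made up for illustration) of what an x86-to-ARM translator has to do for every guest load and store when the hardware has no x86-style TSO mode: plain accesses are no longer enough, and each one becomes an acquire load or release store (ldar/stlr on ARMv8) or needs explicit barriers.

```c
/*
 * Hypothetical sketch -- not Rosetta 2's actual code generation.
 * Shows what an x86-to-ARM binary translator has to emit for every
 * guest memory access when the hardware has no x86-style TSO mode.
 *
 * x86 MOV loads behave like acquire loads and MOV stores like release
 * stores. On a weakly ordered ARM core, preserving that means using
 * acquire/release accesses (typically ldar/stlr) or explicit barriers
 * instead of plain ldr/str -- that is the "slow down all loads and
 * stores" cost.
 */
#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>

/* With a hardware TSO mode, a plain load/store keeps x86 semantics. */
static inline uint64_t guest_load_tso(_Atomic uint64_t *a)
{
    return atomic_load_explicit(a, memory_order_relaxed);   /* plain ldr */
}
static inline void guest_store_tso(_Atomic uint64_t *a, uint64_t v)
{
    atomic_store_explicit(a, v, memory_order_relaxed);      /* plain str */
}

/* Without hardware TSO, every guest access must be strengthened. */
static inline uint64_t guest_load_weak(_Atomic uint64_t *a)
{
    return atomic_load_explicit(a, memory_order_acquire);   /* ldar/ldapr */
}
static inline void guest_store_weak(_Atomic uint64_t *a, uint64_t v)
{
    atomic_store_explicit(a, v, memory_order_release);      /* stlr */
}

int main(void)
{
    _Atomic uint64_t cell = 0;
    guest_store_tso(&cell, 1);
    guest_store_weak(&cell, guest_load_tso(&cell) + 41);
    printf("cell = %llu\n", (unsigned long long)guest_load_weak(&cell));
    return 0;
}
```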
So Apple straight up implemented the x86 consistency model on their cores. That's the kind of high-impact detail that makes or breaks emulation performance when running code from a different arch. Did they do this for any other x86-isms? Nobody knows so far.
End of conversation
New conversation
Would that come from the ARM memory model — which I assume they keep for non-Rosetta software?
That could be part of it, e.g. see https://llvm.org/devmtg/2014-04/PDFs/Talks/Reinoud-report.pdf for some discussion. But I don't know the full details off the top of my head. Yes, the memory model is configurable per process.
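For anyone wondering what the two memory models actually allow, here is a small message-passing litmus test in C. It is only an illustration (not tied to any Apple or Rosetta API) and assumes the compiler emits these relaxed accesses in program order, which it typically does for this pattern. Under x86's TSO, the reader can never observe the flag set while the data is still stale; under ARM's weaker model, plain loads and stores can be reordered, so that outcome is allowed.

```c
/*
 * Message-passing litmus test -- an illustration of the memory model gap.
 *
 * Hardware behaviour with plain (relaxed) accesses:
 *   - x86 (TSO): if the reader sees flag == 1, it must also see data == 1.
 *   - weakly ordered ARM: stores/loads may be reordered, so the reader
 *     can see flag == 1 while data is still 0.
 * The reordering is rare, so it may take many runs to show up, or never
 * appear on a given machine.
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

static _Atomic int data;
static _Atomic int flag;

static void *writer(void *arg)
{
    (void)arg;
    atomic_store_explicit(&data, 1, memory_order_relaxed); /* plain store */
    atomic_store_explicit(&flag, 1, memory_order_relaxed); /* plain store */
    return NULL;
}

static void *reader(void *arg)
{
    int *saw_stale = arg;
    while (atomic_load_explicit(&flag, memory_order_relaxed) == 0)
        ;                                                   /* spin on flag */
    if (atomic_load_explicit(&data, memory_order_relaxed) == 0)
        *saw_stale = 1;  /* impossible under x86-TSO, allowed on weak ARM */
    return NULL;
}

int main(void)
{
    int stale = 0;
    for (int i = 0; i < 20000 && !stale; i++) {
        atomic_store_explicit(&data, 0, memory_order_relaxed);
        atomic_store_explicit(&flag, 0, memory_order_relaxed);

        pthread_t r, w;
        pthread_create(&r, NULL, reader, &stale);
        pthread_create(&w, NULL, writer, NULL);
        pthread_join(w, NULL);
        pthread_join(r, NULL);
    }
    printf("saw flag==1 with stale data: %s\n", stale ? "yes" : "no");
    return 0;
}
```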
End of conversation