It's no wonder the M1 Macs are beating the pants off the previous Intel offerings there. But Intel has been *sucking badly* for years, and there are a pile of improvements other than the CPU.
As for Rosetta 2, it's good, but I'm still *really* curious how it'll do in the audio domain. We're talking lots of floating point processing with some integer mixed in, written by lots of different teams, some scalar, some vector, *definitely* a lot of it not well optimized.
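For a sense of what that translated code looks like, here's a minimal sketch (not from any real plugin) of a typical scalar gain/mix inner loop. In real x86 plugins the same loop is often hand-written with SSE intrinsics, which Rosetta 2 has to map onto NEON:

```cpp
#include <cstddef>

// Minimal scalar audio mix: out[i] += in[i] * gain. x86 plugins often
// write this with SSE intrinsics (_mm_mul_ps etc.); under Rosetta 2
// those 128-bit vector ops must be translated to NEON equivalents,
// and poorly optimized scalar versions get translated as-is.
void mix_into(float* out, const float* in, float gain, std::size_t n) {
    for (std::size_t i = 0; i < n; ++i) {
        out[i] += in[i] * gain;
    }
}
```

The same few lines show up thousands of times across a session's plugin chain, which is why per-instruction translation quality matters so much here.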
And with hard realtime constraints - if the JIT fires off anything substantial in the audio processing thread, you *will* get a dropout - and even if it's not substantial, you'll probably get a pile of priority inversion hazards that will cause inconsistent dropouts.
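Not from the thread, but for context: the standard discipline in realtime audio is to never lock or allocate in the processing callback, passing data through wait-free structures instead, and translated code inherits exactly the same constraint. A minimal single-producer/single-consumer ring buffer sketch of the kind used to avoid those priority inversions:

```cpp
#include <array>
#include <atomic>
#include <cstddef>

// Wait-free single-producer/single-consumer ring buffer: the audio
// thread can push/pop without ever taking a lock, so it can't be
// blocked by (or priority-invert against) a lower-priority thread.
// Usable capacity is N-1 (one slot distinguishes full from empty).
template <typename T, std::size_t N>
class SpscRing {
    std::array<T, N> buf_{};
    std::atomic<std::size_t> head_{0};  // advanced by consumer
    std::atomic<std::size_t> tail_{0};  // advanced by producer
public:
    bool push(const T& v) {
        std::size_t t = tail_.load(std::memory_order_relaxed);
        std::size_t next = (t + 1) % N;
        if (next == head_.load(std::memory_order_acquire)) return false;  // full
        buf_[t] = v;
        tail_.store(next, std::memory_order_release);
        return true;
    }
    bool pop(T& out) {
        std::size_t h = head_.load(std::memory_order_relaxed);
        if (h == tail_.load(std::memory_order_acquire)) return false;  // empty
        out = buf_[h];
        head_.store((h + 1) % N, std::memory_order_release);
        return true;
    }
};
```

If the JIT or the runtime ever takes a lock inside the callback instead, the hazards described above come straight back.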
So it looks like for day-to-day stuff Mac users can probably be confident that they won't lose much vs. their older Intel Mac under Rosetta 2, and gain in many instances. But I wouldn't put my money on M1+R2 for all workloads yet.
It'll be interesting to see these performance details worked out in more detail; e.g. people have talked about M1 being way faster at ObjC object management, so presumably it has *way* faster atomics. That matters a lot for some kinds of software, and not at all for others.
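To make that concrete: ObjC retain/release is, at bottom, an atomic reference count, so the latency of contended fetch_add/fetch_sub directly gates object-heavy code. A rough C++ stand-in for the pattern (not Apple's actual runtime):

```cpp
#include <atomic>
#include <thread>
#include <vector>

// Stand-in for the hot path of ObjC-style retain/release: every
// retain is an atomic increment, every release an atomic decrement.
// On a hot object, the core's atomic throughput dominates.
struct RefCounted {
    std::atomic<long> refs{1};
    void retain() { refs.fetch_add(1, std::memory_order_relaxed); }
    // Returns true when the last reference is dropped.
    bool release() {
        return refs.fetch_sub(1, std::memory_order_acq_rel) == 1;
    }
};

// Hammer one object's refcount from several threads, as a chatty
// object graph would; the net count must come out exact.
long hammer(RefCounted& obj, int threads, int iters) {
    std::vector<std::thread> ts;
    for (int t = 0; t < threads; ++t)
        ts.emplace_back([&] {
            for (int i = 0; i < iters; ++i) {
                obj.retain();
                obj.release();
            }
        });
    for (auto& th : ts) th.join();
    return obj.refs.load();
}
```

A core that executes those atomic read-modify-writes cheaply makes this kind of software fly; one that doesn't makes it crawl, while pure number-crunching code never notices either way.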
But the question is how, and why - presumably their bus system is tighter than typical x86 ones? I'm looking forward to a deeper dive, and whether AMD/Intel care to improve this in the future.
Also, remember that Apple cheated with their control over the CPU for Rosetta 2. Getting R2 x86 performance on any other ARM is impossible, due to the memory model mismatch. You have to massively slow down all loads and stores.
So Apple straight up implemented the x86 consistency model on their cores. That's the kind of high-impact detail that makes or breaks emulation performance for a different arch. Did they do this for any other x86-isms? Nobody knows so far.
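To make the mismatch concrete: under x86's TSO model, every plain store behaves like a release and every plain load like an acquire, so x86 binaries pass messages between threads with ordinary movs and no barriers. A translator targeting a weakly ordered ARM core has to reconstruct that ordering itself, roughly the equivalent of this sketch, on every load and store it can't prove safe (unless, as here, the hardware can just run in TSO mode):

```cpp
#include <atomic>
#include <thread>

// On x86, the plain store/load pair below needs no explicit barriers:
// TSO already provides the release/acquire ordering. A faithful
// translation to weakly ordered ARM must emit it explicitly (ldar/stlr
// or dmb), and doing that everywhere is the "massively slow down all
// loads and stores" cost.
int message_pass() {
    int data = 0;
    int seen = 0;
    std::atomic<bool> ready{false};

    std::thread producer([&] {
        data = 42;                                     // plain store...
        ready.store(true, std::memory_order_release);  // ...then publish
    });
    std::thread consumer([&] {
        while (!ready.load(std::memory_order_acquire)) {}  // spin on flag
        seen = data;  // acquire pairs with release: 42 is guaranteed visible
    });
    producer.join();
    consumer.join();
    return seen;
}
```

Flipping the hardware into x86-like ordering makes the barriers unnecessary, which is exactly the trick no other ARM vendor's translator can rely on.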
Replying to @marcan42
Is there some more information about this specifically, or something I could read (Wikipedia or similar), to get a rough idea of what they had to do to make this x86 consistency model work on their M1 chip? (I'm a nerd, but not a genius at hardware-specific stuff like you.)
Replying to @purpleidea
https://www.nickwilcox.com/blog/arm_vs_x86_memory_model/ found this, which might be interesting, but I hope to hear something from someone with actual knowledge of this stuff
It's just a bit: you flip it, and the memory model becomes x86-like under the hood. There isn't much more to it. Presumably it did require quite a few changes to their reordering/cache/bus subsystems to make it happen. https://github.com/saagarjha/TSOEnabler