One outcome of my taking on kernel work might be gaining enough experience to redo it right. :-)
And the pre-C11 [lack of] memory model in C was sufficient for specifying non-concurrent behavior fully.
-
So IMO from a practical standpoint C11 significantly weakened the memory model, rather than adding one.
-
From a practical standpoint, they've had major issues with their own informal model not matching how GCC works.
-
They = Linux kernel devs? I don't doubt it one bit. Their informal model is "C is a high-level assembler". :-(
-
Yeah, they make a lot of assumptions about the memory model; often incorrect even at the hardware level outside x86.
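A minimal sketch of that point (names and values invented for the example, not code from the thread): the classic message-passing pattern tends to "just work" with plain loads and stores on x86, but on ARM or POWER the reader can observe the flag before the payload unless the ordering is requested explicitly, which is what the C11 release/acquire annotations do.

    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdio.h>

    int data;                 /* payload */
    atomic_int ready = 0;     /* publication flag */

    static void *producer(void *arg)
    {
        (void)arg;
        data = 42;
        /* release: orders the payload store before the flag store */
        atomic_store_explicit(&ready, 1, memory_order_release);
        return NULL;
    }

    static void *consumer(void *arg)
    {
        (void)arg;
        /* acquire: once ready == 1 is observed, the payload store is visible */
        while (!atomic_load_explicit(&ready, memory_order_acquire))
            ;
        printf("data = %d\n", data);
        return NULL;
    }

    int main(void)
    {
        pthread_t p, c;
        pthread_create(&c, NULL, consumer, NULL);
        pthread_create(&p, NULL, producer, NULL);
        pthread_join(p, NULL);
        pthread_join(c, NULL);
        return 0;
    }

With plain accesses in place of the atomics, this is a data race under C11 even though it rarely misbehaves on x86; with relaxed atomics it is race-free but the ordering guarantee is gone.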
-
Linus puts the blame on the compilers and weak memory models. They used their own loose version of C11 rules.
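For context, the kernel's own rules are expressed in terms of idioms like READ_ONCE()/WRITE_ONCE(), which at their core are volatile accesses. The sketch below is a simplified approximation, not the actual kernel macros (which handle more sizes and checks), with the rough C11 spelling noted in comments; typeof is a GCC extension, as in the kernel.

    #include <stdio.h>

    /* Simplified approximation of the kernel idiom: volatile casts that
     * tell the compiler not to tear, fuse, or invent these accesses.
     * In C11 terms this is roughly
     *   atomic_load_explicit(&x, memory_order_relaxed)
     *   atomic_store_explicit(&x, v, memory_order_relaxed)           */
    #define READ_ONCE(x)      (*(const volatile typeof(x) *)&(x))
    #define WRITE_ONCE(x, v)  (*(volatile typeof(x) *)&(x) = (v))

    int shared;

    int main(void)
    {
        WRITE_ONCE(shared, 1);
        printf("%d\n", READ_ONCE(shared));   /* single-threaded demo only */
        return 0;
    }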
-
But... without carefully considering how it lined up with the reality of compilers and other architectures.
-
And having many concurrent data structures / algorithms that even core developers don't understand seems a bit bad.
-
I consider the whole "concurrent data structures" game a mistake. Locks are cheap and safe. RCU is awful.
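To illustrate the "locks are cheap and safe" position, a minimal sketch (names invented for the example): a shared counter protected by a POSIX mutex, where correctness is obvious at a glance, which is the property the fancier lock-free schemes give up.

    #include <pthread.h>

    static pthread_mutex_t counter_lock = PTHREAD_MUTEX_INITIALIZER;
    static long counter;

    void counter_inc(void)
    {
        pthread_mutex_lock(&counter_lock);
        counter++;                    /* the entire critical section */
        pthread_mutex_unlock(&counter_lock);
    }

    long counter_read(void)
    {
        pthread_mutex_lock(&counter_lock);
        long v = counter;
        pthread_mutex_unlock(&counter_lock);
        return v;
    }

On an uncontended fast path, the lock/unlock pair typically costs a couple of atomic operations, which is the sense in which locks are cheap.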
End of conversation