If you break the build for them by shipping with -Werror, the likely outcome is that they start making random changes to the code the compiler warns about until the warning goes away, possibly BREAKING THE CODE IN DANGEROUS WAYS.
Summary: -Werror is only meaningful with a known compiler version and build target, and only to developers who can meaningfully act on the failures. Don't ship with -Werror. Ever.
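One common compromise, sketched here as a hypothetical GNU Make fragment (variable names are mine): make -Werror opt-in for developers and CI, where the compiler version is pinned, and never enable it in the build users run.

```make
# Hypothetical fragment: -Werror is opt-in, never shipped.
# Developers and CI (with a known compiler) build with WERROR=1;
# everyone else sees warnings but the build does not fail on them.
CFLAGS := -O2 -Wall -Wextra

ifeq ($(WERROR),1)
CFLAGS += -Werror
endif

all: prog

prog: prog.c
	$(CC) $(CFLAGS) -o $@ $<
```

This keeps the "warnings are bugs" discipline where someone can actually act on a new warning, without a future compiler's new diagnostics breaking builds for users.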
Replying to
Since the Linux kernel doesn't follow the C memory model and disregards undefined behavior rules they don't agree with, it's fairly dangerous to use a newer compiler than what they're broadly using and testing themselves. Ideally, they'd actually list what's being tested / used.
Replying to
For Linux it's specific bare-metal targets so -Werror is less evil than in general, but still a bad idea. Linux uses -fno-strict-aliasing etc. so it's not subject to most C memory model issues a new compiler could break.
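For context on what -fno-strict-aliasing buys: under the standard's aliasing rules, dereferencing a pointer of an incompatible type is undefined behavior, and an optimizer may assume such accesses never alias. A minimal sketch (function names are mine, not the kernel's):

```c
#include <stdint.h>
#include <string.h>

/* Type-punning through an incompatible pointer cast: UB under the C
 * strict-aliasing rules, so an optimizer may reorder or cache the
 * accesses. Building with -fno-strict-aliasing (as Linux does) tells
 * the compiler to treat such casts conservatively instead. */
static uint32_t bits_of_float_punned(float x)
{
    return *(uint32_t *)&x;  /* relies on -fno-strict-aliasing */
}

/* The well-defined alternative: memcpy, which compilers fold away. */
static uint32_t bits_of_float_memcpy(float x)
{
    uint32_t u;
    memcpy(&u, &x, sizeof u);
    return u;
}
```

The punned version usually "works", which is exactly the trap: a newer compiler with smarter alias analysis can silently break it unless the flag disables that analysis.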
Replying to
I mean that they don't follow the C11 memory model for atomics and make extensive use of atomics. The compiler developers don't agree with their homegrown rules and don't respect them. There's a whole lot of complicated lock-free data structure stuff that's really quite fragile.
Replying to
Aren't they all accessed as volatiles? Assuming a C compiler with a reasonable sense of volatile, you can model your own atomics that way (other cores being async hardware modifying the volatile memory).
Replying to
No, they don't use volatile for that since it would hurt performance too much, and the whole reason they refuse to use C11 atomics is that they consider even the acquire/release semantics too expensive.
Replying to
Acquire/release are a lot more expensive than volatile. Volatile just means each load/store on the abstract machine has to translate to one on the real machine, with matching load/store size (no split/combining) where the machine admits it.
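To make the cost difference concrete: an acquire/release pair also orders the surrounding ordinary accesses, which on weakly ordered hardware (ARM, POWER) requires barrier or load-acquire/store-release instructions, while volatile only constrains the compiler. A minimal message-passing sketch:

```c
#include <stdatomic.h>

static int payload;         /* plain data, ordered by the flag below */
static _Atomic int ready;

/* The release store guarantees the payload write is visible before the
 * flag is seen set; the acquire load guarantees the payload read
 * happens after the flag is seen. volatile alone promises neither --
 * only that each access actually occurs, once, at full width. */
void produce(int v)
{
    payload = v;
    atomic_store_explicit(&ready, 1, memory_order_release);
}

int consume(void)
{
    while (!atomic_load_explicit(&ready, memory_order_acquire))
        ;  /* spin until the producer publishes */
    return payload;
}
```

Those ordering guarantees over *other* memory are exactly what volatile doesn't buy you, and also exactly what costs extra instructions on weakly ordered machines.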
Replying to
Look at open-std.org/jtc1/sc22/wg21. They've got their own quite fleshed-out way of doing things: they decided how C should work for them and built around that, but they aren't on the same page as the C standard or the compiler authors right now. There are ongoing efforts to fix the problems.
Volatile doesn't really mean what you're saying above, and that's not how GCC/Clang interpret or implement it. They expect C11 atomics, or their own built-ins for older standards. They don't treat volatile the way Linux wants because that's not how it's standardized.
It doesn't really matter how many people think that it SHOULD mean what the Linux kernel assumes. What matters is that the compiler developers don't agree, and they base how optimizations can reorder code and how code is generated on the C11 memory model.
It's not the only example of the Linux kernel developers and compiler developers not being on the same page. I'm just bringing it up as an example of why toolchain updates should be seen as quite scary and thoroughly tested across all the hardware, etc. that's being used.

