Overflow handling is always implemented in software via hardware features, and those features vary in performance. A jump-on-overflow instruction is a lot worse than an architecture with support for enabling a trapping mode, whether it traps strictly or propagates a poison value that can never be accessed (since accessing it traps).
Hardware doesn't implement C, so there isn't a standard behavior defined by hardware. It's up to the compiler to map C onto the hardware or virtual machine. Compilers get to choose how to handle each kind of undefined or implementation-defined behavior, and everything else.
I think you're getting hung up on this idea that if the spec doesn't leave something undefined, then implementations can't *ever* deviate from what otherwise would be defined. It just means that they can't deviate by default. You can pass flags that change behavior.
No, that's not what I've been saying. I think it would be a serious regression to break compatibility with safe implementations by making it correct to be incompatible with them. That would massively roll back safety and security, especially if it were removed by default.
It's not a big deal if the spec says that overflow wraps. Anyway, the most useful version of trap-on-overflow would do this for unsigned too, which isn't allowed by the current spec. It's fine to have security technologies that create interesting new behaviors.
The "breaking interface contracts is a security enhancement" view is a very very harmful one. It's the opposite.
Systems code needs to be able to rely on math having some deterministic outcome and often the expected outcome is wrapping. Breaking that breaks real code, sometimes in ways that introduce security issues.
-fwrapv doesn't break the contract; it just introduces a compatible extension. Making unsigned no longer modular arithmetic does break the contract.
Yeah, it's a lot easier to introduce signed overflow checking, since portable C is already compatible with it and there are simply far fewer use cases for intended signed overflow. It's much harder to mark every intended unsigned overflow and fix all the benign unintended cases.
Also, you can't easily upstream all of these changes for unsigned overflow without convincing upstream projects that marking all intended overflows and fixing all benign overflows is worthwhile in order to use -fsanitize=unsigned-integer-overflow to find the unintended bugs.
And by benign cases, I mean that it's extremely common to have issues like overflowing by one after it no longer matters because the value is never read again. AOSP adopted automatic integer overflow checking for hardening and it wasn't easy to get working. Lots of changes were needed.
There aren't really that many intended cases: hash functions, counters that are meant to wrap, cryptography, etc. and it's really not that bad to mark functions or files with no_sanitize(unsigned_integer_overflow). More specific cases can use a wrapper around the intrinsics.


