That's not relevant to the thread. He states that he wants an optimizing compiler with a comparable amount of optimization, where the programmer writes code for an abstract machine and the compiler makes transforms that preserve the abstract semantics.
That wasn't my interpretation of his Tweet at the time, but on looking at further context, I think you are correct
I would definitely say that the standard should not declare things 'undefined' but rather come up with sensible constraints on how they can be implemented. Guaranteeing that signed overflow wraps would be a regression for safe implementations by forbidding them from trapping.
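As a concrete illustration (a minimal sketch; the flags are GCC/Clang options, not anything the standard mandates), the same addition behaves differently depending on what the implementation is allowed to do:

    #include <limits.h>
    #include <stdio.h>

    int main(void) {
        int x = INT_MAX;
        /* Signed overflow is undefined behavior in standard C.
         * With -fwrapv: guaranteed to wrap to INT_MIN.
         * With -ftrapv or -fsanitize=signed-integer-overflow:
         * the implementation traps / reports instead of wrapping.
         * Mandating wrapping would forbid the trapping behavior. */
        int y = x + 1;
        printf("%d\n", y);
        return 0;
    }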
Guaranteeing it either wraps or immediately traps would also be a regression, by forbidding more efficient implementations that trap as late as possible by propagating overflow errors via poison bits or poison values. UBSan is explicitly not designed to be efficient. Doing this efficiently is difficult.
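A rough sketch of the trap-as-late-as-possible strategy, written with GCC/Clang's __builtin_add_overflow (hardware poison-bit schemes would do this without per-operation checks; the helper is hypothetical, just to show the shape):

    #include <stdbool.h>

    /* Sums an array, deferring the overflow check to a single
     * trap at the end instead of trapping on each addition. */
    int sum_checked(const int *a, int n) {
        int sum = 0;
        bool poison = false;  /* stands in for a poison bit */
        for (int i = 0; i < n; i++)
            poison |= __builtin_add_overflow(sum, a[i], &sum);
        if (poison)
            __builtin_trap();  /* trap as late as possible */
        return sum;
    }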
I do think the standard should forbid treating signed overflow as something that is guaranteed to never happen in order to optimize further, and the same goes for other cases like this. It's nearly impossible to do that for memory safety issues without requiring safety, though.
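For context, this is the kind of optimization being referred to, and it's what GCC and Clang do today since signed overflow is undefined:

    /* Compilers fold this to 'return 1' on the assumption that
     * x + 1 never overflows. A caller passing INT_MAX gets 1
     * even though the wrapped comparison would be false. */
    int always_true(int x) {
        return x + 1 > x;
    }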
I think that's going to be a hard sell to compiler vendors — doesn't it mean people will have to rewrite their inner loops with size_t loop counters to get reasonable efficiency?
Clang and GCC both implement it for both signed and unsigned integer overflow. It's not a hard sell to them. It's impractical to use it for unsigned overflow, largely because unsigned overflow is well-defined and there are lots of intended overflows that are not actually bugs in the software.
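An example of the well-defined, intended unsigned overflow that makes the unsigned check impractical to enable broadly (the hash is illustrative, not code from the thread):

    #include <stdint.h>
    #include <stddef.h>

    /* FNV-1a style hash: the multiply is supposed to wrap.
     * An unsigned-overflow sanitizer would flag it even though
     * the behavior is well-defined and intentional. */
    uint32_t fnv1a(const unsigned char *p, size_t n) {
        uint32_t h = 2166136261u;
        for (size_t i = 0; i < n; i++) {
            h ^= p[i];
            h *= 16777619u;  /* wraps mod 2^32 by design */
        }
        return h;
    }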
The standard permitting trapping on signed overflow for portable C code is useful regardless of what compilers do by default. A safer language would not only have memory / type safety but would consider integer overflow to be a bug unless marked as intended (as Swift and Rust do).
Considering it to be a bug doesn't mean that it actually MUST trap in production, but that it CAN trap. It should always trap in debug builds, and trapping in production is an option based on performance and availability vs. correctness decisions. It's a better approach.
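In C terms, that policy could look something like this (a hypothetical sketch, not how Swift or Rust implement it; DEBUG_BUILD and TRAP_IN_PRODUCTION are made-up macros standing in for build configuration):

    /* Checked add: overflow is always detected; whether it traps
     * is a build-time policy decision. */
    int checked_add(int a, int b) {
        int r;
        if (__builtin_add_overflow(a, b, &r)) {
    #if defined(DEBUG_BUILD) || defined(TRAP_IN_PRODUCTION)
            __builtin_trap();  /* correctness over availability */
    #endif
            /* otherwise fall through with the wrapped result,
             * trading correctness for availability */
        }
        return r;
    }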
I meant, isn't "treating signed overflow as something that is guaranteed to never happen in order to optimize further" crucial to getting decent performance for int loops on LP64 platforms? I wasn't talking about trapping or not trapping, but wrapping in 32 bits vs. 64
It makes a difference, but it's not a crucial optimization. The way it tends to make a difference is letting the compiler assume that the loop terminates in common cases where it can't determine that. If the loop doesn't terminate, that's a side effect, so it can't be removed.
The compiler also wants to do things like restructuring inner/outer loops and moving code around, and that's very difficult if there are side effects, including memory effects it can't understand due to aliasing. Being able to rely on these assumptions does make a difference.
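The LP64 loop case in question looks like this (a standard textbook example, not code from the thread):

    /* On LP64, int is 32-bit but the indexing in a[i] is 64-bit.
     * If i += 2 were defined to wrap, the compiler couldn't prove
     * the loop terminates (n could be near INT_MAX) and would
     * have to redo the 32->64-bit sign extension of i on each
     * iteration. Assuming overflow never happens lets it widen i
     * to a 64-bit induction variable and compute the trip count. */
    void scale(float *a, int n, float k) {
        for (int i = 0; i < n; i += 2)
            a[i] *= k;
    }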
The overall impact on the program is rarely more than 1-2% of performance, though. Compare that to stack canaries with a 2-5% performance cost; type-based CFI can be another 2-10%. More and more performance is being given up to incomplete memory safety improvements anyway.
