Conversation

I would definitely say that the standard should not declare things 'undefined' but should instead place sensible constraints on how implementations may behave. Guaranteeing that signed overflow wraps would be a regression for safe implementations, since it would forbid them from trapping.
Guaranteeing that it either wraps or immediately traps would also be a regression: it would forbid more efficient implementations that trap as late as possible by propagating overflow errors via poison bits or poison values. UBSan is explicitly not designed to be efficient, and making the checking efficient is difficult.
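As a rough illustration of that deferred-trap idea, here is a minimal sketch in Rust (my own example, not how any real compiler implements poison propagation): each operation records an overflow flag, the flags are OR-ed together, and a single check traps at the end.

    // A minimal sketch of the "poison bit" idea, not how any real compiler
    // implements it: each addition records an overflow flag, the flags are
    // OR-ed together, and a single check traps as late as possible.
    fn sum_with_deferred_trap(values: &[i32]) -> i32 {
        let mut acc: i32 = 0;
        let mut poisoned = false; // the propagated "poison bit"
        for &v in values {
            // overflowing_add returns the wrapped result plus an overflow
            // flag, so there is no trap branch inside the hot loop.
            let (next, overflowed) = acc.overflowing_add(v);
            acc = next;
            poisoned |= overflowed;
        }
        // One trap check at the end instead of one per operation.
        assert!(!poisoned, "integer overflow (deferred trap)");
        acc
    }

    fn main() {
        assert_eq!(sum_with_deferred_trap(&[1, 2, 3]), 6);
        // sum_with_deferred_trap(&[i32::MAX, 1]) would trap at the final check.
    }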
I do think the standard should forbid optimizing on the assumption that signed overflow can never happen, and the same goes for other cases like this. It's nearly impossible to do that for memory safety issues without requiring safety, though.
Clang and GCC both implement checking for both signed and unsigned integer overflow, so it's not a hard sell to them. It's impractical to use for unsigned overflow, largely because unsigned overflow is well-defined and there are lots of intended overflows that are not actually bugs in the software.
The standard permitting trapping on signed overflow in portable C code is useful regardless of what compilers do by default. A safer language would not only have memory/type safety but would also consider integer overflow a bug unless it's marked as intended, as Swift and Rust do.
Considering it a bug doesn't mean that it actually MUST trap in production, only that it CAN trap. It should always trap in debug builds, and trapping in production is then an option, weighed as a performance and availability vs. correctness decision. It's a better approach.
In Rust, both signed and unsigned integer overflow are always considered bugs. Intended overflows need to be marked, and the language supports wrapping for both signed and unsigned integers via the appropriate APIs. It traps on unintended overflows in debug builds by default and can be made to in production.
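A minimal sketch of what that looks like in practice (these are standard library methods; the setting mentioned in the comment is the stock Cargo overflow-checks option):

    fn main() {
        let x: u8 = 255;

        // Intended overflow has to be marked explicitly:
        assert_eq!(x.wrapping_add(1), 0);     // wraps on purpose
        assert_eq!(x.checked_add(1), None);   // overflow surfaces as None
        assert_eq!(x.saturating_add(1), 255); // clamps at the maximum

        // Unintended overflow: in a debug build, `x + 1` panics with
        // "attempt to add with overflow". A release build wraps by default,
        // unless checks are enabled via `overflow-checks = true` under
        // [profile.release] in Cargo.toml.
        // let _boom = x + 1;
    }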
Yes, this is clearly the correct approach for a new language design, except that perhaps trapping on unintended overflows in production is the right approach too. But humans are going to be relying on existing C code until at least 2070, if not for as long as there are humans.
Having memory safety also makes integer overflows *substantially* less dangerous. In C, integer overflow often leads to issues like heap overflows. Size calculations for dynamic allocations are among the most common forms of dangerous integer overflow bugs.
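A sketch of that bug class and its fix (a hypothetical helper, not from any particular codebase): in C, malloc(n * sizeof(T)) can wrap to a small value, and the code then writes n elements into the undersized buffer. Checked arithmetic turns the wrap into an explicit error:

    use std::mem::size_of;

    // Hypothetical allocation-size helper: checked_mul reports the overflow
    // instead of silently wrapping to a too-small byte count.
    fn byte_len_for<T>(n: usize) -> Option<usize> {
        n.checked_mul(size_of::<T>())
    }

    fn main() {
        assert_eq!(byte_len_for::<u64>(4), Some(32));
        // A hostile element count would wrap the naive multiplication;
        // the checked version surfaces it as None instead:
        assert_eq!(byte_len_for::<u64>(usize::MAX / 2), None);
    }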
With memory safety, it's still often a correctness issue, but it's fairly rare for it to give an attacker a way to exploit it. It's definitely still exploitable in some cases, such as bypassing checks in cryptographic verification, or assorted kinds of logic errors in calculations.
Definitely. The right solution in most cases is to use big integers unless there are very clear limits that are actually enforced. You still need limits with big integers, though, or you're going to get dynamic runtime failures from running out of resources.
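As a sketch of enforcing such a limit (assuming the third-party num-bigint crate; MAX_BITS is an arbitrary cap chosen purely for illustration):

    // Big integers can't overflow, but without an enforced limit a hostile
    // input can still exhaust memory or CPU. Assumes the third-party
    // num-bigint crate; MAX_BITS is an arbitrary illustrative cap.
    use num_bigint::BigUint;

    const MAX_BITS: u64 = 4096;

    fn limited_mul(a: &BigUint, b: &BigUint) -> Result<BigUint, &'static str> {
        // Cheap pre-check: a product has at most a.bits() + b.bits() bits.
        if a.bits() + b.bits() > MAX_BITS {
            return Err("result would exceed the configured size limit");
        }
        Ok(a * b)
    }

    fn main() {
        let a = BigUint::from(3u8);
        let b = BigUint::from(4u8);
        assert_eq!(limited_mul(&a, &b), Ok(BigUint::from(12u8)));
    }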