Conversation

No, what I'm saying is that even if objects larger than PTRDIFF_MAX were supported by LLVM and GCC, pointer difference overflows would still be undefined. Since it's so common to take differences on arbitrary slices, etc., allowing those objects would still expose ordinary code to undefined behavior.
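A minimal sketch of that hazard (mine, not code from the thread): if an implementation did allow a single object larger than PTRDIFF_MAX, an ordinary pointer subtraction on it would overflow ptrdiff_t, which C11 6.5.6p9 makes undefined:

```c
#include <stddef.h>
#include <stdint.h>
#include <stdlib.h>

int main(void) {
    /* Hypothetical: assume this allocation succeeds even though it
     * exceeds PTRDIFF_MAX bytes (real LLVM/GCC targets typically
     * refuse it, which is the point of the thread). */
    size_t huge = (size_t)PTRDIFF_MAX + 2;
    char *p = malloc(huge);
    if (!p)
        return 1;

    char *q = p + (huge - 1);   /* last byte of the object */

    /* C11 6.5.6p9: if the difference of two pointers is not
     * representable in ptrdiff_t, the behavior is undefined.
     * Here q - p would be PTRDIFF_MAX + 1, so this subtraction
     * is UB even though both pointers point into the same
     * live object. */
    ptrdiff_t d = q - p;

    free(p);
    return (int)(d & 1);
}
```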
LLVM and GCC do actively break code with signed integer overflow if you aren't passing -fwrapv or -fno-strict-overflow. They're particularly aggressive about breaking it for pointers. They barely have any integer range analysis, so in practice they don't break much plain integer code.
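For example (my sketch, not from the thread), these common overflow checks are exactly the kind of thing that gets folded away at -O2 without -fwrapv:

```c
#include <stddef.h>

/* Without -fwrapv, signed overflow and out-of-bounds pointer
 * arithmetic are undefined, so the optimizer may assume they
 * never happen and delete both checks below. */

int int_check(int x) {
    /* If x == INT_MAX, x + 1 overflows: UB. The compiler may
     * therefore treat x + 1 > x as always true. */
    return x + 1 > x;
}

int ptr_check(char *p, size_t n) {
    /* p + n may only be formed if it stays inside the object
     * (or one past the end), so the compiler may assume it
     * doesn't wrap and fold p + n < p to false. */
    return p + n < p;
}
```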
Most languages don't make signed integer overflow undefined the way C does, and LLVM/GCC won't hold back generic optimizations just to avoid breaking C code. They'll eventually add proper integer range analysis. C programmers can either use -fwrapv or watch their undefined code break.
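A small illustration (mine) of what -fwrapv buys: signed overflow becomes defined two's-complement wrapping, so detecting a wrap after the fact is reliable, as in this hypothetical saturating increment:

```c
/* Compile with: cc -O2 -fwrapv saturate.c */
#include <limits.h>

int saturating_inc(int x) {
    int y = x + 1;        /* wraps to INT_MIN when x == INT_MAX */
    if (y < x)            /* under -fwrapv this reliably detects the wrap */
        return INT_MAX;   /* saturate instead of wrapping */
    return y;
}
```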
Passing -fwrapv hurts optimization a lot with Clang, though. It removes inbounds from pointer arithmetic too, which loses not only the non-wrapping guarantee but also the guarantee that the result stays within the bounds of the object, up to one byte past the end.
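One way to see what's lost (my sketch): with inbounds guarantees, Clang can simplify pointer comparisons and differences symbolically; without them, those folds aren't sound in general:

```c
#include <stddef.h>

/* With inbounds pointer arithmetic, the compiler may fold
 * (p + i) < (p + j) to i < j and (p + i) - (p + j) to i - j,
 * since neither expression may wrap or escape the object.
 * Drop the inbounds guarantee (as the thread says happens
 * under -fwrapv) and these simplifications go away. */

int cmp(char *p, size_t i, size_t j) {
    return (p + i) < (p + j);
}

ptrdiff_t diff(char *p, size_t i, size_t j) {
    return (p + i) - (p + j);
}
```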
I seem to vaguely recall a hypothetical C implementation where a buffer could be placed such that "one past the end" wrapped around, and this was in fact legal. But pointer arithmetic would have to be implemented specially to accommodate this edge case.
The inbounds marker is just a guarantee that the pointer arithmetic will result in a pointer within the bounds of the object. They define one byte past the end as a special case that's allowed. The part that goes beyond the C spec is their runtime / libc assumptions.
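For reference, that one-past-the-end carve-out mirrors the standard C idiom (my illustration):

```c
#include <stddef.h>

int sum(const int *a, size_t n) {
    const int *end = a + n;   /* one past the end: legal to form
                               * and compare, never to dereference */
    int s = 0;
    for (const int *p = a; p != end; p++)
        s += *p;
    return s;
}
```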
That touches on my follow-up: is there something preventing a new marker that says "one past the object is valid" while simultaneously saying "this pointer may wrap"? That could make -fwrapv hurt less. But you'd have to actually ensure the pointer doesn't wrap!