So you're basically saying "even though signed overflow is undefined, it's so pervasive in ptr contexts that gcc/llvm won't optimize it out/do cute tricks"?
No, what I'm saying is that even if objects larger than PTRDIFF_MAX were supported by LLVM and GCC, pointer difference overflows would still be undefined. Since it's so common to take differences on arbitrary slices, etc., allowing those objects to be created would still make that everyday code undefined.
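To make that concrete, here's a minimal C sketch of the everyday pattern at stake (the helper name is just for illustration):

char *foo(char *x); /* placeholder to keep this self-contained */

#include <stddef.h>

/* Pointer subtraction yields ptrdiff_t, and the C standard (C11 6.5.6p9)
 * makes the result undefined when it doesn't fit in that type. If objects
 * larger than PTRDIFF_MAX could be created, this ordinary helper would
 * silently become undefined for wide enough slices. */
ptrdiff_t span(const char *begin, const char *end) {
    return end - begin; /* UB if the distance exceeds PTRDIFF_MAX */
}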
LLVM and GCC do actively break code with signed integer overflows if you aren't passing -fwrapv or -fno-strict-overflow. They're particularly aggressive about breaking it for pointers. They barely have any integer range analysis, etc., so they don't break much with plain integers.
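A hedged sketch of the classic breakage (the function name is illustrative):

#include <limits.h>
#include <stdio.h>

/* With signed overflow assumed impossible, Clang and GCC at -O2 typically
 * fold x + 1 > x to 1, so this "overflow check" compiles away entirely.
 * Built with -fwrapv instead, x + 1 wraps to INT_MIN for x == INT_MAX and
 * the function returns 0. */
int add_one_is_bigger(int x) {
    return x + 1 > x;
}

int main(void) {
    volatile int v = INT_MAX; /* volatile keeps the value out of constant folding */
    printf("%d\n", add_one_is_bigger(v));
    return 0;
}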
It's not really that they avoid breaking code out of caution, but rather that they're terrible at analyzing integer ranges or optimizing based on them. It's a big part of why they both suck at removing bounds checks.
Most languages don't make signed integer overflow undefined the way C does, and LLVM/GCC won't hold back generic optimizations just to avoid breaking C code. They'll eventually add proper integer range analysis. C programmers can either use -fwrapv or have their undefined code break.
Passing -fwrapv hurts optimization a lot with Clang, though. It removes inbounds from pointer arithmetic too, which loses not only the non-wrapping guarantee but also the guarantee that the result stays within the bounds of the object, up to one byte past the end.
I seem to vaguely recall a hypothetical C implementation where a buffer could be placed such that "one past the end" wrapped around the address space, and this was in fact legal.
But pointer arithmetic would have to be implemented specially to accommodate this edge case.
But aside from that hypothetical, I'm not sure why pointers to one past the end of a buffer would be affected by inbounds (I know little about LLVM)?
The inbounds marker is just a guarantee that the pointer arithmetic will result in a pointer within the bounds of the object. They define one byte past the end as a special case that's allowed. The part that goes beyond the C spec is their runtime/libc assumptions.
i.e. knowing that an object can never be at address 0 or at the maximum address representable by a pointer. They treat null specially (never a valid object), and that's what permits their assumption of no wrapping for inbounds GEP.
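For what the one-past-the-end carve-out buys in practice, a small sketch (names are illustrative):

#include <stddef.h>

/* Both the C standard and LLVM's inbounds rules allow forming and comparing
 * a pointer one past the end of an object, as long as it isn't dereferenced.
 * That's what makes the usual begin/end iteration pattern legal. */
long sum(const int *a, size_t n) {
    const int *end = a + n;  /* one past the end: valid to form and compare */
    long s = 0;
    for (const int *p = a; p != end; p++)
        s += *p;             /* p is only dereferenced while p != end */
    return s;
}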
Try twitter.com/DanielMicay/st with and without -fwrapv or -fno-strict-overflow with Clang.
In theory, the inbounds marker could be split up into 2 separate markers to provide the no-overflow guarantee as a separate guarantee from being within the bounds of the object.
Quote Tweet
Replying to @DanielMicay and @iximeow
For example, in C:
char *foo(char *x) {
    return x + 10;
}
Compile this with `clang foo.c -S -emit-llvm -o - -O2`.
The compiled function `foo` carries a guarantee that `x` is not NULL and points to an object at least 10 bytes large. The result is at most 1 byte past the end of `x`. It's a promise.
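For reference, a hedged sketch of what that promise looks like after compilation; the IR in the comment is abbreviated and the exact output varies by Clang version:

/* With `clang foo.c -S -emit-llvm -o - -O2`, the addition lowers to
 * something like:
 *
 *   %add.ptr = getelementptr inbounds i8, ptr %x, i64 10
 *
 * With -fwrapv added, Clang drops the `inbounds` keyword, which is the
 * lost guarantee described earlier in the thread. */
char *foo(char *x) {
    return x + 10;
}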
The only way you really get non-inbounds GEP from Clang is when you do stuff like casting to/from integers and it happens to compile that code back to GEP.
Casting to/from integers is what gets incredibly sketchy and is arguably broken due to the pointer provenance rules they use.
If you convert a pointer to an integer, write it to a file, read it back in, and convert it back to a pointer, is accessing that memory undefined? The C standard appears to permit it. LLVM and GCC treat it as undefined. It's an ongoing debate and there's an in-progress spec document.
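A minimal sketch of the contested round-trip (the detour through an actual file is elided here; the value just passes through uintptr_t):

#include <stdint.h>
#include <stdio.h>

/* Pointer -> integer -> pointer. A plain reading of the C standard appears
 * to permit the final access; LLVM and GCC's pointer provenance models
 * treat it as undefined, which is what the in-progress spec work is
 * trying to settle. */
int main(void) {
    int x = 42;
    uintptr_t bits = (uintptr_t)&x; /* could be written out and read back */
    int *p = (int *)bits;           /* reconstructed from the raw integer */
    printf("%d\n", *p);             /* the contested access */
    return 0;
}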

