Conversation

No, what I'm saying is that even if objects larger than PTRDIFF_MAX were supported by LLVM and GCC, pointer difference overflows would still be undefined. Since it's so common to take differences on arbitrary slices, etc., allowing such objects would make that everyday code undefined.
LLVM and GCC do actively break code with signed integer overflows if you aren't passing -fwrapv or -fno-strict-overflow. They're particularly aggressive about breaking it for pointers. They barely do any integer range analysis, so in practice they don't break much plain integer code.
Most languages don't say signed integer overflow is undefined like C does, and LLVM/GCC won't hold back generic optimizations just to avoid breaking C code. They'll eventually add proper integer range analysis. C programmers can either use -fwrapv or have their undefined code break.
Passing -fwrapv hurts optimization a lot with Clang though. It removes inbounds from pointers too, which loses not only the non-wrapping guarantee but also the guarantee that the pointer stays within the bounds of the object, up to one byte past the end.
I seem to vaguely recall a hypothetical C implementation where a buffer could be placed such that "one past the end" wrapped around, and this was in fact legal. But pointer arithmetic would have to be implemented specially to accommodate this edge case.
The inbounds marker is just a guarantee that the pointer arithmetic will result in a pointer within the bounds of the object, with one byte past the end defined as a special allowed case. The part that goes beyond the C spec is their runtime / libc assumptions.
Try twitter.com/DanielMicay/st with and without -fwrapv or -fno-strict-overflow with Clang. In theory, the inbounds marker could be split up into 2 separate markers to provide the no-overflow guarantee as a separate guarantee from being within the bounds of the object.
Quote Tweet
Replying to @DanielMicay and @iximeow
For example, in C: `char *foo(char *x) { return x + 10; }`. Compile this with `clang foo.c -S -emit-llvm -o - -O2`. The function `foo` is a promise that `x` is not NULL and is at least 10 bytes large, and that the result is at most 1 byte past the end of `x`.
If you convert a pointer to an integer, write it to a file, read it back in and convert it back to a pointer, is accessing that memory undefined? C standard appears to permit it. LLVM and GCC treat it as undefined. It's an ongoing debate and there's an in-progress spec document.
Everybody loses when provenance rears its ugly head. But I thought there were rules to track provenance through integers (in the C spec, and LLVM/GCC)? (You're answering my questions as I tweet them :P)
The current C standard doesn't really standardize it. LLVM / GCC and likely other compilers choose to come up with those rules. They feel it's the only reasonable approach because it would be too hard to optimize C otherwise. They'd still do it for other languages regardless.