The inbounds marker is just a guarantee that the pointer arithmetic will result in a pointer within the bounds of the object. LLVM defines one byte past the end as a special case that's allowed. The part that goes beyond the C spec is their runtime / libc assumptions.
i.e. knowing that an object can never be at address 0 or at the maximum address representable by a pointer. They treat null specially (never a valid object), and that allows their assumption of no wrapping for inbounds GEP.
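A hedged sketch of what that assumption buys (exact behavior depends on compiler version and flags; the function name here is made up): at -O2, Clang can fold this wraparound check to `return 0;`, because an inbounds `buf + len` is assumed never to wrap around the address space.

#include <stddef.h>

/* May compile to "return 0;" at -O2: inbounds pointer
   arithmetic is assumed not to wrap, so the check is
   treated as always false. */
int wraps(char *buf, size_t len) {
    return buf + len < buf;
}

This is the kind of difference the flag experiment in the next tweet should expose.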
Try the example at twitter.com/DanielMicay/st under Clang, with and without -fwrapv or -fno-strict-overflow.
In theory, the inbounds marker could be split up into 2 separate markers to provide the no-overflow guarantee as a separate guarantee from being within the bounds of the object.
For example, in C:
char *foo(char *x) {
    return x + 10;
}
Compile this with `clang foo.c -S -emit-llvm -o - -O2`.
The function `foo` is a guarantee that `x` is not NULL and points to at least 10 bytes. The result is at most one byte past the end of `x`. It's a promise.
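For reference, the emitted IR should look roughly like this (a sketch, not exact output; attributes and pointer-type syntax vary across Clang versions). The `inbounds` keyword on the `getelementptr` is the marker in question:

define ptr @foo(ptr %x) {
  %r = getelementptr inbounds i8, ptr %x, i64 10
  ret ptr %r
}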
The only way you really get non-inbounds GEP from Clang is when you do stuff like casting to/from integers and it happens to compile that code back to GEP.
Casting to/from integers is what gets incredibly sketchy, and it's arguably broken due to the pointer provenance rules they use.
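A sketch of the classic way this goes wrong (adapted from the well-known provenance example; whether the addresses actually coincide, and what gets printed, depends on the compiler and how it lays out x and y):

#include <stdint.h>
#include <stdio.h>

int x = 1, y = 2;

int main(void) {
    uintptr_t ip = (uintptr_t)(&x + 1); /* one past the end of x */
    uintptr_t iq = (uintptr_t)&y;
    if (ip == iq) {
        /* Numerically this stores to y's address, but a compiler
           tracking provenance through the casts may still assume
           the store can only touch x, and print 2. */
        *(int *)ip = 11;
        printf("%d\n", y);
    }
    return 0;
}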
Everybody loses when provenance rears its ugly head. But I thought there were rules to track provenance through integers (in the C spec, and LLVM/GCC)?
(You're answering my q's as I tweet them :P)
The current C standard doesn't really standardize it. LLVM and GCC (and likely other compilers) came up with those rules themselves. They feel it's the only reasonable approach because it would be too hard to optimize C otherwise. They'd still do it for other languages regardless.
The C standard regularly turns things into undefined behavior retroactively. The committee sees its job as largely standardizing real-world implementations. If compiler authors want something badly enough, they'll get it, because they'll implement it and the standard will change. That's likely what will happen here.
The standard currently implies that optimization based on pointer provenance is not really a thing: it doesn't discuss it and says nothing about it being undefined. However, compilers do it, and the standard will likely be brought in line with what compilers choose to do.
I think DR260 (or DR236?) already allows using pointer provenance for e.g. pointer comparison, and GCC already implements that and refuses to follow the C standard. The C++ standard already allows it.
Actually, that’s not quite true re C++: comparing a past-the-end pointer with the pointer to the next object has unspecified results, but provenance is forbidden elsewhere.
But that prevents assuming that `new T` returns a fresh pointer: 1/
Rust has fully first-class zero-size objects, which is a 'fun' thing to remember to account for in low-level code: object size calculations have to avoid division-by-zero panics and other issues. Also, multiple zero-size objects can have the same address.
(The only division I can think of where this makes sense is getting the number of elements in an array, which gives you 0/0 for ZSTs, and that's mathematically undefined :D!)
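A minimal Rust sketch of both hazards (capacity_for is a hypothetical helper; the shared-address assertion relies on zero-size elements having stride 0):

use std::mem::size_of;

// Hypothetical helper: how many elements of T fit in `bytes` bytes.
// The naive `bytes / size_of::<T>()` panics for zero-sized T.
fn capacity_for<T>(bytes: usize) -> usize {
    match size_of::<T>() {
        0 => usize::MAX, // conventional answer: "unlimited" ZSTs fit
        n => bytes / n,
    }
}

fn main() {
    assert_eq!(capacity_for::<u32>(16), 4);
    assert_eq!(capacity_for::<()>(16), usize::MAX);

    // Distinct zero-size elements can share one address:
    let v = vec![(), ()];
    assert_eq!(&v[0] as *const (), &v[1] as *const ());
}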
C++ does too, on request for complete objects and automatically for superclasses... But the second case is unobservable since it doesn't occur in standard-layout classes.