I seem to vaguely recall a hypothetical C implementation where a buffer could be placed such that "one past the end" wrapped around, and this was in fact legal.
But pointer arithmetic would have to be implemented specially to accommodate this edge case.
But aside from that hypothetical, I'm not sure why pointers to "one past the end" of a buffer would be affected by inbounds (I know little about LLVM)?
The inbounds marker is just a guarantee that the pointer arithmetic will result in a pointer within the bounds of the object. They define one byte past the end as a special case that's allowed. The part that goes beyond the C spec is their runtime / libc assumptions.
I.e. knowing that an object can never be at address 0 or at the maximum address representable by a pointer. They treat null in a special way (never a valid object), and that allows their assumption of no wrapping for inbounds GEP.
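As a sketch of why both of those rules matter in ordinary code (the function and names here are invented for illustration), the usual end-pointer loop only works because `buf + len` can be formed and compared:

```c
#include <stddef.h>

/* buf + len is "one past the end": forming and comparing it is legal C,
 * but dereferencing it is not. The inbounds assumption is that buf + len
 * does not wrap around the address space, which is also why an object is
 * assumed never to end at the maximum address. */
size_t count_nonzero(const char *buf, size_t len) {
    size_t n = 0;
    for (const char *p = buf; p != buf + len; ++p)
        if (*p != 0)
            n++;
    return n;
}
```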
Try twitter.com/DanielMicay/st with Clang, with and without -fwrapv or -fno-strict-overflow.
In theory, the inbounds marker could be split up into two separate markers, providing the no-overflow guarantee separately from the guarantee of staying within the bounds of the object.
Quote Tweet
Replying to @DanielMicay and @iximeow
For example, in C:
```c
char *foo(char *x) {
    return x + 10;
}
```
Compile this with `clang foo.c -S -emit-llvm -o - -O2`.
The function `foo` is a guarantee that `x` is not NULL and points to at least 10 bytes. The result is at most one byte past the end of `x`. It's a promise.
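For reference, the -O2 output of that command looks roughly like this (a sketch; the exact pointer syntax and attributes vary by Clang version); with `-fwrapv` or `-fno-strict-overflow` the expectation, per the tweet above, is a plain `getelementptr` without the `inbounds` keyword:

```llvm
define ptr @foo(ptr %x) {
  %add.ptr = getelementptr inbounds i8, ptr %x, i64 10
  ret ptr %add.ptr
}
```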
The only way you really get a non-inbounds GEP from Clang is when you do stuff like casting to/from integers and it happens to compile that code back to a GEP.
Casting to/from integers is what gets incredibly sketchy and is arguably broken due to the pointer provenance rules they use.
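A sketch of that integer round-trip case (the function name is invented here):

```c
#include <stdint.h>

/* Clang generally can't prove that a pointer re-derived from an integer
 * stays within the original object, so this arithmetic may come back as a
 * plain (non-inbounds) GEP, and what provenance the result carries is
 * exactly the sketchy part. */
char *bar(char *x) {
    uintptr_t n = (uintptr_t)x;
    return (char *)(n + 10);
}
```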
Everybody loses when provenance rears its ugly head. But I thought there were rules to track provenance through integers (in the C spec, and LLVM/GCC)?
(You're answering my q's as I tweet them :P)
The current C standard doesn't really standardize it. LLVM / GCC and likely other compilers have chosen to come up with those rules themselves. They feel it's the only reasonable approach because it would be too hard to optimize C otherwise. They'd still do it for other languages regardless.
The C standard regularly turns things into undefined behavior retroactively. They see their job as largely standardizing real-world implementations. If compiler authors want something badly enough, they'll get it, because they'll do it and the standard will change. Likely for this.
The standard currently implies that optimization based on pointer provenance is not really a thing. It omits talking about it and says nothing about it being undefined. However, compilers do it, and the standard will likely be brought in line with what compilers choose to do.
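A sketch of the kind of provenance-based optimization being discussed, adapted from the well-known adjacent-objects example rather than from this thread: even when the two pointers have identical bits, the compiler may assume the store through `p` can't touch `b`, because `p` was derived from `a`.

```c
#include <stdio.h>
#include <string.h>

int a = 1, b = 2;

int main(void) {
    int *p = &a + 1;              /* one past the end of a */
    int *q = &b;
    if (memcmp(&p, &q, sizeof p) == 0) {
        /* Same address bits, different provenance: compilers have been
         * observed to keep treating b as untouched after this store. */
        *p = 11;
        printf("b = %d\n", b);    /* may well still print 2 */
    }
    return 0;
}
```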
Safe Rust code is actually meant to be properly sound and completely well defined / specified.
However, unsafe Rust code pretty much just follows the same kind of rules as C, based on the LLVM choices. Despite not having a formal spec, it's still better specified in certain ways.
There's less UB, even in unsafe Rust (though the documentation on UB admits the current list is non-exhaustive).
I vaguely remember Ralf Jung implying there’s more UB and it’s easier to exploit, since exploiting UB is guaranteed not to break safe Rust.