Neither GCC nor LLVM supports allocations of PTRDIFF_MAX + 1 bytes or larger. Both delegate responsibility to the malloc and mmap implementations in libc to report an error for such oversized allocations. Some malloc implementations handle this, and others (glibc) are broken with GCC/LLVM.
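To make the failure mode concrete, here is a minimal sketch (not from the thread) of the call at issue: asking the libc allocator for more than PTRDIFF_MAX bytes. On a 64-bit target this will almost always fail for ordinary out-of-memory reasons; the interesting case is a 32-bit target with overcommit, where a broken allocator can hand back an object the compiler assumes cannot exist.

```c
#include <errno.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void) {
    /* One byte more than the largest object GCC and LLVM assume can exist. */
    size_t huge = (size_t)PTRDIFF_MAX + 1;

    errno = 0;
    void *p = malloc(huge);
    if (p == NULL) {
        /* musl and jemalloc reject this with ENOMEM; a plain out-of-memory
         * failure looks the same from here. */
        printf("malloc refused %zu bytes (%s)\n", huge, strerror(errno));
        return 0;
    }

    /* If we get here, the allocator created an object larger than
     * PTRDIFF_MAX, which the compiler's pointer arithmetic assumes away. */
    printf("got a %zu-byte object at %p\n", huge, p);
    free(p);
    return 0;
}
```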
Rust explicitly defines isize::MAX as the maximum permitted object size. Unsafe code needs to uphold this by making sure it never creates anything larger.
Some malloc implementations like musl and jemalloc have this check internally, but Rust has to perform the check itself when using an unknown/generic malloc.
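A hedged sketch of the caller-side guard this implies when the underlying malloc can't be trusted: refuse anything above PTRDIFF_MAX before forwarding the request. This is the shape of the check, not Rust's actual allocator code; checked_malloc is a made-up name.

```c
#include <stddef.h>
#include <stdint.h>
#include <stdlib.h>

/* Wrapper around an unknown/generic malloc that enforces the limit the
 * compiler relies on, since the allocator itself might not. */
void *checked_malloc(size_t size) {
    if (size > (size_t)PTRDIFF_MAX) {
        return NULL; /* refuse to create an object larger than PTRDIFF_MAX bytes */
    }
    return malloc(size);
}
```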
See gcc.gnu.org/bugzilla/show_ for a thread about this. Objects of PTRDIFF_MAX + 1 bytes or larger are just not permitted with LLVM and GCC. It's not a C standard violation; rather, they require that all standard ways of allocating objects (C, POSIX) have checks preventing larger objects.
I helped fix multiple malloc and libc implementations, but there are still common ones like glibc that are broken with GCC/LLVM. It's a bad idea to use objects larger than PTRDIFF_MAX even with a compiler that supported them, since the pointer difference x - y would be undefined when it overflows.
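A sketch of why, assuming a small helper that just subtracts two pointers into the same object: if the object is larger than PTRDIFF_MAX bytes, the true distance may not fit in ptrdiff_t, and the subtraction is undefined rather than merely wrapping.

```c
#include <stddef.h>

/* end and start are assumed to point into the same object, with end >= start.
 * C requires the result to be representable in ptrdiff_t; inside an object
 * larger than PTRDIFF_MAX bytes that can fail, and the subtraction is then
 * undefined behaviour that GCC and LLVM optimize against. */
ptrdiff_t distance(const char *end, const char *start) {
    return end - start;
}
```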
Just to be crystal clear... by "compiler supporting PTRDIFF_MAX + 1", you mean "the compiler assumes an object of PTRDIFF_MAX + 1 can possibly exist" (and gcc/llvm don't)?
Yes, that's what I mean. GCC and LLVM don't support objects larger than PTRDIFF_MAX. It's not seen as a bug but it could be seen as a missing feature. It doesn't appear there's any interest in implementing it though.
So, you need a compiler supporting that *and* it's still hard.
(Honestly, playing with 8088 so much, that gives me some comfort that even on flat addrspace archs, the whole addrspace can't be used for a single object in high-level langs :P)
twitter.com/DanielMicay/st I guess my follow-up is: what do you mean by it's unrealistic to avoid "x - y" when it overflows?
The sentence immediately after describes how gcc/llvm do it... it prevents your program from continuing :)!
Quote Tweet
Replying to @DanielMicay @brouhaha and @iximeow
Even with the C standard semantics, it's unrealistic to avoid x-y when it would overflow. GCC and LLVM don't give you the opportunity to try to use it correctly. It just isn't supported. It's one of many rules they don't really bother to document. It's how they intend it to work.
So you're basically saying "even though signed overflow is undefined, it's so pervasive in ptr contexts that gcc/llvm won't optimize it out/do cute tricks"?
No, what I'm saying is that even if objects larger than PTRDIFF_MAX were supported by LLVM and GCC, pointer difference overflows would still be undefined. Since it's so common to take pointer differences on arbitrary slices, etc., allowing those objects would still leave that everyday code undefined.
LLVM and GCC do actively break code with signed integer overflows if you aren't passing -fwrapv or -fno-strict-overflow. They're particularly aggressive with breaking it for pointers. They barely have any integer range analysis, etc., so they don't break much with integers.
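A classic shape of code that gets broken this way (a generic example, not one from the thread): after-the-fact overflow checks. Because signed overflow and out-of-bounds pointer arithmetic are undefined, under the default settings both compilers are entitled to fold these tests to a constant true.

```c
#include <stdbool.h>
#include <stddef.h>

/* Intended as "does x + n overflow?" for n >= 0. Signed overflow is
 * undefined, so the compiler may assume it never happens and reduce
 * this to "return true". */
bool int_add_ok(int x, int n) {
    return x + n >= x;
}

/* Intended as "does p + n wrap around?". Forming a pointer outside the
 * object is already undefined, so the same reduction applies here, and
 * in practice this is where GCC and LLVM are most aggressive. */
bool ptr_add_ok(const char *p, size_t n) {
    return p + n >= p;
}
```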
It's not really that they avoid breaking code out of caution; rather, they're terrible at analyzing integer ranges or optimizing based on them. It's a big part of why they both suck at removing bounds checks.
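For what "removing bounds checks" refers to, here is an illustrative shape (my example, not from the thread): an explicit per-access check that the up-front test makes redundant. Eliding it requires the compiler's integer range analysis to carry that fact through the loop; no claim is made about what any specific compiler version does with this exact snippet.

```c
#include <stddef.h>

/* Sums count elements of buf starting at start; buf has len elements.
 * The up-front test establishes start + count <= len, which makes every
 * in-loop check provably redundant -- but only to a compiler whose
 * integer range analysis can carry that fact through the loop. */
int sum_window(const int *buf, size_t len, size_t start, size_t count) {
    if (count > len || start > len - count) {
        return 0; /* window doesn't fit in the buffer */
    }
    int total = 0;
    for (size_t i = 0; i < count; i++) {
        size_t idx = start + i;
        if (idx < len) {          /* redundant given the check above */
            total += buf[idx];
        }
    }
    return total;
}
```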