No. Just replace SIZE_MAX with SIZE_MAX / 4 * 3 and add -m32, and you get a genuine compiler bug.
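(Not the exact source behind the thread's godbolt links, just a reconstruction of the usual shape of this example, with variable names of my own. The point of the -m32 variant: ~3 GiB is small enough that the allocation can genuinely succeed on a 32-bit system, yet the byte distance across the object exceeds PTRDIFF_MAX.)

    #include <stdio.h>
    #include <stdlib.h>
    #include <stdint.h>

    int main(void) {
        size_t n = SIZE_MAX / 4 * 3;      /* ~3 GiB when size_t is 32 bits */
        short *p = malloc(n);
        if (!p)
            return 1;
        short *q = p + n / sizeof(short); /* still within the object */
        /* The element distance fits in ptrdiff_t, so q - p is well
           defined and should equal n / sizeof(short). But the byte
           distance exceeds PTRDIFF_MAX, and subtracting the addresses
           as signed values yields a negative result instead. */
        printf("%td\n", q - p);
        return 0;
    }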
All optimization takes place under the "as-if" rule. Optimizations that violate it are not valid. Changing observable behavior is a bug.
-
Sure. Everything boils down to the question of what "observable behavior" is.
-
The amount of memory used is not part of observable behavior, in the same way that execution time is not.
-
I submit this example to you: https://twitter.com/spun_off/status/731563481007325187
-
There are two separate questions here. 1) Objects larger than PTRDIFF_MAX. That's mostly what you discuss. A complex question.
-
2) Optimizations of malloc calls with size <= PTRDIFF_MAX. AIUI, @RichFelker's POV is that this optimization is wrong no matter what the size. I disagree.
-
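For concreteness, a minimal sketch of the optimization being debated (my reconstruction, not code from the thread; can_allocate is a made-up name). The allocation is never really used, so the compiler may delete the malloc/free pair and fold the null check, as if malloc had succeeded, a transformation compilers like gcc and clang are known to perform:

    #include <stdlib.h>

    /* can_allocate is an illustration only. The allocation is unused,
       so the whole malloc/free pair can be removed and the null check
       folded away, as if the allocation had succeeded. */
    int can_allocate(size_t n) {
        void *p = malloc(n);
        if (!p)
            return 0;
        free(p);
        return 1;
    }
    /* Typically compiled to just `return 1;`. Uncontroversial for small n;
       the disputed case is when n can exceed PTRDIFF_MAX, or exceed any
       amount of memory the system could actually provide. */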
I have not seen Rich say that (though I did see you and JF Bastien argue as if he had)
New conversation
-
You're saying: a change that leads to resource exhaustion? Agreed, that's broken. But what if observing the change requires UB?
-
What UB? This program is defined and prints SIZE_MAX / sizeof(short) according to the C11 standard. https://godbolt.org/g/Qs36E0
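(A sketch of the kind of program the godbolt link presumably shows, in case the link rots; this is a reconstruction, assuming the standard 64-bit form of this example, where SIZE_MAX / sizeof(short) equals PTRDIFF_MAX.)

    #include <stdio.h>
    #include <stdlib.h>
    #include <stdint.h>

    int main(void) {
        short *p = malloc(SIZE_MAX);
        if (!p)
            return 1;
        short *q = p + SIZE_MAX / sizeof(short);
        /* Defined by C11: the element distance is exactly PTRDIFF_MAX,
           which fits in ptrdiff_t. The byte distance is SIZE_MAX - 1,
           though, which reads as -2 when the addresses are subtracted
           as signed values; dividing by sizeof(short) gives the -1
           that clang's output shows. */
        printf("%td\n", q - p);
        return 0;
    }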
-
Clang does not warn, and it generates a program that prints -1; therefore Clang has a bug.
-
I agree that it would be better if clang worked in this case, and that it's much worse than gcc in this regard.
-
But the problem is not with (or limited to) the malloc optimization: clang doesn't catch objects > PTRDIFF_MAX at all.
-
Take this: https://godbolt.org/g/Ic1WIv. You can run it just fine without optimizations, with ulimit -s unlimited.
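(Again a guess at the linked program's shape rather than its exact source: a huge automatic array, no malloc involved at all, presumably built with -m32 so that ~3 GiB is actually obtainable once the stack limit is lifted.)

    #include <stdio.h>
    #include <stdint.h>

    int main(void) {
        char big[SIZE_MAX / 4 * 3];   /* ~3 GiB automatic array with -m32 */
        big[sizeof big - 1] = 1;      /* touch the far end: the object is real */
        printf("%zu\n", sizeof big);
        /* No malloc anywhere, yet any pointer difference spanning most
           of `big` would already overflow ptrdiff_t. Clang accepts the
           definition without a diagnostic. */
        return 0;
    }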
-
clang caps object sizes at SIZE_MAX / 8 in the 64-bit version and at SIZE_MAX in the 32-bit one.
-
So it's not clear what clang devs think about all of this.
End of conversation