Yeah that’s the compromise approach which allowed it to go in at all… wants to get rid of it as well 🙂
IMO we need to show that the perf gap between 0 and pattern can't be bridged, so 0 is here to stay.
I'm not using zero for performance. I explain the choice in github.com/GrapheneOS/har as "Zero-based filling has the least chance of uncovering latent bugs, but also the best chance of mitigating vulnerabilities.". Uncovering bugs also isn't necessarily good for this use case.
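For context, a minimal sketch (my own illustration, not from the thread) of what -ftrivial-auto-var-init=zero does to a trivially uninitialized local:

```c
/* Sketch: an uninitialized local with and without
 * -ftrivial-auto-var-init=zero. Reading `c` without the flag is
 * undefined behavior; with the flag it reads as all-zero. */
#include <stdio.h>

struct config {
    int   flags;   /* never assigned below */
    char *name;    /* stack garbage if left uninitialized */
};

int main(void) {
    struct config c;  /* no initializer */

    /* Without the flag: indeterminate values, possible info leak.
     * With the flag: as if `c = (struct config){0}` had been written. */
    printf("flags=%d name=%p\n", c.flags, (void *)c.name);
    return 0;
}
```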
I understand, but the discussion was around performance. If we want to keep zero, performance is how it’ll happen.
Or, empirical data that zero is better.
Isn't using it for hardening software in production a valid use case? I would be using zero filling even if the CPU performed worse for zero filling. Using a different value gets complicated, because ideally the value creates pointers into protected memory, far away from anything valid.
On 64-bit platforms, repeated 0xAA has that property.
However, repeated 0xAA is also a pretty large size or index, which zero isn't.
But! In a context like a kernel, zero is often a valid pointer sentinel...
So what the "right" choice is was pretty hard to agree on, and all that was left was perf.
It has that property for 64-bit pointers, but not 32-bit pointers. In a 32-bit process on a 64-bit OS, the entire 32-bit address space is usually accessible. As another common example, the standard Android Runtime uses 32-bit pointers for the managed heaps to reduce memory usage.
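To make the pointer argument concrete, a small sketch (my illustration; the address commentary reflects typical x86-64 behavior, not data from the thread):

```c
/* Sketch: repeated 0xAA interpreted as a pointer on 64-bit vs 32-bit.
 * 0xAAAAAAAAAAAAAAAA is a non-canonical x86-64 address, so any
 * dereference faults immediately. 0xAAAAAAAA is ~2.7 GiB, which can
 * be a mapped address in a 32-bit process on a 64-bit OS. */
#include <inttypes.h>
#include <stdio.h>

int main(void) {
    uint64_t p64 = 0xAAAAAAAAAAAAAAAAull;
    uint32_t p32 = 0xAAAAAAAAu;

    printf("64-bit pattern: 0x%" PRIx64 " (non-canonical, faults)\n", p64);
    printf("32-bit pattern: 0x%" PRIx32 " (plausibly mapped)\n", p32);
    return 0;
}
```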
Zero is also simply by far the most conservative and safest value. It's typically what the software is already depending on, since memory starts off zeroed and often stays zeroed or ends up that way. It's the value least likely to change behavior from what it already was.
It's by far the worst option for uncovering bugs, but by far the safest option for hardening. It's the only value that I feel comfortable using in production. I'd rather leave the data uninitialized than use any other value. I'd be scared of making more vulnerabilities exploitable.
It would be difficult to find a real case where -ftrivial-auto-var-init=zero made incorrect software less safe, while it makes lots of incorrect software safer from exploitation or other failures. Neither Clang nor GCC supports hard errors on all uninitialized use, and it's not socially viable.
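As a hedged sketch of the "safer from exploitation" point (the names here are invented for illustration):

```c
/* Sketch: an uninitialized function pointer bug. Without the flag,
 * the !ready path calls through stack garbage (potentially attacker-
 * influenced). With -ftrivial-auto-var-init=zero, `h` is NULL and the
 * call becomes a deterministic null-pointer crash instead. */
#include <stdio.h>

typedef void (*handler_t)(void);

static void on_ready(void) { puts("handled"); }

static void dispatch(int ready) {
    handler_t h;       /* bug: only assigned when ready != 0 */
    if (ready)
        h = on_ready;
    h();               /* uninitialized call on the !ready path */
}

int main(void) {
    dispatch(0);       /* crashes predictably under zero-init */
    return 0;
}
```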
I'm not arguing this. I'm stating how I think the long-ass parameter can be removed. Two options: perf, or data.
I can provide evidence showing that Clang is already written in this language dialect and needs to start passing the switch for correctness, if that counts. I don't think there's going to be a significant performance advantage of zeroing instead of filling with another value.
That would be neat to send to the list.
On perf: Google's numbers show a significant cost. We've worked on reducing that cost, and they have as well. The bar I set is: can pattern reach the cost of zero? If not, then zero should be kept.
I agree that this is about more than just performance. It's about not exposing accidental crashes as well. My goal isn't to have pattern init break our production builds. It's about hardening what is already there while minimizing crashes.
In cases where the compiler can see that the data is already being zeroed, it would be removing the redundant zeroing anyway. I can come up with contrived cases where zeroing is faster, but I really doubt it has a substantial performance advantage for most real-world usage.
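A sketch of that redundant-zeroing point (my own example, assuming a reasonably optimizing compiler):

```c
/* Sketch: when the code already zeroes an object, the flag's implicit
 * zero-fill duplicates it, and the optimizer can merge or drop one of
 * the two stores, so zero-init is often free in exactly these cases. */
#include <stdio.h>
#include <string.h>

struct buf {
    char   data[64];
    size_t len;
};

static size_t fill(struct buf *out) {
    struct buf b;
    memset(&b, 0, sizeof b);   /* explicit zeroing already present */
    b.len = 4;
    memcpy(b.data, "abcd", b.len);
    *out = b;
    return b.len;
}

int main(void) {
    struct buf b;
    printf("len=%zu data=%s\n", fill(&b), b.data);
    return 0;
}
```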