Conversation

Many programs have bugs where they read data that has just been freed but handle the result being an arbitrary value. The issue is often benign with common allocators; with other implementations, however, the access will fault and the program crashes. It's good that implementations aren't required to let it work.
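A minimal sketch of that kind of bug (the allocator behavior described is an assumption about typical implementations, not a guarantee):

    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        int *p = malloc(sizeof *p);
        if (!p)
            return 1;
        *p = 42;
        free(p);

        /* Use-after-free read: undefined behavior. With many allocators
         * the memory is still mapped and this prints a stale value; a
         * hardened allocator that unmaps or poisons freed memory will
         * fault here instead. */
        printf("%d\n", *p);
        return 0;
    }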
Also, signed overflow being undefined rather than defined as wrapping means that more secure implementations where it traps are permitted. Passing -fsanitize=signed-integer-overflow -fsanitize-trap=signed-integer-overflow is standards-compliant and is used for hardening in AOSP.
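A sketch of what that hardening does (overflow.c is a made-up file name):

    /*
     * clang -fsanitize=signed-integer-overflow \
     *       -fsanitize-trap=signed-integer-overflow overflow.c
     */
    #include <limits.h>
    #include <stdio.h>

    int main(void)
    {
        int x = INT_MAX;
        /* Signed overflow is undefined behavior. Built with the flags
         * above, this add is compiled to trap instead of wrapping. */
        int y = x + 1;
        printf("%d\n", y);
        return 0;
    }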
That code can be fixed, and the fixes are clear-cut bug fixes. High-quality C code is tested with ASan, TSan, UBSan, etc., and many of these issues are already being caught and fixed over time. Portable and safe C code needs to avoid relying on undefined behavior like this.
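One sketch of such a clear-cut fix, using the GCC/Clang __builtin_add_overflow builtin (the add_checked helper is illustrative, not from any particular codebase):

    #include <limits.h>
    #include <stdbool.h>
    #include <stdio.h>

    /* Checked addition: the overflow case is reported to the caller
     * instead of being undefined behavior. */
    static bool add_checked(int a, int b, int *out)
    {
        return !__builtin_add_overflow(a, b, out);
    }

    int main(void)
    {
        int sum;
        if (add_checked(INT_MAX, 1, &sum))
            printf("sum = %d\n", sum);
        else
            fprintf(stderr, "addition would overflow\n");
        return 0;
    }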
C isn't defined as that language, and you're not in a position where you get to define the language. In the real world, C is deployed with various safety features that take advantage of many things being undefined, and reducing portability / compatibility with those features wouldn't be good.
This Tweet was deleted by the Tweet author.
The Linux kernel chooses to use a superset of standard C. It doesn't ignore the rules it isn't disabling via those switches; rather, it is actively tested with ASan and UBSan, with people working to address the cases that are not permitted, usually by fixing the kernel bugs.
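A rough illustration of that superset: the kernel builds with -fno-strict-aliasing among its switches, and code like the following (a hypothetical example, not kernel code) leans on that switch rather than on standard C, where the aliasing read would be undefined:

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Reads a float's representation through a uint32_t pointer. This
     * violates standard C's aliasing rules but is permitted when the
     * translation unit is built with -fno-strict-aliasing. */
    static uint32_t bits_of(const float *f)
    {
        return *(const uint32_t *)f;
    }

    int main(void)
    {
        float f = 1.0f;
        printf("0x%08" PRIx32 "\n", bits_of(&f));
        return 0;
    }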
The defined behavior can be trapping, which makes more sense in 2019 with software safety / security / robustness being such important issues. Hardware can be, and is being, designed to make it efficient to catch these issues too. The standard can also simply permit safety without resorting to 'undefined'.
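A sketch of what trapping as the defined behavior can look like at the source level (add_trapping is a hypothetical helper):

    #include <limits.h>

    /* Addition where overflow is defined to trap: the result is either
     * correct or the program stops immediately; it never silently wraps. */
    static int add_trapping(int a, int b)
    {
        int r;
        if (__builtin_add_overflow(a, b, &r))
            __builtin_trap();
        return r;
    }

    int main(void)
    {
        return add_trapping(INT_MAX, 1); /* traps here */
    }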