ok! so then would it be fair to say that you don't believe these bugs reflect any sort of shortcoming in the C language, but rather they all stem from ineptitude or a refusal to follow best practices?
[Tweet deleted by author]
[Tweet deleted by author]
Static analysis has had very little to show for C, because the language is fundamentally hostile to it. Dynamic mitigations have been much more effective.
[Tweet deleted by author]
It has to do with the style of the code as well as the libraries you link to. The use of unbounded structures and imprecisely tracked memory objects adds to that mix.
In order for static analysis to be useful, code has to be written to allow for deep analysis
Compare the bugs found with ASan/UBSan/TSan + testing/fuzzing vs. static analysis. Static analysis barely finds anything. It also tends to produce lots of false positives, which are harmful in themselves and encourage code changes that can, and often do, introduce new bugs.
The Debian key generation issue is a particularly well-known example, but there are lots of others. Many projects have had bad experiences with it. Here's one: https://sqlite.org/testing.html#static_analysis
"Static analysis has found a few bugs in SQLite, but those are the exceptions. More bugs have been introduced into SQLite while trying to get it to compile without warnings than have been found by static analysis."
It's like that for the Linux kernel as well. There are classes of bugs that defy casual static analysis, and you basically have to build your own runtime correctness-checking tools (lockdep etc.).
The Debian key generation issue resulted from *dynamic* analysis (Valgrind flagging conditionals that depended on uninitialized data), not *static* analysis.