Conversation

This Tweet was deleted by the Tweet author.
No, that's not what he means. He's saying that an external file system should have a sandboxed filesystem driver, so that exploiting a bug inside it doesn't immediately grant complete control over the entire system and at least requires privesc to escape (likely via the kernel).
It would be hard enough to make a microkernel secure if it were 50k lines of Rust with only 4k lines of unsafe code with potential memory corruption, let alone millions of lines of trusted C full of memory corruption from all kinds of trivial mistakes and oversights. It's a joke.
Just want to point out that seL4 is about 10K lines of C and is formally verified to be free of undefined behavior, out-of-bounds array accesses, crashes, etc. Not trying to defend C, but there are ways to make anything safe if you care enough. The problem is that people don't care.
I agree that C code can be made correct with somewhere around 10x the effort folks normally put into writing C code, and that one of the ways that you could profitably spend that extra effort is formal verification (though very good testing or auditing discipline could work too).
Eliminating 90% of memory corruption bugs with clean, well-written C code and proper use of tooling like UBSan and ASan isn't good enough, and it isn't comparable to providing memory safety. Full memory safety can be provided for C without proving all the code formally correct.
It's not realistic for every application and library to be extremely well written throughout without any mistakes. In C, you make one small mistake or typo and you have a remote code execution bug. That's just not acceptable. Code style, code review, testing, etc. don't fix it.