No, that's not what he means. He's saying that an external file system should have a sandboxed filesystem driver, so that exploiting a bug in it doesn't immediately grant complete control over the entire system; escaping the sandbox at least requires privilege escalation (likely via the kernel).
It would be hard enough to make a microkernel secure if it were 50k lines of Rust with only 4k lines of unsafe code that could harbor memory corruption, let alone millions of lines of trusted C full of memory corruption from all kinds of trivial mistakes and oversights. It's a joke.
Just want to point out that seL4 is about 10K lines of C and is formally verified to have no UB, no OOB array accesses, no crashes, etc. Not trying to defend C, but there are ways to make anything safe if you care enough. The problem is that people don't care.
I agree that C code can be made correct with somewhere around 10x the effort folks normally put into writing C code, and that one of the ways that you could profitably spend that extra effort is formal verification (though very good testing or auditing discipline could work too).
Eliminating 90% of memory corruption bugs with clean, well-written C code and proper use of tooling like UBSan and ASan isn't good enough, and it isn't comparable to actual memory safety. Full memory safety can be provided for C without formally proving all the code correct.
Either write it in a subset with annotations designed for memory-safety verification (most of the time not far from what you'd naturally write as clean code, but with some adjustments and annotations), or have the compiler enforce the checks.
Being exposed to untrusted input can be very indirect. Consider something like a database library (SQLite). The database is untrusted input. It's a serious problem if an attacker controlled database can be used to gain code execution. A huge portion of the code must be safe.