No, that's not what he means. He's saying that an external file system should have a sandboxed filesystem driver, so that exploiting a bug inside it doesn't immediately grant complete control over the entire system and at least requires privesc to escape (likely via the kernel).
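For concreteness, here is a minimal sketch of that privilege-separation pattern using Linux's strict seccomp mode. The pipe protocol and the parser body are placeholders, not any real driver's design:

```c
/* Sketch: parse untrusted filesystem data in a child process locked
 * into strict seccomp, so an exploited parser only has read/write/exit
 * and still needs a privilege escalation (likely a kernel bug) to
 * escape. Error handling omitted for brevity. */
#include <linux/seccomp.h>
#include <sys/prctl.h>
#include <sys/syscall.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    int req[2], resp[2];
    pipe(req);
    pipe(resp);

    if (fork() == 0) {
        close(req[1]);
        close(resp[0]);
        /* Child: forbid gaining privileges, then enter strict seccomp:
         * only read, write, exit, and sigreturn remain available. */
        prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0);
        prctl(PR_SET_SECCOMP, SECCOMP_MODE_STRICT);

        char buf[4096];
        ssize_t n;
        while ((n = read(req[0], buf, sizeof buf)) > 0) {
            /* ... parse untrusted on-disk structures here ... */
            write(resp[1], buf, (size_t)n); /* placeholder "result" */
        }
        /* Strict mode allows the plain exit syscall but not exit_group,
         * which is what _exit() issues, so make the raw syscall. */
        syscall(SYS_exit, 0);
    }

    /* Parent: ship raw image bytes to the child and treat whatever
     * comes back as untrusted input in its own right. */
    close(req[0]);
    close(resp[1]);
    char block[] = "untrusted image bytes";
    write(req[1], block, sizeof block);
    close(req[1]); /* EOF lets the sandboxed child finish */

    char out[4096];
    read(resp[0], out, sizeof out);
    wait(NULL);
    return 0;
}
```

Real sandboxes typically use seccomp-BPF filters rather than strict mode so the child can still allocate memory, but the escape story is the same: the attacker lands in a process that can do almost nothing on its own.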
Try reading the overview in events.linuxfoundation.org/wp-content/upl. Finding a Linux kernel vulnerability is not hard. Literally hundreds of bugs are found by syzkaller every month and many are not fixed. Most are memory corruption. There are simply too many to fix even the ones already discovered.
Yes, we don't need to debate the question "can people write memory-safe code in C"; the answer is overwhelmingly obvious to almost all of us.
It would be hard enough to make a microkernel secure if it were 50k lines of Rust with only 4k lines of unsafe code with potential memory corruption, let alone millions of lines of trusted C full of memory corruption from all kinds of trivial mistakes and oversights. It's a joke.
Just want to point out that seL4 is about 10K lines of C and is formally verified to have no UB, no OOB array accesses, no crashes, etc. Not trying to defend C, but there are ways to make anything safe if you care enough. The problem is people don't care.
I agree that C code can be made correct with somewhere around 10x the effort folks normally put into writing C code, and that one of the ways that you could profitably spend that extra effort is formal verification (though very good testing or auditing discipline could work too).
My claim is that it actually takes less effort. Rather, it takes abandoning the idea that doing clever tricks is going to make your code more efficient.
Eliminating 90% of memory corruption bugs with clean, well-written C code and proper usage of tooling like UBSan and ASan isn't good enough, nor is it comparable to providing memory safety. Full memory safety can be provided for C without formally proving all the code correct.
Either write it in a subset with annotations specifically to take advantage of memory safety verification (which would not be far from what you want to write naturally in clean code most of the time, but with some adjustments and annotations) or have the compiler do it.
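As one concrete instance of the annotated-subset idea, here is a hedged sketch in Checked C syntax (the tweet doesn't name a specific system, and this one compiles only with the Checked C toolchain, not plain C):

```c
#include <stddef.h>

/* The bounds annotation ties the pointer to its length, so the
 * compiler either proves each access in bounds or inserts a runtime
 * check; the body reads like ordinary clean C. */
int sum(_Array_ptr<const int> xs : count(n), size_t n) {
    int total = 0;
    for (size_t i = 0; i < n; i++)
        total += xs[i];  /* checked access, not trusted arithmetic */
    return total;
}
```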
Even a project like SQLite with a huge amount of testing / dynamic analysis tooling applied to it still has serious vulnerabilities caused by memory corruption bugs.
sqlite.org/testing.html
It's important to keep the trusted computing base (TCB) small for these kinds of issues.
It's not realistic for every application and library to be extremely well written throughout without any mistakes. In C, you make one small mistake or typo and you have a remote code execution bug. That's just not acceptable. Code style, code review, testing, etc. don't fix it.
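An illustrative (hypothetical) example of the kind of one-character typo meant here:

```c
#include <stddef.h>

/* Copy `len` ints from src to dst. The `<=` is a one-character typo
 * for `<`: it writes one element past the end of dst. If len and the
 * surrounding data are attacker-controlled, that single out-of-bounds
 * write can be the foothold for an exploit chain. */
void copy_fields(int *dst, const int *src, size_t len) {
    for (size_t i = 0; i <= len; i++)
        dst[i] = src[i];
}
```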
No, that's simply not true. There's only RCE if your mistake is in code that's exposed to untrusted input and has input-dependent code paths.
Being exposed to untrusted input can be very indirect. Consider something like a database library (SQLite): the database file itself is untrusted input. It's a serious problem if an attacker-controlled database can be used to gain code execution. A huge portion of the code must be safe.