Conversation

This Tweet was deleted by the Tweet author.
No, that's not what he means. He's saying that an external file system should have a sandboxed filesystem driver, so that exploiting a bug inside it doesn't immediately grant complete control over the entire system and at least requires privesc to escape (likely via the kernel).
It would be hard enough to make a microkernel secure if it were 50k lines of Rust with only 4k lines of unsafe code carrying potential memory corruption, let alone millions of lines of trusted C full of memory corruption from all kinds of trivial mistakes and oversights. It's a joke.
Just want to point out that seL4 is about 10K lines of C and is formally verified to be free of undefined behavior, out-of-bounds array accesses, crashes, etc... Not trying to defend C, but there are ways to make anything safe if you care enough. The problem is people don't care.
Sure, but at that point it's not C. It's a subset of C that was verified via proofs. Those proofs are an extension to the language. Similarly, you can define a general purpose subset of C with annotations for ownership, lifetimes, etc. letting you write memory safe code.
You end up with something that looks a lot like Rust. You can't compile existing C codebases with an implementation of it. They have to be specifically written for it with the annotations, but they don't need complex proofs, unless they want to prove more than memory safety.
By "introducing a system", I mean defining a subset of C with annotations that is actually memory safe, which isn't all that complex and doesn't need much noisy syntax. A lot can be inferred, but still, you can't easily adapt existing code to be verified as safe like this.
2