Hardware doesn't implement C, so there isn't a standard behavior defined by hardware. It's up to the compiler to map C onto the hardware or a virtual machine, and the compiler gets to choose how to handle each kind of undefined or implementation-defined behavior, and everything else.
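A minimal sketch (not from the thread) of what that choice looks like in practice: signed overflow is undefined, so the optimizer is free to assume it never happens and fold the comparison below away entirely.

    /* sketch: the compiler, not the hardware, decides what x + 1 means
     * at INT_MAX. With optimization, GCC and Clang typically fold this
     * function to "return 1" because signed overflow is undefined. */
    #include <limits.h>
    #include <stdio.h>

    int always_true(int x) {
        return x + 1 > x; /* UB when x == INT_MAX; may be assumed away */
    }

    int main(void) {
        printf("%d\n", always_true(INT_MAX));
        return 0;
    }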
I think you're getting hung up on this idea that if the spec doesn't leave something undefined, then implementations can't *ever* deviate from what otherwise would be defined. It just means that they can't deviate by default. You can pass flags that change behavior.
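For example (these are real GCC/Clang flags; exact diagnostics vary by version), the same source can be given wrapping, trapping, or diagnosed behavior without the spec defining any of them:

    /* overflow.c -- what "x + 1" does at INT_MAX depends on flags:
     *   cc -O2 overflow.c              default: undefined behavior
     *   cc -O2 -fwrapv overflow.c      wraps as two's complement
     *   cc -O2 -ftrapv overflow.c      aborts on signed overflow
     *   cc -fsanitize=signed-integer-overflow overflow.c   UBSan report
     */
    #include <limits.h>
    #include <stdio.h>

    int main(void) {
        int x = INT_MAX;
        printf("%d\n", x + 1); /* behavior selected by the flags above */
        return 0;
    }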
No, that's not what I've been saying. I think it would be a serious regression to break compatibility with safe implementations by making it correct to be incompatible with them. That would massively roll back safety and security, especially if the safe behavior were removed by default.
It's not a big deal if the spec says that overflow wraps. Anyway, the most useful version of trap-on-overflow would also trap for unsigned arithmetic, which the current spec doesn't allow since it defines unsigned overflow to wrap. It's fine to have security technologies that create interesting new behaviors.
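Since unsigned overflow is defined to wrap, a conforming way to get trap-on-overflow semantics for it today is an explicit check. This sketch uses __builtin_add_overflow, a real GCC/Clang builtin; the wrapper name is illustrative:

    #include <stdint.h>
    #include <stdlib.h>

    /* Trap-on-overflow for unsigned addition, expressed in conforming
     * terms: detect that the mathematical result didn't fit and abort
     * instead of silently wrapping. */
    uint32_t checked_add_u32(uint32_t a, uint32_t b) {
        uint32_t sum;
        if (__builtin_add_overflow(a, b, &sum))
            abort();
        return sum;
    }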
The "breaking interface contracts is a security enhancement" view is a very very harmful one. It's the opposite.
Systems code should be written in something higher level than assembler but lower level than the symbolic-execution machine that C currently claims to provide. “Just use assembly” or “just use a type-safe language” aren’t useful answers.
Systems code benefits from memory and type safety even more than most other code because it's often in a position of trust and privilege. Using a language where unsafety can be contained and quickly wrapped into safe APIs is certainly useful advice for newly written systems code.
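A sketch of that containment pattern in plain C (the names span and span_get are illustrative, not from any library): the raw indexing lives in one small function, and every caller goes through the checked accessor.

    #include <stdbool.h>
    #include <stddef.h>

    struct span {
        const unsigned char *data;
        size_t len;
    };

    /* The only place raw indexing happens; callers can't read out of
     * bounds because the check lives inside the API, not at every use. */
    bool span_get(struct span s, size_t i, unsigned char *out) {
        if (i >= s.len)
            return false; /* out of range: report, don't corrupt memory */
        *out = s.data[i];
        return true;
    }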
Expectations of software robustness and security have increased a lot, and it's simply not realistic to meet them with unsafe tools that make it much more difficult to write safe code. Writing something as complex as a safe ext4 implementation in C is not very realistic.
i.e. writing the entire thing with zero memory-corruption bugs for an attacker to exploit, either via an attacker-controlled filesystem or via an application. Drivers similarly have to be written treating the hardware and the code using them as adversarial. Choice of tools is important.
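A sketch of what treating an FS image as adversarial means at the code level (the on-disk layout here is hypothetical): every attacker-controlled length is validated against the real buffer before any copy.

    #include <stdint.h>
    #include <string.h>

    struct record_hdr {      /* hypothetical on-disk header */
        uint32_t name_len;   /* attacker-controlled */
        /* name bytes follow the header */
    };

    /* Returns 0 on success, -1 if the image is malformed. */
    int read_name(const unsigned char *img, size_t img_len,
                  char *out, size_t out_len) {
        struct record_hdr hdr;
        if (img_len < sizeof hdr)
            return -1;
        memcpy(&hdr, img, sizeof hdr);
        /* the unchecked memcpy(out, img + sizeof hdr, hdr.name_len) is
         * exactly the bug class an adversarial image exploits */
        if (hdr.name_len > img_len - sizeof hdr || hdr.name_len >= out_len)
            return -1;
        memcpy(out, img + sizeof hdr, hdr.name_len);
        out[hdr.name_len] = '\0';
        return 0;
    }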
FS drivers do not belong in privileged contexts. The FS driver for an untrusted FS should be executing in a context where it can do nothing worse than store or retrieve wrong data.
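One Linux-specific sketch of that kind of confinement (SECCOMP_MODE_STRICT is real but deliberately minimal; a production driver would need a SECCOMP_MODE_FILTER policy instead): after this call the process can only read and write already-open descriptors and exit, so a compromised parser can at worst return wrong data.

    #include <linux/seccomp.h>
    #include <sys/prctl.h>

    /* Returns 0 on success. After this, only read, write, _exit and
     * sigreturn are permitted; any other syscall kills the process. */
    int enter_sandbox(void) {
        return prctl(PR_SET_SECCOMP, SECCOMP_MODE_STRICT, 0, 0, 0);
    }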
Definitely, but gaining arbitrary code execution even within a sandbox is a huge victory for an attacker, especially if it's not sitting on top of a lean microkernel that's very difficult to exploit. Escaping from a sandboxed FUSE driver on Linux is easier than initial code exec.
[Tweet deleted by its author]
I'm not sure why you're linking that. I'm talking about vulnerabilities in the Linux kernel usable to escape from a sandbox, not the userspace FUSE components, which aren't a substantial portion of the attack surface for a FUSE filesystem driver. A FUSE driver is a normal process.