I think you're getting hung up on this idea that if the spec doesn't leave something undefined, then implementations can't *ever* deviate from what otherwise would be defined. It just means that they can't deviate by default. You can pass flags that change behavior.
No, that's not what I've been saying. I think it would be a serious regression to break compatibility with safe implementations by making it correct to be incompatible with them. You want to massively roll back safety and security, especially if you want to remove it by default.
It's not a big deal if the spec says that overflow wraps. Anyway, the most useful version of trap-on-overflow would also do this for unsigned arithmetic, which the current spec doesn't permit (unsigned overflow is defined to wrap). It's fine to have security technologies that create interesting new behaviors.
The "breaking interface contracts is a security enhancement" view is a very very harmful one. It's the opposite.
Systems code should be written in something higher level than assembler but lower level than the symbolic execution system that C claims to provide currently. “Just use assembly” or “just use a type safe language” aren’t useful answers.
Systems code benefits from memory and type safety even more than most other code because it's often in a position of trust and privilege. Using a language where unsafety can be contained and quickly wrapped into safe APIs is certainly useful advice for newly written systems code.
The expectations of software robustness and security have increased a lot, and it's simply not realistic to meet them with unsafe tools that make it much more difficult to write safe code. Writing something complex like a safe ext4 implementation in C is not very realistic.
i.e. writing the entire thing with zero memory corruption bugs for an attacker to exploit either via an attacker controlled filesystem or an application. Drivers similarly have to be written treating the hardware and code using them as adversarial. Choice of tools is important.
FS drivers do not belong in privileged contexts. The FS driver for an untrusted FS should be executing in a context where it can do nothing worse than store or retrieve wrong data.
This Tweet was deleted by the Tweet author.
He's talking about a case like an external drive rather than the file system used as the backing storage for the base OS or OS state. For those, sandboxing isn't going to help much. Sandboxing the block layer, storage drivers and storage firmware certainly helps though.
Those don't need to be trusted if there's authenticated encryption. If there isn't, then not much is gained for the main internal storage. An external drive is different.
Right. If the storage device* or the fs on it is untrusted, you can't have data whose integrity the security of your system depends on stored on it. It's just being used to import (necessarily untrusted) files or something.
* device integrity may not matter with FDE+good driver.
Encryption needs to be authenticated encryption to avoid trust in storage though, and current implementations don't do that. Verified boot can avoid trust in storage for the base OS at least (since it avoids trusting the data there, in case an attacker had control in the past).


