CFI and ShadowCallStack already do a good job against very constrained memory corruption bugs. I don't see what mitigations short of detecting/preventing memory corruption can realistically do against more dangerous primitives, where an attacker can simply change process privileges directly.
Taking over kernel control flow is entirely optional and not required for anything attackers actually want to accomplish. Even if they do want it, they have plenty of other avenues available, including the page tables and an immense amount of other writable state.
The ARMv8.5 memory tagging approach only has 4-bit tags. With 128-bit tags via fat pointers you could have strong memory safety. At a certain point it would be a whole lot less work, and actually more performant, to just implement memory safety instead of weak mitigations.
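Rough arithmetic on that gap (my numbers, assuming a single wild access against a uniformly random tag):

\[
P_{\text{undetected}} = 2^{-4} = 6.25\% \ \text{for 4-bit tags}, \qquad 2^{-128} \approx 3\times10^{-39} \ \text{for 128-bit tags.}
\]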
MTE sounds a lot more useful for sanitizing (HWASAN) than as a mitigation. I'm a lot more excited about ARMv8.3 PAC for that.
MTE is quite powerful despite the 4-bit tag limitation. It can be used to provide deterministic guarantees, and it has explicit support for reserving tags. An OS reserving a single tag for internal use allows it to protect all kinds of metadata, make 16-byte-granularity 'guard pages', etc.
ShadowCallStack, inline malloc metadata, freed allocations, etc. can be protected with a single reserved tag. Making sure adjacent allocations have different tags, or placing protected metadata between them, wipes out small / linear overflows. It can do a lot more than random tags alone.
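A minimal sketch of the reserved-tag scheme, assuming an AArch64 Linux target built with MTE support (e.g. -march=armv8.5-a+memtag), heap pages mapped with PROT_MTE, and the ACLE intrinsics from arm_acle.h; the helper names and allocator layout are hypothetical, not from the thread.

```c
#include <arm_acle.h>   /* __arm_mte_* intrinsics (requires armv8.5-a+memtag) */
#include <stddef.h>
#include <stdint.h>

#define MTE_GRANULE  16          /* MTE tags memory in 16-byte granules */
#define EXCLUDE_TAG0 (1ULL << 0) /* reserve tag 0 for allocator-owned memory */

/* Apply the tag carried in p's top byte to every granule of [p, p + len). */
static void colour_region(void *p, size_t len) {
    for (size_t off = 0; off < len; off += MTE_GRANULE)
        __arm_mte_set_tag((char *)p + off);
}

/* Hand out an allocation whose tag is never the reserved tag 0 and never the
 * tag of its neighbour, so small linear overflows trap deterministically. */
static void *tag_new_allocation(void *chunk, size_t len, void *neighbour) {
    uint64_t exclude = EXCLUDE_TAG0;
    if (neighbour)
        exclude = __arm_mte_exclude_tag(neighbour, exclude);
    void *tagged = __arm_mte_create_random_tag(chunk, exclude);
    colour_region(tagged, len);
    return tagged;
}

/* On free, retag the chunk with the reserved tag 0: any later access through
 * a stale (non-zero-tagged) pointer faults, and the same trick protects
 * inline metadata or a ShadowCallStack region. */
static void retire_allocation(void *p, size_t len) {
    void *reserved = (void *)((uintptr_t)p & ~(0xfULL << 56));
    colour_region(reserved, len);
}
```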
I think it's far more compelling than PAC even if you disregard random tags and use entirely deterministic ones. PAC is yet another attempt at targeting exploit techniques: it only protects specific pointers rather than memory in general. IMO, it's underwhelming.
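To make the "specific pointers" point concrete, here is a sketch using Clang's <ptrauth.h> intrinsics (arm64e-style targets); the struct, field, and discriminator choice are hypothetical, and in typical deployments the compiler simply signs return addresses and function/vtable pointers for you.

```c
#include <ptrauth.h>   /* Clang pointer-authentication intrinsics */
#include <stdint.h>

struct creds;              /* some security-sensitive object */

struct session {
    struct creds *creds;   /* only this signed field is protected by PAC */
    char buf[64];          /* everything else is ordinary, unprotected memory */
};

static void set_creds(struct session *s, struct creds *c) {
    /* Sign with a data key, using the field's address as the discriminator. */
    s->creds = ptrauth_sign_unauthenticated(c, ptrauth_key_asda,
                                            (uintptr_t)&s->creds);
}

static struct creds *get_creds(struct session *s) {
    /* Authenticate before use: a value overwritten without the key fails the
     * check, and the resulting poisoned pointer faults when dereferenced. */
    return ptrauth_auth_data(s->creds, ptrauth_key_asda, (uintptr_t)&s->creds);
}
```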
PAC is at odds with using the address space for exploit mitigations. It's directly opposed to approaches like splitting up the address space and avoiding address reuse, which is a *deterministic* use-after-free mitigation. It isn't just taking bits away from ASLR but also from more interesting things.
Those 'more interesting things' also include memory tagging, since it could eventually be possible to use 24-bit or larger tags via the unused upper pointer bits. PAC uses up a bunch of those precious bits for an inherently weak probabilistic mitigation that is hard to deploy widely.
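For scale (my figures, not from the thread), the spare bits depend on the configured virtual address size:

\[
64 - 40 = 24 \ \text{free upper bits with a 40-bit user address space}, \qquad 64 - 48 = 16 \ \text{with a 48-bit one.}
\]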
“Eventually” as in we somehow solve the true issue with metadata, which is storage. Even ECC is a lie, and in a world where DRAM is precious, that is hard to sustain.
Going from the current 16-byte granularity to 64-byte would at least make it possible to use 16-bit tags instead of 4-bit without needing more tag storage. They could certainly go a lot further than that if they're willing to offer the option of spending more space on tag metadata.
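The storage arithmetic behind that (it follows directly from the granule sizes):

\[
\frac{4\ \text{tag bits}}{16\ \text{bytes}} = \frac{16\ \text{tag bits}}{64\ \text{bytes}} = 0.25\ \text{bits per byte} \approx 3.1\%\ \text{overhead.}
\]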
If you go to 64-byte granularity, the memory wasted rounding allocations up to 64 bytes becomes the bottleneck for the majority of consumer workloads (browsers, etc.). SPARC ADI worked fine with that because it aligned with Oracle DB needs, but consumer OSes are in a different space.
For the ecosystem we work on, nearly all of that is using concurrent compacting garbage collection.
C++ and now Rust are used for particularly high-performance libraries or application code, and it wouldn't make much sense to use them if that code were heavily impacted by malloc overhead.