Memory tagging is a very loose approximation of dynamic memory safety checking though. It can provide some nice deterministic guarantees, like easily guaranteeing every small / linear heap overflow is caught, but for arbitrary read/write it's a very low-entropy probabilistic mitigation.
I think one of the major benefits will be that it's essentially like having ASan deployed in production at a very low cost. It will be really good for eliminating large swaths of bugs by detecting them a high percentage of the time in production. There's a decent chance of bypassing it though.
SPARC ADI and ARMv8.5 MTE have 4-bit tags, which really isn't a lot of entropy for mitigating arbitrary read/write. A tag never used for live heap allocations can be reserved to mark freed data, or maybe other things like a shadow stack, as another mitigation. Not a lot of tags though.
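A minimal sketch of the entropy math, assuming a simplified model of 4-bit tagging (the names here are illustrative, not the actual MTE API): with one of the 16 tags reserved for freed memory, live allocations draw from 15 tags, so a single wild access with a guessed tag slips through roughly 1 time in 15, while use-after-free of retagged memory is caught deterministically.

```rust
// Hypothetical model of 4-bit memory tagging; not the real MTE interface.
const TAG_BITS: u32 = 4;
const NUM_TAGS: u32 = 1 << TAG_BITS; // 16 possible tags
const RESERVED_FREE_TAG: u32 = 0;    // reserved: never assigned to live allocations

// An access is allowed only when the pointer's tag matches the memory's tag.
fn tags_match(pointer_tag: u32, granule_tag: u32) -> bool {
    pointer_tag == granule_tag
}

fn main() {
    // Live allocations use the remaining 15 tags, so a single arbitrary
    // access with an attacker-guessed tag bypasses the check with
    // probability 1/15 (~6.7%), i.e. it is caught ~93.3% of the time.
    let live_tags = NUM_TAGS - 1;
    let bypass_chance = 1.0 / live_tags as f64;
    println!("bypass chance per guess: {:.1}%", bypass_chance * 100.0);

    // Freed memory retagged with the reserved tag: no live pointer
    // carries that tag, so use-after-free is caught deterministically.
    assert!(!tags_match(3, RESERVED_FREE_TAG));
}
```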
I'd really like to see proper efficient hardware support for integer overflow checking by propagating poison values internally and then trapping when attempting to use a value. The main barrier to automatic checking, even in many higher-level languages, is the high performance cost.
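To make the cost concrete, here's a hedged sketch of what software checking looks like today: every arithmetic operation carries its own branch on the overflow flag (checked_add compiles down to roughly an add followed by a conditional jump on x86_64), which is the per-operation overhead that hardware poison propagation would amortize.

```rust
// Each addition is individually checked; `?` is the per-operation
// branch that current software overflow checking inserts everywhere.
fn checked_sum(values: &[u32]) -> Option<u32> {
    let mut total: u32 = 0;
    for &v in values {
        total = total.checked_add(v)?; // branch after every add
    }
    Some(total)
}

fn main() {
    assert_eq!(checked_sum(&[1, 2, 3]), Some(6));
    // u32::MAX + 1 overflows, so the whole sum reports failure.
    assert_eq!(checked_sum(&[u32::MAX, 1]), None);
}
```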
Why are poison values and deferred checking needed? Just trap at the time of overflow. Inserting 100%-predicted "jo" instructions all over the place shouldn't even hurt performance; no need for new hardware.
It does hurt performance substantially though. Even instructions that strictly trapped on overflow would likely have a substantial cost despite being the same number of instructions: the code makes much worse use of the hardware, and the effects hurt compiler optimization too.
Have you measured this? It shouldn't on a speculative execution architecture. Speculation should just continue as if the overflow didn't happen, discarding results before the operation retires if the overflow did happen.
It still makes the code bigger, uses more resources and makes poorer use of the underlying hardware. I've measured the impact of -fsanitize=signed-integer-overflow,unsigned-integer-overflow -fsanitize-trap=signed-integer-overflow,unsigned-integer-overflow.
There has been a lot of measuring / debate about it for Rust because the language is defined as requiring intended overflows to be marked as such, and it's allowed to trap on overflow, but it currently only traps in debug builds due to the relatively high performance impact.
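A small sketch of what "intended overflows must be marked" looks like in practice: Rust's standard integer methods make wrapping, detection, and flag-returning arithmetic explicit, while a plain `x + 1` panics with debug assertions enabled and wraps in release builds.

```rust
fn main() {
    let x: u8 = 255;
    // Intended overflow must be spelled out:
    assert_eq!(x.wrapping_add(1), 0);           // explicit two's-complement wrap
    assert_eq!(x.checked_add(1), None);         // explicit detection
    let (v, overflowed) = x.overflowing_add(1); // wrap plus an overflow flag
    assert_eq!((v, overflowed), (0, true));
    // A bare `x + 1` here would panic in debug builds and wrap in
    // release builds (unless overflow-checks is enabled).
}
```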
It tends to be something like a ~5-10% cost at a hardware level, which isn't enormous, but it's still significant. That's ignoring added code size. The other side of the picture is the missed optimizations, which add more cost but don't require hardware support to improve.
It's a somewhat heated topic for Rust because the language wants to enable overflow checking by default in release builds, and the x86_64 support is almost good enough to justify it, especially if the compiler heavily optimized it, but it's still a hard sell, especially on top of the cost already paid for memory safety.
Most serious integer overflow vulnerabilities are only actually serious because they lead to an issue like a heap overflow. There are definitely exploitable logic errors based on integer overflows, but they're not nearly as pervasive, which makes it harder to justify another 5-10% cost.
Bounds checking is already a 5-10% cost in cases that can't get by with nice iterator patterns. Rust's iterators are really nice because they can eliminate the overhead of bounds checking even for fairly complex cases the compiler could never figure out itself.
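A minimal sketch of the difference: the indexed loop performs a bounds check on each access that the optimizer has to prove away (LLVM usually manages it for a trivial loop like this, but often can't for more complex ones), while the iterator form encodes the bound by construction so no per-element check exists to eliminate.

```rust
// Indexed form: data[i] carries a bounds check unless the compiler
// can prove i < data.len() at every access.
fn sum_indexed(data: &[u64]) -> u64 {
    let mut total = 0;
    for i in 0..data.len() {
        total += data[i]; // check the optimizer must eliminate
    }
    total
}

// Iterator form: the iterator can never yield an out-of-bounds
// element, so there are no bounds checks by construction.
fn sum_iter(data: &[u64]) -> u64 {
    data.iter().sum()
}

fn main() {
    let data = [1u64, 2, 3, 4];
    assert_eq!(sum_indexed(&data), sum_iter(&data));
}
```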

