I think it depends what your baseline is. Certainly for a long time default/standard allocators had no particular defenses against heap exploitation, so if you compare a custom allocator against that, it's probably a wash: which is easier to exploit will be basically random.
That's starting to change though: so if you compare the median custom allocator against one of the new-ish breed of allocators that takes steps to defend against exploitation, then yeah, custom would probably be easier. Of course, the reverse could also be true: your new ...
This would be mitigating what the “metadata school” speaks of here?
Quote Tweet
Replying to @pati_gallardo
There are different schools of heap exploitation, too. The metadata school focuses on heap metadata and tries to port techniques between applications, the app-specific school focuses on just application objects. I belong to the latter school, but there is no “right” answer.
The hardened_malloc implementation also guarantees that each size class below the large size classes (>128k by default) has a dedicated, statically reserved address space region. It doesn't just avoid mixing metadata/allocations but also allocations of different size classes.
Yes, maximizing the ability to catch and trap double-free (subject to other reasonableness constraints) is one of my design goals here too. This is achieved by maximizing the interval until the exact same pointer is handed out again by malloc, which doesn't require leaving the memory itself unused.