I think it depends on what your baseline is. Certainly for a long time default/standard allocators had no particular defenses against heap exploitation, so if you compare a custom allocator against that, it's probably a wash: which is easier to exploit will be basically random.
That's starting to change though: so if you compare the median custom allocator against one of the new-ish breed of allocators that takes steps to defend against exploitation, then yeah, custom would probably be easier.
Of course, the reverse could also be true: your new ...
I saw for example that this is being considered in the design of the new musl malloc, and I'm sure Windows (heap allocation ships as part of the userspace side of the kernel there) has done something by now.
Much of my work/attitude on this is inspired by discussions with Daniel regarding his hardened_malloc for GrapheneOS. In particular, the idea that there's a big qualitative difference between the integrity of allocator state and that of application data inside allocations.
Do you have or know of any papers or talks or something on this thinking?
Daniel surely has some Twitter monologues about it. The readme also has lots of info: github.com/GrapheneOS/har
Basic concept is that things can go catastrophically wrong much sooner if you can corrupt the state of the allocator itself and get it to hand out pointers to memory already allocated for something else. So..
Even if you can't prevent or even detect a double free or heap-based overflow, there is still high value in precluding them from corrupting the allocator state, limiting the fallout to application data.
OpenBSD-type allocators, including hardened_malloc, achieve this with fully out-of-band metadata, looked up from the pointer via a hash table.
Quoted tweet (replying to @pati_gallardo):
There are different schools of heap exploitation, too. The metadata school focuses on heap metadata and tries to port techniques between applications, the app-specific school focuses on just application objects. I belong to the latter school, but there is no “right” answer.
The out-of-band metadata also provides security properties like being able to 100% reliably detect any free of an allocation that's not active. It's part of implementing deterministic, direct detection of many memory corruption bugs, not just hindering exploitation with them.
The hardened_malloc implementation also guarantees that each size class below the large size classes (>128k by default) has a dedicated, statically reserved address space region. It doesn't just avoid mixing metadata/allocations but also allocations of different size classes.
Yes, maximizing ability to catch and trap double-free (subject to other reasonableness constraints) is one of my design goals here too. This is achieved by maximizing interval until exact same pointer is handed out again by malloc, which needn't require not reusing the memory.




