
Replying to and
The way hardened_malloc does it for very large allocations is to choose a random guard size, with the minimum and maximum number of guard pages determined by the allocation size. That case could really just be a kernel feature taking care of managing randomly sized guards for you.
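A minimal sketch of that approach (the min/max scaling and the helper names here are made up for illustration, not hardened_malloc's actual policy):

    #include <stddef.h>
    #include <stdint.h>
    #include <sys/mman.h>
    #include <sys/random.h>
    #include <unistd.h>

    /* Illustrative scaling only: pick a guard size between a minimum and a
     * maximum number of pages derived from the allocation size. */
    static size_t random_guard_size(size_t alloc_size, size_t page_size) {
        size_t min_pages = 1;
        size_t max_pages = alloc_size / page_size / 8 + 1;
        uint64_t r;
        if (getrandom(&r, sizeof(r), 0) != (ssize_t)sizeof(r)) {
            return min_pages * page_size;
        }
        return (min_pages + r % (max_pages - min_pages + 1)) * page_size;
    }

    /* Reserve guard + usable + guard in one PROT_NONE mapping, then open up
     * only the middle. Assumes usable_size is a multiple of the page size. */
    static void *map_with_random_guards(size_t usable_size) {
        size_t page_size = (size_t)sysconf(_SC_PAGESIZE);
        size_t guard = random_guard_size(usable_size, page_size);
        void *region = mmap(NULL, guard + usable_size + guard, PROT_NONE,
                            MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (region == MAP_FAILED) return NULL;
        void *usable = (char *)region + guard;
        if (mprotect(usable, usable_size, PROT_READ | PROT_WRITE) != 0) return NULL;
        return usable;
    }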
The harder case to handle would be the fine-grained use of guard pages in the slab allocation regions. It reserves a massive PROT_NONE region for all of the mutable allocation state and a separate massive region for each slab allocation size class. It allocates by making reserved memory readable/writable with mprotect.
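A sketch of the reservation step, assuming a 64GB outer region per size class (that size comes up later in the thread; the names are illustrative):

    #include <stddef.h>
    #include <sys/mman.h>

    #define OUTER_REGION_SIZE (64ULL * 1024 * 1024 * 1024) /* per size class */

    /* Reserve the whole region up front as PROT_NONE. Nothing in it is usable
     * (or counted as committed memory) until pieces are later made
     * readable/writable with mprotect. */
    static void *reserve_size_class_region(void) {
        void *p = mmap(NULL, OUTER_REGION_SIZE, PROT_NONE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        return p == MAP_FAILED ? NULL : p;
    }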
It deallocates by replacing the memory it unprotected with a fresh PROT_NONE mapping. The way it ends up with fine-grained guard pages for slab allocation is that it simply skips 50% of the possible slab locations (can be configured) so each slab has a guard slab before/after it.
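As a sketch of how the skipping yields guards: with the default of half the slots unused, placing slabs only at every other slot leaves each live slab bracketed by PROT_NONE guard slabs (the layout math here is illustrative, not the real metadata):

    #include <stddef.h>

    /* With 50% of slots skipped, using only even slot positions leaves a
     * reserved PROT_NONE guard slab directly before and after each used slab. */
    static void *slab_address(void *region_start, size_t slab_size, size_t n) {
        size_t spacing = 2; /* skip every other slot (configurable ratio) */
        return (char *)region_start + n * spacing * slab_size;
    }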
It caches a certain number of empty slabs based on total size and then switches to replacing them with fresh PROT_NONE mappings via mmap with MAP_FIXED. That's 1 system call (mprotect) to allocate a slab and 1 system call (mmap) to deallocate, each grabbing the mmap_sem write lock once.
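A sketch of those two single-syscall paths (error handling trimmed; the slab address and size would come from the allocator's own metadata):

    #include <stddef.h>
    #include <sys/mman.h>

    /* Allocate: flip a reserved PROT_NONE slab to usable in one syscall. */
    static int activate_slab(void *slab, size_t slab_size) {
        return mprotect(slab, slab_size, PROT_READ | PROT_WRITE);
    }

    /* Deallocate: replace the slab with a fresh PROT_NONE mapping in one
     * syscall, purging the pages and dropping the commit accounting at once. */
    static int deactivate_slab(void *slab, size_t slab_size) {
        void *p = mmap(slab, slab_size, PROT_NONE,
                       MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED, -1, 0);
        return p == MAP_FAILED ? -1 : 0;
    }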
Replying to and
so if you wanted to do that with VMA tree modifications, you'd basically need something that lets you say "fault on zero PTEs instead of allocating new pages"? you could probably hack something like that together with userfaultfd... allocation would be UFFDIO_ZEROPAGE, [cont]
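Roughly what that userfaultfd idea could look like as a sketch (this is just the suggestion being floated, not something hardened_malloc does; assumes region is an existing page-aligned anonymous mapping):

    #include <fcntl.h>
    #include <linux/userfaultfd.h>
    #include <stddef.h>
    #include <sys/ioctl.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    /* Register a range for missing-page faults so accesses to untouched pages
     * fault to userspace instead of silently allocating new pages. */
    static int register_range(void *region, size_t len) {
        int uffd = (int)syscall(SYS_userfaultfd, O_CLOEXEC | O_NONBLOCK);
        if (uffd < 0) return -1;
        struct uffdio_api api = { .api = UFFD_API };
        if (ioctl(uffd, UFFDIO_API, &api) < 0) return -1;
        struct uffdio_register reg = {
            .range = { .start = (unsigned long)region, .len = len },
            .mode = UFFDIO_REGISTER_MODE_MISSING,
        };
        if (ioctl(uffd, UFFDIO_REGISTER, &reg) < 0) return -1;
        return uffd;
    }

    /* "Allocation" would then install zero pages explicitly. */
    static int zeropage_alloc(int uffd, void *addr, size_t len) {
        struct uffdio_zeropage zp = {
            .range = { .start = (unsigned long)addr, .len = len },
        };
        return ioctl(uffd, UFFDIO_ZEROPAGE, &zp);
    }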
Replying to and
That would probably work, but then hardened_malloc would depend on that system call, which often isn't available in the sandboxes where it runs. The fact that it uses a per-size-class ChaCha8 CSPRNG seeded by getrandom for different internal uses of randomness has already been an issue.
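The seeding side of that looks roughly like this; chacha8_init and struct rng_state are stand-ins for the allocator's internal CSPRNG, not a real API:

    #include <errno.h>
    #include <stddef.h>
    #include <stdint.h>
    #include <sys/random.h>

    struct rng_state;                                              /* stand-in */
    void chacha8_init(struct rng_state *s, const uint8_t key[32]); /* stand-in */

    /* Each size class region keeps its own CSPRNG, seeded independently from
     * the kernel, so getrandom is the only randomness syscall needed. */
    static int seed_size_class_rng(struct rng_state *s) {
        uint8_t key[32];
        size_t done = 0;
        while (done < sizeof(key)) {
            ssize_t r = getrandom(key + done, sizeof(key) - done, 0);
            if (r < 0) {
                if (errno == EINTR) continue;
                return -1;
            }
            done += (size_t)r;
        }
        chacha8_init(s, key);
        return 0;
    }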
I think it would probably be quite bad to add userfaultfd as a whitelisted system call in sandboxes, so hardened_malloc can't really depend on it. Depending on getrandom and mprotect is no problem, and those should really already be whitelisted in sandboxes that are using malloc.
By default, hardened_malloc has 4 arenas, which each have 49 size class regions that are each 64GB. Each of those has a 32GB usable region randomly located within the outer region. Half of that 32GB region ends up being used for guard slabs by default. This all works fine without overcommit.
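A sketch of that per-region layout using the numbers above (the placement math is illustrative, and rng_uniform stands in for a draw from the per-size-class CSPRNG):

    #include <stddef.h>
    #include <stdint.h>
    #include <sys/mman.h>

    #define OUTER_SIZE  (64ULL * 1024 * 1024 * 1024)
    #define USABLE_SIZE (32ULL * 1024 * 1024 * 1024)

    uint64_t rng_uniform(uint64_t bound); /* stand-in: uniform value in [0, bound) */

    /* Reserve the 64GB outer region PROT_NONE and pick a random, page-aligned
     * base for the 32GB usable window inside it. Everything stays PROT_NONE
     * until individual slabs are opened with mprotect. */
    static void *place_usable_region(size_t page_size) {
        void *outer = mmap(NULL, OUTER_SIZE, PROT_NONE,
                           MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (outer == MAP_FAILED) return NULL;
        uint64_t slots = (OUTER_SIZE - USABLE_SIZE) / page_size + 1;
        return (char *)outer + rng_uniform(slots) * page_size;
    }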
Setting a slab that was used back to PROT_NONE and purging it with madvise using MADV_DONTNEED doesn't drop the accounted memory. It has no choice but to use mmap with MAP_FIXED to make a fresh mapping, and that's more efficient anyway (1 system call rather than 2).
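For contrast, the two-syscall variant it can't use would look like this: the pages get purged and access removed, but the commit charge from having made the slab writable stays:

    #include <stddef.h>
    #include <sys/mman.h>

    /* Purge the slab's pages and remove access. Unlike a fresh MAP_FIXED
     * mapping, this leaves the previously accounted (committed) memory
     * charged, and it costs two syscalls instead of one. */
    static int deactivate_slab_two_calls(void *slab, size_t slab_size) {
        if (madvise(slab, slab_size, MADV_DONTNEED) != 0) return -1;
        return mprotect(slab, slab_size, PROT_NONE);
    }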
The entirety of the mutable allocator state is also reserved as a separate PROT_NONE region: 2 arrays used as an open-addressed hash table for large allocations (it alternates back and forth between them to grow) and massive arrays of slab metadata (bitmaps, intrusive lists) proportional to the maximum region size.
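A rough shape of that state; the field names and sizes are illustrative, not the real structs:

    #include <stddef.h>
    #include <stdint.h>

    /* Open-addressed hash table entry for one large (directly mapped) allocation. */
    struct large_entry {
        void *address;
        size_t size;
        size_t guard_size;
    };

    /* Per-slab metadata: a bitmap of used slots plus intrusive list links, so
     * partially used and free slabs can be tracked without extra allocations. */
    struct slab_metadata {
        uint64_t bitmap[4];
        struct slab_metadata *next;
        struct slab_metadata *prev;
    };

    /* All of this lives in one separate PROT_NONE reservation sized for the
     * maximum region size; only the parts in use are made readable/writable.
     * The two large-allocation arrays swap roles whenever the table grows. */
    struct allocator_state {
        struct large_entry *large_tables[2];
        struct slab_metadata *slab_metadata_arrays;
    };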