Conversation

They didn't seem to like the concept though. I find it really horrible to put arbitrary limits on the number of objects when reaching that limit causes serious problems, because it can end up being the wrong processes getting screwed over by other ones.
The default vm.max_map_count of 65530 is ridiculously low and increasingly inadequate for many use cases. Raising it is very commonly recommended for server applications. hardened_malloc defaults to very fine-grained use of guard pages, which also requires raising the limit.
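For reference, a quick way to see how close a process is to that limit is to compare the number of VMAs in /proc/self/maps against /proc/sys/vm/max_map_count. A minimal sketch (not hardened_malloc code, just an illustration of where the limit bites):

```c
/* Minimal sketch (not from hardened_malloc): compare the number of mappings
 * this process currently has against vm.max_map_count. Each line in
 * /proc/self/maps corresponds to one VMA, which is what the limit counts. */
#include <stdio.h>

static long read_first_long(const char *path) {
    FILE *f = fopen(path, "r");
    if (!f)
        return -1;
    long value = -1;
    if (fscanf(f, "%ld", &value) != 1)
        value = -1;
    fclose(f);
    return value;
}

int main(void) {
    long limit = read_first_long("/proc/sys/vm/max_map_count");

    /* Count VMAs: one line per mapping in /proc/self/maps. */
    long vmas = 0;
    FILE *maps = fopen("/proc/self/maps", "r");
    if (maps) {
        int c;
        while ((c = fgetc(maps)) != EOF)
            if (c == '\n')
                vmas++;
        fclose(maps);
    }

    printf("vm.max_map_count = %ld, current mappings = %ld\n", limit, vmas);
    return 0;
}
```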
Replying to
I do think that the right way to do lots of guard pages would be different, though: if you want to sprinkle guard pages into a big anonymous mapping, you could do that by using special PTE values (just like swap/migration/... PTEs), and then avoid all that VMA overhead.
Replying to
The way hardened_malloc does it for very large allocations is to choose a random guard size, with the minimum and maximum number of pages determined by the allocation size. That case could really just be a kernel feature taking care of managing randomly sized guards for you.
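Roughly, the userspace version of that large-allocation case looks like the sketch below. The min/max heuristic here is made up for illustration; hardened_malloc derives its own bounds from the allocation size:

```c
/* Sketch only: randomly sized guard pages around a large allocation.
 * The min/max heuristic is invented; this is not hardened_malloc's code. */
#include <stddef.h>
#include <stdint.h>
#include <sys/mman.h>
#include <sys/random.h>

static size_t random_guard_size(size_t alloc_size, size_t page_size) {
    /* Hypothetical heuristic: between 1 page and alloc_size / 64 worth of pages. */
    size_t min_pages = 1;
    size_t max_pages = alloc_size / (64 * page_size);
    if (max_pages < min_pages)
        max_pages = min_pages;

    uint64_t r;
    if (getrandom(&r, sizeof(r), 0) != sizeof(r))
        r = 0;
    return (min_pages + r % (max_pages - min_pages + 1)) * page_size;
}

void *alloc_with_guards(size_t size) {
    const size_t page_size = 4096;
    size = (size + page_size - 1) & ~(page_size - 1);
    size_t guard = random_guard_size(size, page_size);

    /* Reserve guard + data + guard as PROT_NONE, then unprotect the middle. */
    char *base = mmap(NULL, guard + size + guard, PROT_NONE,
                      MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
    if (base == MAP_FAILED)
        return NULL;
    if (mprotect(base + guard, size, PROT_READ | PROT_WRITE) != 0) {
        munmap(base, guard + size + guard);
        return NULL;
    }
    return base + guard;
}
```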
The harder case to handle would be the fine-grained use of guard pages in the slab allocation regions. It reserves a massive PROT_NONE region for all of the mutable allocation state and a separate massive region for each slab allocation size class. It allocates slabs with mprotect.
It deallocates by replacing the memory it unprotected with a fresh PROT_NONE mapping. The way it ends up with fine-grained guard pages for slab allocation is that it simply skips 50% of the possible slab locations (configurable), so each slab has a guard slab before and after it.
It caches a certain number of empty slabs, based on their total size, and beyond that replaces them with fresh PROT_NONE mappings via mmap using MAP_FIXED. That's 1 system call (mprotect) to allocate a slab and 1 system call (mmap) to deallocate it, each grabbing the mmap_sem write lock once.
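A sketch of that syscall pattern (not the actual hardened_malloc code; the region layout and the skipping of guard slots are simplified away):

```c
/* Sketch of the slab syscall pattern described above (not hardened_malloc's
 * actual code). A size class reserves one large PROT_NONE region up front;
 * a slab becomes usable with one mprotect and is retired with one mmap. */
#include <stddef.h>
#include <sys/mman.h>

/* Reserve the whole size-class region as inaccessible address space. */
void *reserve_region(size_t region_size) {
    void *p = mmap(NULL, region_size, PROT_NONE,
                   MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
    return p == MAP_FAILED ? NULL : p;
}

/* Allocate: 1 system call. Unprotect just this slab. If only every other
 * slab slot is used, the untouched PROT_NONE neighbours act as guard slabs. */
int slab_allocate(void *slab, size_t slab_size) {
    return mprotect(slab, slab_size, PROT_READ | PROT_WRITE);
}

/* Deallocate: 1 system call. Replace the slab with a fresh PROT_NONE mapping
 * so the memory is discarded and the slot becomes inaccessible again. */
int slab_deallocate(void *slab, size_t slab_size) {
    void *p = mmap(slab, slab_size, PROT_NONE,
                   MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE | MAP_FIXED,
                   -1, 0);
    return p == MAP_FAILED ? -1 : 0;
}
```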
Replying to
so if you wanted to do that without VMA tree modifications, you'd basically need something that lets you say "fault on zero PTEs instead of allocating new pages"? you could probably hack something like that together with userfaultfd... allocation would be UFFDIO_ZEROPAGE, [cont]
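A rough sketch of that userfaultfd route (not a worked-out design, error handling omitted): register the region so missing pages fault instead of being demand-allocated, and "allocate" a slab by populating it with UFFDIO_ZEROPAGE, which touches PTEs without splitting any VMAs.

```c
/* Rough sketch of the userfaultfd idea, error handling omitted. A real
 * allocator would also need a policy for faults that do get reported. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <linux/userfaultfd.h>
#include <stddef.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <unistd.h>

int main(void) {
    const size_t region_size = 1 << 20;
    const size_t slab_size = 1 << 16;

    char *region = mmap(NULL, region_size, PROT_READ | PROT_WRITE,
                        MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);

    int uffd = syscall(SYS_userfaultfd, O_CLOEXEC | O_NONBLOCK);
    struct uffdio_api api = { .api = UFFD_API };
    ioctl(uffd, UFFDIO_API, &api);

    /* Faults on missing (never-populated) pages in this range are now
     * reported on uffd instead of silently allocating new pages. */
    struct uffdio_register reg = {
        .range = { .start = (unsigned long)region, .len = region_size },
        .mode = UFFDIO_REGISTER_MODE_MISSING,
    };
    ioctl(uffd, UFFDIO_REGISTER, &reg);

    /* "Allocate" one slab: populate it with zero pages in a single ioctl. */
    struct uffdio_zeropage zp = {
        .range = { .start = (unsigned long)region, .len = slab_size },
    };
    ioctl(uffd, UFFDIO_ZEROPAGE, &zp);

    region[0] = 1;  /* page is now backed, so no userfault is raised here */
    return 0;
}
```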
Replying to
That would probably work, but then hardened_malloc would depend on that system call, which often isn't available in the sandboxes where it runs. The fact that it uses a per-size-class ChaCha8 CSPRNG seeded by getrandom for different internal uses of randomness has already been an issue.
I think it would probably be quite bad to add userfaultfd as a whitelisted system call in sandboxes, so hardened_malloc can't really depend on it. Depending on getrandom and mprotect is no problem; those should really already be whitelisted for sandboxes running code that uses malloc.
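For context, that randomness dependency is just a few getrandom calls at initialization; a trivial sketch (the struct and sizes here are illustrative, not hardened_malloc's own state layout):

```c
/* Illustrative only: seeding per-size-class generator state with getrandom.
 * The struct is made up; the point is that the only kernel interface the
 * randomness needs is the getrandom system call. */
#include <stdint.h>
#include <sys/random.h>

struct chacha8_state {
    uint8_t key[32];
    uint8_t nonce[8];
};

int seed_size_class(struct chacha8_state *state) {
    if (getrandom(state->key, sizeof(state->key), 0) != sizeof(state->key))
        return -1;
    if (getrandom(state->nonce, sizeof(state->nonce), 0) != sizeof(state->nonce))
        return -1;
    return 0;
}
```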
Replying to
True, but in theory this same functionality could be built by adding a flag on the VMA and a new madvise() op for turning zero PTEs into special PTEs signalling "faults are allowed to allocate memory here".
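Purely to show the shape of that proposal from userspace, a hypothetical sketch; the madvise values and semantics below are invented for illustration and do not exist in any kernel:

```c
/* Hypothetical illustration of the proposed interface: none of this exists.
 * The invented first op would set the VMA flag so faults on zero PTEs stop
 * allocating memory; the invented second op would mark a slab's zero PTEs
 * as "faults are allowed to allocate memory here" again. */
#include <sys/mman.h>

#define MADV_FAULT_ON_ZERO 0x100  /* invented for illustration */
#define MADV_ALLOW_ALLOC   0x101  /* invented for illustration */

static int make_region_guarded(void *region, size_t len) {
    return madvise(region, len, MADV_FAULT_ON_ZERO);
}

static int allocate_slab(void *slab, size_t len) {
    return madvise(slab, len, MADV_ALLOW_ALLOC);
}
```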