Programming twitter: do you have stories where you used a reference count, and that reference count overflowed?
What happened, how did it get there, how did you find the issue, and how did you fix it?
The Linux kernel has had a few cases where 32-bit refcounts could overflow if you had enough RAM (starting at around sizeof(void*)*pow(2,32) = 32 GiB). In at least one of those cases, the fix was to just not allow that many references.
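For context, "not allow that many references" amounts to saturation: once the count hits a ceiling it sticks there, and the object is intentionally leaked rather than freed while references are still live. A minimal sketch of that approach, assuming a 32-bit counter (the names are illustrative, not the kernel's actual refcount_t code):

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

#define REF_SATURATED UINT32_MAX

/* Saturating increment: refuse to count past the ceiling instead of
 * wrapping to 0 and enabling a use-after-free. */
static bool ref_inc(_Atomic uint32_t *count) {
    uint32_t old = atomic_load_explicit(count, memory_order_relaxed);
    do {
        if (old >= REF_SATURATED - 1) {
            /* Pin the counter; the object now leaks instead of overflowing. */
            atomic_store_explicit(count, REF_SATURATED, memory_order_relaxed);
            return false;
        }
    } while (!atomic_compare_exchange_weak_explicit(
                 count, &old, old + 1,
                 memory_order_relaxed, memory_order_relaxed));
    return true;
}

/* Returns true when the caller dropped the last reference and should free. */
static bool ref_dec(_Atomic uint32_t *count) {
    uint32_t old = atomic_load_explicit(count, memory_order_relaxed);
    do {
        if (old == REF_SATURATED)
            return false; /* saturated objects are never freed */
        if (old == 0)
            return false; /* underflow: already a bug, don't wrap */
    } while (!atomic_compare_exchange_weak_explicit(
                 count, &old, old - 1,
                 memory_order_acq_rel, memory_order_relaxed));
    return old == 1;
}
```

The trade-off is deliberate: leaking a saturated object is far less harmful than freeing it while live references remain.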
I once proposed dynamically-sized refcounts: make the inline refcount 16 or 32 bits wide, and reserve a value that means "the real value is stored in a global hash table indexed by the refcount's address", or store a table index in the refcount.
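A rough single-threaded sketch of that proposal, assuming a 16-bit inline count, a reserved sentinel, and a global overflow table keyed by the refcount's address (every name here is hypothetical, and a real version would need atomics and locking around the table):

```c
#include <stddef.h>
#include <stdint.h>

#define REF_SPILLED UINT16_MAX /* sentinel: the real count lives in the table */

/* Toy overflow table for illustration: a linear-probe array keyed by the
 * refcount's address. A real version would be a proper locked hash table. */
#define TABLE_SLOTS 1024
static struct { void *key; uint64_t value; } overflow_table[TABLE_SLOTS];

static uint64_t table_get(void *key) {
    for (size_t i = 0; i < TABLE_SLOTS; i++)
        if (overflow_table[i].key == key)
            return overflow_table[i].value;
    return 0;
}

static void table_set(void *key, uint64_t value) {
    for (size_t i = 0; i < TABLE_SLOTS; i++)
        if (overflow_table[i].key == key || overflow_table[i].key == NULL) {
            overflow_table[i].key = key;
            overflow_table[i].value = value;
            return;
        }
}

static void ref_inc(uint16_t *count) {
    if (*count == REF_SPILLED) {
        table_set(count, table_get(count) + 1); /* already spilled */
    } else if (*count == REF_SPILLED - 1) {
        /* The next value would collide with the sentinel: spill the real
         * count to the table and park the inline field on the sentinel. */
        table_set(count, (uint64_t)REF_SPILLED);
        *count = REF_SPILLED;
    } else {
        (*count)++;
    }
}

static void ref_dec(uint16_t *count) {
    if (*count == REF_SPILLED) {
        uint64_t real = table_get(count) - 1;
        if (real < REF_SPILLED) {
            *count = (uint16_t)real; /* shrink back to the inline form */
            /* a real version would also remove the table entry here */
        } else {
            table_set(count, real);
        }
    } else {
        (*count)--;
    }
}
```

The point is that the common case stays a plain 16-bit increment; only the rare heavily-shared objects pay for a table entry, which is what makes it transparent behind the same API.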
Kernel developers often don't want to pay the memory cost of properly sized reference counts, or they've boxed themselves in by spending the limited space on other things. I think my proposal was quite good, and it would be easy to make it transparent via the same API.
They didn't seem to like the concept though. I find it really horrible to put arbitrary limits on the number of objects: if the limit is reachable, hitting it causes serious problems, and it can be the wrong processes getting screwed over by other ones.
The default vm.max_map_count of 65530 is ridiculously low and increasingly inadequate for many use cases. Raising it is a very common recommendation for server applications.
hardened_malloc defaults to very fine-grained use of guard pages, which requires raising the limit too.
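To see why fine-grained guard pages collide with that limit: each PROT_NONE hole between two accessible mappings is its own VMA, and every VMA counts toward vm.max_map_count. A small Linux-specific demonstration (illustrative, not hardened_malloc code) that burns through the limit with unmergeable single-page mappings:

```c
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void) {
    size_t page = (size_t)sysconf(_SC_PAGESIZE);
    size_t count = 0;

    /* Alternate protections so the kernel can't merge adjacent anonymous
     * mappings into one VMA: each mmap adds an entry counted against
     * vm.max_map_count. */
    for (;;) {
        int prot = (count % 2 == 0) ? (PROT_READ | PROT_WRITE) : PROT_NONE;
        if (mmap(NULL, page, prot, MAP_PRIVATE | MAP_ANONYMOUS,
                 -1, 0) == MAP_FAILED) {
            /* Typically ENOMEM once the process hits the VMA limit. */
            printf("mmap failed after %zu mappings: %s\n",
                   count, strerror(errno));
            return 0;
        }
        count++;
    }
}
```

With the 65530 default, a run like this gives out after a few tens of thousands of mappings, which an allocator inserting a guard page per allocation region can reach quickly.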
I do think the right way to do lots of guard pages would be different, though: to sprinkle guard pages into a big anonymous mapping, the kernel could use special PTE values (just like swap/migration/... PTEs) and avoid all that VMA overhead.
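A sketch of what the userspace side of that could look like: one big anonymous mapping stays a single VMA, and individual pages inside it get marked as guards through a hypothetical madvise advice value (MADV_GUARD and its number are invented here purely for illustration; the real work would live in the kernel's PTE handling):

```c
#include <stddef.h>
#include <sys/mman.h>

/* Hypothetical advice value: mark pages as guards via special PTE values
 * (like swap/migration PTEs) without splitting the VMA. Invented name. */
#define MADV_GUARD 240

/* Sprinkle a guard page after every `stride` usable pages in one big
 * anonymous mapping; the whole thing remains a single VMA, so
 * vm.max_map_count is untouched. */
static void *map_with_guards(size_t pages, size_t stride, size_t page_size) {
    char *base = mmap(NULL, pages * page_size, PROT_READ | PROT_WRITE,
                      MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (base == MAP_FAILED)
        return NULL;
    for (size_t i = stride; i < pages; i += stride + 1)
        madvise(base + i * page_size, page_size, MADV_GUARD);
    return base;
}
```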
For very large allocations, hardened_malloc chooses a random guard size, with the minimum and maximum number of guard pages determined by the allocation size. That case could really just be a kernel feature that manages randomly sized guards for you.
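A sketch of that large-allocation path, assuming page-aligned sizes (the guard bounds and the RNG here are placeholders, not hardened_malloc's real policy):

```c
#include <stddef.h>
#include <stdlib.h>
#include <sys/mman.h>

/* Placeholder RNG; hardened_malloc uses its own CSPRNG. */
static size_t random_range(size_t min, size_t max) {
    return min + (size_t)random() % (max - min + 1);
}

/* Very large allocation bracketed by randomly sized PROT_NONE guards whose
 * minimum/maximum page counts scale with the allocation size. `size` is
 * assumed to be a multiple of page_size. */
static void *alloc_large(size_t size, size_t page_size) {
    size_t min_pages = 1;                        /* placeholder bounds */
    size_t max_pages = size / page_size / 8 + 1;
    size_t guard = random_range(min_pages, max_pages) * page_size;

    /* Reserve everything PROT_NONE, then open up only the usable middle. */
    char *base = mmap(NULL, size + 2 * guard, PROT_NONE,
                      MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (base == MAP_FAILED)
        return NULL;
    if (mprotect(base + guard, size, PROT_READ | PROT_WRITE) != 0) {
        munmap(base, size + 2 * guard);
        return NULL;
    }
    return base + guard;
}
```

Randomizing the guard size means the distance from an allocation to its guard isn't predictable, so an attacker can't reliably jump past it.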
The harder case to handle would be the fine-grained use of guard pages in the slab allocation regions.
It reserves a massive PROT_NONE region for all of the mutable allocation state and a separate massive region for each slab allocation size class. It allocates with mprotect.
It deallocates by replacing the memory it unprotected with a fresh PROT_NONE mapping. It gets fine-grained guard pages for slab allocation by simply skipping 50% of the possible slab locations (configurable), so each slab has a guard slab before and after it.
It caches a certain number of empty slabs based on total size, then switches to replacing them with fresh PROT_NONE mappings via mmap with MAP_FIXED.
That's 1 system call (mprotect) to allocate a slab and 1 system call (mmap) to deallocate one, each grabbing the mmap_sem write lock once.
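Putting those slab pieces together, a simplified sketch of the scheme (the skip-every-other-slot layout and the one-syscall allocate/deallocate match the description above; the sizes and names are illustrative):

```c
#include <stddef.h>
#include <sys/mman.h>

#define SLAB_SIZE   (256UL * 1024)               /* illustrative slab size */
#define REGION_SIZE (64UL * 1024 * 1024 * 1024)  /* per-size-class reservation */

/* Reserve the whole size-class region up front as PROT_NONE: nothing in it
 * is accessible until a slab is explicitly allocated. */
static void *reserve_region(void) {
    return mmap(NULL, REGION_SIZE, PROT_NONE,
                MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
}

/* Use only every other slot so each live slab is bracketed by PROT_NONE
 * guard slabs. Allocation is a single mprotect call. */
static void *slab_alloc(char *region, size_t slot) {
    char *slab = region + (2 * slot) * SLAB_SIZE;
    if (mprotect(slab, SLAB_SIZE, PROT_READ | PROT_WRITE) != 0)
        return NULL;
    return slab;
}

/* Deallocation maps fresh PROT_NONE pages over the slab with MAP_FIXED:
 * a single mmap call that also discards the old page contents. */
static int slab_free(void *slab) {
    return mmap(slab, SLAB_SIZE, PROT_NONE,
                MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED, -1, 0) == MAP_FAILED
               ? -1 : 0;
}
```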