The in-memory caching in Firefox/Chromium is also quite heavyweight on desktop Linux compared to other platforms.
On Windows, they release caches with MEM_RESET and then use MEM_RESET_UNDO to get them back when they want to use them again. If there was memory pressure in the meantime, the data is gone.
On Android, ashmem has unpin/pin to do the same thing.
MADV_FREE isn't usable since there's no way to undo it and check whether everything was preserved. If you look at the platform abstraction code, it's just stubbed out for non-Android Linux, so the browsers hold onto more memory there.
I wonder if it makes sense to bring ashmem to mainline?
It is in mainline. 100% of the mandatory features for Android including Binder and ashmem are there. It's just that ashmem isn't usually enabled on non-Android platforms. Android also doesn't actually allow applications to directly use the full API via SELinux restrictions.
gears are turning in my head. i wonder if bringing ashmem to alpine makes sense
It is actually a useful, well-designed API. It's just a replacement for the legacy /dev/shm API. The unpin/pin features are quite useful too. You could easily set it up so that Chromium/Firefox would use it.
It just doesn't make sense that you can only do unpin/pin for ashmem.
You should be able to just do that with regular anonymous memory. Android avoids implementing things downstream now though. They got all the mandatory kernel changes upstream and only have optional ones downstream. Many of those are quite useful though, like naming anon VMAs.
So /proc/PID/maps output on Android is a lot more useful because it's possible for everything making anonymous mappings to label the mappings. It's used by OS tooling to figure out what's using memory. It also cleverly uses page APIs to divide up the blame for shared memory.
For example, this is the GrapheneOS /proc/1/maps output on a debug build:
gist.github.com/thestinger/28c
The [anon:name] labelling is the downstream VMA naming feature. I find it useful enough that hardened_malloc makes very good use of it in a debug build of the OS.
i’m curious as to why you have 128KiB bins, at that point surely mmap is equally efficient, no?
It's optional to have slab allocation size classes above 16k. The size classes are defined here:
github.com/GrapheneOS/har
With all the optional security features off, it's significantly faster than more direct use of mappings. The advantage is much smaller with them enabled, but it still has benefits.
It needs to go up to 16k because the size classes only start incrementing by 4096 bytes after that point.
It has an isolated region for each size class with address space never reused between them. There's a fair bit of code allocating fixed-size 64k / 128k allocations, etc.
Normally, everything above 16k is guaranteed to have guard regions on both sides whether or not the extended slab allocation size classes are used. Slabs above 16k only hold 1 slot each, and by default every other slab is skipped and left as a PROT_NONE guard region.

