so, wasm uses a linear memory and the "page size" is solely the granularity of its size, which means you'll never lose more than 64k to rounding. This does matter for wasm on microcontrollers (which seems like a marginal use case to me anyway), but not elsewhere
Right, it’s not meant directly for most allocators because it’s linear: you only add more from your current heap base, like sbrk. Most programs are expected to have their own allocator on top. But if you’re happy with sbrk then great! There are also plans for larger pages…
Yeah, the brk model is kinda backwards though: you can't return individual regions of memory, among other things. My new allocator for musl, OpenBSD's malloc, and various others don't use it. But either way, usage jumps in page-size increments.
I'm not sure we have the same usage models in mind. A program can, before starting, specify "I'll start with MIN memory, and grow up to MAX". You can set MIN==MAX, never grow, and allocate within the range you have. Or you can start small and grow with sbrk.
I believe musl is quite popular as a libc on wasm, and is in the MIN==MAX category. The VM will give you virtual memory, zero-filled lazily on first access. You can ask for 4GiB if you want (the maximum since current wasm is 32-bit only).
When every allocation within a page has been freed, can the memory be returned to the OS? That's why smaller granularity reduces memory usage beyond just the smallest increment size. Larger pages are much harder to empty out entirely, so memory that was used before is harder to release.
So for example, maybe for a while the program makes a lot of 64-byte allocations but shifts heavily away from that in other portions of what it does. You're left with a bunch of pages each kept alive by a few surviving 64-byte allocations. Smaller pages mean freeing up more memory.
You also really want a large address space to avoid running out of memory due to address space fragmentation. Virtual memory + a large address space means not needing to care much about anything but spans of pages. Smaller pages really do reduce memory usage substantially.
If the address space is small, you also need to be concerned a lot with address space fragmentation, and it becomes harder to make large allocations over time. Makes it even more important to release memory to the OS sooner so it can do a better job of avoiding the fragmentation.
sbrk-based allocators rarely do precise first-best-fit and mostly don't align everything to pages, so they cause a lot more fragmentation and can't release as much to the OS. They could treat it as if they're implementing mmap/munmap though... sounds like the best way to handle this.
If there's no way to release memory, it still makes sense to use fairly precise first-best-fit, similar to mmap implementations, as the initial layer. You essentially need to reinvent what the OS usually does for you. You just don't need to care about pages if you can't release memory anyway.