Conversation

Replying to
I've kind of thrown my hands up and accepted overcommit + userspace oomd; I also test with WebKit for my job, and it allocates 80-90G virt. Is there a benefit to going without overcommit? I'm not a kernel hacker by any means, so I just kind of accept the knob and move on. /shrug
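For a feel for how cheap that 80-90G of virt is, here's a minimal C sketch of a big reservation that's mostly never touched. The 80 GiB size and MAP_NORESERVE are illustrative assumptions, not WebKit's actual allocation strategy.

```c
/* Sketch: reserve a huge virtual range but touch almost none of it.
 * Under the default heuristic overcommit (vm.overcommit_memory=0),
 * MAP_NORESERVE lets this succeed on a machine with far less RAM;
 * with strict accounting (=2) a reservation like this would be refused. */
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void) {
    size_t len = 80UL << 30; /* 80 GiB of address space */
    void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }
    memset(p, 0xAB, 1 << 20); /* fault in only 1 MiB; RSS stays tiny */
    printf("reserved %zu GiB virt, touched 1 MiB\n", len >> 30);
    return 0;
}
```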
Replying to
Ah - I've been using systemd-oomd (the Fedora 34 default) - it triggers on PSI and targets the app causing the stalls (IIRC it also avoids targeting the shell). I can happily report no more shell/X kills and no freezes for months! Overengineered? Probably. Seems to work, though. 😅
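For reference, PSI is just text files under /proc/pressure/ - a minimal sketch of reading the memory one, which is the stall signal systemd-oomd keys off (units opt in via settings like ManagedOOMMemoryPressure=kill):

```c
/* Sketch: dump the memory PSI file. "some" = share of time at least
 * one task was stalled on memory; "full" = all runnable tasks were
 * stalled. Needs a kernel built with CONFIG_PSI (recent Fedora is). */
#include <stdio.h>

int main(void) {
    FILE *f = fopen("/proc/pressure/memory", "r");
    if (!f) { perror("fopen"); return 1; }
    char line[256];
    while (fgets(line, sizeof line, f))
        fputs(line, stdout); /* e.g. "some avg10=0.12 avg60=0.05 ..." */
    fclose(f);
    return 0;
}
```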
Replying to
got it - I've just seen enough "clever" uses of overcommit - for example, mmap'ing 32G of PROT_NONE memory between heaps to keep heap overruns from succeeding in JSCore - phakeobj.netlify.app/posts/gigacage/ - or using CoW+fork to save in Redis - redis.io/topics/faq#bac that...
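The Gigacage-style trick works because PROT_NONE pages can never be written, so the kernel doesn't charge them against the commit limit - a multi-GiB moat is nearly free under overcommit. A rough C sketch, with sizes and layout as illustrative assumptions rather than JSC's actual constants:

```c
/* Sketch: park a huge PROT_NONE "moat" next to a heap so a wild
 * offset faults instead of landing in a neighboring allocation. */
#include <stdio.h>
#include <sys/mman.h>

int main(void) {
    size_t guard = 32UL << 30;   /* 32 GiB unreadable, unwritable moat */
    size_t heap  = 256UL << 20;  /* 256 MiB usable heap after it */
    void *region = mmap(NULL, guard + heap, PROT_NONE,
                        MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
    if (region == MAP_FAILED) { perror("mmap"); return 1; }
    /* Make only the tail usable; the 32 GiB prefix stays PROT_NONE. */
    char *usable = (char *)region + guard;
    if (mprotect(usable, heap, PROT_READ | PROT_WRITE)) {
        perror("mprotect"); return 1;
    }
    usable[0] = 1;               /* fine */
    /* ((char *)region)[0] = 1;     would SIGSEGV: overrun caught */
    printf("guard at %p, heap at %p\n", region, (void *)usable);
    return 0;
}
```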
Replying to
The FAQ says that with overcommit off, if you have a 3G dataset and 2G of free mem, then fork() fails, since w/o overcommit Linux tries to guarantee that the child can write to every page, and there isn't enough free mem for that. w/ overcommit, Linux promises memory that's mostly never used
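A sketch of that FAQ scenario: dirty a ~3G heap, then fork() to snapshot it. With overcommit the fork is nearly free via copy-on-write; under vm.overcommit_memory=2 the same call fails with ENOMEM if free mem + swap can't cover a full copy. The 3G figure just mirrors the FAQ's example:

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    size_t len = 3UL << 30;            /* ~3 GiB "dataset" */
    char *data = malloc(len);
    if (!data) { perror("malloc"); return 1; }
    memset(data, 'x', len);            /* dirty every page */

    pid_t pid = fork();                /* CoW: no pages copied yet */
    if (pid < 0) { perror("fork"); return 1; } /* ENOMEM w/o overcommit */
    if (pid == 0) {
        /* Child: reads the snapshot; pages stay shared until written. */
        printf("child sees: %c\n", data[0]);
        _exit(0);
    }
    wait(NULL);
    return 0;
}
```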
Replying to
<50% free mem on a Redis box isn't mem pressure unless you rewrite your entire db in the time it takes to save to disk, which I haven't seen in prod. Running up against that barrier is a major cost save, as long as you can supply the needed disk IOPS
Even with full overcommit, the Linux kernel can handle memory pressure much better if you have swap - it can be zram instead of real swap. If you don't have any, idle/unused dirty anonymous pages have nowhere to go, so the kernel has to start purging actively used file pages instead, including your executables/libraries.