We can do some of that by ensuring that all of PG's allocations are populated / backed by actual memory immediately (e.g. by only using mmap(MAP_POPULATE) to back allocations). But that still leaves library code etc. not doing so.
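A minimal sketch of that idea (not how PostgreSQL currently allocates memory, just an illustration of MAP_POPULATE forcing the kernel to back the mapping at allocation time rather than on first touch):

/* Sketch: pre-faulted anonymous allocation via mmap(MAP_POPULATE).
 * With MAP_POPULATE, failure to find backing memory shows up here,
 * at allocation time, instead of as a fault on first touch later. */
#include <stddef.h>
#include <stdio.h>
#include <sys/mman.h>

static void *
alloc_populated(size_t size)
{
	void *p = mmap(NULL, size, PROT_READ | PROT_WRITE,
				   MAP_PRIVATE | MAP_ANONYMOUS | MAP_POPULATE, -1, 0);

	if (p == MAP_FAILED)
		return NULL;
	return p;
}

int
main(void)
{
	size_t	size = 64 * 1024 * 1024;	/* 64 MB, backed up front */
	void   *buf = alloc_populated(size);

	if (buf == NULL)
	{
		fprintf(stderr, "allocation failed up front\n");
		return 1;
	}
	munmap(buf, size);
	return 0;
}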
If we did that, combined with the already existing OOM scoring adjustments, it should make it fairly unlikely for PG to get killed. Unfortunately, even with overcommit_memory=2, there are a few holes in the protection. E.g. there appears not to be any reservation for stack space.
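For context, the knobs referred to here are standard Linux interfaces: overcommit_memory=2 is the system-wide vm.overcommit_memory sysctl, and a process can change how attractive an OOM-kill target it is by writing to /proc/self/oom_score_adj. A hedged sketch of the latter interface (this is not PostgreSQL's actual handling, just the mechanism):

/* Sketch: adjust this process's OOM-killer score via
 * /proc/self/oom_score_adj (range -1000 .. 1000; lower means less
 * likely to be killed).  Note that lowering the value typically
 * requires privileges (CAP_SYS_RESOURCE); raising it does not. */
#include <stdio.h>

static int
set_oom_score_adj(int value)
{
	FILE *f = fopen("/proc/self/oom_score_adj", "w");

	if (f == NULL)
		return -1;
	fprintf(f, "%d\n", value);
	return fclose(f);
}

int
main(void)
{
	/* Make this process a less attractive OOM-kill target. */
	if (set_oom_score_adj(-500) != 0)
		perror("oom_score_adj");
	return 0;
}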
So if a postgres connection uses more stack space than it did previously (but still under our limit), it can get killed because there are no pages to back the extended stack. We could explicitly reserve the maximum stack at backend start, but that'd be pretty wasteful: we rarely need that much.
Perhaps we could make our stack-depth-checking function ensure that current_stack_depth + 1/10th of max_stack_depth is backed by actual memory, keeping a high watermark of how far we've guaranteed that, to avoid redundant syscalls?
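A hypothetical sketch of what that could look like; all names and constants are invented for illustration, and this is not PostgreSQL's actual depth check. The idea is that every time the depth check runs, it pre-faults a margin of stack beyond the current position and remembers how deep it has already guaranteed backing, so repeated checks do no extra work:

/* Sketch: pre-fault a margin of stack ahead of the current depth.
 * Using alloca() moves the stack pointer down by the margin, so the
 * touches below are ordinary stack growth from the kernel's point of
 * view and the pages get backed now, while the check runs, rather
 * than at some arbitrary deeper point between checks. */
#include <alloca.h>
#include <stddef.h>

#define MAX_STACK_DEPTH		(2 * 1024 * 1024)		/* pretend max_stack_depth = 2 MB */
#define PREFAULT_MARGIN		(MAX_STACK_DEPTH / 10)	/* the "1/10th" from above */
#define PAGE_SIZE_GUESS		4096

static char *stack_low_watermark = NULL;	/* deepest address already guaranteed backed */

static void
ensure_stack_backed(void)
{
	char	probe;			/* address near the current stack depth */

	/* Stack grows down on the platforms considered here, so "deeper"
	 * means a lower address.  If we've already pre-faulted past the
	 * point we'd reach now, skip the work entirely. */
	if (stack_low_watermark != NULL &&
		stack_low_watermark <= &probe - PREFAULT_MARGIN)
		return;

	{
		volatile char  *buf = alloca(PREFAULT_MARGIN);
		size_t			i;

		/* Touch one byte per page so every page in the margin is faulted in. */
		for (i = 0; i < PREFAULT_MARGIN; i += PAGE_SIZE_GUESS)
			buf[i] = 0;

		stack_low_watermark = (char *) &buf[0];
	}
}

int
main(void)
{
	ensure_stack_backed();	/* in reality this would run inside the depth check */
	return 0;
}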
Maybe Linux should (or at least could) provide an API for applications to override the system-wide setting. Seems to me that a per-application approach makes sense here: "Hey, I know what to do when malloc returns NULL, thanks."
"Hey I know what to do when malloc returns NULL, thanks" - that's not why this happens. The issue is that most programs allocate more memory than they use, so if you only give out as much memory as if every program used all memory, you fail too early.