Also reminds me that I really would like to find a way for PG to detect allocation failures without requiring memory overcommit to be turned off globally. Turning it off causes too many other programs to misbehave, and it is a complicated piece of explicit configuration. https://twitter.com/AndresFreundTec/status/1225559052475871233
We can get part of the way there by ensuring that all of PG's allocations are populated / backed by actual memory immediately (e.g. by only using mmap(MAP_POPULATE) to back allocations). But that still leaves library etc. code not doing so.
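A minimal sketch of that idea, assuming Linux; the helper name is my own invention. Note that, per the mmap(2) man page, the call doesn't fail if a private mapping cannot be fully populated, so this narrows the window rather than closing it:

```c
#include <stddef.h>
#include <sys/mman.h>

/* Allocate 'size' bytes backed by actual memory immediately.
 * MAP_POPULATE pre-faults the pages, so memory pressure tends to
 * surface here rather than as an OOM kill on first touch later. */
static void *
alloc_populated(size_t size)
{
    void *p = mmap(NULL, size, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS | MAP_POPULATE, -1, 0);

    return (p == MAP_FAILED) ? NULL : p;
}
```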
-
If we did that, combined with the already existing OOM scoring adjustments, it should be fairly unlikely for PG to get killed. Unfortunately, even with overcommit_memory=2, there are a few holes in the protection. E.g. there appears not to be any reservation for stack space.
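For context, on Linux those scoring adjustments amount to writing to /proc/self/oom_score_adj, which PostgreSQL's startup code can do for child processes; a minimal sketch, with the helper name being mine:

```c
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Lower this process's attractiveness to the OOM killer by writing an
 * adjustment in [-1000, 1000] to /proc/self/oom_score_adj (negative
 * values require privileges). Returns 0 on success, -1 on error. */
static int
set_oom_score_adj(int adj)
{
    char buf[16];
    int fd = open("/proc/self/oom_score_adj", O_WRONLY);

    if (fd < 0)
        return -1;
    snprintf(buf, sizeof(buf), "%d", adj);
    if (write(fd, buf, strlen(buf)) < 0)
    {
        close(fd);
        return -1;
    }
    return close(fd);
}
```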
-
So if a postgres connection uses more stack space than it did previously (but still under our limit), it can get killed because there are no pages to back the extended stack. We could explicitly reserve the maximum stack at backend start, but that'd be pretty wasteful: a backend rarely needs that much.
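The wasteful variant would look roughly like this. A sketch, assuming a downward-growing stack and a limit safely below RLIMIT_STACK (as PostgreSQL's max_stack_depth already must be); alloca() keeps the touched region adjacent to the stack pointer, which the kernel's stack-growth heuristics accept:

```c
#include <alloca.h>
#include <stddef.h>

#define PAGE_SIZE 4096

/* Fault in the entire maximum stack once at backend start, so later
 * stack growth can never hit an unbacked page. Touching one byte per
 * page is enough to make the kernel allocate it. */
static void
reserve_full_stack(size_t max_stack_bytes)
{
    volatile char *probe = alloca(max_stack_bytes);
    size_t off;

    for (off = 0; off < max_stack_bytes; off += PAGE_SIZE)
        probe[off] = 0;
}
```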
-
Perhaps we could make our stack-depth-checking function ensure that current_stack_depth + 1/10th of max_stack_depth is backed by actual memory, keeping a high watermark of how far we've guaranteed that, to avoid redundant syscalls?
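A minimal sketch of that scheme, assuming a downward-growing stack and hypothetical globals stack_base_ptr and max_stack_depth_bytes standing in for PostgreSQL's recorded stack base and configured stack limit:

```c
#include <alloca.h>
#include <stddef.h>

#define PAGE_SIZE 4096

extern char *stack_base_ptr;          /* recorded at backend start */
extern size_t max_stack_depth_bytes;  /* our configured stack limit */

/* High watermark: deepest stack offset already known to be backed. */
static size_t probed_depth = 0;

/* Called from the stack-depth check: ensure the stack is backed by
 * actual pages down to current depth + max_stack_depth/10, touching
 * each new page once and remembering how far we got. */
static void
ensure_stack_backed(void)
{
    char here;
    size_t depth = (size_t) (stack_base_ptr - &here);
    size_t target = depth + max_stack_depth_bytes / 10;

    if (target > max_stack_depth_bytes)
        target = max_stack_depth_bytes;
    if (target <= probed_depth)
        return;                 /* already guaranteed this far */

    /* alloca() keeps the probed region adjacent to the stack pointer,
     * so the kernel treats the touches as ordinary stack growth. */
    {
        volatile char *probe = alloca(target - depth);
        size_t off;

        for (off = 0; off < target - depth; off += PAGE_SIZE)
            probe[off] = 0;
    }
    probed_depth = target;
}
```

One caveat: under strict overcommit the touches themselves can still fail if no page can be backed, so a hardened implementation might probe via explicit syscalls instead, which is exactly where the watermark's savings would matter.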