Actually solving the underlying problem is a whole different ballgame, though. It would require something a lot more dynamic: either changes inside Postgres itself (to the model of how memory is used), or, if it lives in an external tool, something much more dynamic and reactive.
I think one big missing building block is a way to detect and handle out-of-memory cases with a smaller hammer than overcommit_memory=2. Unfortunately that's really hard to do in a cross-platform way :(. Right now we need to be ridiculously conservative when setting work_mem...
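To make that concrete with purely illustrative numbers: since every sort or hash node in every concurrent query can claim its own work_mem, the usual back-of-the-envelope budget on, say, a 64 GB box with max_connections = 200 and queries that might run ~5 such nodes each is roughly 64 GB / (200 * 5) ≈ 64 MB of work_mem, even though the vast majority of queries never come close to using it.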
Have you tried tuning hash_mem_multiplier (added in Postgres 13), so that hash-based nodes get more memory? It's ultimately just a band-aid, but I bet it would help a lot in some cases.
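A minimal sketch of what that looks like for a session-level experiment (the GUC names are real; the values are purely illustrative):

    -- work_mem stays conservative for sorts etc.
    SET work_mem = '64MB';
    -- hash-based nodes (hash joins, hash aggregates) may use up to
    -- work_mem * hash_mem_multiplier, i.e. 256MB here
    SET hash_mem_multiplier = 4.0;

Both can also go in postgresql.conf, or be set per role/database with ALTER ROLE/DATABASE ... SET.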
I don't think a memory admission controller would end up working convincingly without first having a planner that is aware of query-global work_mem. You'll just end up with a lot of blocking, because there's no way to control how much memory a query uses.
Although I think