It's also possible to figure out what your actual shared_buffers needs are! #postgresql https://www.keithf4.com/a-small-database-does-not-mean-small-shared_buffers/ https://twitter.com/AndresFreundTec/status/1178765225895399424
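The linked post's approach, roughly, is to ask the pg_buffercache extension how many buffers hold frequently reused pages and to size shared_buffers from that. A minimal sketch of that style of query, assuming the extension is installed, the default 8 kB block size, and an illustrative usagecount threshold of 3:

    -- Requires: CREATE EXTENSION pg_buffercache;
    -- Count buffers whose pages have been reused often enough to push
    -- usagecount to 3 or more, and report their total size.
    -- 8192 assumes the default 8 kB block size; the threshold of 3 is
    -- an illustrative choice, not something prescribed by the post.
    SELECT pg_size_pretty(count(*) * 8192) AS buffers_in_active_use
    FROM pg_buffercache
    WHERE usagecount >= 3;

The replies below argue that a snapshot like this can suggest a far smaller shared_buffers than what actually performs best.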
If you use the approach on a pgbench workload, it'll yield a lower s_b, even though, as my numbers above indicate, a larger s_b is the solution. At 20GB, close to the 23GB at which the working set fits into memory, it suggests ~7.6GB on average, even though 23GB is 30% faster.
-
Interesting. Would be interesting to chat with you sometime about what your effective means of determining a good s_b size is, and how you determine whether it's too large or too small.
-
I'm inviting myself to that conversation.
- 2 more replies
-
At 7GB, another 17% slower, it stabilizes around 740MB, whereas the next suggestion stabilizes around 11MB. Sorry, but it just isn't a meaningful approach.