It's also possible to figure out what your actual shared_buffers needs are! #postgresql https://www.keithf4.com/a-small-database-does-not-mean-small-shared_buffers/ https://twitter.com/AndresFreundTec/status/1178765225895399424
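The approach in the linked post is built on the pg_buffercache extension. As a minimal sketch of that kind of sizing query (an approximation, not necessarily the exact query from the post), assuming pg_buffercache is available and using the usagecount >= 3 cutoff that comes up below:

    -- pg_buffercache ships as a contrib extension.
    CREATE EXTENSION IF NOT EXISTS pg_buffercache;

    -- Rough size of the "actively reused" part of shared_buffers:
    -- buffers whose usagecount has reached 3 or more.
    SELECT pg_size_pretty(count(*) * current_setting('block_size')::bigint)
           AS frequently_reused
    FROM pg_buffercache
    WHERE usagecount >= 3;

Sampled repeatedly on a busy server, this estimates how much of shared_buffers is holding frequently reused pages.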
Iteratively applying the approach presented will almost always lead to an even smaller shared_buffers recommendation each round. With a smaller s_b setting, there necessarily has to be buffer replacement, which in turn means there'll be buffers with usagecount < 3.
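To see what that argument describes, a usagecount histogram over all of shared_buffers makes the replacement pressure visible. This is a generic pg_buffercache query, not one taken from the thread:

    -- How many buffers sit at each usagecount (0-5). Buffers not
    -- currently holding a relation page show up with a NULL usagecount.
    -- Under constant replacement, a sizable share will always be < 3,
    -- whether or not a larger shared_buffers would help.
    SELECT usagecount,
           count(*) AS buffers,
           round(100.0 * count(*) / sum(count(*)) OVER (), 1) AS pct
    FROM pg_buffercache
    GROUP BY usagecount
    ORDER BY usagecount;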
-
If it's an accident, it sure has worked out well for several years now. And iteratively applying it certainly has not resulted in smaller values all the time. I've watched it fluctuate up and down wildly, depending on the sampling rate.
-
If you use the approach on a pgbench workload, it'll yield a lower s_b, even though, as my numbers above indicate, a larger s_b is the solution. At 20GB, and close to 23GB (at which point the working set fits into memory), it suggests ~7.6GB on average, even though 23GB is 30% faster.
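One way to put such numbers in context (an illustration, not something from the thread) is to compare the configured shared_buffers with the size of the data it is caching; when the hot data set nearly fits in memory, a larger setting can win even while a usagecount-based estimate keeps shrinking:

    -- Configured shared_buffers vs. the size of the current database.
    -- If the database (or its hot subset) is close to fitting, sizing
    -- shared_buffers to hold it is worth benchmarking directly.
    SELECT current_setting('shared_buffers') AS shared_buffers,
           pg_size_pretty(pg_database_size(current_database())) AS db_size;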
New conversation
-
Also, when you start looking at individual objects to see their demand, you can sometimes see the whole object pegged at usagecount 5, so maybe it's better to look at individual object usage rather than the usagecount of everything?
-
Yea, you *sometimes* will be able to glean information that way. But very often not - there's often a time-based component, e.g. for indexes, leading to "one end" having decreasing usagecounts, without that suggesting at all that s_b is too big.
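For the per-object view discussed in this exchange, pg_buffercache can be joined against pg_class. This sketch, adapted from the sample query in the pg_buffercache documentation, breaks down each relation's cached buffers by usagecount for the current database:

    -- Per-relation usagecount breakdown. An index that is mostly
    -- appended to and read at one end will show a mix of high and low
    -- usagecounts even when shared_buffers is sized sensibly.
    SELECT c.relname,
           b.usagecount,
           count(*) AS buffers
    FROM pg_buffercache b
    JOIN pg_class c
      ON b.relfilenode = pg_relation_filenode(c.oid)
     AND b.reldatabase IN (0, (SELECT oid FROM pg_database
                               WHERE datname = current_database()))
    GROUP BY c.relname, b.usagecount
    ORDER BY c.relname, b.usagecount;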
End of conversation