Anyone using PostgreSQL with 10k concurrent connections, most of them idle? Let's say PgBouncer is not an option. Experiences? Asking for a friend ;)
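To reproduce that setup, a minimal sketch (assuming psycopg2, a placeholder DSN, a server with max_connections raised to match, and a client-side open-file limit high enough for 10k sockets):

```python
# Minimal sketch: open N connections and leave them idle.
# Assumptions: psycopg2 is installed, the DSN below is adjusted to your
# environment, the server's max_connections exceeds N_IDLE, and the
# client's `ulimit -n` allows that many sockets.
import time
import psycopg2

DSN = "host=localhost dbname=postgres user=postgres"  # placeholder, adjust
N_IDLE = 10_000

conns = [psycopg2.connect(DSN) for _ in range(N_IDLE)]  # each holds a server backend
print(f"holding {len(conns)} idle connections; Ctrl-C to release")
try:
    while True:
        time.sleep(60)  # sessions stay open but completely idle
except KeyboardInterrupt:
    for c in conns:
        c.close()
```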
Replying to @MarkusWinand
There’s a patch on hackers that would do wonders with that situation (and others!). You can review and test https://commitfest.postgresql.org/25/2067/
Replying to @tapoueh
Thanks! Also: what are the actual problems one would face when doing it (with enough memory)?
Replying to @MarkusWinand
Every single lock needs to scan through 10k proc array entries, for starters.
Replying to @tapoueh
Yes, but how big is that problem actually? Do you know of any measurements on that?
Replying to @MarkusWinand
Not yet. My understanding is that you’re going to share some, with and without the mentioned patch, right?
Replying to @tapoueh
Unfortunately, I won't :( I'm just trying to figure out whether the common wisdom that too many connections are really bad is just legend, or if there are actual war stories of what can/does go wrong.
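For what it's worth, a hedged sketch of how such measurements could be collected: hold a growing pool of idle connections and run a short read-only pgbench at each step (assuming pgbench is on PATH, `pgbench -i` has initialized the database, and max_connections is high enough):

```python
# Hypothetical measurement harness: how does active-client throughput
# change as the number of *idle* connections grows? Assumes pgbench is
# installed, `pgbench -i postgres` has been run, and the server accepts
# enough connections.
import subprocess
import psycopg2

DSN = "host=localhost dbname=postgres user=postgres"  # placeholder, adjust

def pgbench_tps(seconds=30, clients=16):
    """Run a short read-only pgbench and return the reported TPS."""
    out = subprocess.run(
        ["pgbench", "-S", "-T", str(seconds), "-c", str(clients),
         "-j", str(clients), "postgres"],
        capture_output=True, text=True, check=True,
    ).stdout
    for line in out.splitlines():
        if line.startswith("tps"):
            return float(line.split("=")[1].split()[0])
    raise RuntimeError("no tps line in pgbench output")

idle = []
for target in (0, 1_000, 5_000, 10_000):
    while len(idle) < target:
        idle.append(psycopg2.connect(DSN))  # park another idle backend
    print(f"{target:>6} idle connections: {pgbench_tps():.0f} tps")
```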
Replying to @MarkusWinand
It’s definitely not legend. I don’t have recent DBA work to share; good people to ask about more details include @pg_xocolatl, @RhodiumToad, @magnushagander, and @AndresFreundTec, I would think.
Replying to @tapoueh @MarkusWinand
It's not a legend, indeed. It's not really lock entries that are a problem, however. There are some inefficiencies there, but not too bad. It's building snapshots for visibility determinations that is the problem. That needs to scan the procarray.
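A toy model of that scan (illustrative Python, not PostgreSQL's actual C structures): whatever the workload, building a snapshot costs one proc-array slot per connection, idle or not:

```python
# Toy model of snapshot building: the whole proc array is scanned to
# collect in-progress transaction ids, so cost grows with the number of
# connections even when almost all of them are idle. Field names are
# illustrative, not PostgreSQL's real structures.
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class ProcSlot:
    xid: Optional[int] = None  # set while the backend runs a transaction

def build_snapshot(procs: List[ProcSlot], next_xid: int) -> Tuple[int, int, List[int]]:
    """Return (xmin, xmax, in-progress xids) by scanning every slot."""
    xmin, xip = next_xid, []
    for slot in procs:           # O(connections), idle ones included
        if slot.xid is not None:
            xip.append(slot.xid)
            xmin = min(xmin, slot.xid)
    return xmin, next_xid, xip

# 10,000 mostly idle connections still mean 10,000 slots per snapshot:
procs = [ProcSlot() for _ in range(10_000)]
procs[3].xid = 1234              # a single active writer
print(build_snapshot(procs, next_xid=2000))  # (1234, 2000, [1234])
```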
Replying to @AndresFreundTec @tapoueh
Which is bad, because that's a very frequent task for a lot of workloads that use a lot of transactions: at the very least once per transaction (repeatable read), but more commonly at least two to three times (read committed).
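A small demo of why read committed needs more snapshots, assuming psycopg2 and a scratch table: a second statement in the same read committed transaction gets a fresh snapshot and sees a concurrent commit, which a repeatable read transaction would not:

```python
# Hypothetical demo: under READ COMMITTED each statement gets a fresh
# snapshot, so a row committed by another session becomes visible
# mid-transaction; under REPEATABLE READ the second count would stay 0.
# Assumes psycopg2 and permission to create a scratch table.
import psycopg2

DSN = "host=localhost dbname=postgres user=postgres"  # placeholder, adjust

setup = psycopg2.connect(DSN)
setup.autocommit = True
setup.cursor().execute("CREATE TABLE IF NOT EXISTS t (v int); TRUNCATE t")

reader = psycopg2.connect(DSN)           # default isolation: read committed
cur = reader.cursor()
cur.execute("SELECT count(*) FROM t")    # statement 1: snapshot #1
print("before concurrent commit:", cur.fetchone()[0])   # 0

writer = psycopg2.connect(DSN)
writer.autocommit = True
writer.cursor().execute("INSERT INTO t VALUES (1)")      # commits immediately

cur.execute("SELECT count(*) FROM t")    # statement 2: snapshot #2
print("after concurrent commit: ", cur.fetchone()[0])   # 1 under read committed
```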
You can quite easily see throughput drop with lots of active connections. And it gets a lot worse with bigger systems, as inter-socket traffic leads to significantly worse scalability behaviour.
Replying to @AndresFreundTec @tapoueh
Some of this is inherent. Some of this is because we cause cachelines for the PGXACT/PGPROC entries to be dirtied a lot (leading to cachelines being bounced around), and because the arrays containing this data are not scanned linearly (leading to poor pipelining and prefetching).
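A toy illustration of that last point (plain NumPy, nothing PostgreSQL-specific): walking the same data through scattered indices instead of linearly defeats hardware prefetching, so the scan gets markedly slower:

```python
# Toy locality demo, not PostgreSQL code: sum the same array once in
# order and once through a random permutation. The scattered gather
# defeats prefetching (and adds indexing work), so it runs much slower.
# Assumes numpy is installed.
import time
import numpy as np

data = np.arange(20_000_000, dtype=np.int64)
scattered = np.random.permutation(len(data))

t0 = time.perf_counter()
data.sum()                      # linear, prefetch-friendly scan
t1 = time.perf_counter()
data[scattered].sum()           # same elements, cache-hostile order
t2 = time.perf_counter()

print(f"linear scan:    {t1 - t0:.3f}s")
print(f"scattered scan: {t2 - t1:.3f}s")
```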