
Gracefully Scaling to 10k PostgreSQL Connections for $35/mo, Part Two - AlexAtFTI
https://articles.fti.bi/postgres-10k-connections-factom-bitcoin-raffle
======
craigkerstiens
Connection management in Postgres is definitely one of its sore spots. Each
connection, even an idle one, consumes roughly 10 MB of memory and adds
coordination overhead to queries. If you're running at any real production
scale, you want pgbouncer in place to manage the idle connections so they're
not a hit to your system. Hopefully one day this improves directly in core.
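To make that concrete, a minimal pgbouncer.ini along these lines (the host, database name, and sizes are illustrative, not from the article) keeps thousands of mostly-idle clients off the server:

```ini
[databases]
; hypothetical entry; host/dbname are placeholders
mydb = host=10.0.0.5 port=5432 dbname=mydb

[pgbouncer]
listen_addr = 0.0.0.0
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
; transaction pooling: a server connection is only held for the
; duration of a transaction, so idle clients cost almost nothing
pool_mode = transaction
max_client_conn = 10000
default_pool_size = 25
```

With transaction pooling, 10k client connections can be served by a few dozen actual Postgres backends.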

~~~
iKSv2
pgbouncer or pgpool (of the tools I looked at) do this. Which ones are more of
a challenge? I've read on some forum that Pgpool-II would just shut down and
stop, or misbehave. Any such experiences or gotchas with pgbouncer?

~~~
madisonhope
We have experimented with both at scale and found PgBouncer to be a lot more
stable and reliable. We haven't seen gotcha failures, or any failures really,
in the last few years. It's also refreshingly simple to automate and scale,
although there are shortcomings in the configuration design that end up
requiring some cleverness.

~~~
iKSv2
"although there are shortcomings in the configuration design that end up
requiring some cleverness."

Like?

~~~
madisonhope
Namely, it doesn't support configuration via CLI flags or env vars, which
means you have no option but to generate a file to be loaded in. To be fair,
that's kind of a newer pattern. Also, multi-DB PgBouncer is a little less
secure by default than it lets on, along with a few other things you learn
quickly as you handle larger and more complex use cases. It's not bad, just
not as good as it could be, in my opinion.
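The usual workaround for the no-env-vars limitation is a small entrypoint script that renders the ini file before launching PgBouncer. A sketch of the idea (the env-var names here are my own invention, not anything PgBouncer reads itself):

```python
import os

# Sketch only: PgBouncer has no env-var support, so a container
# entrypoint typically renders pgbouncer.ini and then exec's pgbouncer.
# All env-var names below are hypothetical conventions for this script.
def render_pgbouncer_ini(env=os.environ):
    db = env.get("DB_NAME", "app")
    return "\n".join([
        "[databases]",
        f"{db} = host={env.get('DB_HOST', 'localhost')} "
        f"port={env.get('DB_PORT', '5432')}",
        "",
        "[pgbouncer]",
        "listen_port = 6432",
        "pool_mode = transaction",
        f"max_client_conn = {env.get('MAX_CLIENT_CONN', '10000')}",
        f"default_pool_size = {env.get('POOL_SIZE', '25')}",
    ])
```

An entrypoint would write this string to /etc/pgbouncer/pgbouncer.ini and then start the daemon.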

------
chucky_z
I'm glad that pretty much the whole thing is 'use pgbouncer.'

This was my exact situation at a previous position. We needed up to 20k
connections to a single pgsql master. Even the beefiest Postgres server falls
over around ~600 connections (up to 1k depending on usage).

Using a pgbouncer-per-webserver we easily got to 20k.

I will say this was with 9.3 though, things may have changed on that front.
Nowadays I use pgsql primarily for analytics, so there's only ever 20-30
connections tops, albeit doing quite heavy querying.

~~~
iKSv2
Correct me if I'm wrong in understanding "Using a pgbouncer-per-webserver we
easily got to 20k." Does this mean that if you have 5 webservers, each
webserver has an instance of pgbouncer installed locally? Or do you have 5
webservers plus 5 separate pgbouncer servers (a 1:1 mapping between webserver
and pgbouncer), with all the pgbouncer servers talking to a single powerful
Postgres master?

------
sudhirj
Postgres connection pooling is pretty much necessary in almost all cases,
except when doing pretty low level connection management yourself.

Besides the old stalwart pgBouncer, Yandex recently released Odyssey as well:
[https://github.com/yandex/odyssey](https://github.com/yandex/odyssey)
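Whether it's a proxy like PgBouncer/Odyssey or a pool inside the app, the core idea is the same: reuse a small, fixed set of connections instead of opening one per request. A toy sketch, with a stand-in factory where a real app would pass a DB driver's connect function:

```python
import queue

class TinyPool:
    """Toy connection pool: hand out up to max_size connections,
    reusing them instead of reconnecting per request."""
    def __init__(self, factory, max_size):
        self._idle = queue.Queue()
        for _ in range(max_size):
            self._idle.put(factory())

    def acquire(self):
        # Blocks if every connection is already checked out,
        # which is exactly the backpressure a pooler provides.
        return self._idle.get()

    def release(self, conn):
        self._idle.put(conn)

# Usage with a stand-in factory (not a real driver):
pool = TinyPool(factory=lambda: object(), max_size=1)
c1 = pool.acquire()
pool.release(c1)
c2 = pool.acquire()
assert c2 is c1  # the same connection is reused, not re-created
```

Real poolers add health checks, timeouts, and (for proxies) protocol awareness on top of this skeleton.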

~~~
SkyRocknRoll
Thanks for sharing.

------
iKSv2
Part 1 in case anyone missed out on it (like me)

[https://medium.com/futuretech-industries/ten-thousand-high-a...](https://medium.com/futuretech-industries/ten-thousand-high-availability-postgresql-connections-for-35-mo-part-one-4b7a2d61c51e)

------
ianseyer
This is great! Have run up some very expensive bills trying to scale postgres.
Eager to try this out.

