Does anyone have experience with Postgres' pub/sub[1] as a means of building realtime apps?

1. https://www.postgresql.org/docs/9.6/static/sql-notify.html
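
For anyone unfamiliar, the mechanism is just a pair of SQL commands; a minimal round trip looks like this (channel and payload here are made up):

    -- session A: subscribe to a channel
    LISTEN app_events;

    -- session B: publish a message on that channel
    NOTIFY app_events, 'hello';
    -- equivalent, and easier to call from application code:
    SELECT pg_notify('app_events', 'hello');

Session A then receives an asynchronous notification carrying the channel name and the payload.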




I migrated a project from RethinkDB to Postgres and replaced the changefeeds with a trigger that uses NOTIFY. It does the same thing, except that the changefeeds gave me the diff, while with this trigger I still have to re-fetch the row and compute the diff myself.

https://gist.github.com/bithavoc/f40bbc33b553f2fddf9e1095858...

Something that RethinkDB supported natively :(
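
A minimal sketch of what such a trigger can look like (the items table and its id column are assumptions; the gist above is the real version):

    -- Notify listeners whenever a row changes. Only the table name,
    -- operation, and row id are sent; listeners re-fetch the row.
    CREATE OR REPLACE FUNCTION notify_row_change() RETURNS trigger AS $$
    DECLARE
      row_id integer;
    BEGIN
      IF TG_OP = 'DELETE' THEN
        row_id := OLD.id;
      ELSE
        row_id := NEW.id;
      END IF;
      PERFORM pg_notify('row_change', json_build_object(
        'table', TG_TABLE_NAME, 'op', TG_OP, 'id', row_id)::text);
      RETURN NULL;  -- return value is ignored for AFTER triggers
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER items_notify
      AFTER INSERT OR UPDATE OR DELETE ON items
      FOR EACH ROW EXECUTE PROCEDURE notify_row_change();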


One of the big differences that RethinkDB has going for it is that its connections are based on coroutines, so it's really efficient to have thousands of open connections. Postgres uses one process per connection, so it's much harder to scale.


pgbouncer


How do you pgbounce a bunch of poller connections? pgbouncer has an event-based acceptor on the front, and a connection pool on the back. If all the connections in the pool are taken, then it will queue up the incoming connections until a backend connection is available. If all backend connections are long-polling, then incoming connections will just hang.


If all the connections in the pool are taken, then the database wouldn't have been getting meaningful additional work done anyway, regardless of your scheme for accepting connections.

Having 100-1000x more active connections than you have cores on the box is worse than useless; it's actively detrimental.


Except it works just fine for RethinkDB, because it was designed from the ground up to efficiently send changes from the shards directly to open connections. It's a design tradeoff Postgres didn't make, and Postgres gets other benefits RethinkDB doesn't have. But for this use case RethinkDB has a distinct architectural efficiency advantage.


We don't use Postgres pub/sub for exactly "realtime", but we do make use of it in single-page apps that use WebSockets. A couple of notes:

-- LISTEN/NOTIFY has a hardcoded maximum payload size (8000 bytes in the default build). For big documents, we have to send just a record ID and then fetch the document upon receiving the message; in practice the extra fetch adds about 5ms of overhead. (A sketch of this pattern follows after these notes.)

-- One of the best features is that you can combine this with pg_try_advisory_lock to create job queues that use the core locking features of Postgres, which are pretty darn reliable and well defined. We use Node.js, and if the server processing a job dies, its lock is released automatically and another Node.js process can pick the job up. (Also sketched below.)
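
To illustrate the payload-size point: rather than NOTIFY-ing the document itself, we publish only its primary key and let the listener re-fetch (the documents table and channel name here are hypothetical):

    -- publisher: send just the record ID
    SELECT pg_notify('doc_changes', '42');

    -- listener, upon receiving the message: fetch the full document
    SELECT * FROM documents WHERE id = 42;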
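And a sketch of the advisory-lock queue idea (the jobs table is hypothetical). The lock is tied to the session, so if the worker's connection dies, Postgres releases the lock and another worker can claim the job:

    -- pick a candidate job
    SELECT id FROM jobs WHERE NOT done ORDER BY id LIMIT 1;

    -- try to claim it; returns true if we got the lock,
    -- false if another worker already holds it
    SELECT pg_try_advisory_lock(42);

    -- ... process the job ...

    -- release explicitly when done; if the process dies instead,
    -- the lock is released automatically when the session ends
    SELECT pg_advisory_unlock(42);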


Note that as of Postgres 9.5, you can use SKIP LOCKED for roughly the same use case.
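
For reference, a sketch of the SKIP LOCKED variant against the same hypothetical jobs table; the row lock, like an advisory lock, disappears if the claiming session dies:

    BEGIN;
    -- claim one job, skipping rows already locked by other workers
    SELECT id, payload
      FROM jobs
     WHERE NOT done
     ORDER BY id
     LIMIT 1
     FOR UPDATE SKIP LOCKED;

    -- ... process the job ...

    UPDATE jobs SET done = true WHERE id = 42;  -- the id claimed above
    COMMIT;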



