PostgreSQL will happily take a backup and then fail spectacularly on the restore if you don’t have the users, roles, etc. pre-defined.
It’s the hardest thing to achieve in PostgreSQL.
For an actual backup solution, not one off stuff, use a tool like pgbackrest. It's just way better.
I’ll give dumpall a go today. I do still feel this could be improved in PostgreSQL, because there are so many blog posts that suggest pg_dump and none of them work.
?? In my lonely research world, the PG backup defaults and restore behavior have always been gentle and predictable. I generally back up per database, never by cluster.
If I create all the roles up front, then it works. But it’s a hassle compared to SQL Server, where I can back up, restore, create a user, and link it to the database user. And it’s done.
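For reference, the usual way around this is to dump the cluster-wide globals (roles, tablespaces) separately and restore them *before* the per-database dump. A minimal sketch — database names and file paths here are placeholders:

```shell
# Dump only roles and tablespaces (cluster-wide objects that pg_dump skips)
pg_dumpall --globals-only > globals.sql

# Dump one database in custom format (works with pg_restore, supports parallel jobs)
pg_dump --format=custom --file=mydb.dump mydb

# On the target cluster: restore globals first, then the database
psql -f globals.sql postgres
createdb mydb
pg_restore --dbname=mydb mydb.dump
```

With the roles in place first, the `GRANT`s and ownership statements in the per-database dump restore cleanly.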
We haven't made the jump to 12 yet; we expect overall performance to be lower but disk I/O to be improved.
If performance doesn't matter and you don't need any of the new features you'll have a hard time finding reasons to upgrade. I consider 9.6 to be very stable.
2. The partitioning improvements are tremendous and well worth the change if you're using trigger+constraint based partitioning.
3. Parallelization and the addition of concurrent reindexing
4. Built-In Logical Replication
5. JIT query optimization
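For anyone still on trigger+constraint partitioning, the declarative syntax (introduced in 10, much improved through 12) looks roughly like this — table and range values are made up:

```sql
-- Parent table declares the partitioning scheme; no triggers needed
CREATE TABLE measurements (
    logged_at  timestamptz NOT NULL,
    device_id  int         NOT NULL,
    reading    numeric
) PARTITION BY RANGE (logged_at);

-- Partitions are plain tables attached to the parent
CREATE TABLE measurements_2020_q1 PARTITION OF measurements
    FOR VALUES FROM ('2020-01-01') TO ('2020-04-01');

CREATE TABLE measurements_2020_q2 PARTITION OF measurements
    FOR VALUES FROM ('2020-04-01') TO ('2020-07-01');

-- Rows are routed to the right partition automatically,
-- and the planner prunes partitions that can't match a query
INSERT INTO measurements VALUES ('2020-02-15 12:00', 1, 42.0);
```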
There is lots of new stuff regarding declarative partitioning and parallel queries. If any of that seems useful to you, those are big new features.
There have been lots of small performance improvements that add up, and also the long-requested change to CTEs no longer acting as optimization fences by default.
I'd probably just scan the release notes for 10-13 and see if anything particular hits a pain point you're having.
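On the CTE change: before 12, every CTE was always materialized (an optimization fence); since 12 the planner can inline them, and there's an explicit keyword if you relied on the old behavior. A sketch — the table is hypothetical:

```sql
-- Since PostgreSQL 12 this CTE can be inlined into the outer query,
-- so the customer_id filter can use an index on orders(customer_id)
WITH recent AS (
    SELECT * FROM orders
    WHERE created_at > now() - interval '7 days'
)
SELECT * FROM recent WHERE customer_id = 42;

-- Force the pre-12 (fenced) behavior explicitly if you depended on it
WITH recent AS MATERIALIZED (
    SELECT * FROM orders
    WHERE created_at > now() - interval '7 days'
)
SELECT * FROM recent WHERE customer_id = 42;
```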
9.6 is old now. I'd upgrade. Get pgadmin4 while you're at it.
Even going from latest 9.6 to latest 12 gives a huge number of fixes and changes and features.
Can't recommend this tool enough :)
What I most waited for was improved replication and table partitioning support. These were must-haves for us.
If you can, always practice the upgrade on a new machine first, or make sure you can easily roll back if the upgrade does fail.
On the assumption that right now there may be fewer people using the services, this may be the least disruptive time if anything goes down. Upgrading now means less hassle in the future.
There is one risk for this migration: the version numbering scheme changed between 9.6 and 10, and some libraries have an issue with that. (They need to be updated.)
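For context, 9.6 was the last release with three-part version numbers; from 10 onward there are only two parts, which is what trips up libraries that parse the version string. The numeric form is the safe thing to parse:

```sql
-- 9.6.x reports e.g. '9.6.24'; 10 and later report e.g. '10.15' or '12.4'
SHOW server_version;

-- The numeric form is always directly comparable, e.g. 90624 or 120004
SHOW server_version_num;
```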
This should be either in Postgres 13 or 14, not sure how they decided in the end.
Obviously, it'd be great to get those patches in, but there's always the risk of pushing something that may not be quite ready yet, did not go through sufficient review, etc., which might disrupt the whole release schedule, make the stabilization phase longer, and so on. And all the lockdowns around the world just amplify the risk. There's also the question of whether it's worth breaking the rules the project has adopted over the years. Considering all this, it probably makes sense to leave this for v14 ...
But yeah, it'd be great to have this in v13, had it been ready before the code freeze.
There's an awful lot of applications designed with the idea that connections are cheap. And on some databases it's actually true.
But it's also about hardware - machines with hundreds of cores are getting quite common, applications have "think time" so you need to use a multiple of core count to saturate the CPU, etc.
Currently the limit is probably "low hundreds" depending on the workload (the shorter/cheaper the transactions, the worse). Connection pools are great, but unfortunately not really a universal solution.
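When a pool *does* fit the workload, something like PgBouncer in transaction mode lets thousands of client connections share a much smaller set of server connections. A minimal config sketch — all names and numbers here are illustrative, not recommendations:

```ini
; pgbouncer.ini -- illustrative values only
[databases]
mydb = host=127.0.0.1 port=5432 dbname=mydb

[pgbouncer]
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
; many client connections funnel into a small server-side pool
pool_mode = transaction
max_client_conn = 5000
default_pool_size = 100
```

The catch, and part of why pools aren't universal: transaction pooling breaks session-level features such as session prepared statements, advisory locks, and `SET`, since consecutive transactions from one client may run on different server connections.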
There was some work done which reduces the per-connection overhead and allows many many more direct PG connections before performance degrades, while increasing overall performance with high connection counts by a ton...but it just wasn't quite ready and missed the deadline by a little.
The memory-based stats collector is another patch in the same vein; it wasn't quite ready for 13 either.
That said, many of these features are quite impressive and will offer nice wins for many workloads; incremental sort is very cool, as is the memory-bounded hash aggregation.
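Incremental sort, for example, helps when an index already provides a prefix of the requested ordering — table and column names here are assumed:

```sql
-- With an index on events (user_id), a query ordered by (user_id, ts)
-- no longer needs a full sort: rows already arrive sorted by user_id,
-- and 13 only sorts each user_id group by ts. The plan shows an
-- "Incremental Sort" node instead of a full "Sort".
CREATE INDEX ON events (user_id);

EXPLAIN SELECT * FROM events ORDER BY user_id, ts LIMIT 100;
```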
I'm with you on automatically updated materialized views; it's something I've needed and been waiting on for a long while now, and I'm hoping it gets more focus. The current patch set being worked on for it seems to have quite a few limitations.
I agree. For me, it is a sign of maturity. It's not the only sign. It could be a sign of stagnation! But from what I know, having used it for over a decade, I am happy to be in the midst of "boring software", at least for my database.
I'm curious about this!
It is a blocking protocol: if the coordinator crashes, some participants will never resolve their transactions.
After a participant has sent its agreement message to the coordinator, it blocks until a commit or rollback arrives from the coordinator — and if the coordinator is gone, that message will never come.
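The blocking behavior is easy to see with PostgreSQL's own 2PC primitives: a prepared transaction survives crashes and holds its locks until someone explicitly resolves it. A sketch — table and transaction identifiers are made up:

```sql
-- Participant side: vote "yes" by preparing the transaction
BEGIN;
UPDATE accounts SET balance = balance - 100 WHERE id = 1;
PREPARE TRANSACTION 'xfer_42';

-- From here the row locks are held, even across a server restart.
-- In-doubt transactions are visible in the system view:
SELECT gid, prepared FROM pg_prepared_xacts;

-- If the coordinator never delivers its decision, an operator has to
-- resolve the transaction by hand:
COMMIT PREPARED 'xfer_42';      -- or: ROLLBACK PREPARED 'xfer_42';
```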
[edit: Doh, I'm wrong. Sorry. I'll leave the comment here.]