Hacker News
Simple script to analyse your PostgreSQL database config, give tuning advice (github.com)
276 points by LinuxBender 35 days ago | 49 comments



You know, Postgres is a great database system, but its core data structure is prone to awful performance degradation if you have a lot of out-of-date rows, and sometimes the autovacuumer can't keep up.

There is something to be said for simpler B-Tree style databases like MySQL that are harder to screw up!

Also, perhaps there could be an optional mode for Postgres that did something to the effect of immediate vacuuming for all modification statements: collecting garbage as soon as it's made, at the cost of slower statements, but with no long-term degradation and more predictable performance.


I'm not sure why this comment is downvoted. The behavior of PostgreSQL on workloads that require vacuuming of large tables -- and especially the default behavior in that case -- is hugely problematic. There's a major project ongoing to build a new storage engine to address this: http://rhaas.blogspot.com/2018/01/do-or-undo-there-is-no-vac...

Like you, I've long wished I could pay a performance penalty on queries to collect garbage immediately rather than batching up this debt to pay all at once some time later. [edited: I'd pay that cost on all queries, not just writes, if the performance were more consistent over time.]


It's not that simple... The whole point of keeping dead rows around is so the MVCC snapshot can show a consistent view of the database for any user within a transaction. If you instantly cleaned up those dead rows, other users would suddenly not have a consistent view of the data, or would have data disappear entirely on them.

The other way this is handled is to move dead rows to an undo log, so the main heap only contains the current version of any specific row, plus a pointer to where to find the dead rows in the undo log.

The above approach keeps your main heap from bloating when updates and deletes happen, but it causes transactions which reference the old version of the rows to be slower, as well as slowing down rollbacks.

This is exactly what the zHeap project is doing along with the pluggable storage API.
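(As an aside, you can watch the bloat this thread is describing accumulate on the current heap design. The query below is a sketch against the standard `pg_stat_user_tables` statistics view; it needs a live connection to your database.)

```sql
-- Tables carrying the most dead tuples, and when autovacuum last
-- cleaned them up. Run in psql against the database in question.
SELECT relname,
       n_live_tup,
       n_dead_tup,
       last_autovacuum
FROM pg_stat_user_tables
ORDER BY n_dead_tup DESC
LIMIT 10;
```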


The trouble is that old rows can stick around many hours after any transaction could ever see them. It's good that it's being addressed.


> The above approach keeps your main heap from bloating when updates and deletes happen, but it causes transactions which reference the old version of the rows to be slower, as well as slowing down rollbacks.

I think that's consistent with the parent's goal (and mine) of allowing ongoing requests to be slower in exchange for not having to pay a major cost later to vacuum.


Yup, I meant for it to be consistent with the OP's concerns. I just wanted to point out that "just paying for it up front" also has other trade-offs. For a lot of use cases, the way PG went ends up better for performance than using an undo log (after all the performance optimizations PG has put in place, like freezing pages, etc.). And for some use cases an undo log is by far better, and more consistent.


I can see that. I've really only worked with a few workloads where this behavior was quite bad. (In particular, the out-of-the-box policy to vacuum when the dead-tuple fraction reaches 20% means that for an INSERT-mostly workload with a bit of UPDATE/DELETE, the vacuums get further and further apart and take longer and longer.) In the worst case I've worked on, the effects took several days or weeks to show up, but they degraded system throughput by 50-80% (!) for days or weeks on end.
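(The 20% default mentioned above is `autovacuum_vacuum_scale_factor = 0.2`. For an INSERT-mostly table that keeps growing, one standard mitigation is to switch that table to an absolute-row-count trigger so vacuums don't drift further and further apart. A sketch, with a hypothetical table name:)

```sql
-- Make autovacuum fire after a fixed number of dead rows rather than
-- a fraction of an ever-growing table. "big_events" is illustrative.
ALTER TABLE big_events
  SET (autovacuum_vacuum_scale_factor = 0.0,
       autovacuum_vacuum_threshold = 100000);
```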

The experience makes me wonder if the database is designed primarily using short-term, modest-scale workloads (which are much easier to test) and not behavior that only shows up after several weeks of a continuous, heavy workload. It obviously works for a lot of users.


There's nothing to "screw up" as it's just a different storage system implementation to handle MVCC. There is a project called zheap underway which is designed to do just what you say by doing in-place updates.

It's true though that Postgres still needs work on performance features and optimizations.


Excessive old rows have broken production systems for me. Performance degradations can break things!


That's an operations issue, nothing wrong with the database.


Not everybody is going to use a tool the exact same way, as part of a web stack at a company large enough to have an operations team.

Maybe you're a solo researcher doing data science.

I want my tools to be the best they can be. Always striving to be better. To push the limit and make things simply awesome.

I don't like defeatist attitudes or telling the user they are "holding it wrong".


I reject the idea that all tools can be made simple enough for people to use them without understanding what they're doing.

Yes, of course we all want our tools to be the best they can be, but that involves making tradeoffs in features because the maintainers don't have an infinite pool of time.

Postgres is a sufficiently advanced tool that if you're using it for any serious work you need to read the documentation for it. Ideally cover to cover, but at least skim the whole thing and pick out the important parts. The Postgres "Routine Database Maintenance Tasks" documentation [1] describes in great detail everything you need to do to keep your database working well.
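(The core of that chapter boils down to a handful of commands you can also run by hand when autovacuum isn't keeping up. Table names below are illustrative:)

```sql
-- Routine maintenance basics from the Postgres docs:
VACUUM (VERBOSE) mytable;   -- reclaim dead tuples, report what was done
ANALYZE mytable;            -- refresh planner statistics
REINDEX TABLE mytable;      -- rebuild bloated indexes (takes a lock)
```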

[1]: https://www.postgresql.org/docs/11/maintenance.html


So you want PG to be better? We all agree with that, and progress is constantly being made, but nothing is perfect and there are always trade-offs.

Other than that, what is the complaint exactly? You do need to understand how your tools work if you want to use them correctly.


I suppose, the attitude of the PG community. I can think of one other time when the PG community was very dismissive of criticism from users, even when it was valid...

https://news.ycombinator.com/item?id=19119991

In that case they rejected a bugfix for something like 10 years, seemingly because of attitude issues.


Not to defend the community at large to a fault... but there are an amazing number of bug reports which end up being nothing. It's easy to dismiss something as a non-issue by not understanding the underlying cause, which is why the NFS/fsync issue persisted for so long. To be fair, it was also an issue in other DBs.

Not that it forgives the fact the bug was reported 9 years ago and it just recently got fixed. I know personally I'd have messed it up.


On the other hand, you hardly ever have to worry about weird locking problems.

It's nearly impossible to avoid deadlock in any reasonably complex use of MySQL.


If you use read committed, row-based replication, and explicit locks where necessary, rather than the default of repeatable read, avoiding deadlock isn't too hard.

It's usually the gap locks that bite you, I've found.

https://www.percona.com/blog/2012/03/27/innodbs-gap-locks/

Oh, and do all your deletes by primary key. Deletes with joins are especially good for creating too many locks.
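(To illustrate the point about delete-by-primary-key: a DELETE driven by a join can lock every row the join touches, while selecting the keys first and deleting by PK locks only the target rows. Table and column names here are hypothetical:)

```sql
-- Instead of deleting via a join, which locks broadly:
--   DELETE o FROM orders o
--   JOIN customers c ON o.customer_id = c.id
--   WHERE c.status = 'closed';
-- fetch the primary keys first, then delete by PK in small batches:
DELETE FROM orders
WHERE id IN (1001, 1002, 1003);  -- ids from a prior SELECT
```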


And, not unimportant, PostgreSQL is released under the PostgreSQL License, a liberal Open Source license, similar to the BSD or MIT licenses.


I use PGTune: https://pgtune.leopard.in.ua/#/ It spits out optimal values based on your system configuration.


Interesting how this is tied to the PostgreSQL business model. Consultancy is how the ecosystem funds development for PG. So they have no real reason to automatically spot performance issues like for instance MongoDB Atlas does. Curious side effect of the business model. (I much prefer PG over MongoDB, but the automatic index suggestion thingie is pretty slick)


Anything similar with python perhaps?


It works with Docker, so you don't have to worry about having Perl.

>docker run -it --rm jfcoz/postgresqltuner --host=dbhost --user=username --password=pass --database=testdb


Is not having Perl such a worry? I'd imagine most systems that use this also use Git, and that requires Perl, and I'm not aware of any GNU/Linux distro that doesn't come with Perl (5, of course) pre-installed. While we're speaking of Perl, I just want to say how amazing Perl 6 is, especially compared to Perl 5, in every regard: from syntax to arrays/lists to the quality of the (admittedly small range of) libraries.


+1 for Perl 6 shout out. Also there are increasingly nice things for use with Postgres like https://github.com/CurtTilmes/perl6-dbpg with https://github.com/FCO/Red


Perl and Python are part of the LSB.


You gonna pull a random image into prod and give it access to prod db :)?


It's no more a "random image" than the script itself is a random software.


Also the Dockerfile seems to be in the repo. Looks like it just installs some apt packages and adds the script file: https://github.com/jfcoz/postgresqltuner/blob/master/Dockerf...


it's a lot easier to audit a script


A Dockerfile is a script of sorts too; you can audit that.


Nothing guarantees the image in docker hub has anything to do with the Dockerfile or this repo. This is a valid concern.


Except it does; Docker Hub builds it with that Dockerfile.


Docker hub allows any user with the appropriate permissions to `docker push` any image they want.


It is just a Perl script (in this case), just one that needs 45 MB of baggage.


If for some reason you can't use Docker and don't want to pollute your system globally with CPAN packages, you can use a local::lib install instead. CentOS example:

  # Build dependencies for DBD::Pg
  sudo yum -y install postgresql-devel
  # Dedicated user keeps CPAN modules out of the system perl
  sudo useradd -d /home/pgsqltun -m -s /bin/bash -c "pgsql tuner user" psqltun
  sudo su - psqltun
  # Bootstrap cpanminus and local::lib into ~/perl5
  wget -O- https://cpanmin.us | perl - -l $HOME/perl5 App::cpanminus local::lib
  echo 'eval `perl -I $HOME/perl5/lib/perl5 -Mlocal::lib`' >> $HOME/.bash_profile
  echo 'export MANPATH=$HOME/perl5/man:$MANPATH' >> $HOME/.bash_profile
  . $HOME/.bash_profile
  cpanm DBD::Pg
  <install postgresqltuner.pl>


Microsoft SQL Server has had this built in since "forever", but instead of looking at your config, it does its analysis based on a workload trace, analyzing how that workload performs and which particular queries have sub-par performance.

Thus the “tuning” matches your actual use.

If guaranteed improvements to your DB's performance can be made based on static analysis of the config alone, why on earth is that not the standard config? That seems like a no-brainer.


The SQL Server tuning advisor has historically been terrible for any database of any complexity at all. It imposes an enormous load, and the results are usually contradictory and often very sub-optimal if not detrimental. Works great for a hello world style database, though.

There was an earlier post (https://news.ycombinator.com/item?id=19422554) that I assume inspired this one, about someone who benefited by correcting the DBMS's very wrong assumptions about the cost of random IO. This was always the Achilles heel of SQL Server as well with any sort of fast storage: it absolutely refused to accept that anything could be faster than about 150 IOPS, and would try to brute-force through with terrible approaches. In this case the guy could just change the configuration and the database would properly calculate query-plan costs and execute accordingly, whereas SQL Server, to my knowledge, still doesn't let you change this costing, nor does it dynamically determine it.
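(The Postgres-side fix being described is a one-line setting change. On SSD/NVMe storage the default `random_page_cost` of 4.0 overstates the random-IO penalty, so the planner rejects index scans it should take. A sketch; the value 1.1 is a common choice for flash storage, not a universal one:)

```sql
-- Tell the planner random IO is nearly as cheap as sequential IO.
ALTER SYSTEM SET random_page_cost = 1.1;
SELECT pg_reload_conf();  -- apply without restarting the server
```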


On large databases, the SQL Server tool simply failed to complete its analysis for me.


Oracle DB does as well.

In the MySQL world, mysqltuner, which spits out suggested configuration changes, has been kicking around for more than a decade now. I haven't had a need to use it in several years though: https://github.com/major/MySQLTuner-perl

There was an interesting project around applying blackbox optimisation to MySQL, called OtterTune on which a paper was published a couple of years ago, https://blog.acolyer.org/2017/08/11/automatic-database-manag.... They got some good results out of it. Haven't attempted to use it myself: https://github.com/cmu-db/ottertune

There are just so many dials and switches that can modify the performance of the database; it really is moving far beyond anything any one individual could possibly grok.


Standard config and actual use are very different things. The first one implies adequate performance for most use cases, while actual use implies adequate performance considering current use case. This tool seems to be analyzing current use case.


PG Tune is pretty much the standard; not sure about this new tool. In case you're not familiar with PG tuning, here's the TL;DR: don't even think about going into production without running PG Tune. The default Postgres settings are not optimized; assuming you have any decent load on the system, this will be a problem.


By chance, I discovered pgTune just yesterday. Would be interested to know how this differs?


Based on scanning the README this looks at your remote DB instead of a configuration file.


How's this different from https://github.com/le0pard/pgtune


They seem to be completely different.

pgtune writes config given some parameters and doesn't actually look at running PG instances.




The real odd thing here is that `work_mem` (and to a lesser degree `maintenance_work_mem`) is still low enough out of the box that a script got written to highlight that it's too low. Increasing it is the first step one probably ought to look into when installing Postgres.


Huge numbers of postgres installations are on small machines and/or machines shared with other software. And whether you want a large work_mem hugely depends on whether you want to run an OLTP workload with a few hundred connections, or an analytics workload with like 2. It ain't as simple as you paint it.
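(This is why scoping the setting beats raising it globally: `work_mem` is allocated per sort/hash operation, per query node, per connection, so a global bump multiplies quickly under OLTP. A sketch; the role name is hypothetical:)

```sql
-- Raise work_mem only where the big sorts actually happen.
SET work_mem = '512MB';                       -- this session only
ALTER ROLE reporting SET work_mem = '256MB';  -- every session of one role
```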


Love this idea, but is this better done as part of, say, an existing linting framework? (update: oops, I thought this was about reading from your pg configuration file, not the live DB)

You could make use of things like IDE syntax highlighting, and in some cases, auto-fixing.

Glad to see they have a way to run it with docker, because I have too many bad memories of missing perl modules when trying to run perl scripts in the early 2000s (but that also tells you how outdated I am from perl-world)



