
Postgresqlco.nf: a PostgreSQL configuration guide - fanf2
https://postgresqlco.nf/
======
ahachete
As of today, it is more of a Reference than a Guide. This is intentional. Further
enhancements to the website will add functionality to help both less and more
experienced users fine-tune their PostgreSQL configuration.

Read more in a recent announcement:
[https://www.ongres.com/blog/postgresqlconf-configuration-
for...](https://www.ongres.com/blog/postgresqlconf-configuration-for-humans/)

Any feedback is very welcome!

P.S. Person responsible for postgresqlCO.NF here.

~~~
azinman2
I’d highly suggest removing the custom scrolling / scroll bar logic in your
hamburger menu on mobile (and probably redesigning the site to avoid the
hamburger menu altogether — it’s an anti-pattern). The non-standard behavior
means I lose all the natural momentum scrolling I get on iOS by default, the
scroll bars can’t be moved the way I expect, and it makes the whole site feel
janky and barely usable.

~~~
lukevp
What’s the alternative to a hamburger menu that you would recommend for
mobile? It’s a very common UX pattern. For example, Target, Walmart, and Amazon
all have a hamburger menu on mobile.

~~~
azinman2
If you notice, no Apple app has a hamburger menu, nor does their website on
mobile. Hamburger menus hide all the details, with no predictability as to
what might be behind them and no indication that you should need to tap them
(the icon tells you nothing). There’s plenty written out there about this issue.

It’s not clear to me exactly what one might want out of a mobile version of
this site given the information overload. That would be my first consideration
versus assuming it should be 1:1 with just a slightly different layout.

But if it is to contain the same information, I’d suggest a dedicated page
that organizes all the different possible settings by topic or module, or even
by “beginner to advanced” tuning. Such a directory is easy to understand,
because the way to get there would be a prominent, clearly labeled hero
button/link telling you what you’re about to see (on the current page, which
is mostly videos and some pretext for the site, all the actual value is hidden
behind the hamburger menu). You can always flip back to that directory,
knowing that the browser will retain your scroll position when you go back to
your previous place (which a hamburger menu doesn’t necessarily do).

Another style would be to have a chapter type layout where you can advance
along some existing narrative (like beginner to advanced), much like FreeBSD
documentation does (from memory).

All of this could also benefit from a search at the top that would let you
random access what you’re looking for.

As someone else suggested here, another style would be to think about
recommended configurations for typical usage patterns: on VMs with X type of
workload vs Y, letting you directly look up what you want in a hierarchically
organized layout on a single flat page, or using drop-downs/segmented controls
to pick from the set of possible options. Then you could just punch in your
actual needs and out would pop recommendations that hopefully had some insight
and further guidance to back them up.

Just some ideas off the top of my head. The main principle behind all of this
is to avoid sticking everything you don’t know where to put into a hamburger
menu, and instead to make your information architecture explicit and
predictable.

~~~
tyingq
What do you call this thing on the upper left at apple.com?
[https://imgur.com/a/pm3kOUq](https://imgur.com/a/pm3kOUq)

Looks like a hamburger to me. It opens a menu when you click on it.

~~~
azinman2
Feeling shame for my own company! Wow! It’s marketing that runs that versus
the actual designers (to my knowledge), and I wish they hadn’t resorted to
that.

~~~
tyingq
Heh. Thanks for the honesty. I can probably get on board with an anti-
hamburger movement, but consistency matters.

~~~
ahachete
Thanks for all the feedback. We will definitely take this into account for
future revisions of the UI.

------
1996
Very nice, but the #1 problem remains: discoverability.

Say you have an IO-intensive database, with about 6,000 new inserts per
second, and every 20 minutes a few reads that use that data for analytics.

Which parameters should you tweak?

Personally, in a situation like that I first focus on the filesystem, to get
write caching (xfs_syncd_centisecs etc.). Then I analyze the queries and work
mostly on RAM allocation.

For postgresql.conf, I use recipes built on my experience, derived from A/B
testing sets of individual modifications.

However, that remains premature optimization. Only benchmarks can say for sure.

So when I feel fancy, I set aside a few days for tests and do a dense
exploration of the parameter space for what I think will matter.

Yet this remains constrained by discoverability, i.e. by what I think will
matter: if I don't know that a parameter may matter, it won't be included in
the test set, which has to stay small to avoid the curse of dimensionality.

TL;DR: This website is nice, but it could benefit from a few "typical
scenarios" showing at least which knobs to try tweaking, along with some
methodology for tweaking (e.g. filesystem first).

~~~
ahachete
That's the next step we will be taking: offering subsets of parameters
recommended for tuning, depending on the "desired" level of expertise. And
then, more direct indications of how to tune these parameters for your
particular use case. Stay tuned!

------
RookyNumbas
It would be really nice to have a "start here" section. What are the 10
parameters I should look at and understand first?

------
Deimorz
Posted and discussed 9 days ago as well:
[https://news.ycombinator.com/item?id=22139975](https://news.ycombinator.com/item?id=22139975)

------
MaxGabriel
Do you have any recommended config for speeding up tests that hit Postgres? We
currently use these settings:

    fsync = off
    synchronous_commit = off
    full_page_writes = off

And truncate all tables between each test
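
For anyone curious, "truncate all tables" can be done generically with
something along these lines (a sketch using pg_tables to discover tables in
the public schema; not necessarily our exact code):

    -- Build and run one TRUNCATE statement covering every table in "public".
    DO $$
    DECLARE
      tables text;
    BEGIN
      SELECT string_agg(format('%I.%I', schemaname, tablename), ', ')
        INTO tables
        FROM pg_tables
       WHERE schemaname = 'public';
      IF tables IS NOT NULL THEN
        EXECUTE 'TRUNCATE TABLE ' || tables || ' RESTART IDENTITY CASCADE';
      END IF;
    END $$;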

~~~
anarazel
If you're truncating/dropping tables all the time, it can actually be
beneficial to set shared_buffers to a pretty small value. When
dropping/truncating, Postgres needs to scan the entire buffer cache to get rid
of all entries belonging to that relation (this happens at the end of the
transaction). The mapping is a hash table, so there's no ordered way to do
that scan efficiently.
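
A minimal sketch of shrinking shared_buffers on a throwaway test-only cluster
(the 16MB value is just an example; the setting only takes effect after a
restart):

    ALTER SYSTEM SET shared_buffers = '16MB';
    -- then restart the server, e.g. with pg_ctl restart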

Also, do all the truncations in a single transaction. Since the actual
dropping/truncating only happens after commit (for obvious reasons), we've at
least optimized multiple truncations to happen in one pass.
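
A minimal sketch of the single-transaction approach, with hypothetical table
names; TRUNCATE accepts multiple tables in one statement, so the post-commit
cleanup described above is done in one pass:

    BEGIN;
    TRUNCATE TABLE users, orders, order_items RESTART IDENTITY CASCADE;
    COMMIT;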

It's unfortunate that a larger shared_buffers can hurt this way :(

I've long hoped to find the cycles to finish a prototype that converts the
buffer mapping to a hash table (for relations) over radix trees (for blocks in
the relation).

~~~
MaxGabriel
Nice, that's a good tip. It's a little hard to test small speedups like this,
but I used
[https://hackage.haskell.org/package/bench](https://hackage.haskell.org/package/bench)
and got these results:

    128MB ~/D/M/h/mercury-web-backend> caffeinate bench "make test"    15:19:04
    benchmarking make test
    time                 226.4 s    (NaN s .. 240.6 s)
                         1.000 R²   (0.999 R² .. 1.000 R²)
    mean                 222.3 s    (220.6 s .. 224.4 s)
    std dev              2.390 s    (104.1 ms .. 2.986 s)
    variance introduced by outliers: 19% (moderately inflated)

    128MB ~/D/M/h/mercury-web-backend> caffeinate bench "make test"    16:18:57
    benchmarking make test
    time                 225.8 s    (208.8 s .. 233.9 s)
                         0.999 R²   (0.999 R² .. 1.000 R²)
    mean                 221.6 s    (215.8 s .. 224.8 s)
    std dev              5.576 s    (1.671 s .. 7.491 s)
    variance introduced by outliers: 19% (moderately inflated)

    2MB ~/D/M/h/mercury-web-backend> caffeinate bench "make test"    18:30:27
    benchmarking make test
    time                 216.9 s    (204.5 s .. 229.5 s)
                         1.000 R²   (0.998 R² .. 1.000 R²)
    mean                 219.7 s    (216.7 s .. 221.3 s)
    std dev              2.863 s    (1.283 s .. 3.658 s)
    variance introduced by outliers: 19% (moderately inflated)

    2MB ~/D/M/h/mercury-web-backend> caffeinate bench "make test"    19:29:26
    benchmarking make test
    time                 209.6 s    (192.8 s .. 224.1 s)
                         0.999 R²   (0.999 R² .. 1.000 R²)
    mean                 216.0 s    (211.0 s .. 218.5 s)
    std dev              3.732 s    (1.442 s .. 5.168 s)
    variance introduced by outliers: 19% (moderately inflated)

So it seems like 2MB shared buffers shaves a few seconds off our tests. Is 2MB
in the range you were thinking of?

~~~
anarazel
> So it seems like 2MB shared buffers shaves a few seconds off our tests. Is
> 2MB in the range you were thinking of?

Well, hard to say without knowing anything about your tests. I think the cases
where I've seen it make a really big difference were more ones where people
upped and upped shared_buffers trying to improve test performance, without
noticing that it made things worse.

I'd probably not go below 16MB, otherwise it's plausible you could run into
some errors (e.g. if you have a large number of cursors or such; they can end
up pinning a few buffers, and with only 256 buffers you don't need to be too
creative to run out).
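
For reference, the buffer count is just shared_buffers divided by the block
size (8 kB by default), which is also the unit pg_settings uses to report it:

    -- 2 MB  / 8 kB =  256 buffers (the case mentioned above)
    -- 16 MB / 8 kB = 2048 buffers
    SELECT setting AS buffers FROM pg_settings WHERE name = 'shared_buffers';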

------
FpUser
Alternatively, they could recognize that not everybody is willing to spend
much time on this and point people to something like
[https://pgtune.leopard.in.ua/](https://pgtune.leopard.in.ua/)

~~~
anarazel
The suggestions pgtune makes are really bad. It sets max_wal_size way too low,
max_parallel_workers_per_gather to ridiculously high values, and
shared_buffers way too low; the work_mem settings make no sense whatsoever.

~~~
krn
Could you recommend any other tool that would generate more sensible
PostgreSQL configuration values given similar input?

In other words: here is my PostgreSQL version, use case, OS, number of CPUs,
available memory size, and storage type. What's the optimal configuration?

~~~
ahachete
As I mentioned upthread
([https://news.ycombinator.com/item?id=22224019](https://news.ycombinator.com/item?id=22224019))
and in the other thread
([https://news.ycombinator.com/item?id=22140773](https://news.ycombinator.com/item?id=22140773)),
it's not that easy. And simple rules to compute parameters may lead to worse
performance than the defaults. Otherwise, they would be the defaults ;)

