
Advanced Postgres Performance Tips - adamnemecek
https://robots.thoughtbot.com/advanced-postgres-performance-tips#
======
olavgg
Among other great performance tips I recommend are:

* Don't always create an index for a single column; a multi-column index is in many cases a lot faster.
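For example, a single composite index can serve queries that filter on one or both columns (table and column names below are made up for illustration):

CREATE INDEX idx_orders_customer_created
    ON orders (customer_id, created_at);

-- Both of these queries can use the one index above:
SELECT * FROM orders WHERE customer_id = 42;
SELECT * FROM orders WHERE customer_id = 42 AND created_at > '2016-01-01';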

* For large tables, I often see that PostgreSQL chooses a sequential scan instead of an index scan. This may be because the planner needs more help. You can provide it by adjusting the table's column statistics. [https://wiki.postgresql.org/wiki/Planner_Statistics](https://wiki.postgresql.org/wiki/Planner_Statistics)
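As a sketch (table and column names hypothetical), raising a column's statistics target makes ANALYZE sample that column in more detail, which gives the planner better row estimates:

-- Default statistics target is 100; raise it for a skewed column
ALTER TABLE orders ALTER COLUMN customer_id SET STATISTICS 1000;
ANALYZE orders;  -- re-collect statistics so the planner sees them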

* If you still have problems with sequential scans on large tables, adjust seq_page_cost and random_page_cost in postgresql.conf.
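For example, on SSD-backed storage many people lower random_page_cost closer to seq_page_cost, since random reads are no longer much slower than sequential ones. The values below are illustrative, not a recommendation:

# postgresql.conf
seq_page_cost = 1.0
random_page_cost = 1.1   # default is 4.0; lower it on fast random-access storage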

* Proper indexing / use of indexes. If you have a timestamp column but query it with a WHERE clause on, for example, the extracted hour, then create a functional index for that expression; otherwise you risk a full table scan or filtering many rows.

For example,

SELECT id, last_updated, description FROM some_table
WHERE extract(HOUR FROM last_updated) >= 18;

If you want this query to be optimal, create the following functional index:

CREATE INDEX idx_t_hour ON some_table (extract(hour from last_updated));
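Note that the expression in the query has to match the indexed expression for the index to apply. You can confirm the index is being used by prefixing the query with EXPLAIN:

EXPLAIN SELECT id, last_updated, description
FROM some_table
WHERE extract(HOUR FROM last_updated) >= 18;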

* Don't use UUIDs as the primary key for every table. They cost 16 bytes per record; a smallint is 2 bytes, an int is 4 bytes, and a bigint is 8 bytes.

You want to use data types that are as small as is reasonable, because you want
as much of the index as possible to fit in RAM. Most tables are fine with int;
some large tables need bigint.
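For instance (table name made up), an auto-incrementing 8-byte surrogate key instead of a 16-byte UUID:

CREATE TABLE events (
    id bigserial PRIMARY KEY,  -- 8 bytes per key instead of 16 for a UUID
    payload text
);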

* Use tablespaces: hot tables and large indexes should go on fast drives, while rarely used tables and indexes can live on slower drives.
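A sketch of how that looks (tablespace names and paths are illustrative):

CREATE TABLESPACE fast_ssd LOCATION '/mnt/ssd/pgdata';
CREATE TABLESPACE slow_hdd LOCATION '/mnt/hdd/pgdata';

-- Hot index on the fast drive
CREATE INDEX idx_orders_created ON orders (created_at) TABLESPACE fast_ssd;

-- Move a rarely used table to the slow drive
ALTER TABLE archived_orders SET TABLESPACE slow_hdd;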

~~~
colanderman
Don't forget that when using multicolumn indexes, order matters. The index
will still be used if you filter (or sort by) any _prefix_ of the column list,
but not otherwise.
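For example, given an index on three columns (hypothetical names):

CREATE INDEX idx_t_abc ON t (a, b, c);

-- These can use the index (they filter on a prefix of the column list):
SELECT * FROM t WHERE a = 1;
SELECT * FROM t WHERE a = 1 AND b = 2;

-- This cannot use it efficiently (no condition on the leading column):
SELECT * FROM t WHERE b = 2;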

