Ways to shoot yourself in the foot with Postgres (philbooth.me)
743 points by philbo on April 25, 2023 | 268 comments



A few tips I've gathered over the years:

- Revisit your vacuum settings and maintenance_work_mem regularly as your DB size increases; if you allocate too much, or run maintenance too often, it can clog up your memory.

- If you plan on regularly deleting more than ~10,000 rows, you should probably look at partitioning; it's surprisingly slow to delete that "much" data, and even slower with foreign keys.

- An index on a boolean is useless; it's an easy mistake that wastes memory and disk space for nothing.

- Broad indices are easier to maintain, but multiple smaller indices with WHERE conditions will be much faster.

- You can speed up big string indices by a huge margin with an md5/hash index (only relevant for exact matches)

- Postgres as a queue definitely works and scales pretty far

- Related: be sure to understand the difference between transactions and explicit locking; a lot of people assume too much from transactions, and it will eventually break in prod.


> An index on a boolean is useless; it's an easy mistake that wastes memory and disk space for nothing.

However if the field is highly biased (e.g. 90 or 99% one value) it can be useful to create a partial index on the rarer value. Though even better is to create a partial index on the other stuff filtered by that value, especially if the smaller set is the commonly queried one (e.g. soft-deletes).
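
For example (a rough sketch with made-up table/column names), a soft-delete setup might look like:

    -- index the columns you actually query, restricted to the common case (not deleted)
    CREATE INDEX users_live_email_idx ON users (email) WHERE NOT is_deleted;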


Yeah. Finding ”open” tickets, for example. There are actually some really good cases for indexing on a boolean.


We work in different places. Here the index on closed tickets would be smaller. But you know, some sales guy called and they want this little feature NOW.


Nice example.


Also good to remember that booleans can have 3 values: true, false, or null. Creating a partial index with a `WHERE ... IS NOT NULL` condition can be helpful too.


Postgres treats nulls as distinct. I've not checked, but I'd assume Postgres simply does not index nulls, as it can't find them again anyway (unless you use the new "NULLS NOT DISTINCT"). I think you need a separate index on the boolean IS NULL (which should probably be a partial index on whichever of IS NULL and IS NOT NULL is better).


Postgres absolutely adds nulls to its indexes. You can even control how they are ordered and, on the last few versions, whether nulls are considered equal or not.

A complete index over a column will have entries for all records, and can be used on "x is null" and "x is not null" filters.


You're correct, thanks for noting this – I had it backwards.


Even when the column is made with NOT NULL?


No.


Indeed, if it's highly biased, in combination with a condition it's useful.

I was referring to indexing the column without any condition; the last time I checked (years ago), Postgres didn't take the statistical distribution into account, so the query planner was always discarding the index anyway.


> - Related: be sure to understand the difference between transactions and explicit locking; a lot of people assume too much from transactions, and it will eventually break in prod.

I recently went from:

  * somewhat understanding the concept of transactions and combining that with a bunch of manual locking to ensure data integrity in our web-app;
to:

  * realizing how powerful modern Postgres actually is and delegating integrity concerns to it via the right configs (e.g., applying "serializable" isolation level), and removing the manual locks.
So I'm curious: what situations should make me reconsider and control locks manually, instead of blindly trusting Postgres's capabilities?


Just FYI if you didn’t already know this:

  Any transaction which is run at a transaction isolation level other than SERIALIZABLE will not be affected by SSI. If you want to enforce business rules through SSI, all transactions should be run at the SERIALIZABLE transaction isolation level, and that should probably be set as the default.

Given that running everything at SERIALIZABLE probably isn’t practical for you, I think it’s clearer code-wise to use explicit locks. That way, you can grep for which queries are related synchronization-wise, vs. SERIALIZABLE being implicit.


Continuing with the FYIs:

Explicit locks can mean just calling LOCK TABLE account_balances IN SHARE ROW EXCLUSIVE MODE; early in the transaction and then doing SELECT ... FOR UPDATE; or similar configurations to enforce business rules where it matters.

https://www.postgresql.org/docs/current/sql-lock.html
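
A rough sketch of that pattern (column names and values here are made up):

    BEGIN;
    -- serialize writers on this table for the rest of the transaction
    LOCK TABLE account_balances IN SHARE ROW EXCLUSIVE MODE;
    -- lock the specific rows we intend to modify
    SELECT balance FROM account_balances WHERE account_id = 42 FOR UPDATE;
    UPDATE account_balances SET balance = balance - 100 WHERE account_id = 42;
    COMMIT;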


I think that, in the Postgres-as-a-queue scenario, it doesn't fix the problem that two processes can read the same row at the same time, thus both executing the task.

If you manually SELECT FOR UPDATE SKIP LOCKED LIMIT 1, then the second process will be forced to select the next task without waiting for the lock.
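
A minimal sketch of that worker query, assuming a hypothetical jobs table:

    -- each worker claims one pending job; rows locked by other workers are skipped, not waited on
    UPDATE jobs
       SET status = 'running', started_at = now()
     WHERE id = (
           SELECT id
             FROM jobs
            WHERE status = 'pending'
            ORDER BY id
            LIMIT 1
            FOR UPDATE SKIP LOCKED
           )
    RETURNING id, payload;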


> - You can speed up big string indices by a huge margin with an md5/hash index (only relevant for exact matches)

Do you mean a https://www.postgresql.org/docs/current/indexes-types.html#I... index? It's a 32-bit hash (but which hash is it, is it CRC32?). How do you do an MD5 index?

Anyway, MD5 is slow, does Postgres offer fast hashes like SipHash (DoS resistant) or FNV (not DoS resistant)?


You can store the md5 (or any hash) in a new column and use it in the index instead of the string column. It will still be a string index but much shorter. You have to be aware of hash collisions, but in my case it was a multi-column index so the risk was close to zero. MD5 was maybe not the best choice but it's built in, so available everywhere.

What I did to not maintain a second column is to use the function directly in the index:

    CREATE UNIQUE INDEX CONCURRENTLY "groupid_md5_uniq" ON "crawls" ("group_id", md5("url"));

    SELECT * FROM crawls WHERE group_id = $1 AND md5(url) = md5($2);

This simple trick, which did not require an extensive refactor, sped up the query time by a factor of a thousand.


We do this by making the md5 a char(32) generated column of the text column.
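
Something along these lines, assuming Postgres 12+ for generated columns (names are illustrative):

    ALTER TABLE crawls
      ADD COLUMN url_md5 char(32) GENERATED ALWAYS AS (md5(url)) STORED;

    CREATE UNIQUE INDEX crawls_groupid_urlmd5_uniq ON crawls (group_id, url_md5);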


> - An index on a boolean is useless; it's an easy mistake that wastes memory and disk space for nothing.

I’ve seen this advice elsewhere as well, but recently tried it and found it wasn’t the case on my data set. I have about 5m rows, with an extremely heavy bias on one column being ‘false’. Adding a plain index on this column cut query time in about half. We’re in the millisecond ranges here, but still.


Just index the less common value:

    CREATE INDEX ON session(is_active) WHERE is_active;


There is no need to add the boolean column to the index in this case, since it is constant (true). You can add a more useful column instead, like id or whatever your queries use:

    CREATE INDEX ON session(id) WHERE is_active;


I tested that and it seemed to make no difference compared to a basic 'create index on table(column)'.


Have you measured the disk size of the index? That's where you should see a difference, not in speed.


It does appear smaller, but single digit megabytes on a table with millions of rows. Not a major difference for most use cases I think. But good to know for the few that it would make a difference.


I know nothing about partial indices in Postgres, but it seems like for indexing a boolean, you either index the true or the false values, right? I feel like Postgres could intelligently choose to pick the less frequent value.


Is that correct? I would think that, even with a NOT NULL boolean field, the physical table has three kinds of rows: those with a true value, those with a false value, and those no longer in the table (with either true or false, but that doesn’t matter).

If so, you can’t, in general, efficiently find the false rows if you know which rows have true or vice versa.

You can also only use an index on rows with true values to efficiently find those with other values if the index can return the true rows in order (so that you can use the logic “there’s a gap in the index ⇒ there are non-true values in that gap”).


Yeah that seems more like how it would work, I’m curious about the internals there


The benefit is a proportionally smaller index.


> An index on a boolean is useless; it's an easy mistake that wastes memory and disk space for nothing.

Why? Is it because an index on the bool alone, with symmetric distribution, will still leave you with half the table to scan? In other words, does that statement apply to biased distribution (as mentioned by another response) or indices on multiple fields of which one is a boolean?


Yes, it is because it leaves you with half the table to scan while adding the overhead of doing an index scan. And if you have a biased distribution, you probably want a partial index, since those are smaller.


No, it’s rarely half the table, most bool columns are biased to one value.


Half the rows to scan in 99% of cases means you’ll still hit every page and incur exactly the same amount of IO (the expensive part) as a full table scan.


Would periodically clustering the table on the boolean index help here? Since then the true rows would be in different pages than the false rows. Unless I misunderstand what clustering does.


The thing is that, since you can only cluster around a single ordering, a boolean column is very rarely the most useful one to use.

But then, given the number of things that very rarely happen in a database, you are prone to have 1 or 2 of them happening every time you do something. Just not that specific thing; but if you keep all of those rules in mind, you will always be surprised.


Yes, that would indeed help.


> You can speed up big string indices by a huge margin with an md5/hash index

Dumb question: what's the use case for having a md5/hash field in your database?


Postgres offers hash indexes as opposed to b-tree indexes: https://www.postgresql.org/docs/current/hash-intro.html

For equality comparisons of large types it's quite beneficial
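
For example (illustrative names; note hash indexes are only crash-safe and replicated since Postgres 10):

    CREATE INDEX urls_url_hash_idx ON urls USING hash (url);

    -- only usable for exact-match lookups like this
    SELECT * FROM urls WHERE url = 'https://example.com/some/very/long/path';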


I have answered here: https://news.ycombinator.com/item?id=35701126

In my case, I had to index big tables with URLs, with no need for partial matches. I did it naively but it did help a lot.


> Postgres as a queue is definitely working and scales pretty far

You mean with triggers and listen/notify ?


Probably meant as something like this: https://www.crunchydata.com/blog/message-queuing-using-nativ...

But I find that listen/notify seem to be drastically underused.
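
For reference, the basic mechanism is just this (channel and payload are made up):

    -- in the worker's session
    LISTEN new_job;

    -- in the producer, e.g. right after inserting a job
    NOTIFY new_job, '42';
    -- or equivalently, from a trigger or application code
    SELECT pg_notify('new_job', '42');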


I’d add ’not reading the table of contents of the manual’ to the list.

I’ve probably worked with hundreds of people now who use a database daily either in code or just to explore data and can count on two hands (optimistically…) the number of folks who actually read the fine manual in any other way than googling something specific. Pro tip: read it so you know what to google for!


Googling? That's so passe.

I just enter my vague question into this AI chat thingy and I try the first thing that it tells me on my production server.

Has worked fine for me so far. What could possibly go wrong?


"I'll copy paste this snippet from a 2013 StackOverflow thread instead."

"Let me just install this 2MB js dependency real quick. I don't know anything about its author nor if its maintained at all, but it will prevent me from writing 10 lines of vainilla js."

ChatGPT is a vast improvement from this.


Wow that's old school.

Here we have an agent integrated into langchain that executes the command directly on the server. If there are any errors it uses ai to debug and fix them too.

!/s see https://python.langchain.com/en/latest/modules/agents/toolki...


Agent, shmagent.

CREATE EXTENSION pggpt;

and you're done. The AI watches your db and adjusts the params on the fly as needed. We're joking, but I give it less than 12 months before there'll be something like this.


You can ask this AI chat thingy if it's a good idea to experiment on your production server. Maybe it will suggest something else.


It took me too long to understand this. I always felt pressure to get things done so skipped reading the manual. Turns out I would have gotten more done had I just read the manual.


There's the hoary old cliche about "if I was given three hours to cut down a tree with an axe, I would spend the first hour sharpening the axe" but in many cases it's really true. You can look like a superhero just by pointing out some small feature that makes life easier.

One time I pointed out that, rather than reordering the tables to make loading work with foreign key constraints, we could pause trigger execution, load all the data, and then resume the triggers. pg_dump can even do it for you if you pick the options right (might be the default), in addition to natively ordering the data correctly, but if you're messing around with large SQL files anyway, it's helpful.
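
If you're hand-rolling it rather than letting pg_dump handle it, the sketch is roughly this (table and file names are made up; disabling the internal FK constraint triggers needs superuser):

    BEGIN;
    ALTER TABLE orders DISABLE TRIGGER ALL;  -- includes the FK constraint triggers
    COPY orders FROM '/tmp/orders.csv' WITH (FORMAT csv);
    ALTER TABLE orders ENABLE TRIGGER ALL;   -- note: rows loaded meanwhile are never re-validated
    COMMIT;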


Here to point out that Lincoln said "six hours to cut down a tree... four hours sharpening", and I think that ratio is better.

He wasn't the first to observe the general principle, it's at least as old as the Bible[0].

[0]: https://biblehub.com/ecclesiastes/10-10.htm


Clever architecture and an approach built on a series or layers of simple decisions can often outperform clever coding and maintain a greater degree of flexibility.





FWIW I think this is good advice for any tool. You don't have to (and shouldn't) read the manual front to back, but you absolutely should look at the table of contents and at least start reading the introductory material.


I do find SQL not easy in this regard. ALTER TABLE ADD CONSTRAINT is a totally different command from ALTER TABLE. Gotchas like that.


It’s not that bad to learn sql by example.

Project based learning is best for sql and excel formulas. Start at the start and it builds up quickly.

Don’t worry it’s way less work than trying to make a nosql database into a sql database.


Here's one, postgres default has this:

  seq_page_cost = 1.0
  random_page_cost = 4.0  
This is fine if you are using spinning disks to store your data; it makes PostgreSQL prefer sequential scans over index usage.

I think it's time the default were changed to suit SSDs, where a random page cost is the same as a sequential one.

  seq_page_cost = 1.0
  random_page_cost = 1.0


The defaults do suck but common storage options like SSDs or Elastic Block Storage still do sequential IO substantially faster than random.


Yes but nowhere near the extent rotating rust did.

You may want to set random page costs higher than 1.0, in part because DB/FS-level pages and SSD blocks are completely different (and going through a block will be more efficient than having to hit multiple blocks), but probably 1.5 to 2.5.

Interestingly enough according to some folks “seek” on EBS is highly concurrent, whereas “scan” is slow and more erratic, so you may want to set random_page_cost lower than on SSDs in order to favour seeks.


I wouldn't set random_page_cost lower than seq_page_cost. It can cause the query planner to do wacky things (I learned the hard way). The documentation mentions it, but not as strongly as I think is warranted given how erratic my PostgreSQL cluster started behaving after I made that configuration change.

> Although the system will let you set random_page_cost to less than seq_page_cost, it is not physically sensible to do so. However, setting them equal makes sense if the database is entirely cached in RAM, since in that case there is no penalty for touching pages out of sequence. Also, in a heavily-cached database you should lower both values relative to the CPU parameters, since the cost of fetching a page already in RAM is much smaller than it would normally be.

https://www.postgresql.org/docs/current/runtime-config-query...

Curious though, lowering both values is something I haven't done before but now I am curious about.


Looking up some random SSD benchmarks, 2.0 seems about right for high-quality SSDs. Though you might as well benchmark your specific setup.


First way to shoot myself in the foot: not using it.

Too often, I ruled out Postgres as a solution to a certain problem before even trying and jumped to more specialized solutions or moved the problem to the application layer.

It took me years to stop underestimating what this awesome software can do.


I am of the firm opinion that Postgres + Redis are basically the only DBs you ever need.


Agreed, and I'm over in the corner sharpening an axe, looking at Redis with highly malicious intent, too. I think many of the things people use Redis for could be accomplished with a small postgres server and a decent schema e.g. `create table redislike (id bigserial primary key, contents jsonb);`


You can stuff some pretty insane stuff into Redis keys besides ints: giant multi-MB strings with globby query patterns. Your general point is dead on.


I definitely think they're great picks but I don't think the statement is making a particularly strong or interesting claim. I think it's equally true with basically any RDBMS in place of PostgreSQL. MySQL + Redis? Absolutely you'd be fine, tons of high-performance sites do this, probably more than use PostgreSQL. SQL Server + Redis? Still fine; you're Stack Overflow. Oracle + Redis? Weird choice but you'll still be fine. Also most companies probably don't need the Redis.

So... what are we saying here, exactly? Are we saying that you don't need a no-SQL database, and that just SQL is enough with some Redis frosting on the cake to taste? I agree with that, but now we're not really talking about PostgreSQL any more, we're just debating SQL vs. no-SQL.


Well what I'm saying here is that SQL is great and sometimes it's genuinely helpful to have a much faster key-value store in a variety of scenarios.

That's basically all you need.

From there, it's PostgreSQL and Redis specifically because Postgres is the best SQL database and Redis is the best high-performance KV store.


I think a columnar database can be super useful.

I use Postgres a lot, but with a largish database, I managed to get the whole database footprint in Clickhouse smaller than the primary key index in Postgresql. (~80GB vs 160GB, and 1.2 TB for the whole unencrypted PG database) Now, it wasn't a great index key and the original schema wasn't good, but optimizing PG got me about a factor of 3 or 4, and Clickhouse compression got me another order of magnitude.


And 95% of the time you don't even need Redis.


What do you use redis for?


Caching frequently fetched complex objects to mitigate load on the Postgres DB.


You could also cache those in the VFS or VMS. A cache is a cache, but one that is built into everything and doesn't require networking can be quite a bit less maintenance, risk, etc


So are you caching the query results keyed by query or something higher up?


amen. I've run both in furious production conditions for over a decade with failures only caused by myself or other inept coders!


"2. Push all your application logic into Postgres functions and procedures"

Why are functions and procedures (an abstraction layer at the db layer) considered harmful to performance when the same abstraction layer will be required at the application layer (introducing out-of-process overhead and possibly network traffic)? I don't agree with this advice. (Or I don't understand it.)


The worst mistake I've ever made was implementing logic in the db - it made for horrible debugging. It was only a few small bits of logic, but man, the number of gotchas years later from not realising something was there... Certainly I think you either have to put all/most of your logic in the DB or none. Definitely not a sprinkling.


That's right, you either do all of it in the DB or none of it. Mixing the two makes long term maintenance complicated unless your overall solution is very well documented and the documentation is very well maintained. That's two rare "very well"'s.


I once worked for a place years ago that had these crazy data integrity bugs. There were about 35 different bugs, and we suspected that most of them were pieces of legacy code not cleaned up from acquisitions (it was a healthcare company).

Anyways, if you were able to solve one, you got a $5k bonus and a two week vacation. While I was there, someone figured one of them out, and it was due to logic in the DB. ...but I have a feeling that all of them were due to logic in the DB.


1. It's much easier to debug concurrency issues when you use SPs, but much harder to debug anything else.

2. At some point you will want to move some of your data into another system, and will have to pull the logic out into the application layer.

3. PL/pgSQL (or any other imperative SQL extension) isn't something you can find lots of devs on the market for.

4. Upgrades and rollbacks are much more painful and require downtime.

My team wrote a few critically important pieces of software that are running on Oracle, and here's why we did this:

1. Concurrency issues were the biggest pain point that we tried to avoid. We still had to fix a lot of bugs in the logic itself, and debugging them without unit tests was painful.

2. We were tightly integrated with another system written in PL/SQL. When we started on v2, an independent solution, I moved almost all logic out of the database except for the critical synchronization logic.

3. We had a veteran team of PL/SQL devs in house. We still needed to get a subcontractor that wrote the API layer in Java, something PL/SQL isn't suited for at all.

4. Upgrades and rollbacks were a pain, especially after we had to move to a 24x7 SLA that left us with no upgrade window. Oracle has edition-based redefinition, but Postgres doesn't.


A middle ground that has had some success is managing a queue in Postgres that falls out of the business or application logic in the app, whether it's a microservice or a monolith.


Yes, that's basically what we ended up doing: a queue of tasks in Postgres that a variable number of workers could access via a clever SP that encapsulated all inter-task conflicts and spat out the next task you were allowed to process.


There are some good business process management tools that could manage that mapping instead of the clever SP, but you have me intrigued.

Are there any links that could be helpful for working through the problems and caveats of such an SP?

Part of it for me is creating a table that can store the process in the database to be able to traverse it reasonably.


It was basically a more complicated version of this: https://stackoverflow.com/questions/68809885/how-to-select-f...


Nice, thanks!


I don't understand it either.

The author seems to be arguing against long functions/procedures. But if you move that to the client, presumably with ORM support, you're going to be executing more or less the same sequence of SQL queries and commands. The only difference is that when doing it on the client you will have a lot of latency.

Yes, you can cache some data in between those commands to avoid repeating the same queries, but if you use temp tables to do so, they will use memory only if it is available; otherwise you are limited by the actual storage size.

The only time I had memory issues with PostgreSQL was when I used too much data in arrays and JSONs. Those are memory-only variables. For example, I'd return a huge JSON to the client and I'd run out of memory on PostgreSQL. I started streaming row by row and the problem was solved.


Fwiw the specific case which motivated that section in the post was a set of recursive functions we used to denormalise an irregular graph structure (so not suitable for CTE) into a single blob of JSON to be sent to another data store. 99% of the time there were no issues with this but at times of heavier load and on complex subgraphs, those recursive call stacks contributed to severe replication lag on the replicas they were running on.

Moving the traversal logic into the application and just sending SQL queries to Postgres (we don't use an ORM) eliminated the lag. RTT between the application and the db was a few ms and this wasn't user-facing logic anyway, so extra latency wasn't an issue in this case.

Probably the fundamental problem here was a sub-optimal schema, but sometimes you're just working with what you've got. Plus a commenter on Reddit pointed out that if we used pure SQL functions instead of PL/pgSQL, we'd also have seen better performance then.


"Probably the fundamental problem here was a sub-optimal schema, but sometimes you're just working with what you've got. Plus a commenter on Reddit pointed out that if we used pure SQL functions instead of PL/pgSQL, we'd also have seen better performance then."

So, would the better advice not have been to use simpler SQL instead of complex recursive statements, instead of taking a drastic approach to abandon ship (move logic to a completely new layer)?

Also, if you're doing string concats manually for your Json, this might cause some overhead for larger objects. ??


> So, would the better advice not have been to use simpler SQL instead of complex recursive statements, instead of taking a drastic approach to abandon ship (move logic to a completely new layer)?

Probably, yep! But I didn't know that when I wrote it.

I didn't want to give any concrete advice at all tbh. The entire rationale for the post was that I'm not an expert and I've broken prod in some surprising ways and if I share those ways maybe it will stop other people making similar mistakes in future. But I guess I over-stepped in my discussion for this mistake, sorry about that.


> Also, if you're doing string concats manually for your Json, this might cause some overhead for larger objects. ??

Good point, I hadn't considered that part of it. It wasn't string concats, we were building it with `jsonb_set`, but I can definitely see the JSON structure in memory as being part of the problem now you mention it (although maybe that reinforces the argument for doing it in the application layer).


I’m fully against using triggers to implement business logic, but find stored procedures can be great for encapsulating some elements of application/business logic. I’ve got several applications that access the same databases, and putting logic in the database avoids needing to duplicate it in clients.

Most comments about debuggability are nonsense. It’s just different, with some pros and cons. One simple example - if you have a bug in production, you’re not going to attach a debugger to your production application. But you can absolutely open a readonly connection to the database and start running queries, invoking functions, etc. It helps if you can architect your functions to distinguish pure readonly functions from those with side effects, but you can still debug even if that’s not the case.


Probably because you can't do proper testing as easily as with application code. And debugging is much harder.


I think this is a commonly stated fact, but I don’t find it particularly true. Like any other technology, you just need to put in some initial effort to set up your test framework. In the case of PostgreSQL, pgTAP does a great job.


It is commonly stated and I found it to be very true. PostgreSQL is quite advanced in its procedural aspects (Oracle isn't too far behind either) but they were not made with particular focus on debugging. I'll need to have hacks like creating temp tables to dump records at a given stage vs simply setting a breakpoint. I can unit test the shit out of bog standard Java code; PL/SQL for all its capabilities doesn't even come close. The one area this tilts to the other side is when you need to do heavy processing with high volume of data on database side; a well written stored proc would handily outperform application side logic simply due to the network latency involved. But for typical use cases, putting complex business logic in stored procs just isn't worth it.


I disagree on both points.

Edit: but I was referencing specific performances claims, that you will somehow take some load of database server. I just don't see it.


The context here was that it’s not free, as I understood it. So, moving logic to the database, might make it faster, but that doesn’t mean that it’s instantaneous or that I no longer have to think about the scaling concerns of it.

So, personally, I read that section as “logic in the database is not a zero cost abstraction”.


In theory it's attractive to perform business logic in the DB with functions and procedures but in practice the "devops experience" is painful.

Functions are restricted in their available languages, ability to A/B test and scale. There's also complexity entailed by having two sources of business logic because people can forget which one does what, needing to constantly switch back and forth between procedures and app code when debugging.

Additionally, the networking, resiliency and update patterns of databases are often not well suited to functions. You may want your functions to have public internet access but not your DB, or rollbacks of your data but not of function versions.

All of these issues can be overcome by people who're confident DBAs and sysadmin types in addition to being application developers but that's a small group of people.

I wish there were more startups in this area working to improve the developer experience of DB functions because you're correct about the superior performance and I'm pretty sure most of the issues I raised could be solved with well thought out tooling. However, at the moment such tools don't exist so these functions are painful to use.


It is always easier to scale horizontally at the application layer (just adding more servers) than at the database layer (which involves syncing data between multiple database instances).


In my experience, more often than not, Postgres performance problems aren't really caused by the database but by badly designed schemas or queries. For a lot of developers, the thinking goes that tens of millions of rows sounds a lot like big data, so they must start building microservices, distributed systems, use K/V stores and horizontally scale a la Google, whereas their entire dataset could actually fit in the RAM of a single server and they could avoid the majority of the complexity if they just scaled vertically, which is usually much, much easier.


I think many people underestimate the capabilities of SQL databases by a couple orders of magnitude. I once worked on a feature that integrated tightly with a third party service. Their api didn't have any support for aggregate queries, and my company was smaller without real BI or data team, so I ended up writing a tool to dump our account data into a local Postgres db in order to do a some data analysis. By the time I left the company that db was approaching 50 GB, the table holding the primary data had about 40 million rows, and a couple of the supporting tables were over 100 million rows. This was all on a 2018-era Dell dev laptop - a fairly powerful machine (6 core/12 thread, 32 GB RAM, SSD), but certainly no server. It took about 90 seconds to update the materialized views that summarized the data I looked at most frequently. More than acceptable for my use case, and there was a lot of room for improvement in that schema (it was pretty much a straight dump of the api data).


Came here to say exactly this. Over the last 12~ years working with PostgreSQL I've dealt with quite a few performance related issues - almost all were poorly written queries.


Can you point to some good resources on how to write better postgres queries? Or give examples of common pitfalls?


Start with EXPLAIN ANALYZE, then work from there. You can use tools where you paste the output and they show you the data in an easier-to-understand format.
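
For example (table and column names are made up):

    EXPLAIN (ANALYZE, BUFFERS)
    SELECT * FROM orders WHERE customer_id = 42 ORDER BY created_at DESC LIMIT 10;
    -- look for seq scans on big tables, estimated vs actual row mismatches,
    -- and sorts or hashes spilling to disk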


I am using https://explain.dalibo.com/ for that exact purpose and it does a great job highlighting the perf issues.


I don't have any other good recommendations other than learning how to read and use EXPLAIN [1]. This should typically be the first tool when you have a slow query.

[1] https://www.postgresql.org/docs/current/sql-explain.html


An old classic but too many indices can be harmful too.



I would guess the most common pitfall is either not having indices or having the wrong kind of index for your query.


100%. Exaggerating the bigness of their own data is a common phenomenon. Sometimes one is talking to a group of developers who are all so impressed with this bigness, but every one of them has a phone in their pocket which could fit their entire dataset.


You can horizontally scale the database directly.


Depends on the scenario


This comes up a lot from people who want "horizontal" scaling: the camp that thinks everything should be in the middle tier (Java/C#/...). Also, cost on AWS is cheap for those and expensive for RDS. In the end the db will be the bottleneck. Of course DevOps people can also create a cache layer etc. to lessen the stress on the db.


I think the article is kinda mixing two points here.

On the one hand, it is sensible to try and keep all your business logic in one place (could be the database, could be the application), as spreading it across multiple places can make it hard to maintain.

The current trend is to do your business logic in the application and treat the db as a data storage layer. The point in the article is that if you're using this model, but then stored procs and functions start creeping in to your db and it turns out they're actually doing some heavy lifting, then this can negatively impact the performance of your 'data storage layer' (which is actually not a data storage layer any more).


Why is splitting logic between Postgres and an application considered worse than splitting it between multiple micro-services? A DB is a storage service with INSERT/SELECT/etc. as an API. Why can't we extend this API to include stored procedures too? Indexes are commonly used to enforce data integrity. Why can't we use triggers to do this even better?


Comparing to microservices isn’t really apt, because of how you’re (meant to) slice service responsibilities.


For sure, it doesn't matter where you put it as long as it is in one place. Look at a company like Supabase and a product like PostgREST: it is just a way faster way to develop an API, and it will scale too. Often it is about how one scales 'horizontally'.


PostgREST was invented way before Supabase was around. Just an FYI.


I've been out of the trenches for some time, but when I participated in projects that relied heavily on stored procedures, we felt constrained in terms of flexibility (the language options were very restricted, and we were not able to use common libraries), the tooling (the support in the IDE was not great, nor was it straightforward to debug the code) and the scalability (vertical, instead of horizontal). Also, this approach introduced heavy coupling.

It is true that we were much more familiar with application layer technologies, but the lack of expertise can also be considered a restriction, I think.


While I agree with all the other commenters about debugging and scaling issues, at least some of the time the stored procedure route can be very powerful.

Stored procedures will eliminate insane levels of latency if there are many records to be updated in ways that are hard to do in application-layer code without repeated calls to the db. I use them a lot for DB maintenance. Often for that kind of work it's also a lot simpler and easier to reason about than app-layer code.


Pain to upgrade, double pain to a/b rollout, triple pain to debug.


One is better off keeping heavy processing away from the database. Your application layer can scale almost indefinitely, and the main bottleneck for a random system is usually the database.

As a rule, processing cost should give you a default bias towards moving anything away from the database. Multiple sources, the need for temporary storage, and the existence of concerns that don't deal directly with your data should bias you further towards moving your code away from the database.

On the other hand, data consolidation and enforcing non-local (to a record) rules should bias you towards moving your code into the database. If a lot of those happen, moving it there may even reduce the load of your database.

Any one sided advice here is guaranteed to be wrong.


I work with all major RDBMS on the market (I integrate with ERPs/accounting packages, so even RDBMS that are niche, and things it is an insult to call a DBMS).

ANYONE that has a problem with RDBMS "functions and procedures & views (!)" is invariably mishandling the RDBMS: bad schemas, zero refactoring in the DB after years/decades(!) of cruft, poor re-implementation of things the RDBMS already has (like, for example, date types), procedural code doing stuff that SQL already does more easily and in less time, SQL that is too big and never, ever uses VIEWS to abstract things away, an RDBMS that was never upgraded or is assumed it never will be, so none of the things invented in the last 10 years are used.

And that is a short list.

---

If you consider the RDBMS as the BEST programming language environment (sans SQL, but still better than most languages for data!) and use the most BASIC ideas around it - like thinking a little about how to design your schemas considering the queries you will do, some refactoring at least once every 5 years, pls!, using the CORRECT data types, pls pls!, using VIEWS pls pls pls!, etc. - your logic in triggers/functions MUST BE short and EASY.


Like most things, it depends.

Having application logic in the db is harder to debug and test (and possibly scale, but that also depends). But as you mention, it can be much faster if the logic is working on a lot of records.

Also, IME, the data store often far outlives the original application. Having the logic tightly coupled to the data model means future applications are less likely to break the assumptions made in the original data model.


Think about it this way: you have to implement the same amount of business logic in any case. The only question being discussed here is where the work will be performed. The author is talking about scaling. If you have 3 server instances but 1 database instance, it’s better (generally speaking) to put the logic in the servers because there are 3 of them. That will scale better. In the case of Postgres, even if you have replicas, they’ll be read replicas. If you put everything in Postgres you are putting everything in the bottleneck. You can add more server instances but you can’t add more database writers.


I'd say because it is a much more specialized skill than programming in python / ruby / JS / or whatever your app language. Ideally, I would say "use the best tool for the job", which may very well be a stored procedure for data locality reasons, but practically speaking, with a larger team, you may be asking for trouble.


Sometimes you must `EXPLAIN ANALYZE` expensive queries in production, sadly. The behavior of postgres (even on non-bitwise copies) can be different under load.

The biggest way I have seen this be true is with fragmented tables/indexes - same data but organized more compactly can change planner behavior.

The article actually touches on another way that can be true - if your `work_mem` is drastically different, the planner may prefer not to use an index, for example, and there are similar settings. Even with identical settings PG may choose different plans; it can be a bit temperamental in my experience, so sometimes you have to run the nonperformant query in off-peak production to understand performance.


The article and the comments here don't make it clear why running it in production shouldn't be done. If slow_query is already running in production, why would running EXPLAIN ANALYZE slow_query be bad?

Is the overhead of running EXPLAIN ANALYZE so much worse than running slow_query itself?


No, it's really not, and that's why I say it must sometimes be done. Certainly if you're running tens or hundreds of copies of the query per minute, one more won't hurt (much).

The real problem you run into is when the query in question is doing something pathologically bad - locking a bunch of tables and then grinding away for an hour, which effectively is a denial of service attack.


one thing to consider is the person who needs to run explain analyze may not have any access whatsoever to the production database. Also, there may be no process in place to get someone to run it on prod on their behalf. Finally, if there is a DBA on production they may just say no.


One reason you might not want to run it in production is if it's not a read-only query.


I don't _think_ the query planner takes "current load" into account.

If you have:

- Same resources (CPU/memory)

- Same settings (e.g. work_mem, amongst others)

- Same dataset

- Same/similar statistics (gathered with ANALYZE or the autovacuum)

you should get the same results. If I'm wrong, please somebody correct me!


You might be right, definitely if you run e.g. a fully cloned server on the same disk image and hardware it will behave the same, I'm thinking of times when memory pressure on the database from other queries caused caches to get evicted, things like that. It's not really the planner, it's the measured performance from lock contention and the data in memory etc.


What do you do if you need to check index ideas, or new table design?


Generally indexes are cheap, if built concurrently, so I often build all the possible indexes (for relatively modest sized data types, load, and tables) and look at the planner statistics in production to validate which indexes are useful for the query load. That only works if you have a read-heavy usage pattern, for write-heavy usage patterns it can sometimes be better not to have an index at all (one of the things he alludes to in the article about missing foreign key indexes - indexes come with a write cost, so on a write-only table, foreign key indexes are a bad idea) but as with everything real timing data is the key.

For new table design in Serious Business Databases I do two or three steps: an offline replica with the same data and settings (but not 100% identical), then usually a transaction on a logical replica or main production (Postgres supports DDL transactions, which is a big help, but if you alter a table in a transaction it can block tons of other transactions).

It's important to carefully construct your alter table / create table / create index statements to ensure that everything is concurrent and won't block, of course, and there are plenty of pitfalls there, but definitely doable.
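
For example, the non-blocking variants look roughly like this (names are illustrative):

    -- builds without blocking writes; cannot run inside a transaction block
    CREATE INDEX CONCURRENTLY orders_customer_id_idx ON orders (customer_id);

    -- adding a nullable column with no default is a metadata-only change
    ALTER TABLE orders ADD COLUMN note text;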


Indexes can multiply your storage cost though.


Some additional techniques for triggers I've found helpful:

- Triggers for validation are awesome. Avoid triggers for logic if you can help it -- harder to debug and update than a server sending SQL and easier than you might think to cause performance problems and cascading triggers.

- Use custom error codes in validation triggers and add as much context as possible to the message when raising the exception. Future you will thank you.

    RAISE EXCEPTION USING
      ERRCODE = 'SR010',
      MESSAGE = 'cannot add a draft invoice ' || new.invoice_id || ' to route ' || new.route_id;
- Postgres exceptions abort transactions, so if using explicit transactions, make sure you have a defer Rollback() so you don't return an aborted transaction to the server connection pool.

- For better trigger performance, prefer statement-level triggers [1] or conditional before-row-based triggers.

[1]: https://www.postgresql.org/docs/current/trigger-definition.h...


Here's one that bit us a few years ago:

SEQUENCEs, used to implement SERIAL and BIGSERIAL primary keys, are not transacted. "BEGIN; {insert 1,000,000 rows}; ROLLBACK" always adds 1,000,000 to the table's primary key SEQUENCE, despite the ROLLBACK. Likewise for upsert (via INSERT ON CONFLICT).

The end result: A table's SERIAL (32-bit signed integer) primary key can overflow even when it contains far fewer than 2^31 rows.


this also matters if you do a lot of upserts on a table that are predominantly updates. postgres requests an id from the sequence for each row of the incoming data ahead of time since it doesn't know which rows are updates and which are inserts. the sequence doesn't reset down for the unused ids, so this can eat through it unexpectedly quickly.

if you hit the max integer for the sequence and need space to implement a fundamental fix you can quickly change the sequence to start at -1 and go down. there's no issue with negative ids since they're also integers.
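
i.e. something along these lines (the sequence name will vary; this assumes an integer id column):

    -- allow negative ids, then walk downwards from -1 to buy time
    ALTER SEQUENCE mytable_id_seq INCREMENT BY -1 MINVALUE -2147483648 RESTART WITH -1;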


But that is the point of serials, that they ignore transactions and are monotonically increasing.


Yes. Another reason I blindly use `bigserial` as the PK for everything.



If only every piece of software had a page like that.


There's something ironic about having so many features that you have a dedicated page telling users which ones not to use.


PostgreSQL has been around for almost 27 years (and even longer, if you include the Ingres and post-Ingres eras). And things, well things, they tend to accumulate, to quote Trent Reznor, heh.

On a related note, one could say that the C++ Core Guidelines[1] at least partially represent such a list. 55 matches for “don't use” and 247 for “avoid”, although not all of them are about language features, obviously.

[1]: https://isocpp.github.io/CppCoreGuidelines/CppCoreGuidelines


I think that every platform that's old and has backwards compatibility has to have such a page - because there inevitably will be features for which we (now!) know that there are better ways to achieve the same goal, but they have to stay there for compatibility reasons.


To be fair, some of them exist in PostgreSQL because it tries rather hard to conform to the SQL standard.


I honestly expected the list to be longer.


> 9: Compare indexed columns with IS NOT DISTINCT FROM

Does anybody know why this is the case? Usually, an index is not used if the semantics of the index do not match the semantics of the query, so "using" it cannot ever produce correct results. But the workaround presented seems to have identical semantics to IS DISTINCT FROM and still uses the index, so why isn't IS DISTINCT FROM using the index then?


I asked this specific question before on the PostgreSQL IRC, and was told that it simply wasn't implemented. There's no huge technical blocker to it being done, it's just a bit awkward to make it work the way the code is structured AIUI.


My first guess would be that IS NOT DISTINCT FROM considers nulls to match. Indexes might not contain null values. I don't know if postgres indexes are sparse or can be non-sparse. It would be hard to use a sparse index and find null matches. I would expect this could be optimized away if the column is also not null, but then why would you use the operator in that case, so maybe no one has handled it.


Indices in Postgres do contain NULL values. My guess is that it's possible to make IS DISTINCT FROM an indexable operator, but it wouldn't be straightforward given that NULLs are a bit of a special case for indexing comparisons in the sense that they totally ignore data types. IS DISTINCT FROM would probably have to redefine a bunch of existing opclasses to account for the possibility of NULL.


Hmm - actually read some docs and I think that the implementation of IS NOT DISTINCT FROM was originally implemented one way when nulls were not indexed, but then hasn't been fixed since null indexing was implemented.


> Does anybody know why this is the case?

Most of the time the answer to that is: because nobody cared enough or had time enough to implement it


That might be the case, but my experience with databases (and especially PostgreSQL) is that most of the time I actually misunderstood the exact semantics of either the operation or the index. That would be a good chance to learn something :)


Two years ago I moved to a new company using Postgres as THE relational db; coming from years of SQL Server, I found the tools for troubleshooting query plan issues poor.

Anyway, I don't know if it's the same in Postgres, but in Sql Server an OR condition like that could kill your performance quite easily in a relatively complex query, often I had to refactor my query to a UNION (usually with ALL to avoid a distinct step, but it depends on the specific case).
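
The kind of rewrite meant here, sketched on a made-up query:

    -- the OR can defeat index usage:
    SELECT * FROM orders WHERE customer_id = $1 OR salesperson_id = $2;

    -- rewritten as two index-friendly branches:
    SELECT * FROM orders WHERE customer_id = $1
    UNION ALL
    SELECT * FROM orders WHERE salesperson_id = $2
      AND customer_id IS DISTINCT FROM $1;  -- avoid returning the same row twice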


Similar in postgres, depending on version.

SQL Server is a damn fine DB if you can afford it. Highly recommended.


SQL Server is one of the descendants of Ingres, and PostgreSQL is, as the name might suggest, the successor project for database research after Ingres. They're both great databases really, it's a fun connection in their mutual history.


Wow, I didn't know!


Sybase, actually


The first Sybase grew out of the Ingres project at UCB, so transitively, SQL Server is also a descendant.


bob epstein was vp at britton-lee when he left to form sybase. BLI built a relational database machine (IDM), which was influenced by ingres but not much was inherited, code wise. sybase used a VM/pcode architecture, very much not like ingres.

https://www.google.com/search?q=britton+lee+inc+wikipedia


Neat, thanks for elaborating, I knew BLI grew out of the ingres project but not how closely they were related.


I second that. I found it by far the most pleasant database to work with, including its tooling. Postgres is probably second. Too bad SQL Server is so expensive.


How much does SQL Server cost?


You can google it, but the short answer is Enterprise Edition is on the order of tens of thousands of dollars per CPU.


Maybe they'll change it at some point. I believe if SQL Server was free, it would dominate the business. I last used it 7+ years ago and back then it was (from my subjective experience) better than Postgres is now. And I love Postgres. But SQL Server was a dream to work with. Feature rich with amazing tooling. You can get most features in Postgres with plugins and manual work, but SQL Server does it all for you.


Compared to SQL Server, PG's lack of true clustered indexes and of query plan re-use was very surprising, and also no hints to wrangle a bad query plan!


And SQL Server's lack of backup with SSH piping makes it basically a pain to administer.


I've run across this in Oracle and MySQL as well...and--same solution as you--have split out the or conditions using union all.


Postgres can optimize OR pretty well, but not always.


The main tip I learned from using PostgreSQL (or relational databases in general) is never use an ORM. They cause far more trouble than they are worth and it's far easier to see what is going on when you're writing SQL queries directly.


ORMs can be great in many situations. Any ORM worth its salt has a way to spit out exactly what SQL it is generating. I have worked on apps, though, where we end up with a disproportionate amount of raw SQL, so obviously in those scenarios they become useless. It's almost always been when writing some sort of reporting system in Postgres instead of using a column store, though.


I am strongly biased against ORMs, but I still recognize their use in CRUD operations. In any case where you are doing bulk operations, true upserts, or more complex queries, then I 100% agree.


It's not just the fact that it hides the queries. ORMs are all-around cancerous. I've been on several teams that's tried to use one, and there were regrets every time.


And by regrets I don't mean "this ORM wasn't worth it," I mean "the ORM ruined everything to the point where we have to rewrite it."


This is ancient knowledge and I would have agreed with you 15 years ago; today the only reason not to use an ORM is analytical queries.

Since the Postgres planner doesn't really allow you to tune your query, there aren't many ways to construct your query in a way which would lead to a much worse execution plan. Over the years we have migrated most raw SQL back to using the ORM without taking performance hits; pretty much the only remaining raw queries are CTEs where we force a certain order of query execution.

Usually these ORM problems are caused by schema design anyways. If you need 10+ joins you are going to have a hard time with or without an ORM.


Do CTEs actually force an order of execution? I thought the point was that they are declarative


Before Postgres 12 yes - always, starting with Postgres 12 you have to add "AS MATERIALIZED" to force the order.


Might be a good idea if you are the only one on the team or if everyone is like you.

If not, be prepared to check for SQL injection vulnerabilities in the first n PRs from any new team member.

Also to explain how to do it and your reasoning for it.


Kind of goes without saying that any framework/library you use must not allow you to write SQL injection vulnerabilities, and if it does you should stop using it right now.


SQL injection is always possible with an ORM, since they always allow executing raw SQL as an escape hatch.


True. But unlike with the alternative that many end on, raw sql, it doesn't funnel anyone into a place where writing

  "select * from sometable where id=" + id 
feels like a logical next step unless you actually have studied the subject or read the manual ;-)


I dunno, it sounds like a great way to get rekt to deal with SQL queries directly; there are some ORMs that let you do this when needed but default to the ORM DSL otherwise.

I'm a happy user of Prisma; I value that it supports several databases and not only pg. Using, for example, SQLite for localhost dev has its perks, and it's easy to move later to other stuff if you have already planned for it and are not using types that are incompatible between your two targets.


Anyone not using an ORM will eventually build their own ORM (or at least a query builder). I think the argument is not about "not using an ORM", but more about "not using an ORM made by someone else".


This configuration tuner will help address some of the article's points.

https://pgtune.leopard.in.ua/


If only there were profiles for popular cloud providers.


At my first DevOps job we had defined a function which deleted old partitions from time to time. If you invoked it manually because of some automation failure with the wrong arguments, it had a habit of nuking production data, which, needless to say, happened at least once. Naturally, the function was called footgun.

We can pretend that the proliferation of managed databases, newfangled NoSQL datastores, and other abstractions preclude the need for accumulated empirical observation across a range of tech stacks and time, but sometimes there's really no substitute for greybeard wisdom.


Using `truncate` in combination with `cascade` is another one that I found unexpected:

"Automatically truncate all tables that have foreign-key references to any of the named tables, or to any tables added to the group due to CASCADE."

So it will simply clear out other tables that reference the table being truncated, even if you have `on delete set null` and the foreign key column is null.

https://www.postgresql.org/docs/current/sql-truncate.html


Another footgun is that, while `DELETE FROM table_name` is transactional, TRUNCATE is not transactional. Once you push the truncate button, that data is gone in every transaction everywhere all at once.

You should be very hesitant about using TRUNCATE on a production database unless that table (and all related foreign keyed tables) are truly ephemeral. Even if the data is cleared every night at midnight, for example, is there going to be a 10 second analysis transaction running across the midnight boundary that will fail hard with suddenly missing rows?

Running a full delete on the rows and vacuum will still result in a tiny file on storage and doesn't wake me up in a cold sweat when I have a flashback. Even renaming the table is in many ways safer.


TRUNCATE is absolutely transactional. You can rollback a TRUNCATE statement if you run it in a transaction.

https://dbfiddle.uk/xkgzxMUU

The only difference to other DML statements is, that it will put an exclusive lock on the table. So until the TRUNCATE is committed, no other transaction can read from the table.


> TRUNCATE is not MVCC-safe. After truncation, the table will appear empty to concurrent transactions, if they are using a snapshot taken before the truncation occurred.

Sorry, that's what I mean. It's safe in the sense that you can roll it back, but it's not safe in the sense that other concurrent transactions will see the table as empty if they are long-running.


> The only difference to other DML statements is

Truncate is DDL. It's like dropping and recreating the table in a single operation.


Postgres doesn't automatically create indexes for foreign keys. This may come as a surprise if you're more familiar with other databases, so pay attention to the implications as it can hurt you in a few ways.

I don't know of any database system that does this. In the case for SQL Server, the foreign keys usually get added by the ORM layer (Entity Framework migrations if you're using dotnet).


MySQL InnoDB does I believe...


It will create an index if no existing index meets the requirements of the foreign key.


The recommendation for work_mem doesn't account for all the possible cases. It is already noted elsewhere on this thread [1] that memory use per connection can be higher than work_mem, and this is true even if you don't use stored procedures, because the memory is allocated per node of the query plan. So it can be a multiple of work_mem per connection.

But there's a factor that even worsens this: parallel query, which is typically enabled by default, and will add another multiple to work_mem.

Tuning work_mem is a hard art and requires a delicate balance between optimizing some query's performance (it could avoid spilling to disk or change which plans/indexes are used) vs. the risk of causing db-wide errors like OOMs (very dangerous) or running out of SHM (errors only on the queries being run, but still not desirable). So I normally lean towards being quite conservative (db stability first!): I divide the available memory (server memory - OS memory - shared_buffers and other PG buffers memory) by the number of connections, divide again by the parallelism and by another safety factor, and then leave some additional room.
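
As a rough illustration only, with invented numbers: 64 GB of RAM, 16 GB of shared_buffers, ~2 GB reserved for the OS, 200 connections, parallelism of 2 and an extra safety factor of 2 gives (64 - 16 - 2) GB / (200 * 2 * 2) ≈ 59 MB, which you might then round down to something like:

  -- Hypothetical numbers, not a recommendation; derived from
  -- (RAM - OS - shared_buffers) / (connections * parallelism * safety factor)
  ALTER SYSTEM SET work_mem = '48MB';
  SELECT pg_reload_conf();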

In any case I'd recommend reviewing the detailed information, suggestions and links on the topic on postgresqlCONF [2] (disclaimer: a free project built by my company)

[1]: https://news.ycombinator.com/item?id=35697986

[2]: https://postgresqlco.nf/doc/en/param/work_mem/

(edit: formatting)


The two biggest ones imo were not mentioned:

1. Contrary to popular belief, Postgres isn't fully ACID (specifically the "I") with the default isolation mode. For example, selecting the sum of a column then inserting conditionally on that creates a race condition (a sketch follows after this list). Serializable mode is fully isolated, but it has many caveats and shouldn't be used often, so you should instead become familiar with what's isolated and what isn't. See https://www.postgresql.org/docs/current/transaction-iso.html

2. timestamp (without time zone) is bad; always use timestamptz, no exceptions. Unintuitively, timestamp (without time zone) is the one that makes your DB's time zone affect your selected data. Neither one actually stores a time zone, it's just a difference in output formatting. This is a moot point if your DB's locale is set to UTC, but that's not the default.
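
A minimal sketch of the race in point 1, using a hypothetical ledger table. Under the default READ COMMITTED level, two concurrent sessions can both read the same sum and both insert, overdrawing the balance; under SERIALIZABLE one of them fails with a serialization error and has to be retried:

  -- Hypothetical schema for illustration
  CREATE TABLE ledger (account_id int NOT NULL, amount int NOT NULL);
  INSERT INTO ledger VALUES (1, 100);

  -- Both sessions run this concurrently:
  BEGIN;  -- READ COMMITTED by default
  SELECT sum(amount) FROM ledger WHERE account_id = 1;  -- both see 100
  -- the application decides the balance covers a withdrawal of 100
  INSERT INTO ledger VALUES (1, -100);
  COMMIT;  -- both commits succeed and the account ends up at -100

  -- Running the same statements with BEGIN ISOLATION LEVEL SERIALIZABLE makes
  -- one of the two transactions fail with serialization_failure (SQLSTATE 40001),
  -- which the application must catch and retry.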


> it has many caveats and shouldn't be used often

I’d argue it isn’t used enough given its isolation advantages


Serializable mode is only useful in low throughput cases. You take a big performance hit and can easily have a bunch of queries waiting on each other until the queue fills up. I haven't used it in forever, I just make sure my stuff works with the default isolation mode. You also can't just slap it on like a band-aid. The DB client has to implement retry logic, which most Postgres libs don't seem to have, and the code has to be careful not to cause side effects inside the retriable area.

Spanner and some other DBMSs are fully serializable always. They do a better job of that than Postgres, I guess cause they focus on it, but they're still a lot slower in the end.

What would be cool is some extension to watch your non-serializable transactions for concurrency anomalies like 1% of the time and use that tiny sample to alert you of race conditions. Like a thread sanitizer does for code. Does that exist??


Yes you take the performance hit, but many workloads are low enough throughput for it to be completely fine.

I’m not sure how you could detect such anomalies though without running the same serialization checking logic within postgres


Actually I agree. There are probably lots of low-throughput, high-consequence things like that.

For the sampling, you'd have to run the same logic. The idea is just to do it infrequently as to not totally ruin your overall performance. Idk if this would work.


It’s definitely an interesting idea, I wonder if it’s easy to implement such a thing.


> Setting acquired_at on read guarantees that each event is handled only once. After they've been handled, you can then delete the acquired events in batches too (there are better options than Postgres for permanently storing historical event data of unbounded size).

This bothers me. It's technically true, but ignores a lot of nuance/complexity around real-world event processing needs. This approach means you will never be able to retry event processing in case it fails (or your server is shut down/crashes). So you either have to update the logic to also process events where "acquired_at is older than some timeout", which breaks your "handled only once" guarantee, or you can change to a SELECT FOR UPDATE SKIP LOCKED approach which has its own problems like higher database resource usage (but at least it won't process a slow job twice at the same time).
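
For reference, a minimal sketch of the SKIP LOCKED variant, reusing the event_queue table from the article. Locks are held for the duration of the processing transaction, so a crashed worker's events simply become visible to other workers again when its transaction aborts:

  BEGIN;

  SELECT id
  FROM event_queue
  WHERE acquired_at IS NULL
  ORDER BY occurred_at
  LIMIT 100
  FOR UPDATE SKIP LOCKED;

  -- do the work for the returned ids, then mark them handled (or delete them),
  -- e.g. for a single hypothetical id:
  UPDATE event_queue SET acquired_at = now() WHERE id = 42;

  COMMIT;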


Yep, a few people have mentioned this to me here and on Reddit. I didn't know about the issues with the approach I proposed, so was pleased to read the comments. Will add a correction to the post as soon as I have a sec, thanks.


Thank you! It was a great article, and definitely pointed out a few things I'm doing wrong in my Postgres setups.


Most common one I’ve seen in the last 5-10 years: using a JSON column instead of putting in a lookup table, or instead of properly analyzing and normalizing your data.

That’s a mistake that you’ll be paying for for a while.


How much slower is it in your experience?


If you let the JSON blobs grow past the page size the cost of recovering TOAST tuples can be 10x reading main storage. Also JSON is fundamentally de-normalized so you can incur scan and read costs just from hauling out duplicate values where a nice normalized lookup would be snappy. And finally JSON recovery is going to pull the whole object every time, even though you are probably only interested in one element of the object, so again, higher recovery times compared to an equivalent normalized model.


It’s not even the speed, it’s writing queries. The syntax is clunkier, and since you don’t have a nice clean schema, you end up doing a lot of jumping through hoops to check for the presence of keys/proper data types, etc. Or doing your basic data validation in your app instead of letting the database do its job.
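
A small before/after sketch with a hypothetical customers table and a jsonb attrs column:

  -- JSON blob: every query has to guard against missing keys and cast text
  SELECT *
  FROM customers
  WHERE attrs ? 'signup_date'
    AND (attrs ->> 'signup_date')::date >= date '2023-01-01';

  -- Normalized column: the schema does the validation, the planner has statistics
  SELECT *
  FROM customers
  WHERE signup_date >= date '2023-01-01';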


From my understanding, `work_mem` is the maximum available memory per operation and not just per connection. If you have a stored procedure with loops and/or many nested operations, that can quickly get quite big.

One trick worth noting is that you can override the working memory at the transaction level. If you have a query you know needs more memory (e.g. doing a DISTINCT or plain sorting on a large table), within a transaction you can do:

`set local work_mem = '50MB'`

That will override the setting for operations inside this transaction only.
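
Usage sketch (hypothetical table name):

  BEGIN;
  SET LOCAL work_mem = '256MB';  -- applies only inside this transaction
  SELECT DISTINCT customer_id FROM big_events ORDER BY customer_id;
  COMMIT;  -- work_mem reverts to the session/server setting here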


This is a great tip, I had no idea there was `set local work_mem`. Thanks!


PL/pgSQL needs some styling improvements:

1. Get a better name, PL/pgSQL doesn't exactly roll off the tongue.

2. Get rid of those $$ at the start and end of any PL/pgSQL, it's just verbose and ugly.


> Get rid of those $$ at the start and end of any PL/pgSQL, it's just verbose and ugly.

The Postgres parser itself doesn't know the language rules and thus can not parse the source code. The body of the function/procedure is passed to the language handler as a string without Postgres looking at it. The procedural code (PL/pgSQL, PL/python, PL/perl, plv8, ...) will not be parsed until the function/procedure is actually executed.

Today there might be better ways to implement such a dynamic system for registering new languages, but I guess it would be a huge effort to change this in a way that would understand both the old and the new syntax.

Since Postgres 14, at least "LANGUAGE SQL" functions/procedures can be written without using dollar quoting if the SQL/PSM syntax is used.


Dollar quoting is just the easiest way to write strings containing lots of characters that would otherwise need escaping; it's not required (I think it's the best way to do pl/pgsql though). You can also tag dollar-quoted strings:

https://www.postgresql.org/docs/current/sql-syntax-lexical.h...
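
For example, a tagged dollar quote around a hypothetical trigger function (assuming the target table has an updated_at column):

  CREATE FUNCTION touch_updated_at() RETURNS trigger AS $fn$
  BEGIN
    NEW.updated_at := now();
    RETURN NEW;
  END;
  $fn$ LANGUAGE plpgsql;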


I found one yesterday. Names can only be 63 characters (bytes) long; beyond that they get silently truncated. At least when creating triggers, that is.


My rule of thumb is to use DB functions for writes that require good data consistency, and for reads that are join heavy.

For things like formatting and transforming data, I prefer to have that work done on the client (web browser) if it's for a presentation concern, or middle tier if it's for an integration. My theory there is "don't make the database work hard". Use it for what it's great at: fast reads and data consistency.

DB functions and sprocs are a great way to reduce latency and load, and boost performance if used for the right things.

An easy rule of thumb is that if my function isn't using SQL as the language, I need to really think about whether it belongs in the DB.

That all being said, I do use udfs to construct JSON results for some reports also, when it makes sense based on the params and number of round-trips to the server it would cause in the middle or UI tiers. Even though it violates some of the things above, it's really damn convenient and usually limited to low-traffic queries.


A writing tip: even in lists of "don't"s like this, find a way to write directives/imperatives in a positive sense.

Asking readers to keep mentally flipping the sense of the thing you're telling them to do just adds cognitive load and makes it harder for them to pay attention to what you want them to pay attention to.

Write "do"s, not "don't"s.


Yeah, a lot of people have said this. I was wary of asserting "do's" because I'm not a Postgres expert. It felt more honest to phrase stuff in terms of my own mistakes and "don't do what I did", but of course that's confusing. And then I ruined it by still asserting "do's" in the bodies of some sections.


I think the "don't"s are fine, but each bullet should say "don't" in that case.


That would also work well, yeah


I mostly agree, but there is one slight benefit: whenever I read one of these articles, each topic acts as a little quiz where I get to test whether I think it's a do or a don't before seeing the explanation


Glad I wasn't the only one confused by this.


> With that in place you could acquire events from the queue like so:

  UPDATE event_queue
  SET acquired_at = now()
  WHERE id IN (
    SELECT id
    FROM event_queue
    WHERE acquired_at IS NULL
    ORDER BY occurred_at
    LIMIT 1000 -- Set this limit according to your usage
  ) RETURNING *;

Would you need a FOR UPDATE in that subquery?


No, because they're setting acquired_at which marks it as "handled". You only need FOR UPDATE (SKIP LOCKED) if you want to process the event inside a transaction; but the approach in the article is to just bulk "grab the events and mark them as done, atomically"


Another distinct set of similar tips that I found useful https://medium.com/productboard-engineering/eight-tips-to-re...


One thing that caught me out is that if you are doing an operation on a string, like "lower(email)", then the query planner will not use an index on the email column; instead you need an index on "lower(email)". That's fine if you always access the column the same way, but otherwise it requires multiple indexes to get coverage in all your scenarios.

There are also plenty of other weird planner choices which I can't work out but which were much easier to understand in SQL Server. Sometimes the smallest change will stop an index being used, e.g. using LIMIT on a query can completely bypass an index.
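
For the lower(email) case, the usual fix is an expression index (sketch with a hypothetical users table):

  -- A plain index on email will not be used for: WHERE lower(email) = 'x@example.com'
  CREATE INDEX users_email_idx ON users (email);

  -- An expression index matches that predicate exactly
  CREATE INDEX users_email_lower_idx ON users (lower(email));

The citext extension or a case-insensitive collation are alternatives that avoid needing lower() in every query.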


One bad surprise I've had was people not understanding transactions.

Or, more specifically, transaction isolation levels.


I'm using the latest stable Postgres and I can confirm that point 7 still exists. Here's my case: https://dba.stackexchange.com/questions/323960/query-using-v...


The "solution" for the number 4 is a pretty bad footgun in itself, as it is obviously prone to race conditions.

Didn't read further.


Yep, a few people have made the same comment. I'll make a correction to the post soon, sorry that it upset you. As obvious as it seemed, I had no idea.

Thanks for pointing it out though, feedback from experts is the big payoff to posting from my p.o.v. and now I know a thing which I didn't know before.


Even within a transaction?


What about `pg_notify`? I just want to use it to replace my Kafka server, which is lightly loaded but costs a lot.


If you are using NOTIFY/LISTEN, keep an eye out for long-running queries on your database. If PostgreSQL ends up vacuum freezing your tables while a long-running query is active, it will delete files from the pg_xact folder, and that will bork any LISTEN query until you fully restart the database.


> PostgreSQL will delete files from the pg_xact folder and that will bork out any LISTEN query, until you fully restart the database

that sounds like a bug; are you aware of an issue/ticket tracking that?


While I haven't found an "issue" talking about this (sorry, I don't know how PostgreSQL tracks open bugs), they do know about it, since I've already seen the issue being discussed on PostgreSQL's mailing list.

Here's my issue on StackExchange, for anyone that wants to delve deeper into my issue: https://dba.stackexchange.com/questions/325104/error-could-n...

Here's a thread talking about the issue; while the OP's issue doesn't seem to match exactly what I was experiencing, one of the replies describes my exact issue: https://postgrespro.com/list/thread-id/2546853


Take a look at logical replication. You can even send messages over it.


Any good guides like this for MySQL? Unfortunately having to use it instead of Postgres at my current job.


Check out https://www.percona.com/blog (filter for MySQL)

I have been a heavy user of the Percona distribution of MySQL for many years and highly recommend it. I think that most of the MySQL articles on percona.com are relevant for other MySQL distributions.

As a side note, depending on the software stack, ProxySQL can have a major positive impact on performance and scalability.


They seem pretty generic; the same ideas/concepts can probably be carried over to MySQL.


This is definitely not true. They are not generic at all.

* work_mem is postgres specific

* stored procs and functions don't perform as badly in SQL Server

* Triggers behave differently

* NOTIFY is postgres specific

etc.


Handed down to me by the grey beards! I like that. Postgres default settings are not the best; I remember moving from MySQL to Postgres some time back and discovering this. I always wondered why the defaults don't work well for most deployments. Anyone know why this is so?


Things that Postgresql should address:

- Start compiling queries and functions to native code and hash them just like other DBs

- Improve connection scalability by using tasks instead of per-connection processes (each backend process needs a lot of RAM!)

- Automatically maintained clustered index (like in SQL Server) would be nice to have


These are some good tips, but I've not hit these performance issues. I work on smaller scale applications. One has been in production since 2012 and the database performs very well. I guess I need to get out and work for bigger companies to experience this.


One doesn’t need to work for bigger companies to have lots of data these days. Has been true for many years already.

In our Postgres DB we have more than 4 TB of data, which I don’t think is too big. We didn’t need any special sauce, no vendors chiming in, only a better understanding of the technology than average Joe.

On the big company part - I have yet to employ more than five developers.


Yes, there's nothing quite like the query planner deciding to try something new and suddenly 100 application servers are DDOSing your primary :)


This may be irrational but it's something that worries me about using postgres in production. Sure as a developer I love all the features, but the fact that the query planner can suddenly decide on a radically different (and incredibly inefficient) query plan makes it hard to trust. In some ways, a dumber query planner that needs coercing into the right query plan is more reassuring, in that you know it'll keep using that query plan unless explicitly told otherwise.


But that dumber query planner will bite you when your data changes. If the relative sizes of multiple tables change, the query might have to change to stay efficient. The Postgres query planner handles that just fine. I've used Postgres for years on multi-TB databases and I've experienced a problem with Postgres suddenly changing plans.


Is there a “never” missing from your last sentence?


Yes :( And it is too late for editing it.


haha, this example is actually from Mongo, but it's pretty rare.


This list conceptually applies to just about any RDBMS. NOTIFY is incredibly powerful and when used prudently, can be a fantastic solution to some business problems.


I’ve even found “Is not null” queries on a Boolean column to be slow so I made a generated column that’s Boolean.


So you added a generated boolean column that uses the value from a another boolean column? Sounds like a strange concept.

Did you try a filtered index using WHERE ... IS NOT NULL instead?

Or maybe an index on the expression coalesce(the_column, false) and then use that expression in the WHERE clause?
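
Sketches of both suggestions, keeping the_column and using a hypothetical table name:

  -- Partial index: only the non-null rows are stored in the index
  CREATE INDEX t_col_not_null_idx ON the_table (the_column)
    WHERE the_column IS NOT NULL;

  -- Expression index: query with the same coalesce() expression so the planner can use it
  CREATE INDEX t_col_coalesced_idx ON the_table ((coalesce(the_column, false)));
  SELECT count(*) FROM the_table WHERE coalesce(the_column, false);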


I'm surprised to see a discussion of queueing in Postgres without mentioning SKIP LOCKED.


Anyone using StackGres/PGO can confirm if they have better defaults?


[flagged]


The first suggestion is not to gatecrash a random thread in a random forum with an off-topic question. You could have asked it as a HN question and got better results and not distracted from the topic being discussed.


Hypercode.com is worth a look, I think.


I could never warm up to PostgreSQL. Guess I'll always stick with MySQL.


Can I ask why? I generally only see the “I switched from MySQL to PostgreSQL and loving it” comments in my info-bubble, so it'd be interesting to know what people who prefer to use MySQL feel is still lacking in PostgreSQL.


From an admin perspective: Updates are a hot, complex mess which means I put them off until it's no longer feasible to do so (=because some software requires a newer version).

MySQL is easy: apt-get update / docker stop && docker rm && docker run / kubectl apply, depending on your stack. That is literally all you need to do.

PostgreSQL in contrast is hell. You have to shut down the existing database server, install the new one in parallel, manually do the upgrade (that involves copying the whole dataset), remove the old stuff, and then start back up again. Or you have to export the whole database into an SQL dump and import it on a fresh instance.

In any case, way more involved, way more likely to go bonkers, and way, WAY more downtime needed.


You do not need to copy the DB, see pg_upgrade


That depends on the circumstances. To quote the manpage:

> If you use link mode, the upgrade will be much faster (no file copying) and use less disk space, but you will not be able to access your old cluster once you start the new cluster after the upgrade. Link mode also requires that the old and new cluster data directories be in the same file system. (Tablespaces and pg_wal can be on different file systems.) Clone mode provides the same speed and disk space advantages but does not cause the old cluster to be unusable once the new cluster is started. Clone mode also requires that the old and new data directories be in the same file system. This mode is only available on certain operating systems and file systems.


I'm not that familiar with MySQL then: if you upgrade your MySQL database, can you still roll back to the old version? If you can't, then this is the same restriction as when you use the `pg_upgrade` tool in link mode.

(Note that one approach if concerned about the potential for downtime if it goes wrong might be to `pg_upgrade` your secondary in a hot-standby replication setup, then failover and `pg_upgrade` the former primary. In practice, `pg_upgrade` runs very quickly — seconds even on multi-terabyte databases — afaik it doesn't touch the actual data but just the metadata, but read the man pages for more intricate details on that :)).


+1 for administrative pains with postgres. My favorite example is that if you want to run a multi-node highly available postgres cluster (read: with automatic failover) you're going to have to use 3rd party software, whereas with mysql it's part of the core (group replication). I'm also not a fan of the difficult-to-remember backslash codes in the psql cli used to inspect databases (like \d+, and others) whereas mysql just uses queries ("describe table").


I'm a bit behind on modern PostgreSQL so I might be wrong, but I believe MySQL has better support for what they call "online DDL" (modifying tables without blocking simultaneous queries).

Last time I checked, MySQL supported it in more cases. MySQL can also be explicit about it via "lock assertions":

  ALTER TABLE ..., LOCK=NONE;
will give an error if the requested operation can't be performed while still allowing concurrent reads and writes to the table (if you're fine with preventing writes you can use LOCK=SHARED).

The LOCK clause isn't just an assertion and can actually affect how MySQL performs the operation, but I tend to think of it as asserting "this won't cause downtime by locking this multi billion-row table while it spends an hour rewriting it".


Not the parent, but I have a friend who knows both Postgres and MySQL and prefers MySQL because "it is simpler and hence there are fewer ways things can get broken". (Not that I necessarily agree, but he seems to have a point.)


How is MySQL simpler? I'm a big fan of simplicity even at the cost of features, but they seem like the same level of complexity.


Personally as someone who has only ever used SQL Server, once I accepted that Postgres is different in certain ways, it has been great. Stable, predictable and I have had no performance surprises.

For me, the big win is having a cluster without having to pay for SQL Server Enterprise edition, which leaves money to invest back into the community by having a Postgres consultant help configure it for me.


This shouldn't be downvoted. If someone can give you a particular reason to use Postgres, maybe this is the place. Personally I enjoy Postgres the most after having used MySQL before, but it's only for little reasons like the EXPLAIN output being nicer.



