
> Setting acquired_at on read guarantees that each event is handled only once. After they've been handled, you can then delete the acquired events in batches too (there are better options than Postgres for permanently storing historical event data of unbounded size).

This bothers me. It's technically true, but it ignores a lot of the nuance and complexity of real-world event processing. With this approach you can never retry event processing when it fails (or when your server is shut down or crashes). So you either have to update the logic to also pick up events whose acquired_at is older than some timeout, which breaks your "handled only once" guarantee, or you can switch to a SELECT ... FOR UPDATE SKIP LOCKED approach, which has its own problems, like higher database resource usage, but at least it won't process a slow job twice at the same time.
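
To make the trade-off concrete, here's a rough sketch of both options. The events table and its columns (id, payload, acquired_at) are assumptions based on the article's example, not code from the article:

    -- Option 1: timeout-based requeue. Events stuck in "acquired" for too
    -- long become claimable again, so failed work gets retried, but a
    -- merely slow worker can have its event handled twice (this is
    -- at-least-once delivery, not exactly-once).
    UPDATE events e
    SET acquired_at = now()
    FROM (
        SELECT id
        FROM events
        WHERE acquired_at IS NULL
           OR acquired_at < now() - interval '5 minutes'
        ORDER BY id
        LIMIT 100
        FOR UPDATE SKIP LOCKED  -- needed here too, or concurrent workers race for the same rows
    ) batch
    WHERE e.id = batch.id
    RETURNING e.id, e.payload;

    -- Option 2: claim under a transaction-lifetime lock. A crash rolls the
    -- claim back automatically, and no other worker can grab the row while
    -- it's being processed. The cost is a connection and row lock held for
    -- the whole processing time.
    BEGIN;
    SELECT id, payload
    FROM events
    ORDER BY id
    LIMIT 1
    FOR UPDATE SKIP LOCKED;
    -- ... handle the event in application code, then:
    DELETE FROM events WHERE id = $1;  -- $1 bound to the claimed id
    COMMIT;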




Yep, a few people have mentioned this to me here and on Reddit. I didn't know about the issues with the approach I proposed, so I was pleased to read the comments. I'll add a correction to the post as soon as I have a sec, thanks.


Thank you! It was a great article, and definitely pointed out a few things I'm doing wrong in my Postgres setups.



