I might have lost count, but I think they've reimplemented the file storage twice for local Docker installs and twice for Kubernetes... it's at the point now where, if you trust your cloud provider's network storage performance guarantees, you can trust Postgres on it. Kubernetes and Docker don't change how the bits are saved, but if you insist on data locality to a single node, you lose the resilience described here of being able to move to another node.

Here’s a different point to think about: is your use of Postgres resilient to network failures, communication errors or one-off issues? Sometimes you have to design for this at the application layer and assume things will go wrong some of the time...
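
To make that concrete, here's a minimal sketch of what I mean by handling it at the application layer, assuming psycopg2; the DSN, query, and backoff numbers are just placeholders:

    import time
    import psycopg2

    DSN = "host=db.example.com dbname=app user=app"  # placeholder connection string

    def run_with_retry(sql, params=None, attempts=5):
        # Retry a query across transient network/communication failures.
        for attempt in range(1, attempts + 1):
            conn = None
            try:
                conn = psycopg2.connect(DSN, connect_timeout=5)
                with conn, conn.cursor() as cur:  # commits or rolls back on exit
                    cur.execute(sql, params)
                    return cur.fetchall()
            except psycopg2.OperationalError:
                # Network blip, failover in progress, node restart, etc.
                if attempt == attempts:
                    raise
                time.sleep(min(2 ** attempt, 30))  # crude exponential backoff
            finally:
                if conn is not None:
                    conn.close()

    # usage: rows = run_with_retry("SELECT now()")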

As with anything, it will vary with your particular workload... but if I knew my very-stable-yet-cloud-hosted copy of Postgres wasn't configured for high availability, well, you might get local-disk performance and no replication lag, but you also carry a lot of downtime and data-loss risk if that one node goes down or gets corrupted. The advantage of cloud network storage is that you don't have to replay as many WAL logs: you just reattach the disk the instance was using before it went down, start Postgres as if it had merely crashed, and keep going... even regular disks fail, after all...
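
As a rough illustration of what "configured for high availability" means here, you can ask the primary whether any streaming standby is attached at all and how far behind it is; the host and role below are hypothetical:

    import psycopg2

    # Placeholder primary host/credentials.
    conn = psycopg2.connect("host=pg-primary dbname=postgres user=monitor")
    with conn.cursor() as cur:
        # One row per connected standby; zero rows means no streaming replica,
        # i.e. losing this node means downtime until it (or its disk) comes back.
        cur.execute("""
            SELECT application_name, state,
                   pg_wal_lsn_diff(pg_current_wal_lsn(), replay_lsn) AS replay_lag_bytes
            FROM pg_stat_replication
        """)
        for name, state, lag_bytes in cur.fetchall():
            print(name, state, lag_bytes)
    conn.close()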

That's great to know, thank you. The stuff in your 3rd paragraph is something I have always tried to do, but it has largely depended on having reliable single-node instances. I currently handle single-disk failures with master-slave configs locally, and that has worked at my volumes to date, but I'm trying to learn how to grow without getting too complicated or even more brittle.
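
For what it's worth, here's a minimal sketch of the manual failover step in that kind of master-slave setup, assuming PostgreSQL 12+ (for pg_promote) and a hypothetical replica host; a real failover also needs fencing of the old master and repointing clients:

    import psycopg2

    # Placeholder replica host; pg_promote() needs PostgreSQL 12 or later.
    conn = psycopg2.connect("host=pg-replica dbname=postgres user=admin")
    conn.autocommit = True
    with conn.cursor() as cur:
        cur.execute("SELECT pg_is_in_recovery()")
        if cur.fetchone()[0]:
            # Promote the standby once the old master is confirmed dead/fenced.
            cur.execute("SELECT pg_promote()")
            print("promoted:", cur.fetchone()[0])
    conn.close()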
