We've been using WAL-E for years and this looks like a big improvement. The steady, high throughput is a big deal – our prod base backups take 36 hours to restore, so if the recovery speed improvements are as advertised, that's a big win. In the kind of situation in which we'd be using these, the difference between 9 hours and 36 hours is major.
Also, the quality of life improvements are great. Despite deploying WAL-E for years, we _still_ have problems with python, pip, dependencies, etc., so the switch to go is a welcome one. The backup_label issue has bitten us a half dozen times, and every time it's very scary for whoever is on-call. (The right thing to do is to rm a file in the database's main folder, so it's appropriately terrifying.) So switching to the new non-exclusive backups will also be great.
We're on 9.5 at the moment but will be upgrading to 10 after it comes out. Looking forward to testing this out. Awesome work!
Is it possible to run a continuous restore in parallel with normal operation so that there's a warm standby (almost) ready to go? Especially in another data center?
We use a combination of streaming replication and wal-e backups. A separate machine performs multiple restores per hour and verifies restores work ok and that the data is recent.
Thanks! Does WAL-G provide some kind of "continuous backup" where changes committed to the database are continuously streamed to the backup storage? Or does it work "step by step", for example by backing up every 5 minutes or every 10 MB?
Both back up PG's WAL files (Write Ahead Log) and allow restoring your database state as it was at a specific time or after a specific transaction committed. This is known as point-in-time recovery (PITR) [0].
Users and admins make mistakes, and accidentally delete or overwrite data. With PITR you can restore in a new environment, just before the mistake occurred and recover the data from there.
What I meant is that the archive_command is run only when a WAL segment is completed or when archive_timeout is reached. In the meantime, nothing is backed up. On a low traffic database, this can be a problem. I'm wondering if there is a way to continuously stream the WAL to an object storage like S3, without waiting to have a complete segment.
You can open a multipart transfer and close it out when you're ready, which gets you very close to streaming; for this use case it's perhaps close enough to try with wal-g, if it otherwise supports it.
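A rough sketch of what I mean, using aws-sdk-go directly (bucket and key names here are made up, and error handling is abbreviated):

```go
// Sketch only: feed WAL bytes into an S3 multipart upload as they arrive,
// completing the upload once the segment (or a timeout) finishes.
// Bucket/key names are placeholders.
package main

import (
	"bytes"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
)

func main() {
	svc := s3.New(session.Must(session.NewSession()))
	bucket, key := aws.String("my-wal-bucket"), aws.String("wal_005/segment-in-progress")

	mp, err := svc.CreateMultipartUpload(&s3.CreateMultipartUploadInput{Bucket: bucket, Key: key})
	if err != nil {
		log.Fatal(err)
	}

	var done []*s3.CompletedPart
	for i, chunk := range [][]byte{walChunk(1), walChunk(2)} { // pretend these arrive over time
		n := int64(i + 1)
		// Note: S3 requires every part except the last to be >= 5 MiB,
		// so this is "close to" streaming rather than true streaming.
		part, err := svc.UploadPart(&s3.UploadPartInput{
			Bucket: bucket, Key: key, UploadId: mp.UploadId,
			PartNumber: aws.Int64(n), Body: bytes.NewReader(chunk),
		})
		if err != nil {
			log.Fatal(err)
		}
		done = append(done, &s3.CompletedPart{ETag: part.ETag, PartNumber: aws.Int64(n)})
	}

	_, err = svc.CompleteMultipartUpload(&s3.CompleteMultipartUploadInput{
		Bucket: bucket, Key: key, UploadId: mp.UploadId,
		MultipartUpload: &s3.CompletedMultipartUpload{Parts: done},
	})
	if err != nil {
		log.Fatal(err)
	}
}

// walChunk stands in for however you'd accumulate WAL bytes.
func walChunk(n int) []byte { return bytes.Repeat([]byte{byte(n)}, 5*1024*1024) }
```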
That's the use case for archive_timeout. I set it to 60 seconds, so at most I'll have lost 60s plus the time to transfer the file to S3, which shouldn't be more than a couple of seconds.
According to the PostgreSQL documentation, "archived files that are archived early due to a forced switch are still the same length as completely full files".
I'm worried about using a lot of storage for WAL segments that are mostly empty:
16 MB per segment x 60 segments/hour x 24 hours x 7 days ≈ 161 GB/week
Seriously awesome work on this! I was expecting some solid improvement when I heard you were rewriting this in Go, but this is beyond what I could have expected. 7x improvement on high end instance types!
Also, what an impressive project to have on the resume as a college intern. I don't think many interns get to tackle something so meaningful.
Thanks for making this! To someone who's unfamiliar with Postgres tooling, what's the difference between WAL-G and Barman? What're the advantages of using one over the other?
In summary, WAL-E is a simpler program all around that focuses on cloud storage; Barman does more around backup inventories, file-based backups, and configuring Postgres, though there are integration downsides to that broader scope. WAL-E also happens to predate Barman.
WAL-G (and WAL-E) are expected to run next to the main database, while Barman is meant to run on a separate machine. Barman can also back up many databases. It is essentially the difference between a central backup service and local backups.
We've currently been testing it at Citus, but have not flipped it to be live for our disaster recovery yet.
We're going to start rolling it out for forks/point-in-time recoveries first, which present less risk. Later we'll explore either parallel restores from WAL-E and WAL-G, or possibly just flip the switch based on the results.
On restoration there's really no risk to data. Further, we page our on-call for any issues that happen, such as WAL not progressing or servers not coming online out of restore.
WAL-G is not yet production ready, but it has been used in a staging environment for the past few weeks without any issues. Once fdr adds parallel WAL support, he plans to take it into production.
Neat. My concern out of the gate is what the perf hit would be.
I assume I am switching from WAL-E to WAL-G for more perf. But WAL-E speaks GCS. If WAL-G needs an extra hop to do so, it may lose some of the point of it...
Yeah, no idea personally. Haven't used the gateway functionality in Minio at all.
That being said, the Minio team seem pretty good at writing performance-optimised code. Frank Wessels (on the Minio team) has been writing articles about the Go assembler and other Go optimisation topics recently, e.g.:
There was some mention of resumable uploads in the blog post, which sadly each provider handles differently (that is, the GCS layer that supports the S3 API does not accept resumable uploads).
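For what it's worth, talking to a Minio gateway from Go is just ordinary S3-style calls against a different endpoint. A rough minio-go sketch (endpoint, bucket, and object names are invented, and exact signatures differ a bit between minio-go versions):

```go
// Rough sketch: upload an archived WAL segment through a Minio gateway
// (or any S3-compatible endpoint) using minio-go. Names are placeholders.
package main

import (
	"context"
	"log"
	"os"

	"github.com/minio/minio-go/v7"
	"github.com/minio/minio-go/v7/pkg/credentials"
)

func main() {
	client, err := minio.New("localhost:9000", &minio.Options{
		Creds:  credentials.NewStaticV4(os.Getenv("AWS_ACCESS_KEY_ID"), os.Getenv("AWS_SECRET_ACCESS_KEY"), ""),
		Secure: false, // gateway running locally without TLS in this sketch
	})
	if err != nil {
		log.Fatal(err)
	}

	// Upload a compressed segment file; the gateway translates this to the backing store.
	info, err := client.FPutObject(context.Background(),
		"walg-wal", "wal_005/0000000100000000000000A1.lzo",
		"/tmp/0000000100000000000000A1.lzo", minio.PutObjectOptions{})
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("uploaded %d bytes", info.Size)
}
```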
Disclosure: I work on Google Cloud (so I'd love to see this tool point at GCS).
WAL-G has a number of unit tests, and has been tested manually in a staging environment for a number of weeks without issues. We are looking to implement more integration tests in the future.
I've used WAL-E (the predecessor of this) for backing up Postgres's DB for years and it's been a very pleasant experience. From what I've read so far this looks like it's superior in every way. Lower resource usage, faster operation, and the switch to Go for WAL-G (v.s. Python for WAL-E) means no more mucking with Python versions either.
Great job to everybody that's working on this. I'm looking forward to trying it out.
Wow, great work! I am definitely going to test this out over the weekend. However, AFAICT the `aws.Config` approach breaks certain backwards compatibility w/how wal-e handles credentials. Also wal-g does not currently support encryption. FWIW, I would love to simply drop in wal-g without having to make any configuration changes.
Do you want GPG based or some other client side encryption, or S3's encryption support? The latter could probably just be turned on. The former is a feature requiring code.
The RSA keys (or a path to them) would be passed as environment variables. It would be a little easier to set up than gpg (especially for automatic backup restoration).
Please consider libsodium or a similar "modern" crypto library instead. There's a lot of ugly 90s crypto in GPG and the API is terrible.
Libsodium makes it hard for non-crypto devs to shoot themselves in the foot, and is much less code to write.
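As a sketch of how little code that tends to be, here's roughly what encrypting/decrypting a segment with NaCl secretbox (golang.org/x/crypto, the Go relative of libsodium) looks like; the key handling here is deliberately hand-waved and just for illustration:

```go
// Sketch: symmetric, authenticated encryption of a WAL segment with NaCl secretbox.
// Key management is simplified (a real tool would derive/store/rotate keys properly).
package main

import (
	"crypto/rand"
	"log"

	"golang.org/x/crypto/nacl/secretbox"
)

func encryptSegment(key *[32]byte, segment []byte) []byte {
	var nonce [24]byte
	if _, err := rand.Read(nonce[:]); err != nil {
		log.Fatal(err)
	}
	// Prepend the nonce so decryption can recover it; Seal also authenticates.
	return secretbox.Seal(nonce[:], segment, &nonce, key)
}

func decryptSegment(key *[32]byte, box []byte) []byte {
	var nonce [24]byte
	copy(nonce[:], box[:24])
	plain, ok := secretbox.Open(nil, box[24:], &nonce, key)
	if !ok {
		log.Fatal("decryption/authentication failed")
	}
	return plain
}

func main() {
	var key [32]byte
	if _, err := rand.Read(key[:]); err != nil {
		log.Fatal(err)
	}
	boxed := encryptSegment(&key, []byte("pretend this is a WAL segment"))
	log.Printf("round-trip: %s", decryptSegment(&key, boxed))
}
```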
As I'm probably the steward on this going forward: unknown. I don't intend to implement them unless I need them. I would, however, take a patch with good coverage that implemented them.
Would you be willing to own the abstraction for multiple backends? The code is currently only a bit hardcoded to S3/AWS, but I assume most of the "work" will be discussing how to abstract different transports, exponential backoff, resumable uploads, and so on.
Fwiw, the GCS client for go (import "cloud.google.com/go/storage") is very straightforward. Though as others have pointed out, it might be worthwhile to just try to use minio-go if you want to gain Ceph as well.
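For reference, writing an object with it is roughly this (bucket and object names are invented; credentials come from the usual application-default lookup):

```go
// Rough sketch: upload a file to GCS with cloud.google.com/go/storage.
// Bucket and object names are placeholders.
package main

import (
	"context"
	"io"
	"log"
	"os"

	"cloud.google.com/go/storage"
)

func main() {
	ctx := context.Background()
	client, err := storage.NewClient(ctx) // uses application-default credentials
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	f, err := os.Open("/tmp/0000000100000000000000A1.lzo")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	w := client.Bucket("walg-wal").Object("wal_005/0000000100000000000000A1.lzo").NewWriter(ctx)
	if _, err := io.Copy(w, f); err != nil {
		log.Fatal(err)
	}
	if err := w.Close(); err != nil { // the object is only committed on Close
		log.Fatal(err)
	}
}
```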
Disclosure: I work on Google Cloud (and if the way is paved, we will contribute here; seems like a great project)
I don't know if you're being sarcastic, because the point of the article is that the Unix philosophy was a performance bottleneck and they replaced all the Unixy stuff with Go libraries.