Ask HN: What do you use to backup your VMs?
74 points by bykof 36 days ago | 57 comments
How do you back up VMs with Postgres and MariaDB instances and local files installed?



ZFS offsite backup configured using Syncoid [1] in pull mode. My backup server is started weekly by a pre-configured Shelly Plug. After boot, it connects to the main server, pulls the latest ZFS snapshots, does a ZFS scrub, sends me a confirmation email, and shuts down. The Shelly Plug S is configured in a Home Assistant automation to shut off power completely if the server draws under 10 W for 10 minutes.

The advantage of this setup is that the backup server does not need to know the encryption keys for ZFS. It can pull snapshots without knowing the keys (i.e. zero trust). The main server also cannot reach the backup server; only the other way around works (configured in OPNsense, which connects my offsite backup via IPsec). The backup server is locked down in its own subnet and can only be reached from a few selected points. This is possible because no interaction is needed, thanks to the Shelly Plug S self-starting automation.

ZFS also doesn't care about the guest filesystems: it can incrementally pull the ext4/XFS filesystems of VMs without touching individual files or needing per-file hashes (as rsync does).
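For reference, a pull-mode Syncoid invocation looks roughly like this (a minimal sketch: the host, pool, and dataset names are made up, and --sendoptions=w assumes raw sends so the encrypted datasets stay encrypted in transit):

  syncoid --recursive --no-sync-snap --sendoptions=w backupuser@mainserver:tank/vms backuppool/vms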

[1]: https://github.com/jimsalterjrs/sanoid/blob/master/syncoid


For me, it's case-by-case. I don't back up the VMs directly, just the data of the stateful applications running on the VMs (or bare metal servers, I do identical stuff for them).

For postgres, I used to just have a systemd timer that would `pg_dumpall` and throw it in s3.
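Roughly like this (a minimal sketch; the bucket name and paths are hypothetical, and it assumes the AWS CLI is configured):

  # run from a systemd timer or cron
  pg_dumpall | gzip > /var/backups/pg_dumpall-$(date +%F).sql.gz
  aws s3 cp /var/backups/pg_dumpall-$(date +%F).sql.gz s3://my-db-backups/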

Now I use https://github.com/wal-g/wal-g to back up my PostgreSQL databases.
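The basic wal-g flow is roughly this (a sketch; the S3 prefix and data directory are placeholders):

  # postgresql.conf: archive_command = 'wal-g wal-push %p'
  export WALG_S3_PREFIX=s3://my-db-backups/wal-g
  wal-g backup-push /var/lib/postgresql/data   # base backup, typically run from a timer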

For other local files, I use borg backup for personal files and services I just run for myself, and I use restic to backup server files to s3.

The operating system's configuration is all stored in git via the magic of NixOS, so I don't have to worry about files in /etc; they're all 100% reproducible from my NixOS configuration.


shameless plug

wrote an elaborate doc on using wal-g on NixOS, might be useful if someone finds themselves in the same boat. has some catches.

https://github.com/geekodour/nix-postgres-docker


Fwiw wal-e has been deprecated for a while - I’m a big fan of pgbackrest


Edited to wal-g, since that is what I'm using now; I forgot I had to switch over, but indeed I did.

Did you jump straight to pgbackrest or compare it to wal-g? I didn't compare em at all, so I have no clue if I'm missing out on something nice or important there


I wonder if we should ditch pgbackrest now that PostgreSQL 17 has native incremental backups.


I don't think it will be dead within the next few years; there are a lot of instances that can't easily be moved to 17. But yeah, I also read about that new feature a few days ago.


Yeah, I can also recommend it. I even use it to push incremental backups over SSH (the NAS is too old to install MinIO); other servers use S3.

I have it included in the Timescale image, so a full restore is: spin up a temporary image, restore, restart compose. Same for incremental restores. File-based restore is pretty fast.

For me it's a set-up-and-forget thing.

For /etc I sometimes use gitkeep (on machines that are also configured by the devs themselves), to have an audit-like trail, with a central Gitea repo in the project cluster to see changes.

For files I use restic over SSH (S3 is also possible); I like the deduplication it does.
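A restic-over-SSH setup looks roughly like this (a minimal sketch; host names and paths are hypothetical):

  restic -r sftp:backup@nas.local:/srv/restic-repo init
  restic -r sftp:backup@nas.local:/srv/restic-repo backup /var/www /etc
  restic -r sftp:backup@nas.local:/srv/restic-repo forget --keep-daily 7 --keep-weekly 4 --prune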

Otherwise I use Proxmox, taking snapshots to HDDs at night; during the day I slowly move those to an external NAS again, keeping the last few snapshots on SSD/HDD to be able to recover fast. And if the machine dies, I have the external backups.

I also do the same on Hetzner with my bare-metal servers, storing an encrypted backup (restic, deduplicated and encrypted) on external storage. The server is also Proxmox with encrypted disks, so to boot it up you need to send the encryption keys over SSH.

Another approach for configs is having a clean deployment/infra config (Ansible, pyinfra, Terraform, ...) and then only restoring data, so everything is reproducible and consistent.


environment: KVM VMs running on physical hardware managed by us.

we have a belt & suspenders approach:

* backups of selected files / database dumps [ via dedicated tools like mysqldump or pg_dumpall ] from within VMs

* backups of whole VMs

backups of whole VMs are done by creating snapshot files via virsh snapshot-create-as, then rsync, followed by virsh blockcommit. this provides crash-consistent images of the whole virtual disks. we zero-fill virtual disks before each backup.
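roughly, per VM and disk (a sketch; the VM name, disk target, and paths are hypothetical):

  virsh snapshot-create-as vm1 backup-snap --disk-only --atomic --no-metadata
  rsync -a --sparse /var/lib/libvirt/images/vm1.qcow2 /backup/vm1.qcow2
  virsh blockcommit vm1 vda --active --pivot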

all types of backups later go to a deduplicated, compressed borg backup[1] repository [ kopia[2] would be fine as well ].

this approach is wasteful in terms of bandwidth and backup size, but gives us peace of mind - even if we forget to take a file-level backup of some folder, we'll have it in the VM-level backups.

[1]: https://www.borgbackup.org/

[2]: https://kopia.io/


I have a VM running Windows Server that hosts MS SQL Server and websites under IIS. I use Robocopy and PowerShell to copy daily and weekly file backups, as well as SQL backup scripts for the databases. The backups are then stored on a separate drive, which has a nightly copy to remote storage and deletes files older than 30 days. Occasionally, I manually copy the backups from remote storage to a local computer.

It only takes a minute to restore if needed, but the problem is if the OS goes down (which has happened once in the last 10 years due to a disk failure), it takes several hours to reinstall everything from scratch.


I may export a snapshot of the VM as it is running in production (depending on the virtualization, and on how unsure I am about manual changes or tweaks done to it in the past). I can do that once, or after big changes (configuration changes, distribution/DB version updates, things like that).

For databases, they all have programs to export their data, like pg_dump or mysqldump, with compression. It's good to keep some historic backups (not just the last one) so you can go back to a point before some not-yet-detected harmful change. How many? It depends on what you use them for; some may be required by law for years, or be totally meaningless to you after a few days. Backing up the transactions that happened since the last backup, to allow point-in-time recovery, or even having a running replica with that information, is good too.
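For example (a minimal sketch; database names and paths are hypothetical):

  pg_dump -Fc mydb > /var/backups/mydb-$(date +%F).dump
  mysqldump --single-transaction mydb | gzip > /var/backups/mydb-$(date +%F).sql.gz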

For files and directories, Borg backup is great. You can use a remote server over SSH as the repository, and it gives you very efficient historic backups too.


I haven't administered such a system, but I have good experience with Veeam. Deleted the wrong SQL table by accident? Absolutely no problem to restore it without even needing to reboot any server or restart services. There are countless options for import/export that Veeam provides.

We also use Nimble for snapshots every 15 minutes on some servers, although not our databases afaik. Pretty effective if a worst-case ransomware attack was successful on a user with too many access rights.

These solutions aren't cheap though.

Privately I just use multiple hard drives and for database servers I would use the internal backup functionalities, which are pretty good on most databases.


Veeam is a real beast when it comes to Microsoft tech... they can even recover mails back to a user's inbox in Exchange, directly on the server. But very expensive, yes.


I back up my home server VMs, running on Proxmox, to working storage and separate storage on the same system.

Working storage is a pair of n TB SSD in RAID0 where the VMs live. Nightly backups are stored there.

A single n*2 TB disk for backups and long term media.

When I lifecycle disks, I move them to my desktop which just has a cronjob to scp from server storage.


Databases have their normal backup tools, which are fine for the data. Important intense-load databases can have one-way replication set up to a separate box that can do proper point-in-time backups without slowing production. Raw (uncompressed) backup data would be efficiently turned into incremental / full backup mix with something like Restic or Kopia, and shipped to something like Backblaze or S3 Glacier.
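For example, restic pointed at Backblaze B2 (a sketch; bucket name, credentials, and paths are placeholders):

  export B2_ACCOUNT_ID=<key-id>
  export B2_ACCOUNT_KEY=<application-key>
  restic -r b2:my-backups:db init
  restic -r b2:my-backups:db backup /var/backups/db-dumps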

Configuration and other stuff lives outside the VMs, in some git repo, in the form of terraform / saltstack / shell scripts / what have you, that can rebuild the VM from scratch, and does it regularly. Git has very good built-in "backup tools".

What else needs backing up in a DB server?


ZFS snapshots and replication, separate volumes/disks for the data as opposed to the OS.


I really liked BackupPC when it was being worked on: https://backuppc.github.io/backuppc/ and https://github.com/backuppc/backuppc

Sadly, despite being pretty great (doing deduplicated and compressed incremental backups across a bunch of nodes, allowing browsing the backups and restoring them also being easy), the project is more or less abandoned: https://github.com/backuppc/backuppc/issues/518

Nowadays it's mostly just rsync and some compression for archives, sometimes just putting them in an S3 compatible store, or using whatever the VPS provider gives me. I'm not really happy with either approach, they feel insufficient.

It's not like there aren't other options out there: UrBackup, Bacula, Duplicati, BorgBackup, restic, Kopia and others, it's just that I haven't had the time or energy to truly dive into them and compare them all, to make the decision of what I should move to.


I have 2 VMs with Debian on Contabo, in 2 datacenters in 2 different physical locations, connected via a WireGuard VPN. The two servers are kept in sync for high availability: the MariaDB servers are replicated in a master-master configuration, and Unison keeps the files in sync. In case one server is not available, ClouDNS automatically switches the DNS records to the other server.

I created a mesh VPN with WireGuard between the 2 VMs and my office and home servers (Minisforum PCs with Debian); this is important for my backups and avoids opening external ports. (The VPNs at home and at the office are provided by 2 Fritz!Box routers.)

I just finished re-engineering my backups yesterday, as follows:

- Contabo provides 2 snapshots per VM (on my current plan); I manually create a snapshot every week (in the future I'd like to make this automatic)

- I use both restic and borg for redundancy, as follows:

-- An automysqlbackup cron job creates the encrypted data dumps every day

-- I created two bash scripts that make archives of /etc and /root (which contains the acme.sh SSL certificates). These archives are encrypted with my public GPG key and are scheduled via cron (a sketch of this step follows the list)

-- The other directories I back up are /var/vmail (mail files, already compressed and encrypted by vmail) and /var/www (websites and web applications), but I don't create archives of them

-- I created different repositories for /etc, /root, databases, vmail, and sites. In case a repository or a single backup fails for any reason, I don't lose everything, only part of the full backup

-- Restic sends the backups to iDrive e2 S3 storage: I'll replace part of the procedure with resticprofile in the near future

-- Borg sends the same backups to both the home and office servers: I use borgmatic, but with different configurations and cron jobs for each destination; otherwise, if a backup fails for one destination, it fails for both

-- If a backup fails for any reason, I receive an alert via Pushover: I'd like to replace it with a self-hosted instance of ntfy.sh
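A sketch of the /etc archive step mentioned above (the key ID and paths are hypothetical):

  tar czf /var/backups/etc-$(date +%F).tar.gz /etc
  gpg --encrypt --recipient backup@example.org /var/backups/etc-$(date +%F).tar.gz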

I have two additional offline external disks that I sync (manually, every week) with the main backup disks using rsync

Still to improve / change:

- automatic snapshots creation (via cron jobs)

- use resticprofile for restic

- use backrest to check and access the restic backups

- use Vorta to check and access the borg backups

- use self hosted ntfy.sh for notifications


For my personal stuff it's VMs on Proxmox to Proxmox Backup Server (PBS) running in a VM on my NAS, and NAS to offsite.


If you use elastic block storage in the cloud, many times they will let you do snapshots and backups of that.

If you can't do that, write a script that creates a .tgz backup, have cron run it every day, and copy it up to an S3 bucket. Configure the S3 bucket to delete items that are over a certain age.
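For example (a minimal sketch; paths, bucket, and schedule are hypothetical; the age-based deletion is a lifecycle rule configured on the bucket itself):

  # /usr/local/bin/vm-backup.sh
  tar czf /tmp/vm-data-$(date +%F).tgz /srv/app-data
  aws s3 cp /tmp/vm-data-$(date +%F).tgz s3://my-vm-backups/
  # crontab entry: 0 3 * * * /usr/local/bin/vm-backup.sh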


But what you often need to consider is having a consistent state in those snapshots; this can be a nasty problem, for example with all sorts of databases, or configuration stored in a database (BoltDB, SQLite, ...).


Turning on WAL for SQLite mitigates most concerns with inconsistent block storage state.
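For example (the database path is hypothetical):

  sqlite3 /srv/app/data.db 'PRAGMA journal_mode=WAL;'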


Yes, but I meant in general; idk if ALL those systems support WAL archiving.


The raw VM files are stored in ZFS nested datasets (datapool/vms/{vm1,vm2,...}).

A cronjob makes a daily snapshot of the running machines (of the datasets), possibly resulting in slightly corrupted VMs, but every now and then I shut down all VMs, make a scripted snapshot of each one, and restart them.

Then I extract the raw VM files from the snapshots onto an HDD, run md5sum on each of the source and target files, and get the results sent to me via email.

All of this is automated, except for the shutdown of the VM and the triggering of the snapshot-making script which follows the shutdowns.

Apparently there's "domfsfreeze" and "domfsthaw" which could help with the possible corruption of daily snapshots.
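Something like this could wrap the daily snapshot (a sketch; it assumes the QEMU guest agent is running inside the VM, and names are hypothetical):

  virsh domfsfreeze vm1
  zfs snapshot datapool/vms/vm1@daily-$(date +%F)
  virsh domfsthaw vm1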

It's all not really a good or solid strategy, but it's better than nothing.


I don't, really. I have a text file on GitHub explaining how I set them up, and occasionally I'll grab a tarball of /etc on the more complicated ones. Data worth preserving doesn't live on the VM-- that's what NFS is for.


LVM snapshots + rsnapshot, saved on btrfs disks on each server, then btrbk[1] for pulling backups to remote storage.

[1] https://github.com/digint/btrbk


The answer to this will totally depend on the scenario. Figure out what your RPO and RTO are for each thing and build out backups and failovers accordingly. If this is anything but a personal lab, you will likely need both DB-level backups (i.e. get this DB back to this point in time) and VM-level backups (the datacenter went down, good thing we have all our VMs backed up offsite). Veeam comes to mind as one tool I’ve seen work well. Backup services built into cloud providers are what I use for most things these days.


VMs running Windows 2019. Host running Debian 12 with ZFS volumes, snapshot every 10 minutes. Full database backup using Kopia.io into a Backblaze S3-compatible bucket.
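The Kopia-to-Backblaze part is roughly (a sketch; bucket, endpoint, and paths are placeholders, and access keys are supplied via flags or prompts):

  kopia repository create s3 --bucket my-db-backups --endpoint s3.us-west-002.backblazeb2.com
  kopia snapshot create /var/backups/db-dumps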

I love Kopia + Backblaze


For us it’s XtraBackup for our Percona DBs, shipped to S3.
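Roughly (a sketch; paths and bucket are hypothetical, with the S3 upload shown as a separate step):

  xtrabackup --backup --target-dir=/var/backups/xtra-$(date +%F)
  aws s3 sync /var/backups/xtra-$(date +%F)/ s3://db-backups/xtra-$(date +%F)/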

If you’re set up to quickly provision new VMs, there's no state other than that to back up in most cases. Files are in S3 as well.


PBS. Works great (given that you are using proxmox) and it supports file recovery from individual VMs. For individual server VMs (which are not in proxmox), I use restic


We don’t use MySQL/Maria, but I wouldn't bother backing up OS files every time, so you should have a script to boot up a fresh VM instance and restore data at the DB level - perhaps use replication if you need a fast RTO. This is a fairly solved problem as far as databases are concerned. If you run on AWS, or can afford at least 3 nodes running, you can even avoid downtime altogether.


I don't. Everything is and should be temporary.


We leave backing up of the VMs themselves to our infrastructure team, but we also make sure that we have logical backups of our databases, so worst case we can restore our databases to a known point.

Our local files can either be pulled from git repositories or rebuilt by running an Ansible playbook against the server. Luckily we don't have any user-uploaded files on our servers to worry about.


I wrote a tool to automate this. I point it at the DBs and folders I want backed up; it automatically takes backups, encrypts and uploads them to R2, and verifies them both automatically and on demand.

Consider using a backup SaaS if you want a hassle-free experience; there are a lot of moving parts to make it reliable.


For backing up VMs, it's possible to use Xopero Software https://xopero.com/


Homelab/tiny use case: I don’t. Everything is provisioned via Ansible and is just Docker images and config. The config gets backed up, images are already stored on docker hub.


To protect both your VMs and the critical data inside them it's possible to use Xopero Software. The solution helps to ensure flexibility and resilience against data loss.


I have had good luck with duplicacy for cloud backup: https://duplicacy.com/


ZFS snapshots; DB backups get uploaded to Dropbox. While it wouldn't be good to lose everything, it wouldn't be the end of the world either.


Proxmox Backup Server


IBM Storage Protect (used to be called IBM Spectrum Protect, Tivoli Storage Manager, ADSM) used to be a thing, probably still is. Not cheap though.


What’s the use case? A production system with minimal downtime? Replicate and back up the systems individually, most likely. Or snapshot the FS.


lsyncd for live replication to warm standby, LVM+rsync for offline backup, restic for keeping file-level version history.


Same here on using lsyncd and restic. Curious about the LVM+rsync method for nightly backups? My two cents: ReaR (Relax-and-Recover) for Linux has saved me a couple of times when raw-moving servers.


What cloud are you using? My VMs in the cloud are being backed up by a service called snapshooter since I am using DO.


Why presume they are using a "cloud"?


duplicacy :)

cd <whatever root you want to back up>

duplicacy init -e <storage-name> <storage-url>

# set password for storage if necessary

duplicacy set -key password -value <storage-password>

# edit .duplicacy/filters to ignore/add files to backup

vim .duplicacy/filters

# check which files will be backed up

duplicacy -d -log backup --enum-only

# run your first backup

duplicacy backup -stats

# add the following command to your crontab

duplicacy -log backup -stats


VM Backup plugin in UnRAID. It automates regular backups and trimming of older ones.


I have inotify-like daemons shipping important folders to AWS S3.


veeam is great; i wish they offered some kind of personal license


rsnapshot + cron for LXC container backups


rsync in crontab


Jesus Christ who uses Maria db


Are you asking or, mmm, describing? :)


Our father, Who art in heaven,

Hallowed be Thy DB_name


Not you, but a few million other people


Slight overreaction



