
Restic – Backups Done Right - tambourine_man
https://restic.net/
======
hardwaresofton
Also check out Borg: [https://www.borgbackup.org](https://www.borgbackup.org)

And some resources on how they're different:

\- [https://github.com/restic/restic/issues/1875](https://github.com/restic/restic/issues/1875)

\- [https://stickleback.dk/borg-or-restic/](https://stickleback.dk/borg-or-restic/)

\- [https://sysadministrivia.com/episodes/S4E5](https://sysadministrivia.com/episodes/S4E5)

The general consensus seems to be that restic is borg with more whistles
(backing up to various places), but borg is the more trusted tool with the
longer history (just use SSH and be done with it). I personally recently used
borg for a migration between computers and it worked great for me.

~~~
blattimwind
Borg's encryption is questionable at best. Performance-wise it's not
particularly good, and it probably never will get better because of the large,
somewhat complex Python codebase.

~~~
lima
What's questionable about it? It seems that they use modern AEAD ciphers in a
reasonable way.

I'm more concerned about the repository format and config file (i.e. attack
surface, since the repo is potentially untrusted).

Performance is actually better than Restic, and performance-critical parts of
Borg are written in C or use C libraries.

~~~
blattimwind
> What's questionable about it? It seems that they use modern AEAD ciphers in
> a reasonable way.

No, using one key per repository and a persistent message counter is not a
reasonable design.

[https://borgbackup.readthedocs.io/en/stable/internals/securi...](https://borgbackup.readthedocs.io/en/stable/internals/security.html)

~~~
Erlich_Bachman
For what threat model does this matter?

~~~
blattimwind
"When the above attack model is extended to include multiple clients
independently updating the same repository, then Borg fails to provide
confidentiality (i.e. guarantees 3) and 4) do not apply any more)."

Edit: I've posted this a bunch of times here, pretty much every time it caught
my eye when someone said this tool has good crypto. By now I'm used to people
just downvoting it and saying it doesn't matter because obviously no one would
ever use it like that and the design is fine, etc. (isn't the point of
deduplication to save disk space?)

------
mike-cardwell
I started using Restic recently. It's good and I'm going to continue using it.
That said, there are a couple of bad problems with it:

Firstly, if you want to prune old backups, e.g. keep the last N1 hourly
backups and the last N2 weekly backups, it has that ability. However, while
it's doing so, the client has to download and upload a tonne of data in order
to repackage the backup files that mix data that needs removing with data
that doesn't.

Secondly, I've set up an "append only" system, where my various hosts can
append to their own backups, but not overwrite or delete them. I wanted the
backup server to be unable to read the backups (easy enough, don't supply the
encryption keys to the backup server), however at the same time I wanted the
backup server to be able to automatically prune old backups. It cannot do
that without the key, and I don't want to give it the keys to the backups, as
then a compromised backup server means _all_ of my hosts' data is suddenly
compromised.
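For reference, one common way to get an append-only setup like this (not necessarily the one I use) is restic's own rest-server, whose `--append-only` flag rejects delete and overwrite requests from clients:

```shell
# Sketch: serve /srv/restic as an append-only restic REST backend.
# The --append-only flag is real; the path and port here are hypothetical.
rest-server --path /srv/restic --listen :8000 --append-only
```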

~~~
rcthompson
How would you expect the backup server to be able to prune old encrypted
backups if it can't decrypt them? How should it know what to delete?

~~~
Nican
ZFS does that with snapshots already. (ZFS is not encrypted, but the principle
is the same)

I believe immutable trees would be the correct algorithm here.

The algorithm relies on a tree structure, where the leaves are the files.
Every time a new snapshot is written, each changed leaf gets a new path up to
a new root node, while the rest of the tree is left intact.

When a snapshot needs to be deleted, you just remove its old root node and
have a garbage-collection-like process delete all the now-unreachable nodes.

EDIT: Look at the example here:
[https://en.wikipedia.org/wiki/Persistent_data_structure#Tree...](https://en.wikipedia.org/wiki/Persistent_data_structure#Trees).
When xs is deleted, nodes a, c, f can also be deleted.

EDIT2: I feel like I can never keep posts like these short and sweet while
adding enough detail. ZFS can be encrypted, but it has every file right at its
fingertips, not over a slow network connection. And there are caveats about
how the metadata is encrypted, but you do not need to decrypt the whole backup
to figure out what to delete.
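The structure-sharing idea can be sketched in a few lines (names and layout are my own, loosely following the Wikipedia example):

```python
class Node:
    def __init__(self, name, children=()):
        self.name = name
        self.children = tuple(children)

def reachable(root):
    """Names of every node reachable from a snapshot root."""
    seen, stack = set(), [root]
    while stack:
        n = stack.pop()
        if n.name not in seen:
            seen.add(n.name)
            stack.extend(n.children)
    return seen

def garbage(all_nodes, roots):
    """Mark-and-sweep: anything not reachable from a live root."""
    live = set()
    for r in roots:
        live |= reachable(r)
    return {n.name for n in all_nodes} - live

# Snapshot 1: root1 -> dir_a -> (f1, f2), plus f3
f1, f2, f3 = Node("f1"), Node("f2"), Node("f3")
dir_a = Node("a", [f1, f2])
root1 = Node("root1", [dir_a, f3])

# Snapshot 2: only f2 changed, so only its path to the root is new;
# f1 and f3 are shared with snapshot 1.
f2b = Node("f2'")
dir_a2 = Node("a'", [f1, f2b])
root2 = Node("root2", [dir_a2, f3])
```

Dropping snapshot 1 leaves `root1`, `a` and `f2` unreachable, so a collector can delete exactly those nodes while the shared `f1` and `f3` survive through snapshot 2.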

~~~
tynorf
AFAIK that is exactly how restic works. You use `restic forget` to remove the
roots (snapshots), then `restic prune` garbage collects unreachable blobs.

ETA: However, nearly all data in restic is encrypted. This includes the index
files. So you still need to have the encryption key to look at snapshots and
walk their trees.

------
ahnick
Restic is probably my favorite pure open source backup solution, but I've been
using Duplicacy for years now and have been very happy with it. On the GitHub
page
([https://github.com/gilbertchen/duplicacy](https://github.com/gilbertchen/duplicacy))
for Duplicacy you can find a comparison to Restic (along with other backup
solutions), which I found informative.

------
Scaevolus
Unfortunately the primary author has mostly moved on to other things, and
hasn't appointed any maintainers. It's been three months without any commits,
and there are 60+ open PRs.

The biggest feature Restic is missing is compression support.

Ah, well.

~~~
aleph-
As far as I'm aware from the IRC channel the primary maintainer is mainly busy
moving.

I do concede it is annoying having to fold in upstream PRs to my own build of
it.

------
PureParadigm
This seems very similar to Borg Backup [1], so I'm interested to hear from
others who have used both on how it compares.

More generally, I've been looking for a solution that helps distribute backups
in a peer-to-peer way. I have a few friends with their own home servers, and
we want to replicate backups across each other's servers for geographical
redundancy. Currently, I have a script that uses rsync to copy some tar
archives over daily, but this doesn't scale well as more peers want to join
our backup-sharing group, since it requires them granting me SSH access.

What I need is a decentralized network to share and retrieve backups from
peers. I tried using dat [2] with a Borg Backup repository inside it, but ran
into some nasty issues with dat which would cause it to regularly crash and
one time even corrupt the data.

Does anyone have any suggestions for such a situation?

[1] [https://www.borgbackup.org/](https://www.borgbackup.org/) [2]
[https://github.com/datproject/dat](https://github.com/datproject/dat)

~~~
kevlar1818
This might not be quite what you're looking for, but Syncthing[1] is a popular
P2P file sharing solution. You could use Restic to make backups to a shared
Syncthing folder.

[1]: [https://syncthing.net/](https://syncthing.net/)

~~~
PureParadigm
Thanks for the suggestion. I've tried Syncthing, but it seems to still require
that users are explicitly added (i.e., no public access [1]). I'd prefer a
solution where anyone could decide to start helping replicate backups, without
me having to add them in some way.

[1]
[https://github.com/syncthing/syncthing/issues/1942](https://github.com/syncthing/syncthing/issues/1942)

~~~
kevlar1818
Then you should look into IPFS :)

[https://github.com/ipfs/ipfs](https://github.com/ipfs/ipfs)

~~~
PureParadigm
As far as I know, IPFS objects are immutable. Is your suggestion that I
publish some sort of index containing all of the IPFS links to my backups, and
then my friends can automate pinning those links? I think that could work, but
it would be pretty bandwidth intensive since there would be no deduplication
(I'd have to also encrypt since I wouldn't want everyone to have my files).

~~~
kevlar1818
I'm not suggesting anything specific. Personally, I think allowing permanent
public access to your server backups sounds like a terrible idea, but it's
your data and you choose your threat model.

------
cfallin
I've been using Restic recently for backups online (to Backblaze, which it
natively supports) after using Duplicity for a while. I'm really, really
impressed so far; the hash-based deduplication means:

\- I don't have to think about deduplicating my own data, which works well
with my packrat tendencies (e.g., multiple copies of music library on
snapshots of old laptops);

\- Full-disk backups of multiple machines will share a lot of storage (all
system files from my desktop and laptop, both running the same Linux distro);

\- Restarting a long upload doesn't depend on some finicky state regarding an
interrupted previous backup; it can just rescan the whole disk and skip
uploading blobs that are already there, which feels much more robust to me.
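The hash-based deduplication boils down to content-addressed storage: a chunk is uploaded only if its hash is not already in the repository. A toy sketch of the idea (restic actually uses content-defined chunking with a rolling hash rather than fixed-size chunks, and encrypts everything):

```python
import hashlib

CHUNK = 4  # tiny fixed chunk size for the demo only

def backup(data, store):
    """Split data into chunks, store each under its hash, and return
    the list of hashes needed to restore this backup."""
    hashes = []
    for i in range(0, len(data), CHUNK):
        chunk = data[i:i + CHUNK]
        h = hashlib.sha256(chunk).hexdigest()
        store.setdefault(h, chunk)  # "upload" only if not already present
        hashes.append(h)
    return hashes

def restore(hashes, store):
    """Reassemble a backup from its chunk hashes."""
    return b"".join(store[h] for h in hashes)
```

Backing up a second machine whose data overlaps the first adds only the chunks that are genuinely new, which is also why restarting an interrupted upload is cheap: already-present hashes are simply skipped.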

~~~
Erlich_Bachman
FYI those are all functions of any deduplicating backup software, of which
there are several out there, Restic being only one of them.

~~~
cfallin
Yes, indeed; Restic just happened to be an easy-to-use deduplicating backup
program that supports various cloud storage backends, including the one I
happen to use (Backblaze). Not needing to patch together some sort of pipeline
of dedup --> upload to remote storage is a nice selling point.

------
h1d
There's also Duplicacy, which I feel is a bit more mature than restic (I
found it more performant, and it also has a web interface; the mount
capability that restic has is what I miss, though, and retention policies are
quite a bit easier to specify with restic too). It is paid software for
non-personal usage.

But paid has the upside that the author has less reason to move away from it.
(As mentioned elsewhere, restic hasn't been updated in a while. Duplicacy
recently changed its pricing to be much more aggressive, which I hated; I wish
they offered a softer pricing model than cost per machine.)

[https://duplicacy.com/](https://duplicacy.com/)

Their comparison of cloud storage providers gave me a good insight on what to
choose in terms of performance.

[https://github.com/gilbertchen/cloud-storage-comparison/](https://github.com/gilbertchen/cloud-storage-comparison/)

I use both restic and Duplicacy, so my backups are made by multiple
implementations to multiple destinations. That way a bug in one of them won't
leave me proving the saying that "backups aren't working when you need them
the most".

------
greggman2
This looks great. I've been using Arq for years and no complaints but I might
switch.

I've also been using Syncthing for syncing machines. It's pretty great. I've
got it set up so two laptops sync to a server, and no issues so far.

~~~
LVB
I'm a long time Arq user too. I looked at Restic but was a little disappointed
it didn't support compression. There is an old, still open issue with a ton of
comments:
[https://github.com/restic/restic/issues/21](https://github.com/restic/restic/issues/21)

------
Nican
Glacier/cold storage tiers, which usually take ~6h to retrieve files, are
really cheap on cloud providers (I know Azure has $0.99/TB/month), but I have
a hard time finding any free backup solution that supports the very high
latency natively.
natively.

I remember reading a long time ago that Restic was not going to natively
support cold storage solutions. Has anything changed?

~~~
adontz
You should not back up directly to Glacier; you should back up to S3 and let
a retention/lifecycle policy configured on the S3 bucket move data from S3 to
Glacier.

~~~
Nican
And when I have to do incremental backups or retrieve a single file, do I have
to move the whole backup from cold storage, and then back again?

~~~
adontz
I did not understand the "back again" part. Glacier allows you to retrieve
individual files.

------
JoshTriplett
Restic works very well for me, and I've successfully used it to restore lost
files.

The one issue I've encountered with it: it uses a _lot_ of memory,
proportional to the size of the repository indexes, so if you're backing up a
lot of data on a machine without a lot of RAM (such as a virtual server), you
may run out of memory. Setting GOGC=20 can help slightly, but ultimately,
restic needs fixing to support working on indexes larger than memory.

------
psanford
Filippo Valsorda (Go's current crypto maintainer) took a little bit of time to
look at the cryptography used by restic:
[https://blog.filippo.io/restic-cryptography/](https://blog.filippo.io/restic-cryptography/)

------
gsmecher
Borg Backup (for similar reasons as Restic) essentially shifted backup from my
"I-don't-but-I-should" list to my "solved problems" list. Is there anything
similar for binary distribution?

Here's what I mean. I develop a software/firmware stack that is typically
delivered to users as a set of large (100M) tarballs and binary images. Even
though the vast majority of the payload does not change from release to
release, it has so far proven necessary to distribute the full blobs anyway.

Technologically, it's almost identical, but distribution implies a different
set of access controls and an Internet-friendly user interface.

I see the idea floating around in e.g. the NextCloud forums, but this seems
like a relatively compact problem without obvious candidates to solve it.

~~~
curt15
Perhaps something like OSTree?

~~~
gsmecher
Thanks, OSTree sounds like a good place to start.

------
pabs3
There are a few downsides to using restic:

No support for compression yet

No support for deleting data from snapshots

No support for continuous backups (restic walks the directory tree for each
backup).

No support for resilience from disk errors using par2 or similar

Backing up millions of small/empty files uses a _lot_ of memory

~~~
cies
Some more:

Not too many storage backends.

No GUI tool.

Hard to implement lifecycle policies like "keep the N last hourly backups and
the M last weekly backups"; and not optimized for this use case (it needs
quite a bit of data transfer to accomplish, due to the zero-knowledge
server-side encryption).

------
brunoqc
I use and like Restic but I wish I could easily have "write only" keys like I
do with tarsnap so an intruder can't delete all my backups.

I heard there's a way to have an "append only" backup or something like that.
Is it possible to still prune old backups from time to time?

------
rsync
Restic can operate over (among other methods) plain old SFTP[1]. Therefore it
works, and always has worked, perfectly with an rsync.net account[2].

I personally find it reassuring that even though I might be creating and
maintaining backups with sophisticated tools like borg or restic or duplicity
or rclone ... at any time, and _from any system_ I can grab those backups with
dumb old SFTP.

[1]
[https://www.rsync.net/products/sftp.html](https://www.rsync.net/products/sftp.html)

[2] [https://forum.restic.net/t/restic-commands-for-rsync-net/216...](https://forum.restic.net/t/restic-commands-for-rsync-net/2162)

------
Erlich_Bachman
For a very good introductory read on what all this deduplicating backup
software is about, and how it is a "new school" to the "old school" of
incremental backup software (where duplicity, which you might want to either
also check out or migrate from, is the king) with its full/incremental model,
and on the advantages and disadvantages of the two, I recommend this
surprisingly well written guide from Backblaze:
[https://www.backblaze.com/blog/backing-linux-backblaze-b2-du...](https://www.backblaze.com/blog/backing-linux-backblaze-b2-duplicity-restic/)

------
lasftew
I moved from borg to restic because of the native support for B2 buckets (a
lot cheaper than dedicated rsync/ssh type cloud file systems). My use case is
backing up daily snapshots of my Onedrive (synced locally via rclone). As my
important files are mostly immutable (photo library, pdfs), I don’t have to
prune snapshots. Pruning is really slow compared to borg.

I never backup workstations or home directories, as I can always generate them
again using NixOS and home manager. For source code, nothing beats git repos.

------
jszymborski
I've been using Duplicati, which has been doing a great job at keeping my
backup sizes low and has a handy interface.

I've been keeping encrypted remote backups for ~300 GB worth of data, which
occupies ~150 GB. On Backblaze B2, it's costing me ~$10 CAD/mo, which is a
_lot_ cheaper than Tarsnap.

I might try Restic (always good to have a backup backup system), but I'm not
sure how ergonomic it'd be on Windows.

[https://www.duplicati.com/](https://www.duplicati.com/)

~~~
mmastrac
My duplicati experience has not been great - I found that it couldn't recover
well from backup corruption (multi-day recovery times for ~1TB of data when
everything was local and USB-connected!).

The web interface is what keeps me on it.

~~~
tryptophan
Same here. Duplicati works great until you actually have to restore. It took
me 3 hours to restore a 12GB backup. After that I quit using it.

Now I just made my home folder a shared folder on Syncthing, in send only
mode, so it constantly backs up to my Nas. Much faster and the backup is
directly readable.

~~~
jszymborski
Agreed, backup is slow (although recent versions are leveraging multi-core
CPUs, so that's given a modest speed-up).

Syncthing in send-only is not a sufficient back-up strategy for 90% of use-
cases, however. Without incremental back-ups, you're still susceptible to
ransomware and data-loss should syncthing update after the event. I do,
however, like to keep a "verbatim" copy in addition to my incrementals since
they are less likely to have a "restoration" problem.

------
Sphax
I've been using restic to backup my workstations for quite some time now and
it works well enough.

At work I have a systemd service which runs every 10 minutes and uses an NFS
repository. Performance is good. It has saved me once already after a botched
Ubuntu upgrade.

At home I have an identical service, but it runs every 6 hours and uses a
Backblaze B2 repository. Performance is not great. However, I've been backing
up ~20 GB for over a year now and it has cost me less than $2 _in total_ so
I'd say it's worth it.
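A setup like this can be a pair of systemd units along these lines (the unit names, paths and repository settings here are hypothetical; the environment variables are the ones restic actually reads):

```ini
# restic-backup.service
[Unit]
Description=Restic backup

[Service]
Type=oneshot
Environment=RESTIC_REPOSITORY=/mnt/nfs/restic-repo
Environment=RESTIC_PASSWORD_FILE=/etc/restic/password
ExecStart=/usr/bin/restic backup /home

# restic-backup.timer
[Unit]
Description=Run restic backup every 10 minutes

[Timer]
OnCalendar=*:0/10
Persistent=true

[Install]
WantedBy=timers.target
```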

------
dewey
I recently set up Restic to backup my server to my local Synology and it was
actually surprisingly easy. Before that I was using Duplicity which broke
after a while and it all felt a bit more tricky to get going.

Documented it a bit here, mostly so I can go back and copy / paste commands if
I have to:

[https://blog.notmyhostna.me/backup-cloud-server-to-synology-...](https://blog.notmyhostna.me/backup-cloud-server-to-synology-nas/)

------
jaequery
I've been looking for a solution for something like this and not sure if
Restic can do this, so please chime in.

Does anyone know of an open-source solution that acts just like Dropbox/GDrive
where it detects for any changes in a specified folder and then once detected
it automatically uploads to an S3 folder?

~~~
rhizome
Syncthing and s3-fuse to create a local pipe to s3.

[https://github.com/s3fs-fuse/s3fs-fuse](https://github.com/s3fs-fuse/s3fs-fuse)

~~~
ramses0
S3-Fuse has been really unreliable for me (hard-hangs, and similar) ... is
that your experience as well, or has it been reliable?

~~~
rhizome
I haven't used this combination, I haven't used FUSE in a while, and I forgot
to add a disclaimer. FUSE in general has been flaky for me, too. Syncthing has
worked pretty well for me though, so I looked to see if the Unix Philosophy™
could be used to smash a couple of things together. I think that at the end of
the day, they are both deceptively complicated technologies.

------
booi
The biggest problem I've had with backup software is being able to verify
backups when using AWS S3 Glacier, with and without a data pull. Is this
possible and efficient with restic? The only software I've found that does
this well is Arq.

~~~
seized
I've been researching this use case and it sounds like Restic doesn't handle
it well without adding more layers.

[https://forum.restic.net/t/restic-and-s3-glacier-deep-archiv...](https://forum.restic.net/t/restic-and-s3-glacier-deep-archive/1551/2)

I've largely decided on Duplicacy over Restic, Borg or Duplicati.

------
lucb1e
I've been using this and it is the best I have found so far.

The only issue I encountered was with trying to back up 2.5TB of new data
over a shitty German internet connection: the initial upload took forever
(restarting 20 times, because they force your IP address to change once a
day), and then pruning the backup ran out of RAM and actually corrupted it.

I'll continue to use it, just not repeating those specific steps, but it was
not immediately apparent that the backup was completely unusable. _Always test
your backups_ (regardless of whether it's about restic, borg, examplecorp
expert enterprise backup, or something else)!

~~~
bdibs
Restic has that option with “restic check”, I have it run weekly and email me
the results.
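For reference, a weekly check can be as simple as a cron entry (the repository path and password file here are hypothetical; cron mails the output if an MTA is configured):

```shell
# m h dom mon dow  command — verify the repository every Sunday at 03:00
0 3 * * 0  restic -r /srv/backups/repo --password-file /etc/restic/pass check
```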

~~~
lucb1e
That sounds like a good system! I check every now and then by mounting and
checking that a recent file is there, not sure why I didn't think of doing
this instead.

------
anticristi
Is backing up still a thing? I prefer to keep my laptop stateless:

* I use Ansible to configure a vanilla Ubuntu 18.04 into my workstation [1]

* I keep everything that is "source code-y" in some Git repo

* I keep non-source-code-y stuff in Dropbox or Seafile (both have a restore to previous version)

I prefer everything else to be lost (e.g. some AWS, K8s credentials).

I wipe my laptop between customer projects and it works great.

[1] [https://github.com/cristiklein/stateless-workstation-config](https://github.com/cristiklein/stateless-workstation-config)

~~~
cactus2093
I do this too, and am surprised not to see more recommendations for it. It's
simpler than a traditional full system backup, lessens security risks like
copying credentials into a backup, and IMO keeps things better organized as
well. I know Dropbox or git repos are just syncing the files I care about, in
a way that I want them organized and can browse on a web interface, on mobile,
sync to another device even if it doesn't have all the same software
installed, etc. With a full system backup, by contrast, there's not always a
clear line between system configuration and data files; depending on what apps
you're using, the backed-up data might only be accessible again on an exact
copy of the machine running all the same versions of the same apps, which is
often not what I actually want.

~~~
pnutjam
I don't back up my machine state, just my home directory and other data.

------
ofrzeta
I understand that Restic is using deduplication but what's the advantage over
rsnapshot? There's AES encryption but how do you handle the credentials with
automated backups?

~~~
ansible
I've been using rsnapshot for a decade or more. For a modest amount of data (a
few TB) the performance is adequate, and it is easy to verify the backups. It
is also easy to restore files, just go into the directory corresponding to the
date of the backup. I could try to run compression in the filesystem itself to
save space, but haven't done that so far.

------
vasili111
What you think about Restic vs Duplicati? Any personal experience?

~~~
jeffdubin
I've been using Duplicati 2.0 beta (aka "stable") builds, not canary/testing,
for 2+ years, and at least three times, on three different machines, each with
a different backend (two using ZFS, scrubbed regularly with no data issues),
I've encountered database corruption which wasn't fixable with repair and
rebuild. There are lots of things I really like about Duplicati
(OSS, backend flexibility, dedup, encryption, active development, etc.), but
I'm going to be moving to another solution for a little while until a super
stable release is available. Restic was at the top of my list even before this
most recent HN nod.

~~~
seized
Check out Duplicacy. It seems very solid, with none of the Duplicati-like
issues.

------
shekhar101
Tangentially related, but I’ll lay out a problem I have for a small office I
run. I want to back up and sync my files cheaply somewhere, say GDrive. I want
a UI that gives me snapshots/versions of the files it backed up. Is there a
good open source solution for this? Restic and friends are a little too
technical for my non-technical office. Thanks

------
ibic
How does it compare to Rclone? [https://rclone.org/](https://rclone.org/)

~~~
dsego
Afaik, restic does incremental backups (diffs) and encryption. This is great
if you need to go back in time, and it's safer if something corrupts your
files, for example (the backups don't get overwritten with corrupt versions).
But it is more difficult to restore: the files are split into chunks for
deduplication, so you are not able to actually see your backed-up files; you
need to run the restore process first. I am using rclone right now, and have
it set up to archive old file versions (--backup-dir). It's simpler and covers
my needs for now.
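That `--backup-dir` setup looks roughly like this (the remote names and paths are hypothetical; the flag itself is a real rclone option):

```shell
# Sync to the remote; anything overwritten or deleted on the remote is
# moved into a dated archive directory instead of being discarded.
rclone sync /home/me/docs remote:docs \
  --backup-dir remote:docs-archive/$(date +%Y-%m-%d)
```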

------
_ink_
I really would like to use Restic, but it seems that its repositories can
randomly break, and it seems that this is still not resolved.
[https://forum.restic.net/t/recovery-options-for-damaged-repo...](https://forum.restic.net/t/recovery-options-for-damaged-repositories/1571)

Any news on this?

------
madjam002
Solid piece of software, I use this every day!

One small issue which I need to figure out is the hosts directory being empty
when using “restic mount”:
[https://github.com/restic/restic/issues/1869](https://github.com/restic/restic/issues/1869)

------
acje
But does it come with sharepoint integration? Because if you claim to do that
right, then this is surely snake oil.

------
alexellisuk
I put a tutorial together for this not too long ago, a very good tool and
great when paired with Minio/s3 -
[https://github.com/alexellis/restic-minio-civo-learn-guide](https://github.com/alexellis/restic-minio-civo-learn-guide)

------
mike-cardwell
I recently set this up using Nginx as my restic backend server. Not as a proxy
to a backend server, but the actual backend server. No other applications
required:

[https://www.grepular.com/Nginx_Restic_Backend](https://www.grepular.com/Nginx_Restic_Backend)

------
noja
Fantastic software. Fast, locally encrypted before being shipped to the cloud
(or a usb disk), just works.

~~~
brensmith
I've been using it for about 6 months now, backing up my home directory 3-4
times per week to an external HD. Would also agree that it's fast, easy to
use, and also easy to recover files. I'd add that the documentation is really
well written.

------
hrdwdmrbl
I would argue that the 2nd most important part of a backup is the restoring.
Many products don't focus on that enough because it's not the part that most
customers will experience until there is an emergency.

------
throw7
from what i read restic has memory scaling issues for certain areas (large
amounts of small files, pruning operations).

that said, i do use it personally for private and small stuff.

------
denkmoon
I like HashBackup. It's not open source unfortunately, but I like the
incremental encrypted backups while also being able to pull out a single file
easily.

------
Krasnol
> Easy: Doing backups should be a frictionless process, otherwise you are
> tempted to skip it. Restic should be easy to configure and use, so that in
> the unlikely event of a data loss you can just restore it. Likewise,
> restoring data should not be complicated.

So does it have a GUI Version?

~~~
boromi
Yeah also interested in a GUI?

~~~
Krasnol
I was unable to find one so I guess it's just "easy" for a certain narrow
group of people.

------
sogubsys
Learned about this program when pwning Registry on HackTheBox

------
transfire
Metadata?

------
swedtrue
Just so I understand right, and out of curiosity: what does this offer beyond
the likes of Nextcloud ([https://nextcloud.com/](https://nextcloud.com/)) or
Duple ([https://www.duple.io/en/](https://www.duple.io/en/))?

If I understand correctly, both provide backup in addition to other
functionality?

~~~
noja
Nextcloud is a different product. restic is for backup.

~~~
swedtrue
Could you elaborate? What more does it offer? Nextcloud and Duple do offer
backup as well.

~~~
noja
Nextcloud is a web app that lets you work with files and offers lots of other
plugins too, like e-mail, calendaring, office, contacts. It's not a backup
program. See [https://nextcloud.com/](https://nextcloud.com/) or
[https://en.wikipedia.org/wiki/Nextcloud](https://en.wikipedia.org/wiki/Nextcloud)

~~~
NoGravitas
You might, for example, use restic or borg to back up your Nextcloud's
installation and storage.

