Attic is one of the new-generation hash-backup tools (like obnam, zbackup, Vembu Hive, etc.). It provides encrypted incremental-forever backups (unlike duplicity, Duplicati, rsnapshot, rdiff-backup, Ahsay, etc.) with no server-side processing and a convenient CLI, and it does let you prune old backups.
All the other common tools seem to fail on at least one of the following points:
- Incremental forever (bandwidth is expensive in a lot of countries)
- Untrusted remote storage (so i can hook it up to a dodgy lowendbox VPS)
- Optional: No server-side processing needed (so i can hook it up to S3 or Dropbox)
If your backup model is based on the old original + diff(original, v1) + diff(v1, v2) + ... scheme, then you're going to have a slow time restoring. rdiff-backup gets this right by reversing the incremental chain. However, as soon as you need to consolidate incremental images, you lose the possibility of encrypting the data (since encrypt(diff()) is useless from a diff perspective: the server can't merge diffs it cannot decrypt).
But with a hash-based backup system? All restore points take constant time to restore.
Duplicity, Duplicati 1.x, and Ahsay 5 don't support incremental-forever. Ahsay 6 supports incremental-forever at the expense of requiring trust in the server (server-side decryption to consolidate images). Duplicati 2 attempted to move to a hash-based system, but they chose fixed block offsets rather than content-defined (checksum-based) boundaries, so incremental change detection becomes inefficient after any insertion point.
IMO Attic gets everything right. There are patches for Windows support on their GitHub. I wrote a Munin plugin for it.
Disclaimer: I work in the SMB backup industry.
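For anyone who hasn't tried it, the day-to-day workflow looks roughly like the sketch below. The repository name, paths and retention numbers are just examples, and option spellings have shifted between attic releases (and its later fork), so check attic --help before copying anything.

    # initialise an encrypted repository on a remote host reachable over SSH
    attic init --encryption=passphrase backuphost:main.attic

    # incremental-forever: each run only uploads chunks the repository hasn't seen before
    attic create --stats backuphost:main.attic::$(hostname)-$(date +%Y-%m-%d) /home /etc

    # prune old archives according to a retention policy
    attic prune backuphost:main.attic --daily=7 --weekly=4 --monthly=6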
Sorry, but "Untrusted remote storage" and "No server-side processing" are exactly the opposite of what I need.
If the original box is ever compromised, I don't want the attacker to gain any access to the backup. If you use a dumb storage like S3 as your backup server, you need to store your keys on the original box, and anyone who gains control of the original box can destroy your S3 bucket as well. Ditto for any SSH-based backup scheme that requires keys to be stored on the original box. A compromised box could also lie about checksums, silently corrupting your backups.
Backups should be pulled from the backup box, not pushed from the original box. Pushing backups is only acceptable for consumer devices, and even then, only because we don't have a reliable way to pull data from them (due to frequently changing IP addresses, NAT, etc).
The backup box needs to be even more trustworthy than the original box, not the other way around. I'm willing to live with a significant amount of overhead, both in storage and in bandwidth, in order not to violate this principle.
The backup box, of course, could push encrypted data to untrusted storage, such as S3. But only after it has pulled from the original box. In both cases, the connection is initiated from the backup box, not the other way around. The backup box never accepts any incoming connection.
Does Attic support this kind of use case? The documentation doesn't seem to say anything about backing up remote files to local repositories. I don't see any reason why it wouldn't be supported (since rsync supports it), but "nominally supported" is different from "optimized for that use case", and I suspect that many of the latest generation of backup tools are optimized for the opposite use case.
I really try to restrain myself when a backup article pops up on HN, but there are two things you raise here that I'd like to address ... first:
"Ditto for any SSH-based backup scheme that requires keys to be stored on the original box. A compromised box could also lie about checksums, silently corrupting your backups."
This is a good thought - you should indeed be thinking about an attacker compromising your system and then using the SSH keys they find and wiping out the offsite backup. All they need to do is look into cron and find the jobs that point to the servers that ...
So how do we[1] solve this? All of our accounts have ZFS snapshots enabled by default. You may not be aware of it, but ZFS snapshots are immutable. Completely. Even root can't write or delete in a snapshot. The snapshot has to be deliberately destroyed with ZFS commands run by root - which, of course, the attacker would not have access to. It's a nice safety net - even if your current copy is wiped out, you have your ZFS snapshots in place.[2]
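For anyone unfamiliar with the mechanics, it boils down to a couple of commands run on the storage side (pool and dataset names below are invented for the example):

    # taken on the storage host, by us, on a schedule
    zfs snapshot tank/youraccount@daily-2015-01-15
    zfs list -t snapshot -r tank/youraccount

    # the only way to make a snapshot go away is
    zfs destroy tank/youraccount@daily-2015-01-15
    # ...which requires root and the zfs tooling on the storage box itself,
    # neither of which your (possibly compromised) client ever has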
"Backups should be pulled from the backup box, not pushed from the original box."
This was the tipping point - I had to comment. Since day one, we have, free of charge, set up "pull jobs" for any customer that asks for it. Works just like you'd like it to on whatever schedule they can cram into a cron format. It's a value add we've always been happy to provide.
[1] You know who we are.
[2] Yes, if you don't notice for 7 days and 4 weeks that the attacker has wiped you out, at that point your snapshots will all rotate into nothingness as well. Nothing's perfect.
Is it possible to be notified if a certain percent of the backup is changed? Something that would let me tell if something like 50% of the bytes or 50% of the files are different between snapshots? Just a simple 'zfs diff | wc -l | mail' in cron?
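Something along these lines is what I had in mind (dataset and snapshot names made up):

    # cron job: count changed paths between the two most recent daily snapshots
    CHANGED=$(zfs diff tank/data@daily-2015-01-14 tank/data@daily-2015-01-15 | wc -l)
    echo "$CHANGED paths changed since yesterday" | mail -s "backup churn" me@example.com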
By pulling backups, you're giving the backup box full control over your computer, meaning that yes it must be more trustworthy. Push backups can indeed allow the initiator to wreck the remote state.
But in the modern commercial market, who are you going to trust as your backup box provider? A USA company subject to NSLs? Run your own in a rack somewhere? Having an untrusted server greatly decreases cost and increases the chance that you can actually produce a successful backup infrastructure at all.
It is possible to do it safely.
Since you say you're "willing to live with a significant amount of overhead", I would suggest a two-tier push/pull configuration. Desktop pushes to site A; then site B pulls from site A. This also increases redundancy and spreads out the attack surface.
Append-only is another good solution - I don't believe attic formally supports this today, but it should be as simple as patching `attic serve` to ignore delete requests. Good first patch.
(Also if you really trust your backup server, then you don't need encryption anyway and can just run rdiff-backup over ssh.)
> By pulling backups, you're giving the backup box full control over your computer
Not really. On my production boxes, I usually set up a "backup" account and give it read-only access to the paths that need to be backed up. Nothing fancy, just standard POSIX filesystem permissions. The backup box uses this account to ssh in, so it can only read what it needs to read, and it can never write anything to the production box. I wouldn't call that "full control".
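Roughly like this, if it helps. User, group and path names are just examples, and the forced-command line is optional (rrsync ships as a contrib script with rsync; its install path varies by distro):

    # on the production box
    useradd --system --create-home --shell /bin/sh backup
    usermod -aG www-data backup   # read access via group perms, no write perms anywhere

    # optionally pin the backup box's key to read-only rsync in ~backup/.ssh/authorized_keys:
    # command="/usr/share/rsync/scripts/rrsync -ro /var/www",no-pty,no-port-forwarding ssh-ed25519 AAAA... backup@backupbox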
> Desktop pushes to site A; then site B pulls from site A.
What you described is similar to my own two-tier configuration, except I pull first and then push to untrusted storage like S3 (using encryption, of course). The first step uses rsync over ssh. The second step is just tar/gzip/gpg at the moment, but if I want deduplication I can easily switch to something like tarsnap.
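For the curious, the whole thing fits in a few lines. Host, bucket and key names below are placeholders, and it assumes the aws CLI (or any S3 uploader that can read from stdin):

    # step 1: pull from the production box over ssh, using the read-only account
    rsync -a --delete backup@prodbox:/var/www/ /srv/backups/prodbox/current/

    # step 2: push an encrypted archive to untrusted storage
    tar -cz -C /srv/backups/prodbox current \
      | gpg --encrypt --recipient backups@example.org \
      | aws s3 cp - "s3://my-backup-bucket/prodbox-$(date +%Y-%m-%d).tar.gz.gpg"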
I guess it depends on your security model. With a single pull backup:
- if your backup box is on another network then it can be coerced into malicious reads (leaking private information, trade secrets, your competitive advantage etc).
- if it's on the same network then it's subject to your same failure patterns.
Push backup has some disadvantages, but there's a lot of peace-of-mind in never (intentionally) granting additional users access to the unencrypted data.
Two-tier is one approach. There's another comment in this thread about snapshotting filesystems (ZFS, or i suppose LVM snapshots might be easier) which would be another method of addressing concerns about the client tampering with the backed up data.
You can have a self-controlled intermediary machine that pulls backups, encrypts them, and then pushes them to the untrusted cloud.
When I had no resources for this (e.g. as a low-income student), I had a server at my mom's place that did this for me. A low-cost, offsite, trustable backup server for personal use.
If you use a dumb storage like S3 as your backup server, you need to store your keys on the original box
I believe this isn't strictly necessary if you use asymmetric cryptography (e.g. curve25519). For a file, generate a temporary key pair, use it and the backup's public key to encrypt the file, then throw out the private key and send the encrypted file + public key to the server.
Apple uses this technique to move files to the "Accessible while unlocked" state without having the key for that state (i.e. while the device is locked).
Just for the record, asymmetric cryptography is not efficient for encrypting content. What you should do is:
- Generate a temporary key
- Symmetrically encrypt with that key
- Encrypt that key with your long-term asymmetric public key, and send the encrypted version along with your backups.
And before you hack together your own version, I'd like to point out that this is exactly what PGP (and really, any crypto scheme that involves asymmetric keys) does. So, basically, just GPG your backups.
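In shell terms, with the recipient key as a placeholder, that whole scheme is just:

    # gpg generates a random session key, encrypts the data with it symmetrically,
    # then encrypts that session key to the recipient's public key
    tar -cz /home/me | gpg --encrypt --recipient backups@example.org > backup.tar.gz.gpg

    # restoring needs the private key, which never has to live on the backed-up box
    gpg --decrypt backup.tar.gz.gpg | tar -xz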
I use duplicity + rsync.net and have wondered about this attack vector. My solution (which admittedly only protects against remote backups being deleted by an attacker, not read) is:
1. Use sub accounts on rsync.net so backups from different parts of the system are isolated from each other.
2. Use a different GPG keypair and passphrase for each host being backed up.
3. Have an isolated machine out on the internet somewhere (that, importantly, isn't referenced by anything in the main system, including documentation / internal wikis, i.e. so the attackers don't know it exists) that does a daily copy of the latest and previous full backup plus any current incrementals directly from rsync.net's storage. This way I'm still covered (and can restore relatively quickly) if an attacker gets into the system and deletes the rsync.net hosted backups for lulz.
If you're truly paranoid or need to protect backups going back over months you could also introduce a final routine that duplicates the data from the ghost machine to Amazon glacier (and then optionally pay for an HDD to be shipped periodically to your offices).
Your rsync.net account has ZFS snapshots enabled - at the very least, the smallest default is 7 daily snapshots.
The ZFS snapshots are immutable. Completely. Even root can't write or delete in a snapshot. The snapshot has to be deliberately destroyed with ZFS commands run by root - which, of course, the attacker would not have access to.
Ha! Well that simplifies things for me :) Honestly, your service is so solid I can't remember the last time I actually logged in to check something. It really is backup as a utility (you should consider re-branding as "BaaU" lol)
Better, but it might still expose more data to the attacker than they would otherwise have access to. For example: production box only contains data from the last 3 days, but the backup contains data from the last 12 months.
Even stricter access controls (write once, no read) might help with that. Not sure if you can do that with S3 though.
>production box only contains data from the last 3 days, but the backup contains data from the last 12 months
Even if the source can only perform new backups, it's a timing attack with a deduplicating system. The attacker can attempt to back up chosen data to infer properties of the existing backups.
You can remove this only by removing deduplication (or by crippling deduplication to work only on the server side, incurring wasteful network requests).
Which is why tarsnap says about your keys: STORE THIS FILE SOMEWHERE SAFE! Copy it to a different system, put it onto a USB disk, give it to a friend, print it out (it is printable text) and store it in a bank vault — there are lots of ways to keep it safe, but pick one and do it.
Indeed, that's exactly what I do (with a small script that wraps rsync): the backup server PULLS data from my machine and saves incremental backups (using hardlinks, see --link-dest). In case my machine is compromised, the backup server is still inaccessible.
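The core of the script is just a few lines of rsync; paths and host names below are examples:

    # run on the backup server
    TODAY=$(date +%Y-%m-%d)
    rsync -a --delete \
        --link-dest=/srv/backups/mybox/latest \
        backup@mybox:/home/ "/srv/backups/mybox/$TODAY/"
    # unchanged files are hardlinked against the previous run, so each day only costs the delta
    ln -sfn "$TODAY" /srv/backups/mybox/latest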
"Attic can initialize and access repositories on remote hosts if the host is accessible using SSH."
Fantastic. Will work perfectly here. We[1] are happy to support this just like we've supported duplicity all of these years. EDIT: appears obnam also works over plain old SSH. Can't tell about zbackup, however...
As always, email us to discuss the "HN Readers" discount.
If someone is looking at trying attic and wants something a little nicer to configure and run than the very basic shell script from attic's Quick Start[1], check out the wrapper script I wrote to make this a little easier[2]. It's still somewhat "1.0" right now, but it does the basics for me. See the included sample config files for an idea of how the configuration works.[3]
I'm curious: why isn't this part of attic in the first place? I have been working with the bup folks for a while to try to make a similar interface, and I was wondering what your experience with upstream was...
Great question. I do think this should be part of what attic provides out of the box, but I still really wanted to use attic despite the fact that it doesn't include this sort of functionality. I'll try contacting the attic devs and see what they say about it.
Your wrapper is only lacking one critical feature I'd love. I am currently using rsnapshot and while its big issue is lack of encryption, it is able to run scripts on remote hosts to pull backups from them. This is a big deal to me since I can then script things like MySQL/Postgres backups, etc. on my master server, rather than having to configure each host individually.
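Concretely, the kind of thing I script from the master server today looks like this (host names and dump flags are just examples):

    # dump databases over ssh before pulling the file tree
    ssh backup@db1 'mysqldump --all-databases --single-transaction | gzip' \
        > /srv/backups/db1/mysql-$(date +%F).sql.gz
    ssh backup@db2 'pg_dumpall | gzip' \
        > /srv/backups/db2/postgres-$(date +%F).sql.gz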
It's possible that this is a bad way to run things, since my master server is then a SPoF. I do trust this server more since I monitor it much more closely than I would a dozen random VPSes with a half-dozen different providers.
Push vs. pull for backups is an interesting philosophical issue, and there's some good discussion of it below. For many setups, pull really doesn't make sense. For example, one of the machines I back up is my personal laptop, and I back it up to a completely untrusted VPS. Therefore I want to be able to encrypt locally and push that encrypted data to the remote VPS. Pulling wouldn't work here, because then I'd have to hand the keys to my laptop to the VPS.
The scenario you're describing, however, sounds like the opposite in terms of trust. And in that case pull may make sense. However it doesn't sound like attic itself natively supports that sort of config. I could envision a sort of hybrid approach where the local machine encrypts to a local attic repository, and then the remote backup server pulls a copy of it. There's nothing stopping you from setting that up, either with attic as-is or with this wrapper script.
The only point that gave me pause was attic's atime modification, but I realized that in my environment I would have to mount all the backup clients remotely on the backup server anyway, so I can get around this by mounting them over sshfs with noatime.
From what I can tell here: https://github.com/c4rlo/attic/commit/f4804c07caac3d145f49fc... ... attic updates the atime when opening files right now. That patch fixes the problem, but I have to wonder how fast and accurate it actually is. bup has all sorts of helpers and tweaks to make it really fast, but also to ensure that it doesn't touch the filesystem while making backups.
I've been amazed to notice how all my files have atimes from July, when I stopped using rsync to make backups and switched to bup. :p
You might want to check out Arq from Haystack Software - http://www.haystacksoftware.com/arq/ - bring-your-own storage, a GUI client, an open-source client available in case they disappear, good support, etc.
Being interested in Linux/Unix, file systems, crypto or whatever other technical topic doesn't necessarily mean you are also interested in developing graphical interfaces. That's an art form and discipline in itself. I suppose the people who make these tools are more interested in the tools' respective technical aspects than in interface or graphical design.
Moreover, many of these backup tools run on servers (or as an automated background process) where a graphical interface is more of a handicap than an asset.
I think that in open-source software, "pretty interface" often sounds like "customer-oriented", which in turn sounds like "getting paid".
As is the case with most 100% volunteer-based free software: lack of time and/or people to help.
There are currently multiple interfaces available to bup, but each one has some quirks. bup's web interface is still very embryonic and would need the magic touch of some designers / integration specialists to make it fun to work with.
I've long been a huge fan of bup, and have even contributed some code. I might be by far their single biggest user, since I host 96748 bup repositories at https://cloud.sagemath.com, where the snapshots for all user projects are made using bup (and mounted using bup-fuse).
Elsewhere in this discussion people note some shortcomings of bup, namely not having its own encryption and not having the ability to delete old backups. For my applications, lack of encryption isn't an issue, since I make the backups locally on a full-disk encrypted device and transmit them for long-term storage (to another full-disk encrypted device) only over ssh. The lack of being able to easily delete old backups is also not an issue since (1) I don't want to delete them (I want a complete history), and (2) the approach to deduplication and compression in bup makes it extremely space-efficient, and it doesn't get (noticeably) slower as the number of commits gets large; this is in contrast to ZFS, where performance can degrade dramatically if you make a large number of snapshots, or other much less space-efficient approaches where you have to regularly delete backups or you run out of space.
In this discussion people also discuss ZFS and deduplication. With SageMathCloud, the filesystem all user projects use is a de-duplicated ZFS-on-Linux filesystem (most on an SSD), with lz4 compression and rolling snapshots (using zfssnap). This configuration works well in practice, since projects have limited quota so there's only a few hundred gigabytes of data (so far less than even 1TB), but the machines have quite a lot of RAM (50+GB) since they are configured for lots of mathematics computation, running IPython notebooks, etc.
I've also been using bup to replace (in some areas) my use of rdiff-backup.
It's a great tool, but since you've contributed already: I cannot overstate the importance of pruning old archives. For online/disk-based backup solutions, space is always going to run out eventually.
I'm using bup where I already know the backup size will grow in a way that I can manage for the next 1-2 years.
For "classical" backup scenarios though, where binaries and many changes are involved and the backup grows by roughly 10-20% a week due to changes alone, I have to resort to tools where I can prune old archives because I would either have to reduce the number of increments (which I don't want to do) or increase the backup space by a factor of 50x (which I practically cannot do either).
Others have complained here that bup doesn't support deleting old backups. ddar doesn't have such an issue. Deleting snapshots works just fine (all other snapshots remain).
I think the underlying difference is that ddar uses sqlite to keep track of the chunks, whereas bup is tied to git's pack format, which isn't really geared towards large backups. git's pack files are expected to be rewritten, which works fine for code repositories but not for terabytes of data.
It's true that git's pack files are made for being rewritten, but bup doesn't do that. Every new run will create a new pack along with its .idx (which means that some packs may be quasi-empty) and the size of packs is capped at 1GB (Giga, not Gibi).
The real struggle of bup is how to know whether a hash is already stored, and how to know it screaming fast. It could be interesting to compare bup style and standard sqlite style, as you do in ddar.
Also, it seems ddar stores each object in its own file, like git loose objects. SQLite has a page [0] that compares this with storing blobs in SQLite; I don't know what the median size of your objects is, but if it's < 20k it seems better to just store them as blobs.
> I don't know what's the median size of your objects but if it's < 20k...
I can't remember the exact number off the top of my head, but I designed the average size of each object to be much bigger - more like 64-256M than kilobytes. IMHO this works far better for backups. So I just use the filesystem to store the blobs, which I think works better.
Correction: I'd forgotten the details. Looks like I aimed for 256k, and it is this size that works well. I did consider filesystem performance when I chose the size, intending flat file blob storage here.
Doesn't that negate the benefit of deduplication, if you're working with multi-megabyte objects? You'll end up copying a lot, unless I'm missing something obvious...
Most backups have very large areas of duplication. If a small file has changed, chances are that the small files around it have changed also. So de-duplicating with a larger chunk size seems to work fine in practice.
Is there anything out there that does continuous incremental backups to a remote location (like obnam, attic, ...) but allows "append only" access? That is, you are only allowed to add to the backup, and the network protocol inherently does not allow past history to be deleted or modified. Pruning old backups might be allowed, but only using credentials that are reserved for special use.
Obnam, attic and similar use a normal read/write disk area, without any server side processing, so presumably an errant/malicious user is free to delete the entire backup?
Haven't seen this mentioned - but, since bup de-duplicates chunks (and thus may take very little space - e.g., when you backup a 40GB virtual machine, each snapshots takes little more than the actual changes inside the virtual machine), every byte of the backup is actually very important and fragile, as it may be referenced from thousands of files and of snapshots. This is of course true for all dedupping and incremental backups.
However, bup goes one step further and has built-in support for "par2", which adds error correction - in a way, it efficiently re-duplicates chunks so that whichever one (or two, or however many you decide) breaks, you can still recover the complete backup.
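If I remember the interface right, it's driven through bup fsck (it shells out to the par2 binary, so that needs to be installed):

    # after a backup run, generate par2 recovery blocks for the new packfiles
    bup fsck -g

    # later: verify everything, and attempt repair from the recovery blocks if needed
    bup fsck
    bup fsck -r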
I saw "par2" and got all excited to see if lelutin re-implemented all the Galois field goodness from scratch, looked at the source and - no, bup merely spawns the par2 binary. Damn :)
I was wondering if someone's done a side-by-side comparison of the various newer open-source backup tools? Specifically, I'm looking for performance, compression, encryption, type of deduplication (file-level vs. block-level, and dedup between generations only vs. dedup across all files). Also, the specifics of the implementation, since some of the tools don't really explain that too well, along with any unique features.
The reason I ask is that I had a difficult time finding a backup tool that suited my own needs, so I wrote and open-sourced my own (http://www.snebu.com), and now that some people are starting to use it in production I'd like to get a deeper peer review to ensure quality and feature completeness. (I actually didn't think I'd be this nervous about people using any of my code, but backups are kind of critical, so I'd like to ensure it is done as correctly as possible.)
Like any good hacker I got tired of other solutions that didn't quite match my needs and made my own Dropbox-like backup/sync using only rsync, ssh and encfs.
- only runs on machines I control
- server requirement is only rsync, ssh and coreutils
- basic conflict detection
- encfs --reverse to encrypt locally, store remotely
- history is rsnapshot-style hard links
- inspect history using sshfs
- can purge old history
shell aliases showing how I use it are in my config repository
encfs isn't ideal but it's the only thing that does the job. Ideally I'd use something that didn't leak so much, but it doesn't exist.
Same principle just different mechanics and assumptions.
I can't work (very effectively) in two places at once, so I don't need robust merging, just CYA synchronisation. Using only rsync features I can do a full 2-way rsync merge and catch potential conflicts, erring on the conservative side so I have reasonable confidence I don't lose any work.
Minimal workstation dependencies: only bash, encfs, rsync, ssh, coreutils/findutils and optionally atd for automation. encfs is optional, too.
Instead of dvc-autosync and XMPP I just use periodic execution. I partition my stuff into smaller/high-frequency vs larger/lower-frequency to keep this efficient. These are triggered from bash (PROMPT_COMMAND, in the background) and recursive at (atd).
The local data is unencrypted on disk from this tool's POV. I use encfs --reverse and rsync the result. To browse the history, I combine sshfs with encfs in forward mode.
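The moving parts, roughly; paths and the remote host are examples, and the location of the reverse-mode config file may differ depending on your encfs version:

    # expose an encrypted view of the plaintext tree, then sync that view
    encfs --reverse ~/data ~/.data-cipher
    rsync -a --delete ~/.data-cipher/ backup@remotehost:backups/data/

    # to browse history: mount the remote ciphertext with sshfs,
    # then decrypt it locally with encfs in normal (forward) mode
    sshfs backup@remotehost:backups/data ~/mnt/cipher
    ENCFS6_CONFIG=~/data/.encfs6.xml encfs ~/mnt/cipher ~/mnt/plain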
Linux only because that's what I use, but it should be possible to support OSX.
All in all I'm pleased I'm able to use such minimal tooling for such a successful result.
I tried some backup software (of the rdiff variety, not the amanda variety) last year when I set up a small backup server for friends and family.
Obnam and bup seemed to work mostly the way I wanted to but obnam was by far the most mature tool, so this is what I chose in the end.
On the plus side, it provides both push and pull modes. Encryption and expiration work. The minus points are no Windows support, and some horror stories about performance. Apparently it can slow to a crawl with many files. I haven't run into that problem despite hundreds of gigs in the backup set, but most are large files.
On the whole it's been very stable and unobtrusive during the time I've used it, but I haven't used it in anger yet. So a careful recommendation for obnam from me.
Does anyone use zpaq[1]? It has compression, deduplication, incremental backup, encryption, backup versioning (unlike bup, with the ability to delete old ones), and it's written in C++. But I'm not sure about performance over the network and how it compares with bup or rsync.
Some people have already started working on this, and there's been activity on the mailing list lately about this topic.
however it's a dangerous feature to add in (backup tools should never screw up their storage, and this feature goes and removes things) so a lot of care is needed.
the boring answer is: it's coming, and we need a lot of help for vetting patches and testing them out.
This is interesting, because the size of a file's encrypted chunks now leaks information about the file's plaintext. I suppose you have some minimum chunk size, and that's one way to keep from leaking too much information as a fraction of the overall file size. But if a file is modified many times, it seems to me that you'd have to be very careful not to leak a substantial amount of data to a clever attacker.
Have you thought about how to quantify this tradeoff?
I suppose you could pad each encrypted chunk so they're all the same size, but then if you don't want to waste a ton of space you'd have to restrict your chunking algorithm to output chunks with relatively similar sizes, at which point you lose some of the benefits of chunking.
The chunking is done using parameters generated from a secret key, and I haven't been able to see any way for it to be computationally feasible to extract meaningful information from the resulting block sizes.
That doesn't mean that it's impossible, of course; just that it would require someone smarter than me. ;-)
git annex is for more than just backups. In particular, it lets you store files on multiple machines and retrieve them at will. This lets you do backups to e.g. S3, but it also lets you e.g. store your mp3 collection on your NAS and then easily copy some files to your laptop before leaving on a trip. Any changes you make while you're offline can be sync'ed back up when you come back online.
You can prune old files in git-annex [1], and it also supports encryption. git-annex deduplicates identical files, but unlike Attic &co, it does not have special handling of incremental changes to files; if you change a file, you have to re-upload it to the remote server.
git-annex is actively developed, and I've found the developer to be really friendly and helpful.
[1] You can prune the old files, but because the metadata history -- basically, the filename to hash mapping -- is stored in git, you can't prune that. In practice you'd need to have a pretty big repository with a high rate of change for this to matter.
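To give a feel for the workflow (remote and path names below are made up):

    git init music && cd music
    git annex init "laptop"
    git annex add Albums/
    git commit -m "add albums"
    git annex copy Albums/ --to nas       # send content to another repository
    git annex drop Albums/OldStuff/       # reclaim local space; content stays on the nas
    git annex get Albums/Favourites/      # pull selected files back before a trip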
Is there an easy way to have the backups encrypted at rest? That's a nice feature of Duplicity. I don't have to worry about someone hacking my backup server or borrowing my USB drive having access to my data.
currently bup doesn't implement encryption (since it's a pretty hard feature to get right and we do want to finish coding other key features -- like old backup removal -- before we get to that)
Some people have reported using an encrypted storage backend like ecryptfs to store their bup repositories in. That option shouldn't be too hard to put together.
This seems like a fantastic tool, and I would love to try this out. And, it's free!
My personal obstacle in using a tool like bup is the back-up space. I could definitely use this for on-site/external storage devices, but I also like to keep online/cloud copies. I currently use CrashPlan for that which affords me unlimited space. If CrashPlan would let me use their cloud with bup, wow, I would switch in a heartbeat. Perhaps cloud backup tools could learn some tricks from bup.
I'm aware of Tarsnap and it looks very attractive, but I'd rather pay a flat rate because I store a lot of video (I'm a musician) and other large files. I have about 750GB stored pre-deduplication. That's a lot more $ when I go to Tarsnap.
It is really easy to set up which folders to back up and where, and I use it whenever a backup is simply: take all files from X, do the rolling backups at Y, and done.
I can see how this would be theoretically possible in the same way I could see using `git filter-branch` to remove one or more commits from a code repository. But as it requires walking back up the tree to recalculate all of the commit hashes based on the new state of your files, I suspect it would be an extremely slow/expensive operation in bup's case. Someone who knows more about bup's internals can correct me if I'm wrong.
true. and actually bup can't use git's tooling directly because of the different use cases it's optimized for. git won't make any use of bup's optimizations: midx (combination of multiple .idx files into a handful of bigger ones to reduce page faults during binary searches) and bloom filter.
git's tools, namely filter-branch and gc, have been reported to work on limited-size bup repositories, but they very quickly eat up all RAM and CPU and never finish because of the sheer number of objects that are usually stored in a bup repository.
I've been using duply http://duply.net/ for a while. It is a simple frontend for duplicity http://duplicity.nongnu.org/.
I find it very easy to set up. It also provides encrypted backups through GPG.
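The day-to-day usage is basically this (the profile name is an example; the GPG key and backup target go in the generated conf file):

    duply myserver create          # writes ~/.duply/myserver/conf for you to edit
    duply myserver backup          # full or incremental run, GPG-encrypted via duplicity
    duply myserver status          # show the backup chains
    duply myserver purge --force   # delete sets older than MAX_AGE from the conf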
> That is a dataset which is already deduplicated via copy-on-write semantics (it was not using ZFS deduplication because you should basically never use ZFS deduplication).
"Basically never" is an overstatement, but it is true to the point of "Never unless you already know why I said 'basically never'"
It boils down to the fact that ZFS maintains a mapping from hashes to LBNs. This allows write-time deduplication (as opposed to a scrubber that runs periodically and retroactively deduplicates already written blocks). This is somewhat memory intensive though. For smaller ZFS pools you can get away with just having lots of RAM (and with or without dedupe ZFS performs better the more RAM you have). For larger ones, you can add a SSD to act as additional disk cache.
Note in this example that they were already showing 128GB of RAM for a 17TB pool; the L2ARC was to augment that. In general, ZFS was designed with a much higher RAM/Disk ratio than a workstation typically has.
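If anyone is tempted anyway, ZFS at least lets you estimate the cost before committing; pool and dataset names below are examples:

    # simulate dedup on existing data and print the expected table size / ratio
    zdb -S tank

    # dedup is a per-dataset property if you do decide to turn it on
    zfs set dedup=on tank/backups

    # the achieved ratio shows up in the DEDUP column
    zpool list tank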
ZFS is also very far away from the state of the art in online dedup. For instance, http://users.soe.ucsc.edu/~avani/wildani-icde13dedup.pdf has a theoretical dedup regime that needs only 1% of the RAM for 90% of the benefit.
I'm not an expert by any means, but the most cited reasons are that it requires a very large amount of RAM, and that it depends a lot (obviously) on the type of data.
In addition to the memory requirements, I seem to recall that it works at the block level, as opposed to the file level. So you could have two of the same file, but maybe one copy is written at the start of a block and one is written in the middle of a block. Same file, different blocks, so no deduplication.
This looks very interesting as a replacement for rdiff-backup. Hopefully the missing parts aren't too far away (expire old backups, restore from remote).
do you mean something that would automatically update the backup when a file is changed on disk?
bup currently doesn't do that. but there's been some talk of using inotify or another such method of knowing exactly which files are modified when they are so that bup could instantly work on those.
in theory it should be feasible; it's not implemented yet, however
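in the meantime, a very crude way to approximate it from the outside with inotify-tools (directory and branch names are examples, and in practice you'd want to batch events rather than save on every single change):

    bup index ~/work && bup save -n work ~/work    # initial baseline
    inotifywait -m -r -e modify,create,delete,move --format '%w%f' ~/work |
    while read -r path; do
        bup index "$path" && bup save -n work ~/work
    done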
I had my Ph.D. student (Andrew Ohana) spend a while last summer implementing exactly this using python-inotify, since I wanted it to greatly improve the efficiency of https://cloud.sagemath.com, which makes very frequent snapshots. It's pretty solid and is on github: https://github.com/ohanar/bup/tree/bup-watch He's been busy with his actual math thesis work and teaching, so hasn't got this upstreamed into bup. It also depends on changes he made to bup to store the index using sqlite instead of some custom format.