Of particular concern is that Obnam has a theoretical collision risk: if a block has the same MD5 hash as another block, it assumes the two are identical. That is the default behaviour, but it can be mitigated with the verify option. I tried with and without verify and, interestingly, noticed no speed difference (2 seconds, which is statistically insignificant) and no bad data on restoration. So I don't know why verify is off by default.
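To illustrate what the verify option changes, here is a toy Python sketch of hash-keyed block dedup. This is not Obnam's code; the class and its behaviour are invented for illustration. Without verification a hash match is trusted blindly; with it, the stored block is compared byte-for-byte first.

```python
import hashlib

class BlockStore:
    """Toy content-addressed block store keyed by MD5 (illustration only)."""

    def __init__(self, verify=False):
        self.blocks = {}   # MD5 hex digest -> block bytes
        self.verify = verify

    def put(self, block: bytes) -> str:
        key = hashlib.md5(block).hexdigest()
        existing = self.blocks.get(key)
        if existing is not None:
            if self.verify and existing != block:
                # A real collision: same MD5, different contents.
                raise RuntimeError("MD5 collision detected for " + key)
            # Without verify, a hash match is trusted blindly, so a colliding
            # block would silently dedup against the wrong data.
            return key
        self.blocks[key] = block
        return key
```

In this toy version the check is just an in-memory comparison; in a real tool it would mean re-reading the stored block, which is presumably where a verify option could cost time.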
Worrying about this violates Taylor's Law of Programming
Probability[1]:
> The theoretical possibility of a catastrophic occurrence in your program can be ignored if it's less likely than the entire installation being wiped out by meteor strike.
I've seen a lot of sysadmins and programmers nitpick systems that have the theoretical possibility of MD5 or SHA-1 collisions, but a collision is amazingly unlikely to happen in something like a backup system where you're backing up your own data, and not taking hostile user data where the users might be engineering collisions:
"Quite". Let's look at the potential attack. You're running a backup system with user-supplied data, fair enough, and one of your users has:
1) Access to an existing object, or its checksum.
2) The ability to write a *new* object where they intentionally
produce a collision with an existing object.
There's a trivial way to get around this attack in practice, which is that you just lazily write objects and don't re-write an object that exists already. This is what Git does with the objects it writes, which insulates it more from future SHA-1 collision attacks than just the security you'd get from SHA-1 itself.
This means that you've changed an attack where someone can maliciously clobber an existing object to an edge case where their object just won't get backed up.
Assuming of course that the object they want to clobber is either already backed up or processed before the malicious object. They can still attack a new object.
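A minimal Python sketch of the lazy, write-if-absent store described above (names invented, SHA-1 chosen to match the Git example): the first object stored under a given hash wins, so a later colliding object can at worst go un-backed-up; it can never clobber the original.

```python
import hashlib

class LazyObjectStore:
    """Toy Git-style store: objects are keyed by hash and never overwritten."""

    def __init__(self):
        self.objects = {}  # SHA-1 hex digest -> object bytes

    def write(self, data: bytes) -> str:
        key = hashlib.sha1(data).hexdigest()
        if key in self.objects:
            # Lazy write: something already exists under this hash, so keep it.
            # A maliciously colliding object is simply not stored; the object
            # written first can never be clobbered.
            return key
        self.objects[key] = data
        return key
```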
I don't like duplicity very much; it requires you to re-upload everything every so often (because it uses base backups and then diffs on top of that), which won't work for my slow connection and large dataset.
As a heavy btrfs user, I've always had backups on my mind. I run a lab with a handful of busy VMs, all using btrfs. I was frustrated that (at the time) there were no backup solutions that leveraged btrfs, so I created snazzer [1] (one day soon it will support ZFS).
You might scoff, but... btrfs send/receive is insanely fast and painless. To mitigate btrfs shenanigans, snapshots end up on non-btrfs filesystems too. I wrote a tool [2] which produces PGP signatures and sha512sums of snapshots to achieve reproducible integrity measurements regardless of FS.
Of course, in the time it took to polish up snazzer a bit for public release, many [3] other [4] cool [5] solutions [6] have materialized [7]... :)
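This isn't snazzer's actual implementation, just a rough Python sketch of the filesystem-agnostic integrity idea described above: walk a (read-only) snapshot, emit sha512sum-style lines sorted by path, and sign the resulting report with PGP.

```python
import hashlib
import os

def sha512sums(snapshot_root: str) -> str:
    """Emit sha512sum-style lines for every regular file under a snapshot.

    The report is sorted by path so it stays reproducible no matter which
    filesystem the snapshot (or the report) is later copied to.
    """
    lines = []
    for dirpath, _dirnames, filenames in os.walk(snapshot_root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if os.path.islink(path) or not os.path.isfile(path):
                continue
            digest = hashlib.sha512()
            with open(path, "rb") as f:
                for chunk in iter(lambda: f.read(1 << 20), b""):
                    digest.update(chunk)
            lines.append(digest.hexdigest() + "  " +
                         os.path.relpath(path, snapshot_root))
    return "\n".join(sorted(lines)) + "\n"

# The resulting report can then be detach-signed, e.g. with gpg --detach-sign.
```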
> Isn't the real conclusion that all three tools failed the test case at some point?
No backup solution is perfect; anything, including your backups, can fail. The three rules of backups:
* If it isn't backed up, you don't really care about it.
* If you really care about something back it up using at least two unrelated systems, one or more remote and one or more offline (soft-offline will do).
* If you haven't tested the backups, you don't have backups.
> Which backup tools should we use for Linux?
I'm still using simple hand-rolled scripts to manage backups via rsync (for an old but still good tutorial on that sort of thing see http://www.mikerubel.org/computers/rsync_snapshots/), plus LVM snapshots where consistency and/or downtime between backup start and end times might be an issue. I've needed to tweak them a bit over the years as my needs have changed and as I've become more paranoid about the "testing backups" thing, but I'd have had to do that with other tools too.
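For anyone curious what the hand-rolled rsync approach looks like, here's a minimal Python sketch in the spirit of the Mike Rubel tutorial (the directory layout and names are mine, not from the comment): each run creates a dated directory and uses rsync's --link-dest so files unchanged since the previous snapshot become hard links rather than new copies.

```python
import datetime
import os
import subprocess

def snapshot(src: str, backup_root: str) -> str:
    """Create a dated rsync snapshot, hard-linking files that are unchanged
    since the previous snapshot via rsync's --link-dest."""
    os.makedirs(backup_root, exist_ok=True)
    previous = sorted(
        d for d in os.listdir(backup_root)
        if os.path.isdir(os.path.join(backup_root, d))
    )
    stamp = datetime.datetime.now().strftime("%Y-%m-%dT%H%M%S")
    dest = os.path.join(backup_root, stamp)
    cmd = ["rsync", "-a", "--delete"]
    if previous:
        # Unchanged files become hard links into the most recent snapshot.
        cmd.append("--link-dest=" +
                   os.path.abspath(os.path.join(backup_root, previous[-1])))
    cmd += [src.rstrip("/") + "/", dest + "/"]
    subprocess.run(cmd, check=True)
    return dest

# Example (assumed paths):
# snapshot("/home", "/mnt/backups/home")
```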
1. http://www.miketaylor.org.uk/tech/law.html