Interesting that this came up today; I finally finished getting real backups going for my laptop's homedir.
Basically, I ended up using duplicity [1], which can back up to a mounted filesystem, WebDAV, FTP, SSH, or S3. It handles encryption and incremental backups, and compresses everything on the backup medium (into encrypted 5 MB archives, with a separate index file). That bundling is why I used it: davfs2 is very slow at creating files, and my homedir is mostly very small files. I initially tried rdiff-backup, but it was just way too slow creating all those files. Duplicity, however, is nice and fast: 1 GB of data compressed down to 85 of those 5 MB archives, and the initial backup only took half an hour (plus an hour or so for davfs2 to sync the data up to the server; damn slow DSL...). The incremental backups are also quick: usually only a few megs of data transfer, and about a minute to run.
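For the curious, the whole setup boils down to a couple of commands. Roughly this (bucket name made up; I go through davfs2 myself, but duplicity can also talk to S3 directly via boto):

  # duplicity reads these for GnuPG encryption and for the S3 backend
  export PASSPHRASE='my gpg passphrase'
  export AWS_ACCESS_KEY_ID='...'
  export AWS_SECRET_ACCESS_KEY='...'

  # first run is a full backup; later runs are incremental automatically
  duplicity ~/ s3+http://my-backup-bucket

  # restoring just swaps source and destination
  duplicity s3+http://my-backup-bucket ~/restored-home

  # (run "duplicity full ..." occasionally to start a fresh chain)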
Anyway, I don't see much reason to pay someone else to do this for you. Get duplicity, get an S3 account, and enjoy.

Now it's time to rsync up my music collection. I have 50G of storage to burn through, and the 200M full backup of my homedir just isn't doing it :)

[1] http://duplicity.nongnu.org/
Duplicity provides incremental backups; it doesn't provide snapshotted backups. Snapshotted backups are the "best of both worlds" between full backups and incremental backups -- you get the semantics of full backups (each individual backup can be restored or deleted without touching the other backups) along with the efficiency and performance of incremental backups (if only a few files have changed since your last backup, taking a new backup is very fast).
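To make that concrete: with tarsnap, every archive behaves like a full backup even though it is stored like an incremental one. A sketch, with made-up archive names:

  tarsnap -c -f home-monday ~/      # first archive
  tarsnap -c -f home-tuesday ~/     # stores only the blocks that changed
  tarsnap -d -f home-monday         # delete Monday's archive...
  tarsnap -x -f home-tuesday        # ...and Tuesday still extracts fine

With a chain of incrementals, deleting the full backup at the base of the chain would break everything that comes after it.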
It may be that duplicity is adequate for your needs, but I did consider it before I started writing tarsnap, and decided that duplicity wasn't good enough for me.
I'm not sure exactly what you're asking here... but I'll try to answer some of the questions which you might be trying to ask. :-)
Yes, I am aware of brackup. Yes, brackup appears to do snapshotted backups. No, tarsnap is not brackup. Yes, there are some similarities in the implementations. Yes, I believe that tarsnap is superior to brackup, for several reasons -- one being that it is better at recognizing unmodified data within modified files. There are other points where I suspect tarsnap comes out ahead as well, but I haven't read enough of the brackup code to be certain exactly how brackup does everything.
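You can see that behaviour for yourself with --print-stats; "bigfile" here stands in for any large, mostly-static file you have lying around:

  tarsnap -c --print-stats -f test1 bigfile
  # append a few bytes to bigfile, then:
  tarsnap -c --print-stats -f test2 bigfile
  # the new-data figure reported for test2 stays tiny, because the
  # unchanged parts of bigfile are recognized and not stored twice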
If that doesn't answer your question... could you elaborate a little bit about what you'd like to know? (Maybe two words instead of one...)
You don't have to back up all of your data -- many beta testers have said things like "I have 400 MB of source code and documents which I'm going to back up, and 5 GB of music which I might back up... and 100 GB of ripped DVDs which I'm not going to bother with".
You can do incremental backups using tarsnap if you really want, using the --newer-mtime option (tarsnap can do anything that tar can do) -- but you don't actually want that. Tarsnap automatically does snapshotted backups (I posted some explanation of the difference a few minutes ago at http://news.ycombinator.com/item?id=183213), so all you need to do is tell tarsnap to create another backup archive, and it will magically avoid storing multiple copies of the same data.
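So the daily routine is just one command (the naming scheme here is made up; the second command shows the tar-style incremental for comparison):

  # snapshotted: a new archive per day, deduplicated against the others
  tarsnap -c -f home-$(date +%Y%m%d) ~/

  # tar-style incremental, for the people who really want one:
  tarsnap -c -f home-incr --newer-mtime '2008-06-01' ~/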
What happens if the backup service goes down? Does the client have a way to recover backups from S3 directly (or from the data files I've pulled down from S3)?
If the backup service goes down, you can't get at your backups. I have no intention of ever letting the backup service go down; and the data I have stored on S3 is enough that I can bring up a new server instance even if I lose everything else.
As long as S3 doesn't lose any data, your backups are safe.