Thanks for asking; you're right, there are thousands of backup programs already.
The "key thing" that led me to code my own is that, with nearly all the solutions I tried, data was resent over the network when a big file was moved to another dir and renamed, WHEN used in encrypted-at-rest mode.
It's the case for rsync (--fuzzy only helps when the file is renamed but stays in the same dir), duplicity, and even rclone in encrypted mode (see https://forum.rclone.org/t/can-not-use-track-renames-when-sy...: "Can not use --track-renames when syncing to and from Crypt remotes").
> The "key thing" that led me to code my own is that, with nearly all the solutions I tried, data was resent over the network when a big file was moved to another dir and renamed, WHEN used in encrypted-at-rest mode.
restic (and I suppose borg, which is similar) solves this problem and goes even further by chunking your files, then hashing and deduping the chunks: chunks with the same hash aren't resent. This is great e.g. for backing up VM images or encrypted containers where only a small part of the file changes; only that small part will be resent between snapshots. The chunking algorithm is "content-defined", so it can probabilistically and quite efficiently detect shifted chunks and duplicated chunks across different files.
(Naturally, all this machinery also handles the simple cases of renamed and duplicated files on your filesystem.)
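To make "content-defined" concrete, here is a toy sketch of the idea. restic itself uses a Rabin fingerprint; this version uses a simpler Gear-style rolling hash, and all parameters (mask, min/max chunk sizes) are arbitrary illustration values, not restic's:

```python
import hashlib, random

MASK = (1 << 13) - 1          # boundary test: ~1 hit per 8 KiB on average
MIN_CHUNK, MAX_CHUNK = 2048, 65536

# one pseudo-random 64-bit value per byte value (deterministic table)
GEAR = [int.from_bytes(hashlib.sha256(bytes([v])).digest()[:8], "big")
        for v in range(256)]

def chunks(data: bytes):
    """Yield content-defined chunks: a boundary is cut where a rolling
    hash of (roughly) the last 64 bytes matches a bit pattern, so the
    boundaries move with the content instead of sitting at fixed offsets."""
    start, h = 0, 0
    for i, byte in enumerate(data):
        h = ((h << 1) + GEAR[byte]) & 0xFFFFFFFFFFFFFFFF
        size = i + 1 - start
        if (size >= MIN_CHUNK and (h & MASK) == 0) or size >= MAX_CHUNK:
            yield data[start:i + 1]
            start, h = i + 1, 0
    if start < len(data):
        yield data[start:]

# Insert two bytes at the front of a file: only the first chunk changes;
# almost all later chunk hashes stay identical and would not be resent.
rnd = random.Random(0)
a = bytes(rnd.getrandbits(8) for _ in range(200_000))
b = b"XY" + a
ha = {hashlib.sha256(c).hexdigest() for c in chunks(a)}
hb = {hashlib.sha256(c).hexdigest() for c in chunks(b)}
print(f"shared chunk hashes: {len(ha & hb)}/{len(ha)}")
```

The shift-invariance comes from the hash window: bytes older than 64 positions have shifted out of the 64-bit accumulator, so a boundary decision depends only on nearby content, and chunk boundaries realign right after an insertion.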
So I wanted to solve this problem, and it works. See point #3 of https://github.com/josephernest/nfreezer#features.
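The general idea behind that point can be sketched like this (not nFreezer's actual code; `plan_sync` and `remote_index` are hypothetical names for illustration): if the remote already stores a blob with the same content hash, a moved-and-renamed file needs only a metadata update, not a re-upload.

```python
# Sketch of rename/move detection via content hashes. A moved+renamed
# file keeps the same hash, so it is recognized without resending data.
import hashlib
from pathlib import Path

def file_hash(path: Path) -> str:
    """SHA-256 of a file, read in 1 MiB blocks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(1 << 20), b""):
            h.update(block)
    return h.hexdigest()

def plan_sync(local_files, remote_index):
    """remote_index: content-hash -> remote path of already-stored data.
    Returns (uploads, renames); a rename is a metadata-only operation,
    so no file data crosses the network for moved/renamed files."""
    uploads, renames = [], []
    for path in local_files:
        digest = file_hash(path)
        if digest in remote_index:
            if remote_index[digest] != str(path):
                renames.append((remote_index[digest], str(path)))
        else:
            uploads.append(str(path))
    return uploads, renames
```

For encrypted-at-rest storage this only works if the stored index maps a hash of stable content (e.g. the plaintext hashed locally before encryption); a tool that re-encrypts non-deterministically and hashes ciphertext loses rename tracking, which seems to be exactly the rclone Crypt limitation quoted above.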