Rclone is now very prevalent in my infrastructure. Almost all my websites are updated by a CI job that builds the website from a repo and pushes it up to the hosting server. There's an encrypted Rclone config in the repo, and the password for it is a really long randomly generated string saved in the CI as a secret. Rclone with Restic is how most of my servers get backed up, Rclone is how I access my Nextcloud and Google Drive, I have a containerized S3-compatible storage system that actually stores its data on an Rclone remote (I hope `serve s3` gets implemented soon so this setup can be simpler), and much more.
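Roughly, such a CI deploy step boils down to something like this (remote name and paths are made up; RCLONE_CONFIG_PASS is how rclone picks up the config password non-interactively):

    # hypothetical deploy step; "site-host" is whatever remote the encrypted config defines
    export RCLONE_CONFIG=./ci/rclone.conf                 # encrypted config committed to the repo
    export RCLONE_CONFIG_PASS="$CI_SECRET_RCLONE_PASS"    # long random password stored as a CI secret
    rclone sync ./public site-host:/var/www/example.com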
I'm using it so much that I'm running up against the Google Drive API limit even though I'm using my own key.
*  https://github.com/rclone/rclone/issues/3382
I'm planning to upload files from my app and let an rclone cronjob occasionally sync between my backends.
restic is somewhat sensitive to the latency of the backend it uses.
The workflow here is a little different than at Amazon or Backblaze, as we actually built the rclone binary into our environment.
So you can run 'rclone' without actually having it installed yourself:
ssh email@example.com rclone copy s3:/some/bucket /rsync.net/dir
You can also, if you choose, do nothing but "dumb syncs" with rclone since retention can be handled with an arbitrary day/week/month/quarter/year snapshot schedule.
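In practice a "dumb sync" like that is just a plain scheduled command against an sftp remote (remote name and paths here are invented), with retention coming from the snapshot schedule on the destination:

    # nightly dumb sync; older versions live on as snapshots at the destination
    rclone sync /home/me/data rsyncnet:backups/data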
That way, you can change the password by just re-encrypting that file, though if an attacker has already obtained the decrypted key there isn't much you can do: you'd have to re-upload all the data.
So while rclone technically "supports" Amazon Cloud Drive, you can only use it if you already have your own API key that hasn't been revoked.
If people switched to the 'proper' backup service they offer on AWS, that would be Glacier, and they would get significantly less money!
Regardless, I haven't had S3 perf be a problem yet for me.
Hah, I love this. I was having Dropbox sync issues a while back which support couldn't help with, and used rclone to prod and poke the state of my files and folders back into a state the DB client wouldn't choke on.
I am very curious what you use rclone for (how it fits into your backup routine)
With rclone, you can simply upload without having to create a full local 'snapshot' of the 'files-to-upload' first.
Yes, I have a remote VM where I can ssh and use it as a borg server.
The advantage I see in this vs a copy of the repo is that I have two independent backups. If one fails for some reason (defective disk for instance) then I do not copy a faulty backup further.
Makes a snapshot so you can see how things were at a given moment. The only new data are the changes made since.
Then rclone copies that new data over.
BTW how does the sync work? Does AWS, for example, expose a (free or super-cheap) way of getting the SHA1 of a file on S3?
It would perhaps be useful to have a mode where a repo of file hashes is kept in parallel at the cloud provider, as just a file that could be downloaded every day; all the logic is done locally, only the files that differ are uploaded, and then the file-hash repo is updated.
You can also use --fast-list to use fewer transactions at the cost of memory (basically just listing GETs that read 1000 objects at a time).
You can also do top-up syncs using `rclone copy --max-age 24h --no-traverse`, for example, which won't do listings, then do a full `rclone sync` once a week (say), which will delete stuff on the remote that has been deleted locally.
There is also the cache backend which does store metadata locally.
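For example, that top-up-plus-weekly-full pattern might look like this in cron (remote name and paths are placeholders):

    # daily: copy anything modified in the last 24h, without listing the destination
    0 3 * * *  rclone copy --max-age 24h --no-traverse /data remote:data
    # weekly: full sync, which also deletes remote files that were deleted locally
    0 4 * * 0  rclone sync --fast-list /data remote:data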
This isn't that surprising in the case of restic and git-annex, both of which transport over SFTP, but we have actually built into our environment the 'rclone' and 'borg' binaries such that you can execute them remotely:
ssh firstname.lastname@example.org rclone copy s3:/some/bucket gdrive:/what/ever
As bad practice as it is, I'm guessing I'm not the only one with a super weak recycled password for instances like that.
It’s also a great advertisement for Go. A single binary installation. Fast. Runs on every platform.
Even if you're not using any cloud storage, rclone works with the local filesystem, WebDAV, FTP, SFTP, Nextcloud and Minio. To my knowledge, Rclone is also the only free-software solution that implements some MergerFS-like features on Windows.
"Rclone ("rsync for cloud storage") is a command line program to sync files and directories to and from different cloud storage providers."
The following cloud (and other) storage providers are supported:
Google Cloud Storage
Microsoft Azure Blob Storage
The local filesystem
The program is open source and written in Go (aka Golang).
A good document indexer should help me further, if anyone has suggestions...
Might check out something like ripgrep, but that's for searching code and other text-like files. Won't help you out with MS Word docs or anything like that.
rg I use hourly for code-related navigation & searches tho. Good stuff
One of the wonderful things about rclone is that it's the right tool for so many jobs. Much like grep, sed, awk, etc, it is as simple or as complicated as you want it to be.
My primary concern is that it works so well that I forget about it. This is often the case with many automations.
That is why I write documentation for myself.
I'm wondering what cloud provider² that works with rclone offers the most storage per dollar? Caveat: I only need 100GB.
² ideally in Europe
- Scaleway's new AWS S3 Glacier-like storage (75GB free, in Paris)
- OVH's regular object storage vs. "cloud archive" (multiple EU locations)
- Wasabi (S3-compatible, but you pay for a minimum of 1TB) (in Amsterdam)
The "cold" storage options are obviously cheaper, but have bigger problems in my experience of playing nice with backup and sync solutions like rclone.
I personally use b2 with restic.
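For anyone curious, that's just restic's native B2 backend (bucket name, paths and credentials below are placeholders); restic can also talk to any rclone remote via its rclone backend:

    export B2_ACCOUNT_ID=your-key-id           # B2 key ID
    export B2_ACCOUNT_KEY=your-application-key # B2 application key
    restic -r b2:my-bucket:restic init         # one-time repository setup
    restic -r b2:my-bucket:restic backup /home/me
    # or, going through rclone instead of the native backend:
    # restic -r rclone:b2remote:my-bucket/restic backup /home/me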
If you're not familiar with duplicity, it uses GPG to encrypt tarballs of your backup data locally and keeps a separate (also encrypted) index of each tarball, which it caches. This reduces the provider API calls to just the encrypted bundles (payload, indexes); the software then works on the indexes locally to figure out what it needs to back up and makes new bundles to upload. (Restores work the same way: it searches the locally decrypted index caches rather than making API calls.)
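As a rough sketch (GPG key ID, bucket and credentials below are all placeholders), a duplicity run looks something like:

    # back up, encrypting the bundles with the given GPG key
    duplicity --encrypt-key ABCD1234 /home/me b2://keyID:appKey@my-bucket/backups
    # restore: duplicity works from the locally cached indexes to find what it needs
    duplicity restore b2://keyID:appKey@my-bucket/backups /tmp/restore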
Other than that, it's awesome how it works with so many data storage providers. If it helps anyone, the last time I looked at pricing (a couple of months ago), IBM had the cheapest per-GB cloud storage of all the providers. I already have 1TB of Google Cloud for $2/mo, so I'm using that to keep my life simple; however, it's slow as far as data stores go.
>Can rclone do bi-directional sync?
>No, not at present. rclone only does uni-directional sync from A -> B. It may do in the future though since it has all the primitives - it just requires writing the algorithm to do it
The difference between "copy" and "sync" is also a bit subtle:
> Copy files from source to dest, skipping already copied
> Make source and dest identical, modifying destination only.
That is, while "sync" may delete files on the destination so that it mirrors the source, copy will never do so.
I feel a better name for "sync" would have been "clone", conveying both the unidirectionality and the fact that it will make the destination like source.
You're technically not supposed to do this with a NAS but I only have about ~40GB of files so far. If/when I get into the "Many TB" territory, I'll figure something else out..
Is anyone aware of a way to sync a remote directory to a local filesystem without authenticating, either with S3/drive or another cloud storage provider (and using rclone or another CLI tool)?
All of my workstations (laptop/desktop) have Dropbox installed, and yet Dropbox is not available for FreeBSD.
This is where rclone comes to the rescue. There might be other software that can take on the role of Dropbox client, but rclone gives me the flexibility of switching to another provider if I decide to do so in the future.
Bless the author of rclone.
Web search shows that a bunch of Android clients exist for Rclone: the project's wiki points to ‘RCX’. Will have to try it and see.
I dream occasionally of an encrypted file-level RAID-5 made from free storage that the services have already bestowed on me.
I use Rclone on my Android device through Termux which works pretty well. Termux includes Rclone in its repo. I just set it up to `serve webdav` then access it through my file browser app which includes WebDAV support.
*  https://github.com/rclone/rclone/issues/1647
*  https://github.com/termux
*  https://github.com/termux/termux-packages/tree/master/packag...
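For reference, the Termux setup described above is essentially just one command (remote name and address here are placeholders):

    # expose an rclone remote over WebDAV on the phone itself
    rclone serve webdav remote: --addr 127.0.0.1:8080

then the file browser app connects to http://127.0.0.1:8080.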
I've thought about including Cryptomator support in the app by reusing the Java libraries, but licensing issues make that hard. Support in rclone itself would be really great.
BTW: RCX exposes serve as well, and will get Storage Access Framework (SAF) support in the near future. When it works, SAF seems like magic.
*  https://imgur.com/a/cppw03f
I also have this RAID idea, because the free offers are a marketing tool, and providers have no problem shutting off rclone access even for paying customers (happened with Amazon Cloud Drive, Yandex Disk).
There is currently a beta of a new remote type, "multiwrite union" (RAID-1 style). But for RAID-5, storage is just too cheap - some providers are as low as $5/TB-month.
Seems like both support deduplication, which I thought was the advantage of Borg
Borg is for backups. It doesn't support any storage provider.
One can use both of them jointly, creating borg backups and saving them on GDrive for example.
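In practice that's just a local borg repo that gets pushed up afterwards, something like (repo path, remote name and archive naming are placeholders):

    # add a new archive to the local borg repo (deduplicated against earlier archives)
    borg create --stats /backups/borg-repo::'{hostname}-{now}' /home/me
    # then copy the whole repo directory to the cloud remote
    rclone sync /backups/borg-repo gdrive:borg-repo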
You do something like this:
rclone sync /path/to/source remote:destination/current --backup-dir remote:destination/$(date -I)
So for historical backups you can have deduplication, but rclone doesn't deduplicate within the sync itself: if you have two identical files within a directory, rclone will upload them both.
I don't know Borg very well, but you can use it to back up to an rclone mount. I did look into making a borg server for rclone so it could speak the borg protocol directly over ssh. It wouldn't be too hard, but the protocol isn't documented, so it would mean reverse-engineering the Python code.
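A back-up-to-a-mount setup is roughly this (remote name, mountpoint and repo path are invented; how well borg tolerates the FUSE mount will depend on the backend's latency):

    rclone mount remote:borg /mnt/borg --daemon       # FUSE-mount the remote in the background
    borg init --encryption=repokey /mnt/borg/repo     # create the repo on the mounted remote
    borg create /mnt/borg/repo::'{hostname}-{now}' /home/me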
rclone also works smoothly with CI servers.