Syncthing – a continuous file synchronization program (syncthing.net)
545 points by tambourine_man 14 days ago | 230 comments

Syncthing works brilliantly! The web UI is excellent, warns you when you're about to do something ill-advised, and stuff like QR codes makes adding clients and folders fairly easy. The separation between folders and devices is handled well. You can easily have half a dozen shared folders between several computers with any mix-and-match combination including which one is 'authoritative' and so on.

I used to use it on Android to keep my 'camera roll' synced to my desktop. I preferred to set the ignore file to exclude Android's thumbnails directory, but otherwise it worked great. I'd take a photo with my phone and almost before I could open the shortcut on the desktop, boom, the photo was there, since the Android client detects changes in the filesystem.

I also used it to sync my password database, and a cross-platform notebook app's database. Worked flawlessly for all of them.

Ditto for a folder shared between my MacOS laptop and Windows desktop.

Lots of controls, especially in the forked Android client. Don't want the sync client to run on any network other than your home network? Done. Don't want your Syncthing clients to try to do NAT traversal / discovery outside whatever network they're on? No problem.

The biggest bummer is that there's no iOS client. Had to switch to Nextcloud, which so far has been working OK, but the number of people that have problems with the iOS client is quite high (problems such as syncs being so slow it takes days to sync a camera roll), and it's generally sluggish and doesn't work in the background (yes, I know, Apple's fault.) However, Nextcloud does allow me to sync contacts and calendars (I think. I haven't tried to set it up yet.)

One warning: Syncthing really does not like it when you delete the dotfile folder inside the shared folder(s). Don't do that :)

I would not call the web UI "excellent": understanding the relations between paired devices, remote devices, folders you want to sync, and so on is harder than it should be.

Likewise, the Android application is so bad that the best approach is to open the web UI on your phone to do stuff (and sometimes it's the only way to achieve some goals).

There is definitely a lot of room for improvement for easier adoption; yet this is one of the few sync apps that work for me, even behind NAT, with closed ports, and so on.

The first thing I really like about Syncthing is that it does what it is supposed to do very well (apart from the occasional glitch), while not being Dropbox, Nextcloud, ownCloud, etc., but P2P and entirely independent of any external tools/devices/etc.

The second thing I like is that it is a successful FOSS project, meaning it works and is being regularly and often improved by its users.

Your comment, as helpful as you may feel it is, is contradictory to my second - very strong - feeling.

I really think the UI works quite well, and by god, if a better solution is so obvious to you, why haven't you proposed it yet, or - god forbid - made a merge/pull request??

> and by god, if a better solution is so obvious to you, why haven't you proposed it yet, or - god forbid - made a merge/pull request??

Because there isn't any obligation to? Why do you think you would be entitled to ask for that?

"Entitled to ask"?

I think freedom of speech allows for it.

Can I have a million pounds, from you personally, please?

Also, that might be the small nudge which potentially has a user actually contribute in some circumstances.

I'm not sure your question is fair.

Perhaps "entitled to ask" is a poor choice of words. Yes, freedom of speech allows bipson to ask about contributing. It still comes across as entitled. The question was clearly laden with some expectation that Reventlov should go do work for syncthing instead of sharing an opinion on Hacker News.

We're just here having a conversation. The expectation that a user MUST do something for an open source project because they have an issue is a tired take. It's not as though Reventlov is blowing up Syncthing's development team with demands about how to make it better. And, for all any of us know, Reventlov may have already submitted PRs or opened issues for Syncthing. Or, it could be the case that there's a different piece of software suiting their needs better, and that's fine. Or, Reventlov maybe just doesn't possess the technical skill to contribute in that way. And that's okay too.

Bludgeoning people with an open source virtue-stick for sharing an opinion isn't helpful.

Hmm, fair point.

But just as much as you consider the "expectation that a user MUST do something for an open source project" a tired take, I consider the endless ranting by entitled FOSS users an obnoxious trend.

And btw, I never said anything about "must"; I asked why not choose - OK, maybe strongly suggested - another option.

Maybe you took my question the wrong way, but (maybe just as misguided) I took the "opinion"/"discussion" not as opinion but, for the lack of any helpful suggestion, as merely a rant.

I was raised not to needlessly complain about free things without considering taking things into my own hands.

And even if you don't know how to code, suggestions and discussions are better had in an issue tracker, right? Otherwise, what's the point?

If one is not a user, and considers something else to be better, why voice such a strong opinion?

> I consider the endless ranting by entitled FOSS users an obnoxious trend.

There is a massive difference between voicing your opinion about something in a public discussion forum and hounding the developers of a project because they don't fix bugs or implement new features on your say-so. One of those is making conversation, the other is entitlement.

> Maybe you took my question the wrong way, but (maybe just as much misguided) I took the "opinion"/"discussion" not as opinion, but for the lack of any helpful suggestion as merely a rant.

Sometimes, you can recognize that something is bad without knowing the best way to fix it.

> I was raised not to needlessly complain about free things, without considering to take things into my own hands.

Okay, so that was how you were raised and how you operate. I don't see the reply as needless complaining. It is a critique of some specific issues. It was constructive criticism, because it presented a specific set of things that could be improved upon. I happen to agree with those criticisms, even though I generally think Syncthing is a great piece of software.

> And even if you don't know how to code, suggestions and discussions are better had in an issue tracker, right? Otherwise, what's the point?

What's the point that any of us are here talking about anything?

> If one is not a user, and considers something else to be better, why voice such a strong opinion?

Because people make conversation and have opinions. Are you familiar with socializing? It's okay to not like something. It's okay to not like _parts_ of something.

Aren't they just suggesting to open a bug/issue in place of complaining on HN? Seems a fair, small ask to me.

There is a discussion being had. This guy was sharing his opinion about syncthing. That still doesn't obligate him to open a bug or do any work for the project.

Never really had any problem understanding the relationships, and haven't met anyone that has so far. Then again I'm a programmer.

How would you make things clearer?

> Lots of controls, especially in the forked android client.

Here's that fork: https://github.com/Catfriend1/syncthing-android

You can also search up Syncthing-Fork on Google Play.

Yes, that was annoying to me at first. The official client seems to still get updates, so you don't immediately see anything wrong with it, but it has a couple of really annoying issues that have been fixed in Syncthing-Fork for years.

>The biggest bummer is that there's no iOS client.


Okay, I guess I should qualify that and say that there's no iOS client that is actually practical to use.

NextCloud integrates into iOS's Files app pretty tightly as a 'storage provider' or whatever Apple calls it. End result is that my password database is magically up-to-date whenever I go to use it in my password manager, probably because Files sees an app trying to access the file, pings NextCloud to say "yo, is this shit up to date?", and then my password manager opens the file. I don't have to worry myself about background sync and such.

I didn't have to do jack shit to open the database the first time; the built-in iOS file-picker that came up let me select nextcloud as a source, and then my password database. Done. It is two taps to get that database open now - one to hit "passwords" in the keyboard area, and the second is me TouchID'ing to unlock the database.

Looking over the Mobius FAQ, it appears to be vastly inferior, with no guarantee your files are synced at any point in time, and you have to manually push files from Mobius to Files, and then access them in apps from there:

> iOS apps cannot access each others’ files. This means you will need to copy files in and out of Syncthing using the Apple Files app.

I wouldn't do that even if the software were free. And they want me to pay for that? No thanks.

I sometimes have the same problem with Android apps that store their files in their private folders, protected by the OS. I can't use Syncthing to back up those files to my computer. Either the app lets me configure the storage location to something that other apps can access, or I uninstall it and use a different app that does. If the OS gets in the way of what I want to do, oh well - that's one of the reasons why I'm using Android and not iOS. Android gets in the way, but not as much as iOS seems to.

A better approach would be a whitelist of apps that can access another app's storage. GPX tracker app? Add Syncthing to its whitelist, along with the file manager I use instead of the system one.

It's Apple. You are of course welcome to pay their exorbitant development fees yourself, go through the insane review process and publish your own competing app!

You can't publish your own competing app. The desired behaviour requires running a background service (not allowed) that watches arbitrary folders (not allowed).

Third party iOS apps do in fact have access to background tasks. There’s even a variant dedicated to longer-running tasks, like lengthy file syncs.

The difference is that instead of having an ever-running daemon, the developer schedules tasks with the system, and the system decides when to run them based on network availability, battery charge, etc., as well as the app's behavior (badly written/inefficient background tasks and frequent high-intensity task requests are penalized).

I think you lost the plot - you can publish a competing app that uses the Files app for sync, just like NextCloud does.

That is a somewhat combative thing to say.

We see a NextCloud dev explaining that background uploads don't work here: https://github.com/nextcloud/ios/issues/215

And we can see syncthing devs lamenting the many issues with an iOS client here https://github.com/syncthing/syncthing/issues/102

It may be the case that file eventing now works, but a quick check with an iOS dev friend suggests that the filesystem sandboxing is too restrictive to be meaningful anyway.

Further on this issue, consider that a functioning syncthing client is a node in a p2p network, so must be able to advertise and listen to requests in the background as well as the background jobs that NextCloud requires (as NextCloud is centralised it doesn't need node level co-ordination) - so a partially working NextCloud client is good enough, but a partially working Syncthing client is woefully broken.

I guess I'll just have to pay $49.99 a year for iCloud!

Why isn't this a lawsuit yet?

Lawsuit for what? Not providing an API for a feature you want?

It's not like Apple specifically advertised background services, then lied about it.

The device is specifically built like that, and you can buy a different phone if it's that big of an issue.

It feels like complaining your toaster doesn't have a dark enough toast setting that you want. Is there a lawsuit in that?

> you can buy a different phone if it's that big of an issue.

- I can't change my car's satnav. You can buy another car if it's that big a deal.

I think we can agree, the above statement reads silly.

Sometimes, no matter the deal size - buying a new thing isn't always a reasonable option.

Full agreement that the above sounds silly. So really it comes down to the line between the responsibility of the consumer and the duty of the product creator to say what the product can or cannot do.

If you really care about replacing parts in your car like the satnav, should you need to do research before buying instead of assuming it works that way?

Or should we have rules that all sat navs need to be replaceable so consumers don't need to do that?

I honestly can't draw the line myself, I just try my best to identify what I want in a product before purchasing. Especially huge purchases like a car.

There are cars where you can change the built-in sat nav?

Nearly all of them. Changing the media center is a well supported third party customisation for most vehicles.

Yup, almost every single car has this ability, sat nav or not.

For engaging in anti-competitive behavior. Forcing a single App Store on an OS is much less draconian than offering a broken interface to basic OS functionality in order to prop up your own product above others. Imagine if Windows downclocked whenever you wanted to use an Office alternative. This is not much different.

Because you can buy an android device?

Be the change you want to see in the world.

$99/year is hardly exorbitant.

Plus ~30% of your revenue.

Correction: 30% of your revenue for certain business models. Still too much, but let's at least be somewhat accurate. Plenty of apps make tons of money on the App Store without ever paying a dime to Apple beyond the $99.

Perhaps you missed the ~

> Files sees an app trying to access the file, pings NextCloud to say "yo, is this sh* up to date?"

What does it do if you're not connected? Don't mind it checking whether the local copy is up to date, but I'd be concerned if the sync trigger is you opening the app.

Second this.

Works flawlessly, is rather cheap, and the developers encourage donations to the main SyncThing project.

How is "you have to manually copy files around every time you want to use them" "works flawlessly"? Have you not seen how stuff is supposed to work with cloud storage providers in iOS? Because it's indistinguishable in function from local on-device storage.

My use cases may be simpler than yours, but I don't copy files around.

I usually sync PDF and epub files to my iPad and open them directly from the sync folder in Apple Books.

Maybe it works flawlessly inside the limitations that non-Apple iOS sync apps have?

Not available in my region, sadly

Same here in Europe, wtf?

Odd, it appears to be created by a London-based company: https://www.pickupinfinity.com/about-us/

Yes this. Best money I’ve ever spent on software

I tried Syncthing, then I tried Nextcloud, and for the goal of just syncing camera roll and backing up my password manager, they were both a bit of a hassle.

Now I use Minio with FolderSync (an Android app; I use the paid one, but the free version is perfectly capable) to back up my camera roll, and I wrote a very simple WebDAV server in Go to back up my (Android) password manager DB, which only supports WebDAV. I sit NGINX in front of them both to terminate TLS and to handle basic auth for the WebDAV server, though I could easily implement basic auth in that too (again, my password manager only supports basic auth).

If someone can suggest a better password manager for Android with a good UI, good integration (e.g. auto-fill), has a Linux desktop app and can backup to WebDAV or Minio I'd jump in a heartbeat.

I considered Bitwarden, but I don't want to run the Mono/Windows server container, and I don't want to rely on the Rust port which is behind in features and is susceptible to the upstream breaking APIs.

Keepass2Android/KeepassDX with Nextcloud? I don't understand how running Minio with a custom-built WebDAV server is easier than Nextcloud, which has integrated WebDAV access.

I used to run Nextcloud. Nextcloud did a bunch of stuff I didn't need, has a larger attack surface, and usually took an entire afternoon/evening every time I needed to do an upgrade. So far I've spent less time setting all this up, and writing a WebDAV server, than any one upgrade of Nextcloud took me.

Minio is zero-maintenance, if it needs an upgrade I just pull a newer Docker image. My WebDAV server is fewer than 100 lines of Go and extremely easy to maintain.

I run Nextcloud in a docker image (on unRaid). Upgrades are a docker pull away too, never had any issues.

I use Nextcloud on my phone (Android) to back up my camera too; the automatic sync is wonderful and works well. It does stuff I don't need too, but it's pretty easy to set up and use, and it integrates well with the OS (Windows and Mac at least).

> I considered Bitwarden, but I don't want to run the Mono/Windows server container, and I don't want to rely on the Rust port which is behind in features and is susceptible to the upstream breaking APIs.

"The server project is written in C# using .NET Core with ASP.NET Core." https://github.com/bitwarden/server

Seems like alternative DB providers are in the alpha stage right now.

Mono and .NET Core are not the same; .NET Core is one of the more wonderful things I've worked with, if I'm to be completely honest. Then again, I also kinda like PowerShell.

The thing with the .stfolder file is annoying as hell. I use it to sync the build output folder for my Android builds, but sometimes I need to run a clean and it wipes out the folder. Syncthing's developers for whatever reason just can't seem to grasp that sometimes I couldn't care less about a folder's history and just need my files synced.

My fix is to run `touch .stfolder` in the build directory, which works, but seriously, Syncthing? Just put in some reasonable defaults and call it a day.

Otherwise it's perfectly fine.

yes the .stfolder is a minor nuisance most of the time, but it's a lifesaver if for some reason or other your folder gets removed or a mounted device is no longer there, or one of a hundred other things that can go wrong with a folder.

then you _really_ want syncthing to stand back and hold its hands up, instead of syncing deletion events to all connected devices or sending all your friends your entire home directory or something like this.

imo, it's a good safety tradeoff, and like so many of those (e.g. short vs long passwords) they might seem annoying most of the time, but prevent incredible damage that _one_ other time.

I understand all these things because I've done this professionally for quite some time. However, I've just explained that this is my build directory, which means it can be deleted and I won't care. Every build quite literally overwrites all the output files, so it's destructive by its very nature. Syncthing already has an option where it won't sync deletes. I just don't understand why it must absolutely have an .stfolder path. There should be a destructive sync option where it won't care and will just sync.

I suggest you put your build folder inside a synced folder, instead of syncing the build folder directly.

This is actually exactly what I started to do. Was just kind of annoying :)

If you put a non-empty placeholder file in that folder, it won't be empty any more, so it shouldn't be removed.

Yeah, but it doesn't matter. The clean from gradle deletes the folder entirely and creates a new one in its place. I don't care. Just sync the path. That's all I need. Instead Syncthing literally stops syncing ALL folders because "oh no, something went wrong".

For whatever reason open source software always has these weird edge case engineering "solutions" that really aren't that great. If someone was actually a paying customer and asked for this the engineers would just figure something out instead of making excuses for why it is the way it is.

> The clean from gradle deletes the folder entirely and creates a new one in it's place.

Maybe it's possible to .. not do that?

You expect me to not use a basic feature of a compiler?

o_O weird take but k

If a build system produces an adverse outcome, it is often possible to customize it so that it does not delete things which you would rather not have deleted. (I've no idea whether this is specifically easy or not in gradle)

It's easy to open a pull request instead of complaining

Commercial products usually aim to satisfy their paying customers' needs. FOSS is, not always but most times, about satisfying one's own needs. So it won't be that easy to change a developer's mind when they are determined about their way of doing things, even if it is not the best way for others.

It's also kind of arrogant of FOSS advocates to say "just make a pull request" whenever there's criticism of anything FOSS. You could spend weeks working on something only to have the maintainer say no. I've seen this happen, and every time it does I just think to myself, "welp, there's another person who will never make another pull request again".

As a FOSS maintainer I always try to make it clear that before trying your luck with a PR, you should first engage in a conversation with me.

Seems kind of obvious to my eyes, asking, talking, discussing about some change before doing the actual work. But as it seems, "obviousness" is in the eye of the beholder...

That's fine and all except that the typical refrain from internet folks is that if something is free then you are not allowed to critique, comment, or request changes. If you really care about freedom you can handle a little bit of online banter.

No one is saying you're not allowed to critique the software. You're critiquing the people who create that software for free

Then fork it? No one is stopping you.

I assume you don't care about missing out on updates since that's the free work you're complaining about

> The clean from gradle deletes the folder entirely and creates a new one in it's place.

That's really bizarre behavior. If the folder has no files or zero byte files then it might make sense, but if gradle's deleting non-empty files that don't concern it, that seems to be more of a problem with gradle.

I get the point though; if there's no versioning, why bother having an empty folder?

To add, there is also Syncthing for Android TV, which makes it so much easier to transfer files between devices, like to a seedbox.

Oh really? Didn't know that.. very handy!

> The web UI is excellent

The web UI is perhaps the only pain point of Syncthing. It's very confusing around folder paths vs. IDs, especially around the "Default Folder".

Functions like Actions > Show ID are presented poorly.

Importantly, the UI is not intuitive at all. You need to read the docs to understand what you are supposed to do.

There's Mobius, a third party client for iOS: https://www.mobiussync.com/ . I've never used it, so I don't know how good it is.

My iOS Nextcloud is stealthily paused when in the background (a few kB/s, just enough for notifications), as are most iPhone apps, I think. So it is not competitive with Apple's backup.

GitHub repo: https://github.com/syncthing/syncthing

Some previous discussions:

* "Syncthing: Open Source Dropbox and BitTorrent Sync Replacement" https://news.ycombinator.com/item?id=7734114 (7 years ago | 184 comments)

* "Syncthing is everything I used to love about computers" https://news.ycombinator.com/item?id=23537243 (1 year ago | 159 comments)

* "Syncthing: Syncing All the Things" https://news.ycombinator.com/item?id=27929194 (3 months ago | 172 comments)

I have switched from Nextcloud to Syncthing. The former was a resource hog that was eating up too much of everything on my modest VPS.

I share the files on my local network at home between phone, laptop and desktop. My only issues so far:

- I need to manually start Syncthing on the devices, whereas Nextcloud was always running on boot. Not sure if I should start it every time, especially on the Android phone, which cannot sync while away from home. I do not want to drain the battery while outside my LAN, plus the persistent "Syncthing is running" notification is annoying.

- I need to power up at least the laptop or desktop to back up my phone data. One solution that I will try is to install a headless Syncthing instance on my NAS. This way the phone can always sync to the NAS, and then the laptop/desktop can sync from the NAS when needed.

The joys of peer2peer! At least, I am sure that my moderately sensitive files are not leaving my local network to be stored on my VPS, which can be hacked someday.

I also need to figure out how to install Wireguard on my router to allow to access my NAS from my phone while on a trip, but that's another story.

On Ubuntu (and probably other distros that use Systemd as well) you need to enable the user service manually once for your user:

    systemctl --user enable --now syncthing.service
This will start syncthing whenever your user session is active. By default that means when you login. If you run this command:

    sudo loginctl enable-linger $USER
Then your user service manager will be started at boot and run regardless of whether you are logged in or not. (The default behavior makes more sense on desktop and laptop computers.)

More information: https://docs.syncthing.net/users/autostart.html#using-system...

The Debian/Ubuntu packages ship all the needed files. You only need to activate them.

Thanks! I will use the enable-linger on the NAS running 24/7, and configure it using a remote browser (probably via SSH tunnel). This way I can always sync to NAS even if everything else is off, and the laptop/desktop will sync from the NAS when required.

I saw just now that there are also system units (started with "systemctl enable --now syncthing@USER_NAME_HERE.service", note the lack of --user). If you use those then you don't need to activate linger. Using those is probably considered the more standard way of doing things on an unattended server.

Nice catch, this use case seems common enough.

Supervisord also works really well for this purpose https://docs.syncthing.net/users/autostart.html#using-superv...

I've dealt with the Android issue by configuring Syncthing to only run when the phone is charging. I'm in the habit of leaving my phone on the charger every night anyway, so I'm regularly synced. It would be great if there were a 'Sync now' function, though, that would run the sync and then go back to ordinary mode afterwards, for the rare case that I need my folder synced right away.

I have a hybrid setup, where I don't use the sync features of Nextcloud but do use the contacts and calendar. Syncthing is set up to sync to the Nextcloud files folder, and a Nextcloud config option (forget the exact option) allows Nextcloud to immediately pick up the files that Syncthing dropped.

For the contacts and calendar I still use the VPS, but with radicale [0] instead of Nextcloud. The resource usage is minimal, probably the attack surface too. And I like the do one thing and do it well mantra (in this case, CalDAV/CardDAV).

[0] https://radicale.org/3.0.html

Another alternative for contact, calendar, and task syncing is EteSync, which is end-to-end encrypted:


I access radicale via an nginx proxy with proper HTTPS (LetsEncrypt) and an HTPASSWD over that, so the setup is secure. However radicale default setup is not secure, I agree.

Radicale is a great FOSS solution for syncing via the CardDAV and CalDAV protocols. I simply prefer using end-to-end encryption for extra protection where possible, which is why I think EteSync is a good alternative. E2EE means that even if the server is compromised, my contacts, calendars, and tasks won't be.

Baikal is also really nice and comes with a handy GUI


> [Nextcloud] was a resource hog that was eating up too much everything on my modest VPS.

I've seen comments like this on multiple occasions. Is it really that bad? I guess if there were a flagrant performance bug it would have been sorted out with time, so the only remaining explanation is that the code is really poorly laid out, or has design flaws that cause so many performance problems for users.

However, next to each "it's a resource hog" there is always another comment saying "it works fantastically well", so I never know what to think (short of trying it myself, but I'm not that interested in the issue).

The Nextcloud Windows desktop client is very poorly written. We needed an in-house file hosting solution to replace really old Samba shares and Nextcloud seemed like a great option because we also were looking for a non-Google alternative to Google Docs/Drive (one of our clients has anti-Google policy).

After moving all files to the Nextcloud server, everyone (a company of 20+ people) installed the Nextcloud Windows desktop client and connected to it with only the virtual files option set (nothing is transferred until opened/edited). The initial "sync" after first setup took about 2-3 days to complete for everyone. Reason? Probably too many small files. The share contained about 300+ GB of data in around 500k files (not sure about that number, only a guess). Nextcloud checks every single file with the server, which computes a hash for it that is stored locally. You can find threads on their community forum or Reddit by people experiencing the same slow behavior when they have a lot of small files. There were some promises from the devs to resolve it with the latest client update, but we didn't verify it yet.

The hogged resource for me is disk space. The more recent versions are bundling more and more features like a web-based word processor etc. that are just useless baggage for me. (I do like the calendar and contact sync.) I have a small hosted site with a tiny personal Nextcloud instance, just for sharing a handful of files, plus a small Wordpress site, and I had to upgrade from 500 MB to 1 GB to fit it in and also be able to run the upgrader.

I tried both and stayed with Nextcloud because of some features (especially sharing and specifically private links).

I have a ~5-year-old server (Intel Skylake, 8 GB RAM) on which I run a bunch of Docker containers (about 30). One of them is Nextcloud, and the load average is 0.8-1.0. This is for an average of 2 users (me and the rest of my family, who in total use the server about as much as I do myself).

Not a very scientific comparison, but I never had any performance issues.

Try out SyncTrayzor for the laptop and desktop. It lets you set it to auto-start on boot.

Thanks for the suggestion. This is an app for Windows [0], but everything I use is Linux, so the setup to start Syncthing at boot should be straightforward.

I am more worried about the Android part. The best solution would be to start Syncthing when on my LAN, via matching SSID, and disable it when I am out of range.

[0] https://github.com/canton7/SyncTrayzor

Syncthing-Fork has this feature, among other enhancements [0].

[0] https://github.com/Catfriend1/syncthing-android-fdroid

Honestly, that's my bad; I didn't check before suggesting it and had it in my head that it was cross-platform.

> One solution that I will try is to install a headless Syncthing instance on my NAS

I've got an old computer which I run things on occasionally. I installed Syncthing on it for a while and it worked well. I stopped leaving that computer on, so now when there's something that needs to be synced between my desktop and laptop, I just use my phone as the always-on server in the middle.

Are you using Mobius Sync for this? I'm tempted to try it for the above reasons, but I wish it was open source.

If you just need to access your NAS from outside your home network so your phone can syncthing to it, you can use syncthing in global mode -- it will use public relay servers to sync between phone and NAS. But maybe you need more than that / knew this already.

> I also need to figure out how to install Wireguard on my router to allow to access my NAS from my phone while on a trip, but that's another story.

I can recommend https://tailscale.com/

It just works.

I'm not affiliated with the channel in any way, but Lawrence Systems did a really good overview. It's what convinced me to switch to it and use it for LAN sync, with my server keeping everything from my phones, laptops and desktop. It's also good for those who might have trouble understanding how it's supposed to work. I think my biggest difficulty was with the fact that normally it doesn't necessarily have the concept of a "server" per se, but once I understood this and organized it in such a way that my server WOULD behave like a server, the rest was pretty easy.


And the same URL on an Invidious instance - https://yewtu.be/watch?v=O5O4ajGWZz8

This guy works on his channel "Raid Owl" and uploads very nicely put-together videos. I found the channel while searching for home-server technology videos, and I find that he does a great job (but somehow hasn't been discovered by a greater public, judging by the smallish subscriber count)

He's using Syncthing for backups (by enabling a feature that keeps N copies of each file), which while possible, I hadn't thought of as a possibility


Lawrence Systems videos are great! I've learned so much from them and have implemented quite a few things at $WORK or home due to their videos.

Syncthing is great, but bear in mind in the default configuration (unless you set STNOUPGRADE=1) it will automatically download a replacement binary and exec it, granting the developer remote code execution on your machine at any time, Solarwinds-style.


No-interaction autoupdate really should be opt-in. It's super dangerous software otherwise. Anyone with release credentials could own many many thousands of machines with a single action.

Not on Debian though.

...if you install via apt. Binaries downloaded from the website still have this issue.

PS: Thank you Debian maintainers.

Is this something to worry about if installing via the package manager of a different distribution?

I have to pitch Resilio Sync (formerly Bittorrent Sync) here.

It's a proprietary, more polished alternative that offers a freeware license for personal use.

My biggest issue with Syncthing was that there was no auto-discovery, if you had 25 or so computers across a group of friends, you needed to set up 25^2 connections, which wasn't really feasible. Not sure if this is still a thing, but Resilio solves this perfectly, you just input a secret key and computers just connect automatically. The developers also provide turn servers, which is useful in restrictive NAT scenarios. End-to-end encryption is supported, both in the free and the pro versions, although the latter gives you much more control over who can do what to the folder.

I use Resilio as a "big Dropbox" with dozens of gigabytes of data per folder, shared across a small-ish group of trusted friends. It doesn't cost us anything and works really well.

The largest folders I've seen had over a terabyte of data in them, but you probably want the pro version (with selective sync) for those.

Syncthing supports introducer devices. If you mark a device as introducer, you'll automatically add all devices the introducer knows to your list. So basically adding a new device to an existing group requires everyone to agree on one introducer, and everyone will add the new device automatically.

NAT traversal and relay servers exist too. So seems it's pretty much on par with Resilio, plus open source.
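For the curious, the introducer flag is a single attribute on the device entry in config.xml (the id and name below are placeholders); the same toggle is a checkbox in the web UI's device settings:

```xml
<!-- Sketch of a device entry in Syncthing's config.xml.
     The id and name are placeholders. introducer="true" means
     devices this peer knows will be added to your config too. -->
<device id="DEVICE-ID-PLACEHOLDER" name="home-server" introducer="true">
    <address>dynamic</address>
</device>
```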

For my specific use case, there's probably no device that could be an introducer.

It's all PCs, none of them online 24/7.

Besides, setting all that up requires way more technical knowledge than most people have. Sure, I could do it I guess, but explaining Resilio configuration to complete non-techies is hard enough.

Introducer devices don't have to be online all the time. There is also no need to limit the number of introducer devices.

I have a folder that's synced between my laptop, desktop and phone, and each has the other 2 devices as introducer.

Syncthing has been end-to-end encrypted for years. And it now also supports encryption at rest.

+1 for Resilio Sync. Purchased a license 7 years ago and it still works today - how refreshing is that?

Lots of people love Syncthing and that is great for them, but I've always found btsync to be easier to use.

You know btsync is just reskinned IPFS.

Considering BittorrentSync had 10 million installs before IPFS was even born, I'd say you're wrong

Used Resilio for a long time and it would constantly start to beachball on my Macs. Syncthing has been pretty flawless by comparison.

I replaced Dropbox with it a year ago and I'm satisfied; it's a great project. But at the moment I couldn't recommend it to even knowledgeable friends, let alone family members:

- Sometimes long-deleted folders pop up when older version clients connect

- Syncing or connecting sometimes takes ages to start for no obvious reason, and the client doesn't give enough info on what is happening

- Relation between clients is not obvious, the powerful configuration options are hard to decipher after some time

- The project could start a (freemium?) cloud client option, maybe leveraging other providers for storage. Most people will not be able to set up a server, and without that it's just way less practical for the layman than any mainstream option.

Add to that:

- The handling of ignored files is confusing, to say the least.

I happened to comment about it just some day ago, so to not repeat the comment -> https://news.ycombinator.com/item?id=28760365

> Sometimes long-deleted folders pop up when older version clients connect

What causes that?

I really like Syncthing, I have it installed on my computer, phone, and VPS, but it's definitely not made for non-technical users. I have a "family" folder with a few important documents that are also shared on my girlfriend's computer and phone, but I use Resilio Sync for those.

While Resilio is slightly less secure (debatable) than Syncthing, their UI and UX are vastly better, and virtually no different than something like Dropbox or Google Drive. I fully understand that Syncthing's developers have different priorities than UI and UX, but I'd love to see it become more mainstream.

Syncthing is one of the biggest reasons why I use Android today.

That sounds strange, but when you have old spare phones, they're very useful for this. Even on the daily-driver phone that I use, it's so good at keeping files in sync with my other PCs.

I use this for syncing game files (like MMO extensions and UI files: the WTF folder when I used to play WoW, or the FFXIV folder in my Documents) and my code workspace folder between 4 devices, one being a homelab server.

For the game files it's just my gaming laptop and gaming desktop

I set up a connection for the folder from every device to every other device. Syncthing has a unique ID for the share, so they all share the same ID for that folder and it does all the work of syncing. None of my project files are ever out of date on any device. Lets me sit down at any device and just work (desktop, Windows laptop, Linux laptop, or SSH into the server).

Another great thing is that if you set it up in WSL2, it just works; all it has to do is run on a different port. Lets me sync my workspace files into there and get full Linux-filesystem file watching (the \\wsl$\ path doesn't work well from the Windows side).

With Android 11, I think this breaks my use case. Android 11 apps, with a few exceptions, can only access files they create. You can add in a "grant all" access, but google limits that to certain apps, like file sharing apps.

My use case is that I want to "de-cloud" my life and use Syncthing to push things like WireGuard keys to my phone, along with backing up photos and pushing down audiobooks and music. I bought a Fairphone 3+ with the idea of it being a forever phone, but I couldn't get things working on LineageOS or /e/ OS on Android 11. I could stay on Android 10, but things will eventually migrate to target the new SDK.

I also have the issue with syncthing not being able to write to an external SD card, which was one of the selling points for me for the Fairphone.

I would love a solution to this, because I'm really stuck in the water on this.

Maybe I'm not reading it right, but I have Android 11, and with my gallery app I can see all image files saved to the SD card from all sorts of programs, which contradicts your statement. There is still a lot of shared space on the file system.

It's a good thing in general that apps cannot access another app's internal files.

I'm not an Android dev, but IIUIC your photos and images are made accessible to the gallery app through the ContentProvider API, i.e it does not have direct access.

Only specific apps will get approval to offer full access to the disk. A gallery app, maybe. An audiobook app from some unknown developer? Maybe not.

I wouldn't expect apps to require android 11 anytime soon (years). Android 4.4 was supported forever. Things get ugly when 10 doesn't receive any updates anymore.

But even with the versions before 11, things were a chore: many vendors put aggressive power-saving features in their phones that are hard if not impossible to disable (for some Chinese brands), sometimes the app didn't restart after the app store pushed an update, etc.

I tried to de-cloud my dad with it, but eventually the app would stop running for some reason after a few days or weeks and nothing would get synced. Otherwise, the technical foundation, especially the NAT traversal, is superb. I ran a mini PC in his home with Syncthing that would receive all the photos from the phone. Zero maintenance, no port forwardings that I would have to redo if the provider switches routers or the settings reset or whatnot, no DynDNS for reachability. He was travelling in Thailand, Vietnam and elsewhere and the pictures just kept coming in. A reliable version of PhotoSync that uses the Syncthing protocol under the hood would freaking kill it.

I don't think the changes in Android 11 are catastrophic. I'm using Syncthing on Android 11, and Syncthing can read/write to any directory in the filesystem under /sdcard, with the exception of the /sdcard/Android/data directory (private app data). Syncthing does request the "All files access" permission that gives it access to most of the filesystem.

I'm not sure about external SD cards, though, since I haven't tested this.

Thanks for the tip. I was using that data directory, assuming it was representative of the full access.

What if you use termux and backup from its storage?

Since recent Android updates my Termux setup hasn't been able to access other applications' data. I use rsync via that for backing up content; it still works for things like camera output, but not for things like the files & config from my podcast app.

Though it turns out I've been negligent and not noticed the Play installed versions are no longer maintained, so they may have worked around whatever the issue is. I must get around to reinstalling via f-droid...

I can confirm that the F-Droid version has fixed this issue a while ago.

Hmm. Removed and reinstalled from f-droid and android/data is still inaccessible. Will do a bit more research and see if there are known problems with it on my device...

Syncthing is also excellent for sharing between groups.

I wish it worked better for occasional file sharing (for example, to transfer files between phone and PC). I don't want to keep it running in the background; it just sucks too much power, and on the phone I never need to keep other stuff synced. But the time it takes for the first connection to happen after starting it can be on the order of minutes on a known network, and dozens of minutes if not. Not usable for this purpose.

The Android client is also not very well thought out. You cannot "share to Syncthing" until it's active, so if you want to queue a file for sharing before your network connectivity is up, you can't. You have to start it, share, then shut it down. If you want to do that on an unknown network you didn't whitelist, you have to whitelist the network as well. Kind of pointless. You'd think you could also start it when it prompts you to change the settings, but that doesn't work either.

I ended up disabling all these advanced settings, as they clearly end up being a hindrance. I just start it manually and quit it, kind of defeating their entire purpose.

For fast occasional sharing there's KDE Connect.

I don't see a command-line or daemon backend. Not really comparable, even if I happened to use kde.

Well, it's not comparable; you wanted a different use case. There's kdeconnect-cli, and kdeconnect seems daemon-like enough to me, since it's entirely detached from its myriad UIs.

Glad to see another option out there for file sync. Browsing the FAQ I noticed:

To further limit the amount of CPU used when syncing and scanning, set the environment variable GOMAXPROCS to the maximum number of CPU cores Syncthing should use

Are they using an environment variable shared by other Go programs? Seems like bad practice... why aren't they using a variable or switch that's unique to Syncthing instead?

> Are they using an environment variable shared by other Go programs? Seems like bad practice...

Huh? So only set it for syncthing.

Easy on Linux, not so much (and messier) on Windows.

If you want to set it just for syncthing, just type this when you run it from the command line:

GOMAXPROCS=2 syncthing

If you were to use that variable, you wouldn't set it globally. You would set it only for the syncthing process.

For example, when you use systemd to run syncthing, you can configure the environment for syncthing in the unit file.

Other Go programs on your machine won't be affected.
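As a concrete sketch (the unit name and drop-in path are assumptions; they depend on how your distribution packages syncthing):

```ini
# Hypothetical drop-in, e.g. /etc/systemd/system/syncthing@.service.d/override.conf
[Service]
Environment=GOMAXPROCS=2
```

After a daemon-reload and restart, only the syncthing process sees the variable.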

I wonder if my annoyance stems from a difference between Windows vs. Linux conventions?

I hark back to when environment variables were set globally on a machine (back in DOS days).

I realize NTVDM emulation introduced the ability to configure more granular ones (e.g. system vs. user), and you can now create a shortcut[1] that launches an intermediate step which tailors the environment for a single process. But I always viewed that as a kludgey convenience for users, not a design paradigm for developers. (Also, with at least six[2] different places to check for where Windows envvars come from, I typically favor the explicitness of command-line switches.)

What happens if someone's creating a script that will shell out simultaneously to multiple Go programs? Do they all inherit the parent's environment variables? Does the script now have to tailor the same-named variable for each one?

Am I doing something wrong or is this easier on Linux and us Windows schmucks got left behind?

Thanks for any elucidation...

[1] https://stackoverflow.com/questions/3036325/can-i-set-an-env...

[2] https://flylib.com/books/en/

> What happens if someone's creating a script that will shell out simultaneously to multiple Go programs? Do they all inherit the parent's environment variables?

By default, every process inherits the environment of its parent process. So if you set an environment variable (e.g. with the export command in bash), then every process you launch from the shell will have the same value for that environment variable.

However, you can set environment variables for each command that you execute.

So for example in a shell script you could do something like this:

    # set the environment variable's value to 4
    # valid for all subsequently executed commands
    export GOMAXPROCS=4
    # run syncthing but limit it to 2 processors
    # defined this way, the variable only affects this one command
    GOMAXPROCS=2 /usr/local/bin/syncthing

    # run some other go program (placeholder command)
    # it sees the exported value of 4 again
    some-other-go-program
The beauty of using environment variables is that they are much more flexible than command line options.

For example, assume I execute a script from someone else, who did not set GOMAXPROCS. Then I could just set the GOMAXPROCS variable and then execute the script. I could change the config without needing to edit the script!
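The inheritance is easy to demonstrate; here the inner shell stands in for someone else's script:

```shell
# The child process inherits the variable without the "script" being edited.
GOMAXPROCS=4 sh -c 'echo "child sees GOMAXPROCS=$GOMAXPROCS"'
# prints: child sees GOMAXPROCS=4
```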

It's a Go runtime setting. It has nothing to do with Syncthing.

I'm not a Go programmer, but couldn't they achieve the same thing without name collision by reading a custom environment variable at startup and setting runtime.GOMAXPROCS?

Why would they do it?

This setting is managed by the Go runtime, and the Go runtime defines a specific env var for customizing it. If you want different settings for different Go programs, no problem: it is not a global config but a per-process one, so set the env var to the desired value only for the syncthing process.

Syncthing has been working for me flawlessly over the last year. Very impressed by it, especially compared to Dropbox, which is a CPU hog and has become a nagging salesperson on my computer.

Highly recommended.

On Android, Syncthing-fork is a bit nicer because it has a native app interface instead of the embedded web interface. However, on Android 11 the Play Store release doesn't work with folders other than /Android/media or the app's data folder or something like that—because of Google's continuous crusade against file access, effective for the newer API level that Syncthing-fork is targeting. But the F-Droid version doesn't have this problem.

Note that the mainstream Syncthing on Android works so far only because they're targeting an older API version. They will hit the same restriction if they migrate to the new API, unless they implement SAF or whatever.

Syncthing-Fork can't send broadcast mDNS packets on Android 11. As a result, two Android 11 devices can't even connect to each other over a LAN, and are reduced to sending all traffic to the Internet and back, which wastes bandwidth, probably hurts privacy, and throttles transfer speeds. The workaround I found was to hard-code one device's IP address in the other. Perhaps Syncthing is unaffected; I didn't test.

Link: https://forum.syncthing.net/t/local-discovery-problem-on-and...

Hmmm, I guess it only affects discovery between phones, because I just recently re-set up syncing with two laptops, and it worked fine without entering the IDs or ips or anything—just clicked on the discovered IDs.

Of course, this issue still highlights how the ‘open-source’ and ‘customizable’ OS does only what Google allows it to do.

I would like to give a shout-out to `unison` for syncing needs. It can be used with a daemon, but I use it happily with cron and it has been absolutely bulletproof* for years. I have never noticed its resource usage at all after running the initial sync.

I am never modifying the same file on two systems simultaneously, so I can afford to sync hourly with no issues. While not everyone can do this, I suspect many are in the same boat.

* Try to use literally the exact same binary where architectures allow, due to build dependencies.

I have used both. I'm a BIG Unison fan.

It refused a few files with unusual characters in their file names, so you'll want to periodically check the logs, as it is stricter than the host OS. Apart from that, all good.

Do you happen to have a list of these unusual characters?

It seems to be anything which isn't either ASCII or UTF-8. My father's PC had files written by old versions of OpenOffice (when it was called StarOffice), with "native" names but before his PC was set up to use UTF-8. Early versions of Syncthing didn't have problems with those files, but later versions refused to back them up. I had to rename all those files one by one to get it working again.

If you ever have to do this again, check out convmv; it can convert, for example, latin1-encoded file names into UTF-8-encoded ones.
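A sketch of the invocation, wrapped in a hypothetical helper (this assumes convmv is installed, and the flag set is from its usual documentation):

```shell
# Hypothetical helper: recursively rename latin1-encoded file names to UTF-8.
fix_latin1_names() {
    # convmv only previews by default; --notest makes it actually rename.
    # Run once without --notest first to check what it would do.
    convmv -f latin1 -t utf8 -r --notest "$1"
}
```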

wish I'd known about this sooner - thanks

Syncthing is not as good as Resilio Sync at syncing huge folders quickly. Resilio Sync can sync 10 MB/s on my wired home internet, but Syncthing usually uses only about 1-8 MB/s. Not sure why.

I started using Syncthing a while ago and it's blazing fast. If you do LAN syncs, try setting fixed IPs for your devices in your router, and when pairing them in Syncthing use their IPs in the address field instead of leaving it on dynamic.

So in my case, with my server and two devices, I would put my server's IP in as tcp:// in the settings page.

You can then even disable most of the discovery options (if not all). Curious if you will see improvements!
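In config.xml terms this means replacing the default dynamic address on the device entry (the id and LAN address below are placeholders; 22000 is Syncthing's default sync port):

```xml
<!-- Sketch: a fixed address skips discovery for this device. -->
<device id="DEVICE-ID-PLACEHOLDER">
    <address>tcp://192.168.1.10:22000</address>
</device>
```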

Regarding backups... Syncthing is great for propagating changes to all devices, but does not offer much in terms of backups and previous versions of files. (And I'm fine with that. I consider that to be out of scope for Syncthing.)

I've been thinking about solving the backup part by letting one of the devices perform btrfs snapshots on its storage. Does anyone know about any write-up that describes or compares such solutions?

Setting up a cron job / timer unit that commits the synced folder to a git repo is also an alternative, but it has some disadvantages: it stores contents twice (once in the sync folder, once in the git object database), and pruning older versions is difficult (git is mainly built to keep all history).
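A minimal sketch of that cron-driven git approach (the function name and commit message format are mine, not anything standard):

```shell
# Hypothetical cron target: commit the current state of a synced folder.
snapshot_sync_dir() {
    cd "$1" || return 1
    git add -A
    # Commit only when something actually changed, so the log stays readable.
    git diff --cached --quiet || git commit -q -m "sync snapshot $(date -u +%Y-%m-%dT%H:%M:%SZ)"
}
```

Called hourly from cron or a systemd timer, it gives crude versioning at the cost of the duplicated storage mentioned above.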

Syncthing has many options for versioning. Ability to keep versions of files is THE reason I use it.

> I've been thinking about solving the backup part by letting one of the devices perform btrfs snapshots on its storage.

I do zfs snapshots on my storage server (home desktop), which is in Syncthing's "receive only" mode. I use it to sync my camera roll and Whatsapp media files from my Android phone. I use pyznap for snapshot management.

> Setting up a cron job / timer unit that commits the synced folder to a git repo is also an alternative

Another option would be to backup the synced folder using a backup tool like borg, restic, duplicacy etc. They have options to prune the snapshots.

I have a script that takes a btrfs snapshot, uses borg to back up the consistent snapshot (to rsync.net), then removes the snapshot again. It then instructs Borg about which snapshots to retain, so my space shouldn't grow without bound.

For extra extra snapshotting fun, rsync.net then takes ZFS snapshots of the borg backup. So even if my snapshot cleaning gets over-enthusiastic, I'll still have a snapshot of the snapshot.
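That flow can be sketched roughly like this (all paths, the repo location, and the retention numbers are assumptions, not the actual setup described above):

```shell
# Sketch: snapshot, back up the consistent snapshot with borg, clean up.
btrfs_borg_backup() {
    src=/data/sync                 # btrfs subvolume holding the synced folder
    snap=/data/.backup-snap        # temporary read-only snapshot
    repo=user@rsync.net:backups    # hypothetical borg repository

    # A read-only snapshot gives borg a consistent view of the files.
    btrfs subvolume snapshot -r "$src" "$snap" || return 1
    borg create "$repo::sync-$(date -u +%Y%m%d%H%M)" "$snap"
    rc=$?
    btrfs subvolume delete "$snap"

    # Bounded retention, so the repository doesn't grow without limit.
    borg prune --keep-daily 7 --keep-weekly 4 --keep-monthly 6 "$repo"
    return $rc
}
```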

I’ve set up a single always-on desktop computer as my backup hub. Syncthing is used to sync everything I care about to that machine, and Backblaze pulls everything to the cloud as versioned backups. I use a .stignore file to exclude some things from syncing (like node_modules), and similarly Backblaze is configured to exclude some things from the cloud backup. There’s also a regular local backup to an external drive (Time Machine). The setup took some work to figure out, but it is effectively zero maintenance and robust.
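For reference, the .stignore mentioned above lives in the root of the synced folder; the patterns here are just examples:

```
// Comments in .stignore start with //
node_modules
.DS_Store
*.tmp
```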

I recently started using this to sync a few folders full of code and documents between three of my computers. So far, it works very nicely, to the point that I don't think about which machine I sit down at, because I know it'll already have the stuff I'm working on, all ready to go, without any fussing around or needing to run `git pull` in a bunch of different folders to get caught up to where I left off. I like that there's no cloud dependency and no single point of failure.

I am a huge Syncthing fan. I even recommend it to non-techie friends, and (after a bit of initial hesitance from having to learn the concepts) they are quite fond of it too, especially as a smartphone backup solution.

It's especially useful for our band, where we share large amounts of .wav files and recording projects and so on. It even works in the rehearsal room (which has no internet) through an old wifi router I set up there, that each of our laptops connects to.

Love it. I have an "always-on" Raspberry Pi with a USB drive just running Syncthing on my home network. That way, I don't have my data "in the cloud" somewhere, I don't have to open my home network for external access to my NAS, etc., and I always have an up-to-date "source" when I start Syncthing on a tablet or mobile (where I don't have it running all the time).

I stopped taking them seriously when they wrote - “ I saw that it complained that the .stfolders did not exist. So I went ahead and created them, but by mistake, I created empty text files instead of empty folders. Did not realise it and left it to work in the background.”

First, backups! Never trust any software to not have a bug.

Secondly, if you don’t know what you are doing, maybe don’t create random config files and hope the software deals with it. That’s in the “neither power user, nor dumb user” category that most software will not be designed for.

> Secondly, if you don’t know what you are doing, maybe don’t create random config files and hope the software deals with it.

This sounds like a variant of the old "you're holding it wrong" blurb.

The problem you're commenting on actually describes a multitude of problems with the application, especially how it causes data loss if its config happens to get corrupted.

The comment about the need to back up data also seems not to take into account that the service is already being used to back up data across multiple devices.

It may have already been used as a backup system, but it was not made for that, and they even answered that in their FAQ. It was this question that stopped me setting it up, because it made me realise I was after a backup solution, not just a synchronisation tool

"Is Syncthing my ideal backup application?

No. Syncthing is not a great backup application because all changes to your files (modifications, deletions, etc.) will be propagated to all your devices. You can enable versioning, but we encourage you to use other tools to keep your data safe from your (or our) mistakes."


This is why I prefer the 'conservational' sync option from Google: if you delete a file, it prompts you to confirm whether you also want to delete it remotely. Sure, it doesn't prevent bad overwrites, but that's what git is for.

No, I wasn’t saying the thing about creating config files in the same vein as “you are holding it wrong^tm”. Any software would crap out if you manually give it wrong/empty config files - that is only reasonable. Sure, if you’re worried about the config file getting corrupted without user interference, that’s a bug. I’d also be pissed if I were the author of the project and someone complained by saying - “I deleted the config file your software uses, and I am not a fan of your software because it doesn’t magically heal itself and protect against users such as myself”.

No! Syncthing is _not_ a backup system. They explicitly say so, and it should also be obvious that a “syncing system” may try to backup some stuff, but you cannot rely on anything that can _delete_ stuff to be called a backup.

I'm the author of that blog post. So,

- I did not use it as a backup system. Due to circumstances, my backups were borked and I was too busy/poor to fix them up. Syncthing was used to sync my passwords, Org mode notes, and a specific two-way sync folder with the phone. Importantly, the first two were read-only, i.e. the phone only had read access. Regardless, stuff on the remote, i.e. the computer, was deleted.

- I don't blame Syncthing because I lost my stuff or because I had no backups. I blame them because their software failed destructively. It should of course fail when the config file is weird, but deleting files on two devices despite permissions only allowing read-only access is unacceptable. This is kind of equivalent to nginx deleting your htdocs because a symlink in sites-enabled was broken, or because /.../sites-enabled was a file instead of a folder. If you don't have a backup of htdocs, that's your fault, but that doesn't mean it's a sane way to fail. The best way is to panic and tell the user: "I'm not touching files before this is fixed". Even rm(1) in coreutils has this sort of precaution, not allowing you to run "rm /" willy-nilly.

- I should admit that I don't really make that distinction very clear in the blog post itself tho. It was written with anger after the incident, so I was not as nuanced/clear as I could've been.

- The point of the blog post was to advise against relying on Syncthing and to point out that its failure modes were coded sloppily. To be fair, it was more like a note to self, because I never had analytics on my website, so I had no idea if anybody read it with any frequency. I've seen it linked from a couple of places recently though, and I haven't changed my mind, partly because I haven't been keeping up after seeing attitude like https://github.com/syncthing/syncthing/issues/4345

Afterwards I did go back to using Syncthing for a while to sync files between laptop and phone, but confined to a specific share folder only. After that brief period I've been relying on KDE Connect to push files between a Linux laptop and an Android phone. I botched backups a couple more times though, as the big banner on the website testifies.

That's a tragic incident. However, it doesn't look like the author enabled file versioning in Syncthing, which would have retained a copy of all of the deleted files on the computer:


I used to be a user of both Syncthing and Resilio Sync, but i've replaced both with regular cloud storage with Cryptomator or rclone+crypt instead (Cryptomator for clients).

It turns out that the power consumption of keeping redundant hardware running at home is about twice as high as just buying the same storage in the cloud, and then you need to figure in the cost of hardware as well.

I.e. Microsoft Family 365 can be had for €60/year (with HUP) and offers 6x1TB of OneDrive storage. With power at €0.44/kWh, you have a power budget of 15 W before your home server costs more than the cloud offering.
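The 15 W figure checks out, taking the comment's assumptions of €60/year and €0.44/kWh:

```shell
# Convert a €60/year budget at €0.44/kWh into an average continuous wattage.
awk 'BEGIN { kwh = 60/0.44; printf "%.0f kWh/year = %.1f W continuous\n", kwh, kwh*1000/8760 }'
# prints: 136 kWh/year = 15.6 W continuous
```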

I run a NAS with syncthing and a dappnode at home. The dappnode runs Storj, which I allocated about 2TB. It is paying me now about 6€/month, which is barely enough to cover the electricity costs and the cost of the drive. It certainly hasn't helped to cover any of the cost from the NAS.

Decentralized solutions are always going to be less cost-effective than central services. We still should do it anyway. Optimizing for cost is a mistake. It makes us overly dependent on the big companies. It makes us fragile and susceptible to any change in their terms.

> It makes us fragile and susceptible to any change in their terms.

Which is why you should backup your data locally (or somewhere else), but even if self hosting, you should still have a backup of your data.

I personally backup everything at home, and if/when their terms change, it's simply a matter of restoring data onto the new solution (local or remote), and i'm up and running again.

Yes, of course. Syncthing is not a backup solution. Luckily the amount of data that I have that is considered private and valuable is not that much (~100GB of family photos and videos, some scanned documents) so my backup strategy has been good old SneakerNet. I have a couple of thumb drives that I leave with family, and occasionally get them back to check/refresh.

I keep my data in the cloud, a local (versioned) backup, and for irreplaceable data like photos, i also burn identical M-disc media at least every year. One stored at home, another in a remote location.

While i will probably have worse problems to deal with in case the cloud disappears and my local data is gone as well, chances are that once those are solved, my family photos will not be a problem after those :)

I think for me the original comment was not just about the data, it was also about "what if the cloud disappears".

Financially, it might make sense to pay MS a few bucks a year for a service. Philosophically, the idea is much harder to stomach.

I am far from being a card-carrying member of /r/DataHoarder, but I don't like the idea of supporting (much less depending on) streaming services, and I would pay for a subscription service if it were based on open source.

I use far less than 15W with an old laptop or RasPi; their cost is negligible over their useful lifetime.

The 15W doesn't account for hardware. As soon as you add in hardware (and redundancy), you're looking at a lower budget.

If I were to self-host something similar, I would use a server (could be an RPi 4) with a set of 2×6TB drives in RAID 1 (for availability), and a remote backup somewhere (so another 6TB drive). The cost of a single 6TB hard drive (Seagate IronWolf 6TB, €202 at Amazon) could pay for 3 years of Family 365.

So we're looking at 6 years of Family 365 to buy the hard drives alone (and 9 more months if you buy an RPi 4). I'm not counting the backup drive, as you'll need that even when running in the cloud.

Assuming a lifespan of 5 years and an average operating power of 5W, you're looking at 44 kWh per hard drive per year, so 2 × 44 × 5 = 440 kWh over 5 years, or €194 (at €0.44/kWh).

So the total cost of running your own setup over 5 years would be:

€404: 2 × 6TB hard drives

€194: power consumption (not counting server power)

That's €598 for 5 years, or about €120 per year, and that's also assuming no hardware breaks down.

Compare that to the €60 that Family 365 costs per year.
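The estimate above can be written out as a quick sketch, using the same assumed figures (€202 per 6TB drive, ~44 kWh per drive per year at ~5W continuous, €0.44/kWh):

```python
# Sketch of the 5-year self-hosting cost estimate, using the assumed
# figures from the comment above: EUR 202 per 6TB drive, ~44 kWh per
# drive per year (~5W continuous), EUR 0.44/kWh for power.
drive_price_eur = 202
drives = 2
kwh_per_drive_per_year = 44
years = 5
price_eur_per_kwh = 0.44

hardware_eur = drives * drive_price_eur                                  # 404
power_eur = drives * kwh_per_drive_per_year * years * price_eur_per_kwh  # ~194
total_eur = hardware_eur + power_eur
print(round(total_eur), round(total_eur / years))  # ~598 total, ~120/year
```

Roughly twice the €60/year cloud price, before counting the server itself or any failed hardware.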

Are you sure that encryption software has no vulnerabilities?

What if encryption keys are stolen?

I could say that both projects are open source, but no, I haven't audited them. Cryptomator has been audited by Cure53 (https://cryptomator.org/audits/2017-11-27%20crypto%20cure53....)

In any case, I guess it depends on your threat model. I have no illusion that a sufficiently determined attacker will gain access to my files, but that's true whether I store the files at home or in the cloud. We're talking files accessible "on the move" here, so it's not like I can just airgap the server.

The cloud offers far better physical security than what I have at home, and if not better, at least equal network security, with dedicated teams on board to resolve issues.

The (major) cloud also offers far more geographical redundancy (Google 3 sites, Apple 2-3 sites, Microsoft 2 sites), each with redundancy in power/internet/hardware, as well as fire/flood protection.

And of course, the most common reason for broken encryption is a weak password, so don't do that.

Files encrypted with rclone were actually plaintext for about a year:


Because a developer didn't know about the pitfall of seeding a random number generator from the current time.
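The pitfall is easy to demonstrate. Here is a hedged Python sketch (not rclone's actual Go code, and the timestamp is a made-up stand-in) of why a clock-seeded generator is weak:

```python
import random

# Hypothetical stand-in for int(time.time()) at file-creation time.
creation_time = 1_600_000_000

# Two "independent" generators seeded from the same clock value produce
# identical output: the "random" password is fully determined by the seed.
a = random.Random(creation_time)
b = random.Random(creation_time)
password_a = "".join(a.choice("abcdefghijklmnopqrstuvwxyz") for _ in range(16))
password_b = "".join(b.choice("abcdefghijklmnopqrstuvwxyz") for _ in range(16))
assert password_a == password_b

# An attacker who can narrow the creation time to within a year has only
# ~31.5 million one-second seeds to try, a trivially small search space.
print(365 * 24 * 3600)  # 31536000
```

This is why key material should come from a cryptographically secure source (e.g. the OS entropy pool) rather than any seedable pseudo-random generator.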

Cryptomator takes its encryption from a Java library. I don't know the quality of that ecosystem; searching for it on HN doesn't return much.

With the cloud, you need to clarify whether the provider and governments are in your threat model or not.

> Files encrypted with rclone were actually plaintext for about a year:

It had a severe weakness, yes, but it was not plaintext. From the CVE: "dictionary of all possible passwords with about 38 million entries per password length". To attempt to break the encryption, an attacker also still needs to defeat the cloud provider's security measures, like 2FA.

> Cryptomator takes encryption from a Java library. I don’t know the quality of this ecosystem.

Considering that it's used by banks/governments, I'd say it's probably either full of backdoors or at least somewhat OK :)

> With cloud, you need to clarify if the provider and governments are in your model or not.

I have no illusion that I can keep encrypted files out of government hands, or away from any other sufficiently motivated attacker. One could argue that the data might even be better protected in the cloud than in my home. Also: https://xkcd.com/538/

If storing illegal content is "your thing", you'd be better off looking for other solutions.

My personal need is storing files that are maybe not generally sensitive, but that I consider sensitive, like photos of my family, tax documents, etc. These are all files that would probably not make much of an impact if they were publicly available, but I prefer keeping them private.

The thing about encryption is that if you don't trust it, it doesn't make much sense to use it at all. That goes for both cloud and private. The threat vectors are different, but one could argue that a datacenter network is probably better protected than the average user's Zyxel/Netgear/whatever router, which probably has about a dozen unpatched CVEs, along with an ISP "backdoor" if supplied by the ISP.

This seems to rely on inotify, which is not always triggered when watched files change.

In the following scenario realtime syncing would then fail silently:

- You have a server A which contains files which you are editing.

- You have set up realtime syncing between server A and a machine B.

- On server A you have a Docker container which runs Samba and hosts the directories of server A that contain the files you are editing.

- You are editing those files on a machine where you mounted those shares. Upon saving, the files on server A would be modified, but inotify would not trigger and the files would not get synced to machine B.

Syncthing has two methods for detecting file changes: watching (inotify) and scanning. Both are enabled by default, with scanning once per hour (plus or minus a random element), configurable per folder. https://docs.syncthing.net/users/syncing.html
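The periodic scan is essentially a recursive walk that compares each file's metadata against the last-seen state, which is what catches changes that never produced an inotify event. A rough, hypothetical Python sketch of the idea (not Syncthing's actual scanner, which also hashes file contents in blocks):

```python
import os

def scan_for_changes(root: str, last_seen: dict) -> list:
    """Return paths whose (mtime, size) changed since the previous scan.

    `last_seen` maps path -> (mtime_ns, size) and is updated in place.
    A periodic walk like this catches modifications that never produced
    an inotify event, e.g. writes that went through a Samba/CIFS layer.
    """
    changed = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            st = os.stat(path)
            signature = (st.st_mtime_ns, st.st_size)
            if last_seen.get(path) != signature:
                changed.append(path)
                last_seen[path] = signature
    return changed
```

Running something like this hourly bounds how long a silently missed change can go unsynced, at the cost of a full tree walk.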

Excellent, that's good to know.

Did this happen to you?

This seems like a very contrived example of a problem.

It didn't happen with Syncthing, because I'm not using it (yet!), but it did happen while editing files normally that were watched by inotify.

I'm kind of assuming that it's the containerization of Samba causing the problem, since I have not installed Samba directly on the server, but it could also happen without Docker.

Apparently any CIFS mount is affected https://lists.samba.org/archive/linux-cifs-client/2009-April...

I use Syncthing to sync my KeePassXC db file between laptops and my Android, always on a local network. For the most part I like it. I get a lot of conflicts though, and I can't use ediff (or similar) because the file is encrypted. Curious if anyone has figured out how to resolve conflicts with KeePass db files, because I don't think Syncthing's conflict resolution can handle them very well.

KeePassXC has built-in db file merge support! It's hidden somewhere in the menus. I've never personally had to do something more complicated than "I added a new entry on two different devices before syncing", but it's worked well for me so far.

>It's hidden somewhere in the menus

File > Merge with database

I saved my database's password as an entry just to quickly merge it with the .sync-conflict version. It takes 2 seconds, and I don't worry about losing anything anymore. Works beautifully for me. KeePass could check for such conflicts (Syncthing, Dropbox, or similar) in the directory where the open database is stored and offer to auto-merge.
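Detecting such conflicts is straightforward, since Syncthing names conflict copies with a predictable pattern. A hypothetical Python sketch of the check being suggested (the function name and approach are illustrative, not part of any KeePass client):

```python
import glob
import os

def find_sync_conflicts(db_path: str) -> list:
    """Find Syncthing conflict copies of a file.

    Syncthing names conflict copies like
    passwords.sync-conflict-20230101-123456-ABCD123.kdbx, i.e.
    stem + '.sync-conflict-<date>-<time>-<device>' + original extension.
    """
    directory = os.path.dirname(db_path) or "."
    stem, ext = os.path.splitext(os.path.basename(db_path))
    pattern = os.path.join(directory, f"{stem}.sync-conflict-*{ext}")
    return sorted(glob.glob(pattern))
```

A client could run this when opening a database and, if the list is non-empty, prompt the user to merge each conflict copy.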

I really like Syncthing. I mainly use it for syncing my passwordstore directory and a few other small files that I like having synced. Once set up, I basically forget about it, unless I have to go check why a password is the old one; in that case it is usually Android killing Syncthing due to low power mode or something, which requires me to manually restart it.

It's sufficient for my needs. I use Joplin with it, as well as no-frills file syncing across my home network. Just a note on revision history: it's better to configure this feature in the note-taking app alone while disabling it in Syncthing.

My only gripe is that this utility may soon stop working as Android continues to update its convoluted storage schemes.

Can you share a reference wrt evolving Android storage schemes?

Android 11+ enforces scoped storage, which restricts access to the filesystem in many cases.[1] Syncthing requests the "All files access" permission[2] to get around this, but the /sdcard/Android/data directory (private app data) is still off limits.[3]

[1] https://source.android.com/devices/storage/scoped

[2] https://developer.android.com/training/data-storage/manage-a...

[3] https://github.com/syncthing/syncthing-android/issues/1638

Syncthing is really amazing. I have been using it for nearly two years to sync notes on all of my devices: three laptops, an iPad, a Raspberry Pi, and a Pixel 2. The most difficult thing was finding a solution that works on the iPad; I ended up having to use an app to remotely access and edit files on my Raspberry Pi. It's kinda clunky but gets the job done.

The only thing that keeps me from using it is the lack of an official iOS client. Once they get that, I'll be all over it.

What about https://www.syncany.org/? It seems to be abandoned, but I really like the idea of using arbitrary storage for bidirectional sync, e.g. using any SSH (SFTP) server as a Dropbox alternative.

Nothing but good things to say about Syncthing. I'm only using it on a couple of computers to keep a small amount of things in sync, but it always works flawlessly.

Something that interests me even more is that the company behind it, Kastelo Inc., possibly bears its name from the Esperanto word for 'castle'.

Can anyone confirm?

You're probably right. Kastelo previously launched a service called Arigi that was later discontinued. The announcement notes that "arigi" means "to amass · to gather · to put together" in Esperanto.


Or it's a tech startup that just named itself 'Castle', spelled with a K to make it unique.

Is it possible to access SD card folders on the phone by now? That used to be the major drawback for me some years ago.

A couple of questions.

* Is there an iOS app on agenda?

* Can I choose something like S3 or even Dropbox (yes!) to be a central server (more precisely a peer)?

* There doesn't seem to be much progress on an official iOS app, but it is something that users are asking for.[1] An unofficial closed source app called Möbius Sync does exist for iOS.[2]

* S3 and Dropbox are not officially supported,[3] but you can always sync one of your Syncthing nodes to cloud storage on your own.[4]

[1] https://github.com/syncthing/syncthing/issues/102

[2] https://www.mobiussync.com

[3] https://forum.syncthing.net/t/how-to-sync-files-with-aws-s3-...

[4] https://news.ycombinator.com/item?id=18135903

...the whole point of Syncthing is that you avoid "the cloud" and all its privacy nightmares. I'm baffled that people do this.

That's why I put "yes" in parentheses.

Syncthing supports end-to-end encryption, so you can use cloud storage with an encrypted folder for connectivity (no need for relays, NAT traversal, etc.). Clients sync through an always-on peer.

Off-site backup?

I could never get this to work on my home network. ¯\_(ツ)_/¯

If you're using Windows, make sure Windows doesn't think your network is public. My dad's computers thought they were on a public network and so wouldn't see each other directly (they were syncing, though, very slowly via an external relay).

You have to force it to use wlan or wireless. It solves the problem. Also, wait for a few moments for it to be able to detect your other device.

Is there an obvious way to use this with rclone?

... I actually use rclone mount _instead_ of sync for cloud storage. I tried Syncthing a long while ago after finding Nextcloud unreliable, then stumbled across a comment somewhere that talked about the mount feature in rclone (https://rclone.org/commands/rclone_mount/). This is probably only practical for me now because we've got a 1G/1G pipe to the Internet. On the internal network I'm back to using NFS mounts (or Samba for the Windows machines). Syncthing is good, but over time I've found all sync solutions to fall short. For backups I use a combination of rsync to the file server on both Linux and Windows, and then rclone out to the cloud.

I guess it is more one or the other depending on the use case.

I've been synchronizing local data to remote data using NFS and Syncthing for a while now, instead of doing batches of rclone, if you were referring to that use case.

What are you trying to do?

How fast is this vs rsync over ssh?

Lack of an iOS app is a dealbreaker.

Switching to Android, then?!

only if I can filter what to sync locally

Here's Syncthing's documentation on specifying files to ignore or include using the .stignore file:


This also works in the web and Android clients, which expose .stignore through the "Ignore Patterns" setting.

Google, really.

Yes you can.
