I used to use it on Android to keep my 'camera roll' synced to my desktop. I preferred setting the ignore file to exclude Android's thumbnails directory, but otherwise it worked great. I'd take a photo with my phone and almost before I could open the shortcut on the desktop, boom, the photo was there, since the Android client detects changes in the filesystem.
I also used it to sync my password database, and a cross-platform notebook app's database. Worked flawlessly for all of them.
Ditto for a folder shared between my MacOS laptop and Windows desktop.
Lots of controls, especially in the forked Android client. Don't want the sync client to run on any network other than your home network? Done. Don't want your Syncthing clients to try to do NAT traversal / discovery outside whatever network they're on? No problem.
The biggest bummer is that there's no iOS client. Had to switch to Nextcloud, which so far has been working OK, but the number of people that have problems with the iOS client is quite high (problems such as syncs being so slow it takes days to sync a camera roll), and it's generally sluggish and doesn't work in the background (yes, I know, Apple's fault.) However, Nextcloud does allow me to sync contacts and calendars (I think. I haven't tried to set it up yet.)
One warning: Syncthing really does not like it when you delete the dotfile folder inside the shared folder(s). Don't do that :)
Likewise, the Android application is so bad that the best way to do anything is to open the web UI on your phone (and sometimes that's the only way to achieve some goals).
There is definitely a lot of room for improvement for easier adoption, yet this is one of the few sync apps that work for me, even behind NAT, with closed ports, and so on.
The second thing I like is that it is a successful FOSS project, meaning it works and is being regularly and often improved by its users.
Your comment, as helpful as you may feel it is, is contradictory to my second - very strong - feeling.
I really think the UI works quite well, and by god, if a better solution is so obvious to you, why haven't you proposed it yet, or - god forbid - made a merge/pull request??
Because there isn't any obligation to? Why do you think you would be entitled to ask for that?
I think freedom of speech allows for it.
Can I have a million pounds, from you personally, please?
Also, in some circumstances that might be the small nudge that actually gets a user to contribute.
I'm not sure your question is fair.
We're just here having a conversation. The expectation that a user MUST do something for an open source project because they have an issue is a tired take. It's not as though Reventlov is blowing up Syncthing's development team with demands about how to make it better. And, for all any of us know, Reventlov may have already submitted PRs or opened issues for Syncthing. Or, it could be the case that there's a different piece of software suiting their needs better, and that's fine. Or, Reventlov maybe just doesn't possess the technical skill to contribute in that way. And that's okay too.
Bludgeoning people with an open source virtue-stick for sharing an opinion isn't helpful.
But just as much as you consider an "expectation that a user MUST do something for an open source project" is a tired point, I consider the endless ranting by entitled FOSS users an obnoxious trend.
And btw, I never said anything about "must"; I asked why not choose - OK, maybe strongly suggested choosing - another option.
Maybe you took my question the wrong way, but (maybe just as misguided) I took the "opinion"/"discussion" not as an opinion but, for the lack of any helpful suggestion, as merely a rant.
I was raised not to needlessly complain about free things without at least considering taking things into my own hands.
And even if you don't know how to code, suggestions and discussions are better had in an issue tracker, right? Otherwise, what's the point?
If one is not a user, and considers something else to be better, why voice such a strong opinion?
There is a massive difference between voicing your opinion about something in a public discussion forum and hounding the developers of a project because they don't fix bugs or implement new features on your say-so. One of those is making conversation, the other is entitlement.
> Maybe you took my question the wrong way, but (maybe just as much misguided) I took the "opinion"/"discussion" not as opinion, but for the lack of any helpful suggestion as merely a rant.
Sometimes, you can recognize that something is bad without knowing the best way to fix it.
> I was raised not to needlessly complain about free things, without considering to take things into my own hands.
Okay, so that was how you were raised and how you operate. I don't see the reply as needless complaining. It is a critique of some specific issues. It was constructive criticism, because it presented a specific set of things that could be improved upon. I happen to agree with those criticisms, even though I generally think Syncthing is a great piece of software.
> And even if you don't know how to code, suggestions and discussions are better had in an issue tracker, right? Otherwise, what's the point?
What's the point that any of us are here talking about anything?
> If one is not a user, and considers something else to be better, why voice such a strong opinion?
Because people make conversation and have opinions. Are you familiar with socializing? It's okay to not like something. It's okay to not like _parts_ of something.
How would you make things clearer?
Here's that fork: https://github.com/Catfriend1/syncthing-android
You can also search up Syncthing-Fork on Google Play.
NextCloud integrates into iOS's Files app pretty tightly as a 'storage provider' or whatever Apple calls it. End result is that my password database is magically up-to-date whenever I go to use it in my password manager, probably because Files sees an app trying to access the file, pings NextCloud to say "yo, is this shit up to date?", and then my password manager opens the file. I don't have to worry myself about background sync and such.
I didn't have to do jack shit to open the database the first time; the built-in iOS file-picker that came up let me select nextcloud as a source, and then my password database. Done. It is two taps to get that database open now - one to hit "passwords" in the keyboard area, and the second is me TouchID'ing to unlock the database.
Looking over the Mobius FAQ, it appears to be vastly inferior: there's no guarantee your files are synced at any point in time, and you have to manually push files from Mobius to Files, then access them in apps from there:
> iOS apps cannot access each others’ files. This means you will need to copy files in and out of Syncthing using the Apple Files app.
I wouldn't do that even if the software were free. And they want me to pay for that? No thanks.
A better way would be to create a whitelist of apps that can access the storage of other apps. GPX tracker app? Add Syncthing to its whitelist, along with the file manager I use instead of the system one.
The difference is that instead of having an ever-running daemon, the developer schedules tasks with the system, and the system decides when to run them based on network availability, battery charge, etc., as well as the app's behavior (badly written/inefficient background tasks and frequent high-intensity task requests are penalized).
We see a NextCloud dev explaining that background uploads don't work here: https://github.com/nextcloud/ios/issues/215
And we can see syncthing devs lamenting the many issues with an iOS client here https://github.com/syncthing/syncthing/issues/102
It may be the case that file eventing now works, but a quick check with an iOS dev friend suggests that the filesystem sandboxing is too restrictive to be meaningful anyway.
Further on this issue, consider that a functioning Syncthing client is a node in a p2p network, so it must be able to advertise and listen for requests in the background, on top of the kind of background jobs that NextCloud requires (NextCloud, being centralised, doesn't need node-level co-ordination) - so a partially working NextCloud client is good enough, but a partially working Syncthing client is woefully broken.
Why isn't this a lawsuit yet?
It's not like Apple specifically advertised background services, then lied about it.
The device is specifically built like that, and you can buy a different phone if it's that big of an issue.
It feels like complaining that your toaster doesn't have as dark a toast setting as you want. Is there a lawsuit in that?
- I can't change my car's satnav. You can buy another car if it's that big a deal.
I think we can agree, the above statement reads silly.
Sometimes, no matter the deal size - buying a new thing isn't always a reasonable option.
If you really care about replacing parts in your car like the satnav, should you need to do research before buying instead of assuming it works that way?
Or should we have rules that all sat navs need to be replaceable so consumers don't need to do that?
I honestly can't draw the line myself, I just try my best to identify what I want in a product before purchasing. Especially huge purchases like a car.
What does it do if you're not connected? Don't mind it checking whether the local copy is up to date, but I'd be concerned if the sync trigger is you opening the app.
Works flawlessly, is rather cheap, and the developers encourage donations to the main Syncthing project.
I usually sync PDF and epub files to my iPad and open them directly from the sync folder in Apple Books.
Now I use Minio with FolderSync (an Android app; I use the paid one, but the free one is perfectly capable) to back up my camera roll, and I wrote a very simple WebDAV server in Go to back up my (Android) password manager DB, which only supports WebDAV. I sit NGINX in front of them both to terminate TLS and to handle basic auth for the WebDAV server, though I could easily implement basic auth in that too (again, my password manager only supports basic auth).
If someone can suggest a better password manager for Android with a good UI, good integration (e.g. auto-fill), a Linux desktop app, and the ability to back up to WebDAV or Minio, I'd jump on it in a heartbeat.
I considered Bitwarden, but I don't want to run the Mono/Windows server container, and I don't want to rely on the Rust port which is behind in features and is susceptible to the upstream breaking APIs.
Minio is zero-maintenance, if it needs an upgrade I just pull a newer Docker image. My WebDAV server is fewer than 100 lines of Go and extremely easy to maintain.
I use Nextcloud on my phone (Android) to back up my camera roll too; the automatic sync is wonderful and works well. It does stuff I don't need too, but it's pretty easy to use and set up, and it integrates well with the OS (Windows and Mac, at least).
"The server project is written in C# using .NET Core with ASP.NET Core."
Seems like alternative DB providers are in an alpha stage right now.
Mono and .NET Core are not the same; .NET Core is one of the more wonderful things I've worked with, if I'm to be completely honest. Then again, I also kinda like PowerShell.
My fix is to run `touch .stfolder` in the build directory, which works, but seriously, Syncthing? Just put in some reasonable defaults and call it a day.
Otherwise it's perfectly fine.
then you _really_ want syncthing to stand back and hold its hands up, instead of syncing deletion events to all connected devices or sending all your friends your entire home directory or something like this.
imo, it's a good safety tradeoff, and like so many of those (e.g. short vs long passwords) they might seem annoying most of the time, but prevent incredible damage that _one_ other time.
For whatever reason, open source software always has these weird edge-case engineering "solutions" that really aren't that great. If someone were actually a paying customer and asked for this, the engineers would just figure something out instead of making excuses for why it is the way it is.
Maybe it's possible to .. not do that?
o_O weird take but k
It seems kind of obvious to me: asking, talking, discussing some change before doing the actual work. But as it turns out, "obviousness" is in the eye of the beholder...
I assume you don't care about missing out on updates since that's the free work you're complaining about
That's really bizarre behavior. If the folder has no files, or only zero-byte files, then it might make sense, but if Gradle is deleting non-empty files that don't concern it, that seems to be more of a problem with Gradle.
I get the point though; if there's no versioning, why bother having an empty folder?
The web UI is perhaps the only pain point of Syncthing. It's very confusing around folder paths vs. IDs, especially around the "Default Folder".
Functions like Actions > Show ID are presented poorly.
Importantly, the UI is not intuitive at all. You need to read the docs to understand what you are supposed to do.
Some previous discussions:
* "Syncthing: Open Source Dropbox and BitTorrent Sync Replacement" https://news.ycombinator.com/item?id=7734114 (7 years ago | 184 comments)
* "Syncthing is everything I used to love about computers" https://news.ycombinator.com/item?id=23537243 (1 year ago | 159 comments)
* "Syncthing: Syncing All the Things" https://news.ycombinator.com/item?id=27929194 (3 months ago | 172 comments)
I share the files on my local network at home between phone, laptop and desktop. My only issues so far:
- I need to manually start Syncthing on the devices, whereas Nextcloud was always running on boot. Not sure if I should start it every time, especially on the Android phone, which cannot sync while away from home. I do not want to drain the battery while outside my LAN, plus the persistent "Syncthing is running" notification is annoying.
- I need to power up at least the laptop or desktop to back up my phone data. One solution that I will try is to install a headless Syncthing instance on my NAS. This way the phone can always sync to the NAS, and then the laptop/desktop can sync from the NAS when needed.
The joys of peer-to-peer! At least I am sure that my moderately sensitive files are not leaving my local network to be stored on my VPS, which could be hacked someday.
I also need to figure out how to install Wireguard on my router to allow to access my NAS from my phone while on a trip, but that's another story.
systemctl --user enable --now syncthing.service
sudo loginctl enable-linger $USER
The Debian/Ubuntu packages ship all the needed files. You only need to activate them.
I've seen comments like this on multiple occasions. Is it really that bad? I guess if there were a flagrant performance bug, it would have been sorted out with time, so the only remaining explanation is that the code is really poorly laid out, or has design flaws that cause so many performance problems for users.
However next to each "it's a resource hog" there is always another comment saying "it works fantastically well", so I never know what to think (sort of trying it myself, but I'm not that interested in the issue)
After moving all files to a Nextcloud server, everyone (a company of 20+ people) installed the Nextcloud Windows desktop client and connected to it with only the virtual-files option set (nothing is transferred until opened/edited). The initial "sync" after first setup took about 2-3 days to complete for everyone. The reason? Probably too many small files. The share contained about 300+GB of data in around 500k files (not sure about that number, only a guess). Nextcloud checks every single file with the server, which computes a hash for it that is stored locally. You can find threads on their community forum or Reddit by people experiencing the same slow behavior when they have a lot of small files. There were some promises from the devs to resolve it with the latest client update, but we haven't verified that yet.
I have a ~5-year-old server (Intel Skylake, 8 GB RAM) on which I run a bunch of Docker containers (about 30). One of them is Nextcloud, and the load average is 0.8-1.0. This is with an average of 2 users (me plus the rest of my family, who in total use the server about as much as I do).
Not a very scientific comparison, but I never had any performance issues.
I am more worried about the Android part. The best solution would be to start Syncthing when on my LAN, via matching SSID, and disable it when I am out of range.
I've got an old computer which I run things on occasionally. I installed Syncthing on it for a while and it worked well. I stopped leaving that computer on, so now when there's something that needs to be synced between my desktop and laptop, I just use my phone as the always-on server in the middle.
I can recommend https://tailscale.com/
It just works.
And same url on an Invidious instance - https://yewtu.be/watch?v=O5O4ajGWZz8
He's using Syncthing for backups (by enabling a feature that keeps N copies of each file), which while possible, I hadn't thought of as a possibility
No-interaction autoupdate really should be opt-in. It's super dangerous software otherwise. Anyone with release credentials could own many many thousands of machines with a single action.
PS: Thank you Debian maintainers.
It's a proprietary, more polished alternative that offers a freeware license for personal use.
My biggest issue with Syncthing was that there was no auto-discovery, if you had 25 or so computers across a group of friends, you needed to set up 25^2 connections, which wasn't really feasible. Not sure if this is still a thing, but Resilio solves this perfectly, you just input a secret key and computers just connect automatically. The developers also provide turn servers, which is useful in restrictive NAT scenarios. End-to-end encryption is supported, both in the free and the pro versions, although the latter gives you much more control over who can do what to the folder.
I use Resilio as a "big Dropbox" with dozens of gigabytes of data per folder, shared across a small-ish group of trusted friends. It doesn't cost us anything and works really well.
The largest folders I've seen had over a terabyte of data in them, but you probably want the pro version (with selective sync) for those.
NAT traversal and relay servers exist too. So seems it's pretty much on par with Resilio, plus open source.
It's all PCs, none of them online 24/7.
Besides, setting all that up requires way more technical knowledge than most people have. Sure, I could do it I guess, but explaining Resilio configuration to complete non-techies is hard enough.
I have a folder that's synced between my laptop, desktop and phone, and each has the other 2 devices as introducer.
Lots of people love Syncthing and that is great for them, but I've always found btsync to be easier to use.
- Sometimes long-deleted folders pop up when older version clients connect
- Syncing or connecting takes sometimes ages to start for no obvious reason, client not giving enough info on what is happening
- Relation between clients is not obvious, the powerful configuration options are hard to decipher after some time
- The project could start a (freemium?) cloud client option, maybe leveraging other providers for storage. Most people will not be able to set up a server, and without that it's just way less practical for the layman than any mainstream option.
- The handling of ignored files is confusing, to say the least.
I happened to comment about it just the other day, so to avoid repeating the comment -> https://news.ycombinator.com/item?id=28760365
What causes that?
While Resilio is slightly less secure (debatable) than Syncthing, their UI and UX are vastly better, and virtually no different than something like Dropbox or Google Drive. I fully understand that Syncthing's developers have different priorities than UI and UX, but I'd love to see it become more mainstream.
That sounds strange, but when you have old spare phones, they're very useful for this. Even for the daily driver phone that I use, it's so good at keeping files in sync between my other PCs.
For the game files it's just my gaming laptop and gaming desktop
I set up a connection for the folder from every device to every other device. Syncthing has a unique ID for the share, so they all share the same ID for that folder and it does all the work syncing. None of my project files are ever out of date on any device. Lets me sit down on any device and just work (desktop, Windows laptop, Linux laptop, or SSH into the server).
Another great thing: if you set it up in WSL2, it just works; all it has to do is be on a different port. Lets me sync my workspace files into there and get full Linux filesystem file watching (the \\wsl$\ path doesn't work well from the Windows side).
My use case is I want to "de-cloud" my life, and use Syncthing to push things like WireGuard keys to my phone, along with backing up photos and pushing down audiobooks and music. I bought a Fairphone 3+ with the idea of it being a forever phone, but I couldn't get things working on LineageOS 11 or /e/OS on Android 11. I could stay on Android 10, but things will eventually migrate to target the new SDK.
I also have the issue of Syncthing not being able to write to an external SD card, which was one of the selling points of the Fairphone for me.
I would love a solution to this, because I'm really dead in the water on it.
It's a good thing in general that apps cannot access each other's internal files.
But even with the versions before 11, things were a chore: many vendors put aggressive power-saving features in their phones that are hard, if not impossible, to disable on some Chinese brands; sometimes the app wouldn't restart after the app store pushed an update; and so on.
I tried to de-cloud my dad with it, but eventually the app would stop running for some reason after a few days or weeks and nothing would get synced. Otherwise, the technical foundation, especially the NAT traversal, is superb. I ran a mini PC in his home with ST that would receive all the photos from the phone. Zero maintenance, no port forwardings that I would have to redo if the provider switched routers or the settings reset or whatnot, no dyndns for reachability. He was travelling in Thailand, Vietnam and elsewhere, and the pictures just kept coming in. A reliable version of PhotoSync that uses the Syncthing protocol under the hood would freaking kill it.
I'm not sure about external SD cards, though, since I haven't tested this.
Though it turns out I've been negligent and not noticed the Play installed versions are no longer maintained, so they may have worked around whatever the issue is. I must get around to reinstalling via f-droid...
I wish it worked better for occasional file sharing (for example, to transfer files between phone and PC). I don't want to keep it running in the background - it just sucks too much power, and on the phone I never need to keep other stuff synced. But the time it takes for the first connection to happen after starting it can be on the order of minutes on a known network, and dozens of minutes if not. Not usable for this purpose.
The Android client is also not very well thought out. You cannot "share to Syncthing" until it's active, so if you want to queue a file for sharing before your network connectivity is up, you can't. You have to start it, share, then shut it down. If you want to do that on an unknown network you didn't whitelist, you have to whitelist the network as well. Kind of pointless. You'd think you could also start it when it prompts you to change the settings, but this doesn't work either.
I ended up disabling all these advanced settings, as they clearly end up being a hindrance. I just start it manually and quit it, kind of defeating their entire purpose.
To further limit the amount of CPU used when syncing and scanning, set the environment variable GOMAXPROCS to the maximum number of CPU cores Syncthing should use
Are they using an environment variable shared by other Go programs? Seems like bad practice... why aren't they using a config variable or switch that's unique to Syncthing instead?
Huh? So only set it for syncthing.
For example, when you use systemd to run syncthing, you can configure the environment for syncthing in the unit file.
Other Go programs on your machine won't be affected.
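A minimal sketch of what that could look like as a systemd user-service drop-in (the path and the value 2 are illustrative; create it with `systemctl --user edit syncthing.service`):

```
# ~/.config/systemd/user/syncthing.service.d/override.conf
[Service]
Environment=GOMAXPROCS=2
```

After a `systemctl --user daemon-reload` and a restart of the service, only the Syncthing process sees the variable.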
I hark back to when environment variables were set globally on a machine (back in DOS days).
I realize NTVDM emulation introduced the ability to configure more granular ones (e.g. system vs. user), and you can now create a shortcut that launches an intermediate step which tailors the environment for a single process. But I've always viewed that as a kludgey convenience for users, not a design paradigm for developers. (Also, with at least six different places to check for where Windows envvars come from, I typically favor the explicitness of command-line switches.)
What happens if someone's creating a script that will shell out simultaneously to multiple Go programs? Do they all inherit the parent's environment variables? Does the script now have to tailor the same-named variable for each one?
Am I doing something wrong or is this easier on Linux and us Windows schmucks got left behind?
Thanks for any elucidation...
By default, every process inherits the environment of its parent process. So if you set an environment variable (e.g. with the export command in bash), then every process you launch from the shell will have the same value for that environment variable.
However, you can set environment variables for each command that you execute.
So for example in a shell script you could do something like this:
# set the environment variable's value to 4
# valid for all subsequently executed commands
export GOMAXPROCS=4

# run syncthing but limit it to 2 processors
# defining the variable like this only affects this one command
GOMAXPROCS=2 syncthing

# run some other Go program
# it sees the environment variable set to the value 4 again
some-other-go-program
For example, assume I execute a script from someone else, who did not set GOMAXPROCS. Then I could just set the GOMAXPROCS variable and then execute the script. I could change the config without needing to edit the script!
This setting is managed by the Go runtime, and the Go runtime defines a specific env var for customization. If you want different settings for different Go programs, no problem: it is not a global config but per-process, so set the env var to the desired value only for the syncthing process.
Note that the mainstream Syncthing on Android works so far only because they're targeting an older API version. They will hit the same restriction if they migrate to the new API, unless they implement SAF or whatever.
Of course, this issue still highlights how the ‘open-source’ and ‘customizable’ OS does only what Google allows it to do.
I am never modifying the same file on two systems simultaneously, so I can afford to sync hourly with no issues. While not everyone can do this, I suspect many are in the same boat.
* Try to use literally the exact same binary where architectures allow, due to build dependencies.
So in my case my server would be 10.0.0.1 and I have two devices, let's say 10.0.0.10 and 10.0.0.20; I would put my server's IP in as tcp://10.0.0.1:22000 on the settings page.
You can then even disable most of the discovery options (if not all). Curious if you will see improvements!
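If you'd rather edit the config file directly than use the GUI, the device addresses live in Syncthing's `config.xml`. The fragment below is illustrative only - the device ID is a placeholder for the long ID string Syncthing generates:

```
<!-- fragment of ~/.config/syncthing/config.xml -->
<device id="DEVICE-ID-HERE" name="server">
    <!-- replace the default "dynamic" with a static address -->
    <address>tcp://10.0.0.1:22000</address>
</device>
```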
I've been thinking about solving the backup part by letting one of the devices perform btrfs snapshots on its storage. Does anyone know about any write-up that describes or compares such solutions?
Setting up a cron job / timer unit that commits the synced folder to a git repo is also an alternative, but has some disadvantages: stores contents twice (once in the sync folder, once in the git object database), and pruning older versions is difficult (git is mainly built to keep all history).
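As a sketch, the cron variant could be a single crontab line (the path is hypothetical; the trailing `|| true` keeps cron quiet on the runs where nothing changed and `git commit` exits non-zero):

```
# m h dom mon dow  command
0 * * * *  cd "$HOME/sync" && git add -A && git commit -qm "hourly snapshot" || true
```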
I do zfs snapshots on my storage server (home desktop), which is in Syncthing's "receive only" mode. I use it to sync my camera roll and Whatsapp media files from my Android phone. I use pyznap for snapshot management.
> Setting up a cron job / timer unit that commits the synced folder to a git repo is also an alternative
Another option would be to backup the synced folder using a backup tool like borg, restic, duplicacy etc. They have options to prune the snapshots.
For extra extra snapshotting fun, rsync.net then takes ZFS snapshots of the borg backup. So even if my snapshot cleaning gets over-enthusiastic, I'll still have a snapshot of the snapshot.
It's especially useful for our band, where we share large amounts of .wav files and recording projects and so on. It even works in the rehearsal room (which has no internet) through an old wifi router I set up there, that each of our laptops connects to.
First, backups! Never trust any software to not have a bug.
Secondly, if you don't know what you are doing, maybe don't create random config files and hope the software deals with it. That's in the "neither power user, nor dumb user" category that most software will not be designed for.
This sounds like a variant of the old "you're holding it wrong" blurb.
The problem you're commenting on actually describes a multitude of problems with the application, especially how it causes data loss if its config happens to get corrupted.
The comment about the need to back up data also seems not to take into account that the service is already used to back up data across multiple devices.
Is Syncthing my ideal backup application?
No. Syncthing is not a great backup application because all changes to your files (modifications, deletions, etc.) will be propagated to all your devices. You can enable versioning, but we encourage you to use other tools to keep your data safe from your (or our) mistakes.
No! Syncthing is _not_ a backup system. They explicitly say so, and it should also be obvious that a “syncing system” may try to backup some stuff, but you cannot rely on anything that can _delete_ stuff to be called a backup.
- I did not use it as a backup system. Due to circumstance, my backups were borked and I was too busy/poor to fix them up. Syncthing was used to sync my passwords, Org mode notes, and a specific two-way sync folder with the phone. Importantly, the first two were read-only, i.e. the phone only had read access. Regardless, stuff on the remote, i.e. the computer, was deleted.
- I don't blame Syncthing because I lost my stuff or because I had no backups. I blame them because their software failed destructively. It should of course fail when the config file is weird, but deleting files on two devices despite permissions only allowing read-only access is unacceptable. This is kinda equivalent to nginx deleting your htdocs because a symlink in sites-enabled was broken, or /.../sites-enabled was a file instead of a folder. If you don't have a backup of htdocs, that's your fault, but that doesn't mean it's a sane way to fail. The best way is to panic and tell the user: "I'm not touching files before this is fixed". Even rm(1) in coreutils has this sort of precaution, not allowing you to run "rm /" willy-nilly.
- I should admit that I don't really make that distinction very clear in the blog post itself tho. It was written with anger after the incident, so I was not as nuanced/clear as I could've been.
- The point of the blog post was to advise against relying on Syncthing and to point out that its failure modes were coded sloppily. Tbf it was more like a note to self, because I never had analytics on my website, so I had no idea if anybody read my posts with any frequency. I've seen it linked from a couple of places recently tho, and I haven't changed my mind, partly because I haven't been keeping up after seeing attitude like https://github.com/syncthing/syncthing/issues/4345
Afterwards I did go back to using Syncthing for a while to sync files between laptop & phone, but confined to a specific shared folder only. After that brief period I've been relying on KDE Connect to push files between a Linux laptop and an Android phone. I botched backups a couple more times tho, as the big banner on the website testifies.
It turns out that the power consumption of keeping redundant hardware running at home is about twice as high as just buying the same storage in the cloud, and then you need to figure in the cost of hardware as well.
E.g. Microsoft Family 365 can be had for €60/year (with HUP), and offers 6x1TB of OneDrive storage. With power at €0.44/kWh, you have a power budget of ~15W before your home server costs more than the cloud offering.
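To make the break-even figure reproducible, here is a quick sketch of the arithmetic behind that ~15W budget, using the comment's own numbers (€60/year subscription, €0.44/kWh):

```python
# Break-even wattage: how much continuous power a home server may draw
# before its electricity alone exceeds the cloud subscription price.
subscription_eur_per_year = 60.0     # Family 365 with HUP, per the comment
power_price_eur_per_kwh = 0.44       # electricity price, per the comment

# Energy you can buy per year for the subscription price.
kwh_budget = subscription_eur_per_year / power_price_eur_per_kwh  # ~136 kWh/year

# Spread over a year (8760 hours), that is the continuous power budget.
watt_budget = kwh_budget * 1000 / (365 * 24)  # ~15.6 W

print(f"{watt_budget:.1f} W")
```

Anything drawing more than roughly 15-16W around the clock already loses to the subscription on electricity alone.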
Decentralized solutions are always going to be less cost-effective than central services. We still should do it anyway. Optimizing for cost is a mistake. It makes us overly dependent on the big companies. It makes us fragile and susceptible to any change in their terms.
Which is why you should back up your data locally (or somewhere else). Even if self-hosting, you should still have a separate backup of your data.
I personally back up everything at home, and if/when their terms change, it's simply a matter of restoring data onto the new solution (local or remote), and i'm up and running again.
While i will probably have worse problems to deal with if the cloud disappears and my local data is gone as well, chances are that once those are solved, my family photos will no longer be a problem :)
Financially, it might make sense to pay MS a few bucks a year for a service. Philosophically, the idea is much harder to stomach.
I am far from being a card-carrying member of /r/datahoarder, but I don't like the idea of supporting (much less depending on) streaming services, and I would pay for a subscription service if it was based on open source.
If i was to self-host something similar, i would use a server (could be an RPi 4) with a set of 2x6TB drives in RAID1 (for availability), and a remote backup somewhere (so another 6TB drive). The hardware cost of a single 6TB hard drive (Seagate IronWolf 6TB, €202 at Amazon) could pay for 3 years of Family 365.
So we're looking at 6 years of Family 365 to buy the hard drives alone (and 9 months more if you buy an RPi 4). I'm not counting the backup drive, as you'll need that even if running in the cloud.
Assuming a lifespan of 5 years and an average operating power of 5W, you're looking at 44 kWh per hard drive per year, so 2x44x5 = 440 kWh over 5 years, or €194 (at €0.44/kWh).
So the total cost of running your own setup over 5 years would be:
€404 for 2 x 6TB hard drives
€194 for power consumption (not counting server power)
That's €598 for 5 years, or €119.60 per year, and that's assuming no hardware breaks down.
Compare that to the €60 that Family 365 costs per year.
What if encryption keys are stolen?
In any case, i guess it depends on your threat model. I have no illusion that a sufficiently determined attacker will gain access to my files, but that's true no matter if i store the files at home or in the cloud - We're talking files accessible "on the move" here, so it's not like i can just airgap the server.
The cloud offers far better physical security than what i have at home, and if not better, at least equal network security with dedicated teams on board to resolve issues.
The (major) cloud also offers far more geographical redundancy (Google 3 sites, Apple 2-3 sites, Microsoft 2 sites), each with redundancy in power/internet/hardware, as well as fire/flood protection.
And of course, the most common reason for broken encryption is a weak password, so don't do that.
Because a developer didn’t know the pitfall of taking a random seed from the current time.
Cryptomator takes its encryption from a Java library. I don't know the quality of that ecosystem. Searching for it on HN doesn't return much.
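For illustration, here is a minimal sketch of why time-seeded randomness is such a pitfall (this is generic demo code, not the actual vulnerable implementation): if a key is derived from a non-cryptographic RNG seeded with the current time, an attacker who can guess the rough timestamp can brute-force a small window and recover it.

```python
import random

def weak_key(timestamp_seconds, nbytes=16):
    """Derive a 'key' from a non-cryptographic RNG seeded with a timestamp.
    This is exactly what NOT to do; shown here only to demo the attack."""
    rng = random.Random(timestamp_seconds)
    return bytes(rng.getrandbits(8) for _ in range(nbytes))

def recover_seed(target_key, approx_time, window=3600):
    """Brute-force a window of plausible timestamps around approx_time."""
    for t in range(approx_time - window, approx_time + window):
        if weak_key(t) == target_key:
            return t
    return None

# Victim generates a key at some time the attacker can estimate to within an hour.
key = weak_key(1_700_000_000)
assert recover_seed(key, 1_700_000_500) == 1_700_000_000
```

A few thousand guesses instead of 2^128: the seed space, not the key space, is what the attacker has to search.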
With cloud, you need to clarify if the provider and governments are in your model or not.
It had a severe weakness, yes, but it was not plaintext. From the CVE: "dictionary of all possible passwords with about 38 million entries per password length". To attempt to break the encryption, an attacker still needs to defeat the cloud provider's security measures like 2FA as well.
> Cryptomator takes encryption from Java library. I don’t know the quality of this ecosystem.
Considering that it's used by banks/governments, i'd say that it's probably either full of backdoors, or at least somewhat OK :)
> With cloud, you need to clarify if the provider and governments are in your model or not.
I have no illusion that i can keep encrypted files out of government hands, or away from any other sufficiently motivated attacker. One could argue that the data might even be better protected in the cloud than in my home. Also: https://xkcd.com/538/
If storing illegal content is "your thing", you'd be better off looking for other solutions.
My personal need is storing files that are maybe not generally sensitive, but that i consider sensitive, like photos of my family, tax documents, etc. These are all files that would probably not make much of an impact if they were publicly available, but i prefer keeping them private.
The thing about encryption is that if you don't trust it, it doesn't make much sense to use it at all. That goes for both cloud and private storage. The threat vectors are different, but one could argue that a datacenter network is probably better protected than the average user's Zyxel/Netgear/whatever router, which probably has about a dozen unpatched CVEs, along with an ISP "backdoor" if supplied by the ISP.
In the following scenario realtime syncing would then fail silently:
- You have a server A which contains files that you are editing.
- You have set up realtime syncing between server A and a machine B.
- On server A you have a Docker container which runs Samba and hosts the directories of server A that contain the files you are editing.
- You are editing those files on a machine where you mounted those shares. Upon saving, the files on server A are modified, but inotify does not trigger and the files do not get synced to machine B.
This seems like a very contrived example of a problem.
I'm kind of assuming that it's the containerization of Samba which is causing the problem, since I have not installed Samba directly on the server, but it could also be the case when not using Docker.
Apparently any CIFS mount is affected https://lists.samba.org/archive/linux-cifs-client/2009-April...
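When inotify can't be trusted for changes made through a mount like this, the usual fallback is periodic rescanning (Syncthing itself does full rescans on a timer for the same reason). A minimal sketch of an mtime-based polling scan, as a generic illustration rather than Syncthing's actual implementation:

```python
import os

def snapshot(root):
    """Map each file path under root to its mtime (nanoseconds)."""
    state = {}
    for dirpath, _, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            try:
                state[path] = os.stat(path).st_mtime_ns
            except FileNotFoundError:
                pass  # file vanished between listing and stat
    return state

def changed_paths(old, new):
    """Paths added, removed, or modified between two snapshots."""
    return {p for p in old.keys() | new.keys() if old.get(p) != new.get(p)}
```

Call `snapshot()` every few seconds and diff successive snapshots with `changed_paths()`; it is slower than inotify, but it catches writes that arrive through smbd or any other path the kernel notification API doesn't see.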
File > Merge with database
I saved my database's password as an entry just to quickly merge it with the .sync-conflict version. It takes 2 seconds, and I don't worry about losing anything anymore. Works beautifully for me. KeePass could check for such conflicts (Syncthing, Dropbox, or whatever else) in the directory where the open database is stored and offer to auto-merge.
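The conflict check suggested above would be a simple glob next to the open database; Syncthing names conflict copies like `name.sync-conflict-YYYYMMDD-HHMMSS-DEVICE.ext`. A sketch (the function name is made up for illustration):

```python
from pathlib import Path

def find_sync_conflicts(db_path):
    """Return Syncthing conflict copies sitting next to the given file,
    e.g. vault.sync-conflict-20240101-120000-ABCDEFG.kdbx for vault.kdbx."""
    db = Path(db_path)
    pattern = f"{db.stem}.sync-conflict-*{db.suffix}"
    return sorted(db.parent.glob(pattern))
```

An app could run this on open, and offer its merge action for each hit instead of making the user hunt for conflict files by hand.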
My only gripe is that this utility may soon become deprecated as Android continues to update its convoluted storage schemes.
Can anyone confirm?
* Is an iOS app on the agenda?
* Can I choose something like S3 or even Dropbox (yes!) to be a central server (more precisely a peer)?
* S3 and Dropbox are not officially supported, but you can always sync one of your Syncthing nodes to cloud storage on your own.
Syncthing supports end-to-end encryption, so you can use cloud storage with an encrypted folder for connectivity (no need for relays, NAT traversal, etc.). Clients sync through an always-on peer.
I've been synchronizing local data to remote data using NFS and Syncthing for a while now instead of doing batches of rclone, if you were referring to this use case.
This also works in the web and Android clients, which expose .stignore through the "Ignore Patterns" setting.
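For reference, a minimal `.stignore` sketch in Syncthing's pattern syntax (`//` starts a comment, `(?d)` marks a pattern as deletable during folder removal; the thumbnail path is an assumption based on typical Android layouts, adjust to your device):

```
// Android thumbnail cache (path is an assumption; check your device)
.thumbnails
// Editor/OS droppings that shouldn't sync
*.tmp
(?d).DS_Store
```

The same patterns can be pasted into the "Ignore Patterns" box in the web UI or the Android client.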