I really like the philosophy of the project and the guys running it, but my recent experience trying to use it as a dropbox replacement has been bad: machines that stayed "syncing..." forever, deleted files that reappeared after a while, renamed folders resulting in duplicated folders with both the old and the new name...
I think it's just that the software is not yet mature and production-ready, and perhaps I was throwing too much data at it at once. Seeing the project progress, I'm confident that when it becomes more stable it will be a very nice piece of software, but it's not there yet.
I've actually found the opposite. One of my coworkers and I were disappointed by a lot of the syncing software out there, so we began using SyncThing.
We haven't experienced any bugs so long as we both keep our versions in sync. Once one has a later version than the other, all bets are off. That's a problem that's tricky to solve correctly though.
Aside from that, we were disappointed by the constant need to restart SyncThing after saving configuration changes, and by some missing UX features (e.g. being able to move where your sync share is stored -- I opened an issue on GitHub and was told to edit the config file manually and restart SyncThing. I asked if a pull request would be accepted if I added the feature myself, and got a vague response).
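For what it's worth, the manual workaround they pointed us at boils down to something like this (the folder id and paths are made up, and it assumes the config.xml layout SyncThing was using at the time -- stop SyncThing before editing and start it again afterwards):

    # Repoint a SyncThing folder at a new location by editing config.xml directly.
    # Paths and folder id below are hypothetical; adjust to your own setup.
    import xml.etree.ElementTree as ET

    CONFIG = "/home/me/.config/syncthing/config.xml"  # typical location on Linux
    FOLDER_ID = "photos"                              # hypothetical folder id
    NEW_PATH = "/mnt/bigdisk/photos"                  # where the share should live now

    tree = ET.parse(CONFIG)
    for folder in tree.getroot().iter("folder"):
        if folder.get("id") == FOLDER_ID:
            folder.set("path", NEW_PATH)              # repoint the share
    tree.write(CONFIG)

You still have to move the data itself to the new path before starting SyncThing back up.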
Overall though, SyncThing has worked excellently for us.
Not sure what is causing your bad experience. Are you using an up-to-date version? I have been deploying it across 4 different computers and it works like a charm.
Yes, I was using the latest version on all machines (some Mint, some Arch).
I was trying to sync seven machines across three physical locations, some with slow internet connectivity; only one machine in each location had its syncthing ports open to the internet through NAT. I'm not sure that's a supported use case: syncing within the same LAN generally worked fine, but changes often didn't propagate correctly to the other locations. For instance, a folder rename sometimes wasn't propagated to the remote locations, and after some time the machine where the rename had happened noticed that the rest of the machines in the group had a folder that didn't exist locally (the one with the old name) and downloaded it from its peers. The result was that the folder was duplicated everywhere, with both the old and the new name.
The same sometimes happened with deletions: they didn't always propagate across LAN boundaries, and the deleted files reappeared after some time.
A machine staying "syncing..." forever also happened sometimes, requiring a manual restart of the service. I couldn't reliably reproduce it though.
I like the project and I'm sure that if I had reported the issues on GitHub they'd have provided support. I wish I'd had the time to report and investigate them properly, but I needed a syncing solution I could trust for a small business, and I needed it ASAP, so I settled on ownCloud (which so far hasn't given me any surprises). I'll try to find some time to reproduce the bugs on my personal machines and report them, but I really didn't have the time when I found them.
Completely agree. Compare to BitTorrent Sync, which is proprietary but works perfectly out of the box. [1]
[1] Okay, this is true for BitTorrent Sync versions < 1.4 -- more recent versions have been less reliable, particularly with regards to cross-platform syncing.
I use Syncthing for most of my cross-device sync'ing, but I haven't yet found a good FOSS solution for sync'ing locally (local folder to external hdd, mp3 player, another local folder, etc).
There were a few posts on the Syncthing forums requesting it as a feature but they didn't really get anywhere:
It works quite well for that purpose, but since it's proprietary I'm not comfortable with blindly trusting it to sync my important files over the internet.
Still waiting for that perfect open-source p2p sync app that can also do local sync'ing.
I personally use duplicity to make versioned backups of my homedir (on OS X, Debian and Arch machines) and keep versioned, Dropbox-like backups via ownCloud. My homedir backups are GPG-encrypted automatically.
duplicity with GPG is a good idea for backup and security, but it's probably difficult to deal with a large amount of binary data (pictures, family videos) up to, say, 2TB.
No? It only does deltas, and presumably you aren't changing all the files very often. My largest duplicity backup is ~800GB. With 24 hours of hourly snapshots, 1 month of daily snapshots, and 1 year of monthly snapshots, the actual backup currently takes ~5TB. It only grows by ~1-10GB/day.
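Rough envelope math, if you're curious -- this treats each retained snapshot as an independent delta, which oversimplifies how duplicity actually chains incrementals, and the growth figure is just my own ballpark number:

    # Back-of-the-envelope footprint for the retention scheme above.
    full_gb = 800                            # size of the full backup
    delta_gb_per_day = 10                    # upper end of the ~1-10 GB/day range

    hourly_gb = 24 * delta_gb_per_day / 24   # 24 hourly snapshots, ~one day of churn in total
    daily_gb = 30 * delta_gb_per_day         # ~1 month of daily snapshots
    monthly_gb = 12 * 30 * delta_gb_per_day  # ~1 year of monthly snapshots

    total_tb = (full_gb + hourly_gb + daily_gb + monthly_gb) / 1000
    print(f"~{total_tb:.1f} TB")             # ~4.7 TB, in line with the ~5TB figure above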
Actually, your link just says that nobody is paid to maintain it.
"It will continue to be maintained and supported for the foreseeable future, and we occasionally release new versions with bug fixes, improvements (sometimes large ones), and contributed patches."
> I use Syncthing for most of my cross-device sync'ing, but I haven't yet found a good FOSS solution for sync'ing locally (local folder to external hdd, mp3 player, another local folder, etc).
I love Syncthing. I used to use Dropbox and then ownCloud on a VPS, but so far Syncthing has been the perfect replacement to regain complete control of my data.
I haven't had too many. About the same as Dropbox, I think. From memory, I believe it appends -conflicted-BLABLA's-copy or some such to the filename.
Conveniently, it has a built-in backup system you can enable that keeps the last 'n' copies of files. I've got mine set to 5 so even if something goes horribly wrong, I can fetch from the backup.
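If you're curious what that setting amounts to in config.xml, it's roughly the snippet below. The folder id is made up and I'm going from memory on the simple-versioning parameter names, so double-check against your own config:

    # Enable "keep the last 5 copies" versioning on a folder by editing config.xml.
    # Assumes the folder doesn't already have a <versioning> element.
    import xml.etree.ElementTree as ET

    CONFIG = "/home/me/.config/syncthing/config.xml"  # typical location on Linux

    tree = ET.parse(CONFIG)
    for folder in tree.getroot().iter("folder"):
        if folder.get("id") == "default":             # hypothetical folder id
            versioning = ET.SubElement(folder, "versioning", type="simple")
            ET.SubElement(versioning, "param", key="keep", val="5")  # keep last 5 copies
    tree.write(CONFIG)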
I would still worry without offline backups. Malware or a malicious actor could delete the built-in backups from _all_ sync'ed devices by modifying each file 6 times.
Yes. It's very good. I've been using it for about 4 months now between a Raspberry Pi (Raspbian Linux), an old laptop (Ubuntu Linux), my main computer (Windows 8.1), and my girlfriend's computer (Windows 8.1).
It's usable if you have a small number of files. For anything serious (say 100k files or 500GB of data) it blows through CPU and RAM to the point of being unusable. I've been actively testing several and there is currently no actually good open-source solution in this space.
I've got about 150GB (but well over 100k files) and it only really lags on the Raspberry Pi. One thing I could suggest is that you change how often it rescans your directories for changes (I set mine to 6000 seconds versus the default 60). That might make a difference.
I've explored this pretty extensively and it's currently just the way it is. It happens from the first sync, so the rescan interval makes no difference. Your collection probably requires 300-400MB of RAM to sync, so it should be on the cusp of failing on a Raspberry Pi.
Here's the bug report that was closed without any fix:
A sync tool that uses on the order of 2MB of RAM per GB of files, or 4kB per file, isn't really usable. A cheap NAS these days can easily have 6TB of RAID6 storage, but it won't have 12GB of RAM. Clearly something in syncthing is keeping way too much stuff pinned in memory, and the developers so far haven't focused on that. There's no apparent need for this memory usage either, as there's already an on-disk database with the state of the sync folder.
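Spelling out those ratios (they're rough observed numbers, not anything from the documentation):

    # Rough RAM estimates from the observed ratios above.
    ram_mb_per_gb = 2         # ~2 MB of RAM per GB of synced data
    ram_kb_per_file = 4       # ~4 kB of RAM per file

    print(6000 * ram_mb_per_gb / 1000, "GB of RAM for a 6 TB volume")     # 12.0
    print(150 * ram_mb_per_gb, "MB for the 150 GB collection upthread")   # 300
    print(100000 * ram_kb_per_file / 1000, "MB counting 100k+ files")     # 400.0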
It uses inotify, so "scanning" for changes more frequently should have only a minimal impact on performance. Of course if you have a lot of actively changing files, you don't want it spending a lot of time transferring files that are still in the process of being modified, so reducing the frequency might help.
I've never noticed performance issues with syncthing, excepting my old original Nexus 7 tablet which does seem to suffer a bit due to its poor hardware. I keep my photos volume (~250GB) synchronized between three desktop systems and it's never presented a problem with those.
Given that the grandparent mentions issues with larger volumes, there may be some sort of "tipping point" beyond which performance degrades, and I just have not encountered it.
Regardless, I'd encourage people to give Syncthing a try. The project is under very active development and the issue the grandparent mentions may be resolved by now; of course, that issue might be environment-specific and not generally applicable.
Actually it doesn't. There are some add-ons to do it but the base tool does periodic full scans.
> I've never noticed performance issues with syncthing, excepting my old original Nexus 7 tablet which does seem to suffer a bit due to its poor hardware. I keep my photos volume (~250GB) synchronized between three desktop systems and it's never presented a problem with those.
250GB between desktops with multiple GB of RAM probably works. It will soak up plenty of resources though.
> Regardless, I'd encourage people to give Syncthing a try. The project is under very active development and the issue the grandparent mentions may be resolved by now; of course, that issue might be environment-specific and not generally applicable.
The issue is pervasive and apparently quite deeply rooted in the way syncthing is built: it is indeed under very active development, but this hasn't improved.
Thank you for the correction re: inotify, I did not mean to mislead people. Sadly I can no longer edit the incorrect information in my original comment.
I second this. I tried to use it to sync a picture directory between my desktop and a NAS device, and after the original sync it never managed to complete any further attempts. It's great for smaller directories, but it's horrible with lots and lots of small to medium sized files.
Yes. It's easy to install and runs on Android. It's quite fast at detecting differences and then syncing 2 nodes. It's meant to run permanently in the background though; it cannot easily be automated to, e.g., sync 2 nodes and then shut down. In case of concurrent edits, you end up with multiple versions of the file on disk. You have to somehow detect those conflicts (find) and merge the files yourself; it doesn't help you much with that.
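For the "detect them yourself" part, something like this works (the share path is made up, and it assumes the default conflict naming, where the losing copy gets ".sync-conflict-<date>-<time>" inserted into its name):

    # List Syncthing conflict copies under a share so you can merge them by hand.
    import os

    ROOT = "/home/me/Sync"   # hypothetical share location

    for dirpath, _dirs, files in os.walk(ROOT):
        for name in files:
            if ".sync-conflict-" in name:
                print(os.path.join(dirpath, name))

Merging the conflicting copies back together is still a manual job (diff, meld, etc.).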
Syncthing is really a good option. It has a good community that is actively supporting the product, and features like only syncing certain files are great. I'm a former BitTorrent Sync user but switched because Syncthing is open source.
Neat. I really liked Ubuntu One. However, if they want to receive contributions, I think GitHub would give them more exposure than Launchpad. From what I can tell the history is not preserved (probably for good reasons), so the switch from Bazaar would not have been painful.
They don't. This is a code dump for a dead project. From the FAQ:
"Will you take patches?"
In general, no.
We won’t have anybody assigned to reviewing and accepting code.
We’d encourage interested maintainers to fork the code [...]
They're only using Launchpad because it's a Canonical product and the way they always distribute this kind of thing. I too would agree with your choice of GitHub, as it tends to be more developer-centric in its social tools. I think it will get picked up more if it's there.
So could this evolve into a better ownCloud server? I don't think AGPL is a problem for the majority of ownCloud use cases. It looks like it's based on Python & Postgres - maybe it has better performance than ownCloud?
As a person who does not always want or need to install another VCS tool (Bazaar), I have created a cloned copy on GitHub (I'm sure others have done so as well, but they haven't posted here yet from what I could find). Enjoy! https://github.com/stevegood/filesync-server
Why did it take so long to release?
Our engineering team was tasked with supporting the release of the phone and then the now announced Snappy project. The team cared deeply about open sourcing this code and spent time wherever they could spare in moving it forward, cleaning up the code so it would work enough outside of the very specific production environment and untangle it of some commercial code that was used at some point.
The obvious follow-up question is, given their limited resources, why were they spending some of it on closed-source code based on other people's commercial code?
The answer quoted above said they needed to spend time to "untangle it of some commercial code that was used at some point." That just strikes me as a bizarre problem for Canonical to have even had.
Only if your focus is open source. Ubuntu One was conceived of as a service, so to make the project go faster and solve problems, commercial products were used. As you'd expect, you then have to disentangle that code to release your code as open source - this happens all the time in big proprietary code bases that are being open sourced.
This is exciting. The current alternatives are things like Dropbox (proprietary and somewhat pricey) and BitTorrent Sync (proprietary). I look forward to firing this thing up on my own server and having private remote file storage. I do currently run a NAS, but without a VPN connection home it's not as useful.
I tried to package and run Seafile, but it was kind of complex, and under-documented. Also, the first time I got it to run on my server, it ran my Linux server into the ground (IIRC it kept the CPU so busy that I couldn't do anything through SSH -- had to reboot the server to get back in). I also got the impression the open source community isn't that active -- i.e., it's mostly maintained by the commercial backers (some Chinese company, I think).
On the other hand, Syncthing is a clear demonstration that deploying Go code is dead easy, although I really don't like Go's packaging all that much.
The one thing I still want from Syncthing is a way to have client-side encryption, e.g. have remote storage with zero knowledge of the actual data I'm storing.
What specific trouble do you have with it? I have it set up on a server with a semi-reasonable deployment/update mechanism managed through Fabric and Ansible.
I understand the setup was crappy (you have to leverage their script), but I believe Debian is looking at packaging it so maybe in the future that flow will be better.
I wonder how long until someone creates an Android client for this (one that is configurable, not hard-coded to point to the defunct Ubuntu One servers).
I'd love to run a ZFS-backed Ubuntu One server for all my family's portable devices.
A very good response to Windows 10's inclusion of closed-source, proprietary, non-removable, everything-is-uploaded, "just trust us" spyware. Ubuntu isn't perfect, but it's far more trustworthy than anything MS has put out in recent decades.
Yeah, in the FAQ of the article it becomes pretty clear how dead this code is:
------------
Do I have to sign a contributor agreement to modify the code?
No. This is no longer an active project, we are providing the code for public to read and use on their own.
Will you take patches?
In general, no. We won’t have anybody assigned to reviewing and accepting code. We’d encourage interested maintainers to fork the code and build out a community around it.
------------
So, I mean, er, thanks for open sourcing a dead project. But still under an AGPLv3 license.... meh.
> So, I mean, er, thanks for open sourcing a dead project. But still under an AGPLv3 license.... meh.
A project gets released as free software, and not only do you make a sarcastic thanks, but you moan about how it's using a license that ensures you pass on the same freedom to other users. Talk about entitlement.
This is Canonical, a company that would not have existed without the work of the free software community. For them to even consider writing closed-source software is entitlement.
90% if not more of web startups from the last 10 years would not exist without free software. Is it entitlement that Facebook has closed-source software? Dropbox? GitHub?
Canonical's primary product is Ubuntu, which is, to first order, a repackaging of Debian. Facebook and Dropbox just use free software. GitHub is sort of in between; certainly they have invested a lot into the stuff besides raw git hosting, which is why they have such a dominant position, but certainly git itself is a core part of their product.
Canonical has put a lot of energy into open source for a decade - contributing code and putting free software into the hands of millions around the world. It should be entitled (your word, different usage) to write proprietary code if it thinks that's the right approach.
If you constantly burn the companies and people who actually work on open source for not meeting the purity standard then you make it a hard environment. Honestly, there are way more fully proprietary software companies out there, or companies using but not contributing anything to open source to have a go at.
Works for Neo4j, MongoDB, ownCloud and some others.
Freedom ain't free. There is a price you have to pay. That price is to pass on the same freedoms to others, the freedoms which you explicitly benefited from in your use of the software. To me that sounds like an exceptionally reasonable requirement, but evidently the one not being a grubby freeloader is somehow entitled in your worldview. Interesting how that works.
This reflects the fundamental schism between free software and open source, at least philosophically rather than in terms of what licenses fall under their definitions. Free software implies an ethical responsibility to give back to the community, open source is a way to conveniently reappropriate other people's labor with no obligation to contribute, while hiding behind the cloak of "technical superiority" and "code quality" (which is bullshit).
Well, we agree that there's a philosophical difference.
In my world view, there's nothing ethical about laying claim to all future work output of my own in exchange for incorporating any amount of copyleft code.
This situation favors the original author in the extreme, which is why it's not surprising to see that this works for maintained projects owned by a single author/organization, such as Neo4j, MongoDB, ownCloud, "and some others".
However, the answer for people other than the original author is, unsurprisingly, to generally avoid copyleft code.
The proof is in the pudding; the majority of usable, successful projects are licensed in a way that grants the author(s) the ability to benefit from their work outside the straitjacket confines of copyleft.
Well, no, that's a mischaracterization. The provisions are only triggered when redistributing it, and further many GPL projects have linking exceptions, assuming the LGPL isn't used outright. AGPL is stricter because the semantics of networked services are different from local software.
It actually doesn't favor the original author that much. Permissive licensing is more adept at massive propagation, which is why it's preferred for some low-level or utility libraries like zlib, where you would rather not have those be poorly reinvented.
> The proof is in the pudding; the majority of usable, successful projects are licensed in a way that grants the author(s) the ability to benefit from their work outside the straitjacket confines of copyleft.
Ignoring the stupidity of defining copyleft as a "straitjacket" when it's clearly beneficial for continuing free terms, most of the popular free software is in fact copyleft, be it GPL, LGPL or MPL. FileZilla, GIMP, Notepad++, Blender, VLC, Wordpress, etc. Then for development, the GNU system is all copyleft and ubiquitous, just to list the most obvious example.
The "you have to distribute it" argument doesn't matter. When your company IP is subject to onerous restrictions as soon as you have to "distribute it", you've greatly reduced the value of that work, now and into the future.
It's notable that your popular examples had to be qualified to be limited to "free" (as in open-source) software. The majority of popular software isn't open source at all. Your examples also include software that is 1) maintained by copyright-holding commercial organizations who 2) rely on the asymmetric relationship created by copyleft to build a sustainable business in which they can actually sell IP rights beyond copyleft for profit.
The copyleft ideal of "user freedom" is founded on the premise of a software industry barter economy in which the only tradable resource is not even software, since that's freely copied by anyone, but rather, actual time spent writing software.
This is insane, not least of all because the vast majority of humanity can not and does not want to write software, and even if that wasn't the case, time spent writing software is literally worthless outside of social standing.
You can't use bartered time to pay rent or buy food, which means you cannot actually build a sustainable economy around purely copyleft software ideals in which anyone is able to recoup the real-world costs of spending time writing software in the first place.
> It's notable that your popular examples had to be qualified to be limited to "free" (as in open-source) software.
Um... you asked for free software copyleft examples, so obviously. Why is "free" in scare quotes again? Open source hijacked free, not the other way around.
> The majority of popular software isn't open source at all.
Irrelevant to the point at hand.
> maintained by copyright-holding commercial organizations
Completely irrelevant. There is nothing about free software that is opposed to business. It is perfectly fine that commercial organizations work on free software.
> The copyleft ideal of "user freedom" is founded on the premise of a software industry barter economy in which the only tradable resource is not even software, since that's freely copied by anyone, but rather, actual time spent writing software.
No, that's open source. Free software makes ethical arguments for the Four Freedoms, largely based on determining one's destiny, being in control of one's computing and privacy concerns.
> This is insane, not least of all because the vast majority of humanity can not and does not want to write software
False premise aside, that's the great thing about software. You don't need an extremely large force of programmers to satisfy most needs. This is because software is a non-rivalrous, non-scarce good.
> time spent writing software is literally worthless outside of social standing
Not factoring in the end product?
> You can't use bartered time to pay rent or buy food, which means you cannot actually build a sustainable economy around purely copyleft software ideals in which anyone is able to recoup the real-world costs of spending time writing software in the first place.
You don't make money off free software from bartering time. You've set a ludicrous straw man through and through.
As I mentioned earlier, software is non-rivalrous (or even anti-rivalrous, with utility increasing along with use) and non-scarce. You simply cannot sustainably sell it on a per-unit basis like it's a shrink wrapped box, not without maintaining artificial scarcity through IP law, creating a deadweight loss.
In light of this reality, one must rethink their business models. Possibilities then emerge from dual licensing, consulting work, selling merchandise, etc. etc. A common approach is also SaaS, but that poses ethical dilemmas of its own.
You set up a false premise as to the motivations of free software and beg the question by assuming that non-rivalrous, non-scarce goods can be marketed sustainably using traditional business models based on information asymmetry.
> False premise aside, that's the great thing about software. You don't need an extremely large force of programmers to satisfy most needs. This is because software is a non-rivalrous, non-scarce good.
Someone has to write the software. If the only software that's worth writing is the software you yourself need, then the only software that gets written will be designed for people who write software.
What about all the people who can't write software?
The market meets their needs by allowing them to exchange money for software. This allows programmers to meet their own needs (food, shelter, clothing) while doing work that meets the needs of others -- including hiring people in non-development roles, such as artists and UX designers, who are necessary to produce software usable by more than just other software developers.
If software is freely distributable without restriction, this ancient human economic model fails, and what you're left with are "free software" business models that work for only a very narrow set of problems in which developer and capitalist interests align.
> In my world view, there's nothing ethical about laying claim to all future work output of my own in exchange for incorporating any amount of copyleft code.
So you're saying that you don't believe someone can ethically sell their own work at a price they choose?