The way the script works, it runs a change monitor on both sides; if there is a change on the local side, it does a local->remote sync; if there is a change on the remote side, it does a remote->local sync.
This can go wrong in many, many ways. Here is the first example that came to my mind: you start a process on the remote machine which creates lots of small files -- maybe extracting an archive, or generating images. So the syncer keeps syncing those files in the remote->local direction. Meanwhile, you got bored watching the script and decided to edit some code. POOF! Any edits you make are continuously reverted.
Oh, and there is no error checking anywhere. Did your network have a hiccup? Tough, we will march on anyway. Let's hope it was not in lines 101 or 104 -- if those commands have transient failures, then your newly made changes just get reverted.
If you care about your data, please do not use this. Use anything else -- syncthing, osync, and unison were all mentioned in this thread, and they are all good.
Edit for clarity:
The trick is that every sync step runs with -u (which skips files that are newer on the receiving side) and is then repeated in the other direction before restarting the watch.
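The "-u in both directions" step can be sketched with two local directories standing in for the local and remote ends (paths are made up for illustration; a real run would use an ssh remote on one side):

```shell
# Demonstrate rsync -u: the newer copy of a file wins, in either direction.
rm -rf /tmp/udemo && mkdir -p /tmp/udemo/a /tmp/udemo/b
echo old > /tmp/udemo/a/f
echo new > /tmp/udemo/b/f
touch -d '1 hour ago' /tmp/udemo/a/f   # make a's copy older (GNU touch)

# -u skips files that are newer on the receiving side, so b/f keeps "new"...
rsync -au /tmp/udemo/a/ /tmp/udemo/b/
# ...and the repeat in the other direction carries the newer copy back.
rsync -au /tmp/udemo/b/ /tmp/udemo/a/

cat /tmp/udemo/a/f   # prints "new"
```

Both ends converge on the newer content, which is why regular editing survives the loop -- but note this only arbitrates by modification time, not by actual content conflicts.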
And the same deletion problem applies to network errors -- if ssh fails, newly created files can get deleted.
That said, I agree that the -u option does make it less likely that files get overwritten. The approach has some caveats, however -- archive extraction messes with modification times, and symlinks and directories are not handled properly. Still, regular IDE editing will work.
Still not sure how it is better than unison, though :)
Since there's no lock or anything, a new file vanishing just after creation is a possibility. I just wrote this yesterday; I'll be using it extensively, I'll see how annoying this is, and whether there's something to do about it.
As a side note, ssh failure has not been a problem (yet), since the script applies the same strategy when starting up. In fact, I kill and restart this script a lot. I haven't played with archive extraction; this is mainly for source code editing.
It would seem that a small modification to the --delete behaviour of rsync -- only deleting files at the other end that are older than, say, 30 seconds -- would handle this edge case. I'll see how annoying it is and whether it warrants the time to investigate.
I personally run it in "-auto" mode -- every time I run the program, it shows me all actions it wants to do, any conflicts it detected, and asks me if I want to proceed.
If you want to run live mode, you can just run:
unison -ui text -repeat 5 . ssh://remote-host/dir
I recommend treating unison like you do rsync -- read the manpage, configure it with config files and shell wrappers.
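For example, unison can be driven entirely from a profile file, rsync-style. The profile name, paths, and host below are made up for illustration; the preferences themselves (root, auto, repeat, ignore) are standard unison ones:

```
# ~/.unison/work.prf -- then run: unison work -ui text
root = /home/me/project
root = ssh://remote-host//home/me/project
auto = true          # accept non-conflicting actions without prompting
repeat = 5           # re-run every 5 seconds, like -repeat 5 above
ignore = Path .git   # skip version-control metadata
```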
If you run it with "-ui text -auto", then it will print the list of changes, and ask: "do you want to proceed (y/n)", which is not that confusing.
Treat it as a mostly text-based syncer with the UI as an extra bonus, and it will be much less confusing.
Works well for one-way or multi-way syncing.
(Note that local locking could help prevent simultaneous modifications by the editor and the sync tool; unfortunately, the Linux ecosystem is not designed with such locking in mind.)
Amazing stuff. Brew, Cygwin and your favorite package managers have it.
Almost any two rsync versions from the last 20 years can speak to each other, and do it rather quickly.
I agree about the slowness -- I feel recent versions are pretty fast on Linux, but it could be faster. Also, its "scan, then transfer" approach means it can take a lot of memory when there are a lot of files.
Still, if you can use rsync in your application, use it. For automated backups, you cannot beat it. But unison has two unique advantages:
- Proper two way sync -- doing it safely with rsync is almost impossible.
- GUI/TUI which shows what changed and allows conflict resolution.
It's more that I wish IDEs supported software like this. There's a plethora of offerings -- CyberDuck, ExpanDrive, etc. -- that would benefit from reduced read/seek activity if the IDE could just orchestrate when to emit changes to what it thinks is the "disk". As you noted on GitHub, such software gets really laggy when working in directories that aren't trivially small.
For iOS, Coda is pretty good. I've used Codeanywhere for Android; 2-way sync would need to be integrated into the app itself on these platforms, I would think...
With regards to fswatch, from what I gather, it blocks on an ioctl call covering all the files in your watched folder. This script fires rsync (always from the local end, to prevent confusion) as soon as a change is detected on either end, starting with the end where the change was detected. Pretty simple. I guess if you change files and delete files at the same time on both ends, some deleted files might get recreated. At this point that's the worst I can see, but I could be very wrong... :-)
2. It doesn't actually replace Dropbox.
3. It does not seem very "viral" or income-generating. I know this is premature at this point, but without charging users for the service, is it reasonable to expect to make money off of this?
It seems to be built for a different use case; Nextcloud (server/client) and Syncthing (P2P) are already excellent Dropbox alternatives.