> Recently, the proprietary Dropbox Linux client dropped support for all Linux file systems except unencrypted ext4.
What the heck. Does anyone have more information about that? Any announcement? And why would it fail to work on an encrypted (LUKS?) ext4 filesystem, when encryption seemed to me to sit below the filesystem (since AFAIK ext4 doesn't support encryption itself)?
I too was a bit surprised by that statement. But according to Dropbox, full disk encryption should work fine.
"If you received a notification on Linux and you are running ext4, it may be because you are also running eCryptfs. eCryptfs is not supported. However, we support full disk encryption systems, such as LUKS, for Linux users."
Ubuntu's installer for a long while set up an "encrypted home directory" using eCryptFS, i.e., file-level encryption on top of ext4. That's the thing Dropbox is finding painful to support.
Migrating away from that to block-level "full disk" encryption basically requires copying all your files. If you're using less than 50% of your disk and feel comfortable with partition-resizing tricks, you can do it in place; otherwise the best approach is to back up your files (hopefully to encrypted media...) and restore them.
It's potentially misleading phrasing. "Unencrypted ext4" means that the filesystem Dropbox sees has to be actual ext4, not an encryption layer on top. There can be an encryption layer under it, and generally full-disk encryption works that way, by encrypting a block device and letting you run whatever filesystem you want on top.
"Unencrypted or decrypted ext4" might be slightly more accurate (although I guess it maybe sounds like you need to remove encryption, not that you just need to decrypt files in memory).
In fairness, it's pretty difficult (read: impossible) to enable full-disk encryption on an NVMe drive (read: most Ultrabooks, including the Dell XPS "Developer Edition").
Full-disk encryption usually means encryption at the block layer below the filesystem. It doesn't actually have to cover the full disk.
There is almost always at least one unencrypted partition on machines with full-disk encryption, since the boot loader and then the decryption routine have to be launched from somewhere. And yes, OEM recovery partitions or similar laptop-specific needs are another case.
Full-partition would be a more accurate term than full-disk.
I use full-disk encryption on my old ThinkPad. As in, the whole disk is encrypted, from first sector to last. There isn't even an MBR or a partition table on it! I boot the laptop from my USB stick, which contains the kernel and initrd. That's what I call full-disk encryption ;)
If you've ever installed Gentoo, it shouldn't be too difficult. Follow the installation steps, but instead of partitioning the hard drive and installing into one partition, make the filesystem across the whole drive (e.g. `mkfs.ext4 /dev/sda`) and install into that. The system on the laptop (Lenovo X61) is really old (like 8 years at least); back then I needed to perform some manual steps to ensure the initrd had the necessary tools so it could decrypt and mount the root filesystem, but this is probably not necessary anymore. Upgrading kernels has to be done carefully, because I must make sure the USB stick contains a compatible kernel. I also need to configure the UEFI boot menu so that it knows where to find the kernels. Fun stuff.
I wouldn't know where to start if you want to use Ubuntu or another more user-friendly distribution, though.
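For anyone curious, here is a rough sketch of one way to do the whole-disk setup described above using LUKS (the parent may have done it slightly differently, e.g. with plain dm-crypt); the device name and mount point are illustrative:

    # WARNING: this destroys everything on /dev/sda -- adjust to your own disk
    cryptsetup luksFormat /dev/sda            # LUKS header goes straight onto the raw disk, no partition table
    cryptsetup open /dev/sda cryptroot        # unlock it as /dev/mapper/cryptroot
    mkfs.ext4 /dev/mapper/cryptroot           # one big ext4 filesystem inside the encrypted container
    mount /dev/mapper/cryptroot /mnt/gentoo   # then continue the normal install into /mnt/gentoo
    # kernel + initramfs live on the USB stick; the initramfs must include
    # cryptsetup so it can open /dev/sda and mount the root filesystem at boot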
Or btrfs. Good thing I had a spare external hard drive I could dedicate to dropbox storage, because I'm sure not converting my drive array to a different FS just for that.
In principle I agree with you, and many could argue that the app is broken. But in all fairness it is not so easy. There are lots of little details and corner cases where filesystems that implement a POSIX fs have slightly different semantics. This is important, for example, when syncing files to disk/backend. Encryption is another problem. Error handling. See for example the annoying details here:
I thought the whole point of an encrypting filesystem was that applications using it would not need to care about such things? Unless you're going below the FS abstraction and actually accessing blocks of the device directly, it shouldn't matter.
One of the weird things about an encryption layer on top of a filesystem like eCryptFS (as opposed to encrypted block storage) is that it needs to encrypt and MAC the filenames somehow, but the underlying filesystem has maximum lengths on the filenames, so whatever space you're using for padding and authentication code needs to be squeezed in. And you also need to potentially find an encoding of the encrypted data in case the underlying filesystem doesn't treat filenames as bytes-except-for-ASCII-slash (e.g., common Mac and Windows filesystems treat filenames as Unicode of some form, and Mac ones even do canonicalization). So, your encrypted filenames end up being longer than your decrypted filenames, which means that your maximum filename length is shorter than the underlying filesystem's maximum length.
For a system like Dropbox, which automatically renames files to things like "file.txt (userbinator's conflicted copy 2018-12-25)", this is a problem. For eCryptFS, whose limit is "we typically recommend you limit your filenames to ~140 characters", (https://unix.stackexchange.com/a/32834), the uncertainty is a problem too.
Dropbox could make this work (and did in the past), but they decided it would be more reliable to say they won't try and won't make promises about their client working 100% of the time on these filesystems. (And I guess their support or business department was not thrilled with the option of "we'll let you try but we won't support you even though you're paying".)
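You can see the filename overhead for yourself on a stock Ubuntu setup; this assumes the ecryptfs-setup-private defaults (~/Private mounted over ~/.Private) with filename encryption enabled:

    touch ~/Private/hello.txt
    ls ~/.Private
    # -> ECRYPTFS_FNEK_ENCRYPTED.FWa...   (far longer than "hello.txt")
    # the prefix, padding and encoding are why the usable name length
    # drops to roughly 140 characters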
With LUKS it doesn't matter. But not all encryption methods are like that. That was really just an example of potential differences. In fact, when this (Dropbox dropping non-ext4 support) happened, the problem that triggered it was encryption-related, IIRC.
They could just give you warnings that the FS isn't supported. Instead the client refuses to run if it's not on a supported FS, even though it would very likely work fine.
It shouldn't have to. Dropbox ran fine on all filesystems for years. In September they sent everybody an email basically saying "you can't use Dropbox anymore, bye bye"
If I had to guess: because if you go an abstraction lower, you can do your job better. Dropbox actually uses kernel drivers on some platforms (not sure about Linux).
Because different filesystems have different features, and xattrs are one feature that isn't implemented everywhere.
Another example of a feature not implemented consistently everywhere is the cornucopia of file-locking APIs, which is why SQLite doesn't officially support being run on NFS.
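If you want to check whether a given mount actually supports user xattrs, a quick test with the standard attr tools looks like this (the filename is arbitrary):

    touch testfile
    setfattr -n user.test -v hello testfile   # fails with "Operation not supported" if the fs lacks xattrs
    getfattr -n user.test testfile            # prints user.test="hello" when xattrs work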
I'm not an active Dropbox user, but I find this very surprising. Next question: why wouldn't they open-source the client libraries? Supporting many platforms is already a huge task.
I set up Syncthing [1] to do the same for my vimwiki folder. It was surprisingly easy, and doesn't require any external storage services. It can even sync files automatically without internet, over the local network.
I've used Syncthing and it's excellent. But for me, the lack of a mobile app was a dealbreaker. Even though it's not open source, I switched to Resilio Sync [1] (formerly known as btsync). It ticks all the other boxes and just works amazingly well. I want to support them, so I sprang for the $20 one-time Pro license, which adds selective sync and a few other nice features.
It's lacking and has a weird workflow, but at least it worked for me. I didn't test it too much, as I hated the Apple experience and gave away the device.
All traffic between peers is encrypted. You can read more about the encryption below[1]. You can also create encrypted (at rest) folders[2] for replicating via public clouds while still maintaining 100% privacy.
I use Syncthing for my KeePassX database on 6 computers (Linux) and my phone. My music folder too. My photos folder between 3 computers. And then we use Syncthing to share Call of Duty maps between 5 people (Mac, Windows). All in all I really like it. I also start it with systemd on Linux.
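If your distro's Syncthing package ships the usual upstream user unit (most do, but check yours), starting it at login is a one-liner:

    systemctl --user enable --now syncthing.service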
I've been using Syncthing for more than a year across ~6 machines to keep them in sync. It works great for more complex setups than the article describes.
If you don't need mobile support, you (not parent, but someone else reading this) should try Unison. I use it and can vouch for its reliability, and if your use case is simple (sync files between N computers with a central server), then it's probably the best tool for the job as it's considerably faster than syncthing.
Yay Unison! I also use it and love it. I use it to back local servers up to the central file service; another copy flings to a remote server. Very fast, very happy. I'd love for it to handle the collection of iThings too, but Unison grabs the iTunes files, so I use that path.
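For reference, a minimal Unison run between a local directory and a central server over ssh looks roughly like this (the paths and hostname are made up; -batch applies non-conflicting changes without prompting):

    unison ~/docs ssh://fileserver//home/user/docs -batch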
If I have my desktop and my laptop, both using syncthing, does one (my desktop) need to always be online for me to sync files from my other device (my laptop)?
I'd love to use something else besides Dropbox, but it's convenient that Dropbox works as the middleman that is always operating.
It can only sync data between devices that are operating of course. But if you have three devices A,B,C then it can also sync A-B and later sync B-C, or sync A-C directly if both are running.
> It can only sync data between devices that are operating of course.
What does this mean? Does my desktop always need to be on? Or, can I change a file on my laptop, then, when I turn on my desktop, get those changes on the same file?
I do exactly this regularly, but it only works because for my most important folders (like personal documents), _somewhere_ I have a device online that is facilitating the sync. In my case that's my desktop, laptop, mobile phone (Android), and a cloud server that I use for other things.
To answer the question most directly: if you have only your desktop and your laptop, then for the sync to work, at some point they both need to be powered on and online at the same time. This is one area where Syncthing is a bit weaker than some cloud-backed options; with those you're essentially paying a service provider to replace the cloud server in my personal setup with their own always-on solution. Personally I prefer not to rely on a third party; depending on your goals, though, you should pick whatever solution makes sense for you.
You would need to have a chronological chain of pairs of devices operating at the same time, beginning with your laptop and ending with your desktop.
I tend to use my phone to ferry information between my home PC and work PC. If I change something in my sync folder at work, I'll run Syncthing on the PC and phone before leaving, then run it on my home PC and phone when I want the information to update there.
Another option is to simply have a third system (e.g. your own server) that is always on the internet and running Syncthing.
It means you need both computers to be on at the same time to sync between them. Alternatively you can spend $5/mo for a server to handle the syncing between laptop and desktop in the way the grandparent described.
It's an Android permission issue. Put it into /sdcardX/Android/data/com.nutomic.syncthingandroid/files/ and Syncthing can write to the folder just fine.
Nextcloud has the exact same SD limitation of needing an app-specific folder. For syncthing it's Android/data/com.nutomic.syncthingandroid/files/ and for nextcloud it's Android/media/com.nextcloud.client.
6) If we want to mount this automatically at login:
Adding it to /etc/fstab may not work because of the encryption, but a script run at login should do the job.
For example, we can create a script at /usr/local/bin/mountDB.sh and then add a line to /etc/sudoers so it can be run without a password prompt.
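A minimal sketch of that approach; the image path, mount point, and username are placeholders for whatever your setup uses:

    #!/bin/sh
    # /usr/local/bin/mountDB.sh -- loop-mount the Dropbox image at login
    mount -o loop /home/user/dropbox.ext4 /home/user/Dropbox

    # /etc/sudoers (edit with visudo): allow the script without a password
    user ALL=(root) NOPASSWD: /usr/local/bin/mountDB.sh

    # then call it from something that runs at login, e.g. ~/.profile:
    sudo /usr/local/bin/mountDB.sh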
It lets you mount a regular file as a filesystem, instead of an actual hardware device. I think it's called "loopback" because instead of going straight to a physical device like /dev/whatever, the block I/O loops back through the kernel and is served from the specified file.
I didn't know about entr either, but I'm using fswatch, which is largely similar and also portable, supporting not only Linux and BSD but also non-WSL Windows and Solaris, and leveraging the filesystem events API on macOS.
Huge additional plus: entr does not rely exclusively on inotify, it is also a BSD kqueue wrapper. It is the only command-line utility I have found that allows cross-platform (Linux/BSD) scripting with filesystem events.
I have a few scripts that rely on entr, and I am always sure they are OS-agnostic.
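For anyone who hasn't used it, a typical entr invocation is just a pipe; this hypothetical example re-runs make whenever a tracked C source file changes, and works identically on Linux and the BSDs:

    ls *.c | entr make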
Thanks, didn't know systemd can do that [0]! I must grudgingly admit that it seems to be quite useful, though I still don't like having a single app doing all that stuff. But that's off topic... :)
Both path units and entr use the inotify kernel API on Linux. If your needs get more complex, there are Python, Perl, etc. APIs for the inotify libraries.
Also, if you have an older Linux distro that predates inotify(), there's similar functionality in auditd: it will log changes to a file under a tag of your choosing, and then you can tail the audit log.
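Roughly, that auditd approach looks like this (the path and key name are illustrative):

    auditctl -w /home/user/Dropbox -p wa -k dropbox-watch   # log writes and attribute changes under this key
    tail -f /var/log/audit/audit.log | grep dropbox-watch   # follow matching events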
.... Except, a file added to your dropbox will only appear in your local box the next time you create a local file -- there is nothing to wake the local entr when a file is remotely added to dropbox.
If I have a shared dropbox folder with someone, as soon as they put the file in, I get a copy. There is no way to get a notification that would provide a similar experience without the real dropbox client, afaik.
Consider using git or another SCM instead. You don't need a centralized server; you can push/pull between any two machines with ssh (or a handful of other protocols). Encrypted end to end. Supports branches, useful for stuff like dotfiles on different machines. You don't need commit messages, just make an alias to add all changes and commit with an empty message. You have a full history, can revert back to any state, inspect any old version, diff versions, etc. There are git clients for mobile. Free as in beer and freedom. Check-summed. Puts you in control of when to sync, supports offline. Future-proof, acquisition-proof, VC-proof. Familiar.
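A hedged sketch of the "no commit messages" workflow: an alias (the name sync is arbitrary) that stages everything, commits with an empty message, and exchanges changes with the remote:

    git config --global alias.sync '!git add -A; git commit --allow-empty-message -m ""; git pull --rebase && git push'
    # then on any machine, syncing is just:
    git sync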
Yes, you'd have to automate the pull for this to work, and then you'd want a way to be notified of conflicts so that they get resolved ASAP.
Come to think of it, Dropbox isn't really so great at the notification aspect of this either. It creates a 'conflicted' file but there is no notification that I'm aware of. I have to remind myself to periodically check for conflicts.
Is there a standard-ish git merge driver that resolves all merges with a Dropbox-style "conflicted copy", so you can unconditionally git pull in a cronjob and it never leaves the repo in an unmerged state?
Why does someone have to bring up this eleven-year-old comment any time file synchronisation or Dropbox is mentioned?
Dropbox is now broken for certain Linux configurations, and someone had a bit of fun writing a shell one-liner to replicate some of its functionality. Meanwhile, the original eleven-year-old comment was partly a throwaway remark that Dropbox didn't do anything he couldn't already do, which meant it wasn't interesting to him (plus two other points, but nobody links to it for those).
That's not even "not really the same", it's "and since you mentioned Dropbox, let's make fun of BrandonM some more". It comes across as just mean-spirited and shitty to me.
In that setup entr only watches files that existed when the systemd service was first started, so new files won't trigger a backup. Something like this should work:
while true; do find "$ORG_DIR" | entr -d -r rclone sync -v "$ORG_DIR" "$REMOTE:org"; done
That's constantly polling the filesystem, isn't it?
I've never used it myself, but I know systemd has file watching built in. I think using `PathModified=/path/to/org` would also eliminate the need for entr [1].
Also, doesn't entr invoke its command in parallel when there are multiple changes in quick succession? And wouldn't multiple parallel instances of rclone potentially mess up the backup?
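To make the PathModified= suggestion concrete, here's a sketch of a user-level path unit plus a matching oneshot service; the unit names, paths, and remote are placeholders, and note that PathModified= on a directory is not recursive:

    # ~/.config/systemd/user/org-sync.path
    [Unit]
    Description=Watch the org directory

    [Path]
    PathModified=/home/user/org

    [Install]
    WantedBy=default.target

    # ~/.config/systemd/user/org-sync.service
    [Unit]
    Description=Sync the org directory with rclone

    [Service]
    Type=oneshot
    ExecStart=/usr/bin/rclone sync -v /home/user/org remote:org

Enable it with `systemctl --user enable --now org-sync.path`.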
rclone is great, I have been using it to back up Google Photos. I had tried so many different solutions before achieving this. Great to see it here on HN.
rclone is such a great tool indeed. This is almost the only viable approach for transferring your data from one cloud to another. Huge thanks to the creator and contributors!
It's kind of funny to replace a proprietary client for a proprietary service with FOSS tools when you could completely replace Dropbox with FOSS alternatives instead and enjoy Freedom: Syncthing works very well, and while syncing is not its main feature, you could also use Nextcloud.
If you did want to run the original Dropbox client, you could likely make a blob image on your encrypted partition, loopmount it and then sync to that.
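Something along these lines, in case anyone wants to try it (sizes and paths are arbitrary; mkfs will warn that the target is not a block device, which is fine):

    truncate -s 10G ~/dropbox.ext4                 # sparse image file on the encrypted partition
    mkfs.ext4 ~/dropbox.ext4                       # format the image as plain ext4
    sudo mount -o loop ~/dropbox.ext4 ~/Dropbox    # loop-mount it where the Dropbox client expects its folder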
I did something similar actually. I have my 1Password vault on Dropbox so I can get it on my Linux machines.
Dropbox has some nice Python modules/libraries for this, so I'm able to download the directory as a zip and extract it in memory.
It's stupid and unidirectional, but I'm sure more talented people could build a bidirectional Python Dropbox client in no time, using inotify, since Python has bindings for that too.
rclone is great, but I find syncing manually in intervals to be ideal since you get to see what is changing from your destination/remote. I use CrashPlan Pro to back up everything automatically, and then a few times a week I'll run rclone on my data drive so that I can see what has changed.
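If you just want to preview the changes before committing to them, rclone's standard flags cover that (the paths and remote are examples):

    rclone sync --dry-run /mnt/data remote:backup   # show what would be copied or deleted, change nothing
    rclone sync -v /mnt/data remote:backup          # then do it for real, logging each transfer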
Dropbox is a no-no for one simple reason: there is no end-to-end encryption. Unless I'm missing something?
I wouldn't want a disgruntled employee to fiddle with my files.
Is there any Dropbox-like service that lets you control your own private keys, without resorting to the ugliness of uploading an encrypted image to Dropbox?
It's important to note though that their client is not open-source, so if one goes through all that trouble to use end-to-end encryption, it seems a bit unsatisfactory to me to then trust this company to actually keep the private keys on my machines (and encrypt things correctly).
Personally I use Syncthing, which doesn't do encryption but also only uses my own devices, so I can keep the data on my machines at all times.
Tresorit is nice, but seems kind of pricy. Boxcryptor works well on top of Dropbox. Personally, I've switched to Nextcloud which keeps everything in my control, and it also supports e2e (though the e2e UX is still rough around the edges).
Even though it's proprietary, Resilio Sync does encrypted peer-to-peer sync, plus it allows encryption-only nodes. These contribute bandwidth, but cannot decrypt the data.
End-to-end encryption means that they never see your files in plaintext at all, even if they want to. Importantly, it means it is impossible by design to make mistakes like accidentally not checking passwords on login https://techcrunch.com/2011/06/20/dropbox-security-bug-made-... .
Transport security and at-rest security are very important (and are the best you can do if you want Dropbox to retain the ability to access your files, e.g., so their servers can show them in a web interface), but they're not the same sort of thing as end-to-end encryption.
Do you want something different than the Dropbox exclusion list ("dropbox exclude add ...")?
That only supports excluding directories, not individual files, and the actual list of exclusions is buried in some local binary config, both of which are moderate annoyances - perhaps those are your qualms?