My one-liner Linux Dropbox client (lpan.io)
510 points by l_pan_ on Dec 25, 2018 | 160 comments



> Recently, the proprietary Dropbox Linux client dropped support for all Linux file systems except unencrypted ext4.

What the heck. Does anyone have more information about that? Any announcement? And why would it fail to work on an encrypted (LUKS?) ext4 FS, when encryption seemed to me to sit below the filesystem (since AFAIK ext4 doesn't support encryption itself)?


I too was a bit surprised by that statement. But according to Dropbox, full disk encryption should work fine.

”If you received a notification on Linux and you are running ext4, it may be because you are also running ecryptfs. ecryptfs is not supported. However, we support full disk encryption systems, such as LUKS for Linux users.”

Source: https://www.dropboxforum.com/t5/Error-messages/Dropbox-clien...


Oh, I can see why ecryptfs would be a pain to support, but I always considered it to be some kind of an epic-scale hack.


Perfect, thank you for the -forum- link! :)


De nada :)


It's FUD. ext4 on LUKS, which is what pretty much everyone is using on a desktop, still works and is still supported by the client.


Ubuntu's installer for a long while set up an "encrypted home directory" using eCryptFS, i.e., file-level encryption on top of ext4. That's the thing Dropbox is finding painful to support.

Migrating away from that to block-level "full disk" encryption basically requires copying all your files. If you're using <50% of your disk and you feel comfortable doing tricks with partition resizing you can do it, otherwise the best approach is to back up your files (hopefully to encrypted media...) and restore them.


No, it's not FUD. "Encrypted ext4" means encfs and ecryptfs, not ext4 on LUKS, and yes, they did drop support.


Just FYI: Ext4 has file-level encryption built-in now [1, 2], so 'encrypted ext4' is actually different from encfs and ecryptfs.

[1]: https://www.kernel.org/doc/html/v4.19/filesystems/fscrypt.ht...

[2]: https://wiki.archlinux.org/index.php/ext4#Using_file-based_e...


I didn't know this; it's news to me.


It's potentially misleading phrasing. "Unencrypted ext4" means that the filesystem Dropbox sees has to be actual ext4, not an encryption layer on top. There can be an encryption layer under it, and generally full-disk encryption works that way, by encrypting a block device and letting you run whatever filesystem you want on top.

"Unencrypted or decrypted ext4" might be slightly more accurate (although I guess it maybe sounds like you need to remove encryption, not that you just need to decrypt files in memory).
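A minimal sketch of that layering with LUKS (the device name /dev/sdb1 and the mapper name are hypothetical; requires cryptsetup and root):

```shell
# Encrypt at the block layer with LUKS, then put plain ext4 on top.
# Dropbox only ever sees the ext4 layer; the encryption sits underneath.
setup_luks_ext4() {
  dev="$1"                                  # e.g. /dev/sdb1 (assumption)
  sudo cryptsetup luksFormat "$dev"         # encrypt the block device
  sudo cryptsetup open "$dev" cryptdata     # exposes /dev/mapper/cryptdata
  sudo mkfs.ext4 /dev/mapper/cryptdata      # plain ext4 on the decrypted view
  sudo mount /dev/mapper/cryptdata /mnt/data
}
# Usage: setup_luks_ext4 /dev/sdb1
```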


"Encrypted ext4" most likely refers to ext4's built-in support for file-based encryption.


In fairness, it's pretty difficult (read: impossible) to enable full-disk encryption on an NVMe drive (read: most Ultrabooks, including the Dell XPS "Developer Edition").

https://www.dell.com/community/Linux-Developer-Systems/XPS-1...


Full-disk encryption usually means encryption at the block layer below the filesystem. It doesn't actually have to cover the full disk.

There is almost always at least one unencrypted partition on machines with full-disk encryption, since the boot loader and then the decryption routine have to be launched from somewhere. And yes, OEM recovery partitions or similar laptop-specific needs are another case.

Full-partition would be a more accurate term than full-disk.


I use full-disk encryption on my old thinkpad. As in, the whole disk is encrypted, from first sector to last. There isn't even a MBR or a partition table on it! I boot the laptop from my USB stick which contains the kernel and initrd. That's what I call full-disk encryption ;)


Cool. :) Even there, you do have some disk or partition unencrypted, it just happens to be on removable media. Good counterexample, though uncommon.


Just make sure nobody swaps out your stick :)


I would love to see a post on how to set that up!


If you've ever installed Gentoo, it shouldn't be too difficult. Follow the installation steps, but instead of partitioning the hard drive and installing into one partition, make the filesystem span the whole drive (e.g. `mkfs.ext4 /dev/sda`) and install into that. The system on the laptop (Lenovo X61) is really old (8 years at least); back then I needed to perform some manual steps to ensure the initrd had the tools to mount and decrypt the root partition, which is probably not necessary anymore. Upgrading kernels has to be done carefully, because I must make sure the USB stick contains a compatible kernel. I also need to configure the UEFI boot menu so that it knows where to find the kernels. Fun stuff.

I wouldn't know where to start if you want to use ubuntu or another more user friendly distribution though.


You _can_ do it with libreboot on ancient thinkpads, but it requires a decent time (or money) investment and some know-how. https://libreboot.org/docs/gnulinux/encrypted_trisquel.html


DiskCryptor does it in Windows fairly easily and lets you install the bootloader to USB.


Isn’t this how FileVault works?


Be more respectful. It broke my workflow, which involves keeping my $HOME on NFS; that was also nuked by the stupid Dropbox change.


Add it to the already long list of software that misbehaves over NFS.

Ceph/RBD with the filesystem of your choice that supports xattrs is pretty much the only way to get full application compatibility.


NFS is widely supported and has great performance. I'm open to alternatives, but only if supported by FreeNAS, where my files live.


Why don't I just keep my data in a database then?


That's essentially what Ceph does. It's just implementing a filesystem interface on top of a generic object store.


Yes, sorry, there was sarcasm involved.


When you install Ubuntu from scratch and follow the defaults, you end up with ecryptfs, not LUKS.


They removed ecryptfs recently (around 17.10, I think); they only support LUKS now.


That news hit HN top stories a few months ago: https://news.ycombinator.com/item?id=17732912


Doesn’t work on ZFS either. I dropped Dropbox because of this.


You can create a zvol for your dropbox folder and format it in ext4 and it'll work: https://pthree.org/2012/12/21/zfs-administration-part-xiv-zv...
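A rough sketch of that zvol approach (the pool name `tank`, the 20G size, and the mount point are assumptions; requires root):

```shell
# Carve an ext4-formatted zvol out of a ZFS pool for the Dropbox folder.
make_dropbox_zvol() {
  pool="${1:-tank}"                           # pool name is an assumption
  sudo zfs create -V 20G "$pool/dropbox"      # 20G block device backed by ZFS
  sudo mkfs.ext4 "/dev/zvol/$pool/dropbox"    # real ext4, which Dropbox accepts
  sudo mount "/dev/zvol/$pool/dropbox" "$HOME/Dropbox"
}
# Usage: make_dropbox_zvol tank
```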


Or btrfs. Good thing I had a spare external hard drive I could dedicate to dropbox storage, because I'm sure not converting my drive array to a different FS just for that.


Why would an application like this care what file system it’s running on?


In principle I agree with you, and many could argue that the app is broken. But in all fairness it is not so easy. There are lots of little details and corner cases where filesystems that implement a POSIX fs have slightly different semantics. This is important, for example, when syncing files to disk/backend. Encryption is another problem. Error handling too. See these for examples of the annoying details:

https://danluu.com/file-consistency/

http://danluu.com/filesystem-errors/


> Encryption is another problem.

I thought the whole point of an encrypting filesystem was that applications using it would not need to care about such things? Unless you're going below the FS abstraction and actually accessing blocks of the device directly, it shouldn't matter.


One of the weird things about an encryption layer on top of a filesystem like eCryptFS (as opposed to encrypted block storage) is that it needs to encrypt and MAC the filenames somehow, but the underlying filesystem has maximum lengths on the filenames, so whatever space you're using for padding and authentication code needs to be squeezed in. And you also need to potentially find an encoding of the encrypted data in case the underlying filesystem doesn't treat filenames as bytes-except-for-ASCII-slash (e.g., common Mac and Windows filesystems treat filenames as Unicode of some form, and Mac ones even do canonicalization). So, your encrypted filenames end up being longer than your decrypted filenames, which means that your maximum filename length is shorter than the underlying filesystem's maximum length.

For a system like Dropbox, which automatically renames files to things like "file.txt (userbinator's conflicted copy 2018-12-25)", this is a problem. For eCryptFS, whose limit is "we typically recommend you limit your filenames to ~140 characters", (https://unix.stackexchange.com/a/32834), the uncertainty is a problem too.

Dropbox could make this work (and did in the past), but they decided it would be more reliable to say they won't try and won't make promises about their client working 100% of the time on these filesystems. (And I guess their support or business department was not thrilled with the option of "we'll let you try but we won't support you even though you're paying".)


With LUKS it doesn't matter, but not all encryption methods are like that. That was really just an example of potential differences. In fact, when this (Dropbox dropping non-ext4 support) happened, the problem that triggered it was encryption-related, IIRC.


They could just give you warnings that the FS isn't supported. Instead the client refuses to run if it's not on a supported FS, even though it would very likely work fine.


I agree here. But they just want to cover their asses I guess.


It shouldn't have to. Dropbox ran fine on all filesystems for years. In September they sent everybody an email basically saying "you can't use Dropbox anymore, bye bye"


If I had to guess: because if you go an abstraction lower you can do your job better. Dropbox actually uses kernel drivers on some platforms (not sure about Linux).

I guess power users are not the target market.


Because different filesystems have different features, and xattrs are one feature that isn't implemented everywhere.

Another example of features not implemented everywhere is the cornucopia of file locking APIs which is why SQLite doesn't officially support being run on NFS.


Filesystem primitives the application requires to deliver the desired functionality.


I'm not an active Dropbox user, but I find this very surprising. Next question: why wouldn't they open-source the client libraries? Supporting many platforms is already a huge task.


I have Git repos that can't be cloned onto ecryptfs, so it's not surprising to me. Make an encrypted partition or just encrypt your entire disk.


Ext4 does support encryption. [0] https://lwn.net/Articles/639427/


I setup Syncthing [1] to do the same for my vimwiki folder. It was surprisingly easy, and doesn't require any external storage services. And it can even automatically sync files without internet, over the local network.

[1] https://syncthing.net/


I've used Syncthing and it's excellent. But for me, the lack of a mobile app was a dealbreaker. Even though it's not open source, I switched to Resilio Sync[1] (formerly known as btsync). It ticks all the other boxes, and just works amazingly well. I want to support them so I sprang for the $20 one time Pro license which adds selective sync and a few other nice features.

[1] https://www.resilio.com/individuals/



For Android yes, but the GP is likely using iOS where no such client seems to exist.


I used fsync when I bought an Apple device: https://itunes.apple.com/us/app/fsync/id964427882

It's lacking and has a weird workflow, but at least it worked for me. Didn't test it too much as I hated the apple experience and gave away the device.


The reviews would show you that it no longer works at all for iOS 12 and beyond.


That's correct.


Does Resilio sync encrypt the file names and folders during sync or just the data?


All traffic between peers is encrypted. You can read more about the encryption below[1]. You can also create encrypted (at rest) folders[2] for replicating via public clouds while still maintaining 100% privacy.

[1] https://help.resilio.com/hc/en-us/articles/205451025-Can-oth...

[2] https://help.resilio.com/hc/en-us/articles/207370466-Encrypt...


I use Syncthing for my KeepassX on 6 computers (Linux) and my phone. My music folder too. My photos folder between 3 computers. And then we use Syncthing to share Call of Duty maps between 5 people (Mac, Windows). All in all I really like it. I also start it with systemd on Linux.


I've been using Syncthing for more than a year across ~6 machines to keep them in sync. It works great for more complex setups than the article describes.


If you don't need mobile support, you (not parent, but someone else reading this) should try Unison. I use it and can vouch for its reliability, and if your use case is simple (sync files between N computers with a central server), then it's probably the best tool for the job as it's considerably faster than syncthing.


Yay Unison! I also use it and love it. I use it to back local servers up to the central file service, another copy flings to a remote server. Very fast, very happy. I'd love for it to do the collection of iThings, but Unison grabs the iTunes files, so I use that path.


Just had a look, definitely looks amazing especially this particular feature for ignoring files [1]. I've been waiting so long for this in Dropbox.

[1] https://docs.syncthing.net/users/ignoring.html


You can simply use grep to ignore files using the entr + rclone script ;-)


I'm sorry if this is a stupid question:

If I have my desktop and my laptop, both using syncthing, does one (my desktop) need to always be online for me to sync files from my other device (my laptop)?

I'd love to use something else besides Dropbox, but it's convenient that Dropbox works as the middleman that is always operating.


It can only sync data between devices that are operating of course. But if you have three devices A,B,C then it can also sync A-B and later sync B-C, or sync A-C directly if both are running.


> It can only sync data between devices that are operating of course.

What does this mean? Does my desktop always need to be on? Or, can I change a file on my laptop, then, when I turn on my desktop, get those changes on the same file?


I do exactly this regularly, but it only works because for my most important folders (like personal documents), _somewhere_ I have a device online that is facilitating the sync. In my case, my Desktop, Laptop, mobile phone (Android), and a Cloud Server that I use for other things.

To answer the question most directly: If you have only your desktop and your laptop, then for the sync to work, at some point they both need to be powered on and online at the same time. This is one area where syncthing is a bit weaker than some other cloud-backed options; at that point you're essentially paying a service provider to replace the cloud server in my personal setup with their own always-on solution. Personally I prefer not to rely on a third-party, depending on your goals though you should pick a solution that makes sense for you.


You would need to have a chronological chain of pairs of devices operating at the same time, beginning with your laptop and ending with your desktop.

I tend to use my phone to ferry information between my home PC and work PC. If I change something in my sync folder at work, I'll run syncthing on the PC and phone before leaving, then run it on my home PC and phone when I want the information to update there.

Another option is simply to have a third system (e.g. your own server) that is constantly on the Internet and running syncthing.


It means you need both computers to be on at the same time to sync between them. Alternatively you can spend $5/mo for a server to handle the syncing between laptop and desktop in the way the grandparent described.


Or plug in something small like a raspberry pi with an external USB hard drive.


Though very useful, syncthing unfortunately uses direct communication which leads to problems with port forwarding.


They have relay servers.



Sadly, the Android-App still can't save stuff on the SD card. My main show-stopper.


It's an Android permission issue. Put it into /sdcardX/Android/data/com.nutomic.syncthingandroid/files/ and Syncthing can write to the folder just fine.


The devs should fix this.


They fundamentally cannot fix it. Unless you mean the people developing Android.


Other apps can use the SD card just fine. Example: Nextcloud


Nextcloud has the exact same SD limitation of needing an app-specific folder. For syncthing it's Android/data/com.nutomic.syncthingandroid/files/ and for nextcloud it's Android/media/com.nextcloud.client.


I don't mind an app specific folder. The app should do it for you.


Dropbox will work if you just create a local loopback file for an ext4 filesystem and mount the Dropbox folder on it.


The above is a great answer.

To expand on this (from some googling). All of this requires root.

1) Create a sparse file (actual size depends on usage)

  truncate -s 100G /home/zzz/Dropbox.image
2) Create a ext4 filesystem on it

  mkfs.ext4 -m0 /home/zzz/Dropbox.image
3) Mount it as a loopback file system

  sudo mount /home/zzz/Dropbox.image /home/zzz/Dropbox-fs
4) Create / Copy your dropbox folder to Dropbox-fs

  sudo mkdir /home/zzz/Dropbox-fs/Dropbox
  sudo chown -R zzz.zzz /home/zzz/Dropbox-fs/Dropbox
  cp -r /home/zzz/Dropbox-original /home/zzz/Dropbox-fs/Dropbox
5) Start Dropbox and point it to the new location

6) If we want to mount this automatically at login, adding it to /etc/fstab may not work because of encryption.

But perhaps some script at login to mount it will work. For example, we can create a script at /usr/local/bin/mountDB.sh and then add a line to /etc/sudoers

  zzz ALL=(root) NOPASSWD: /usr/local/bin/mountDB.sh
Then somehow call this script at login of user zzz.
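A sketch of what that hypothetical /usr/local/bin/mountDB.sh might contain (paths follow the steps above):

```shell
#!/bin/sh
# mountDB.sh sketch: loop-mount the Dropbox image unless it is already mounted.
mount_dropbox_image() {
  image="${1:-/home/zzz/Dropbox.image}"
  mnt="${2:-/home/zzz/Dropbox-fs}"
  # mountpoint -q exits 0 only if $mnt is already a mount point
  mountpoint -q "$mnt" || mount -o loop "$image" "$mnt"
}
# In the real script, finish with:  mount_dropbox_image "$@"
# so that `sudo /usr/local/bin/mountDB.sh` (via the sudoers line) does the mount.
```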


I've never heard of a 'loopback filesystem' before. Can you expand a little on what this is?


It lets you mount a file as a filesystem, instead of an actual hardware device. I think it's called "loopback" because instead of reading from a real device, the request loops back through the existing filesystem to read the specified file.

See the man page for `mount`: https://linux.die.net/man/8/mount (search for "loop")


The right term is a 'loopback device' which allows you to use a file on an existing filesystem as a block device.

Here's what the flow looks like:

  open(/path/to/myfile) ->
  ext4 driver ->
  block operations on /dev/loop0 ->
  open(/path/to/loop-file) ->
  block operations on the data region of the file ->
  btrfs driver ->
  block operations on /dev/sda


I hadn't seen entr before! My first glance at your one-liner:

  find $ORG_DIR | entr -r rclone sync -v $ORG_DIR $REMOTE:org

is that find will enumerate all your existing files, but files created later won't get picked up.

It seems like entr is prepared for that, though: you should just pass -d. In fact, why use find at all if you just want to sync the whole directory?


I didn't know about entr either, but I'm using fswatch, which is largely similar and portable as well, supporting not only Linux and BSD but also non-WSL Windows and Solaris, and leveraging the Filesystem Events API on macOS.

https://github.com/emcrisostomo/fswatch


Huge additional plus: entr does not rely exclusively on inotify, it is also a BSD kqueue wrapper. It is the only command-line utility I found that allows cross-platform (Linux/BSD) scripting with filesystem events.

I have a few scripts that rely on entr and I am always sure they are OS-agnostic.


Then take a look at fswatch! (See sibling comment)


It's mentioned on Reddit that passing the -r flag will pick up new files, right? Link: https://www.reddit.com/r/linux/comments/a92m1u/comment/ecgqf...

EDIT: Oh, I see what you're saying; that's confusing, though. How is it supposed to work in the first place, then?? :))


I love entr for running tests while I'm working, it's nice to have a tool that works reliably in any ecosystem.


Since it is being used with systemd, a systemd path unit could be used to monitor the file system, removing the need for entr.


Thanks, didn't know systemd can do that [0]! I must grudgingly admit that it seems to be quite useful, though I still don't like having a single app doing all that stuff. But that's off topic... :)

[0] https://blog.andrewkeech.com/posts/170809_path.html


Both path units and entr use the inotify kernel API on Linux. If your needs get more complex, there are python, perl, etc, APIs for the inotify libraries.

Also, if you have an older linux distro that predates inotify(), there's similar functionality within auditd that will log changes to a file with a tag of your choosing, then you can tail the file.


dnotify is another older one.


... Except a file added to your Dropbox will only appear on your local box the next time you create a local file -- there is nothing to wake the local entr when a file is remotely added to Dropbox.


Could write a little script to add then delete a temp file on some cron job, or at login, or something.
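Something along these lines would wake an entr pipeline watching the directory (the `.sync-poke` filename is made up):

```shell
# Touch-and-delete a throwaway file so entr sees a change and re-runs its command.
poke_sync() {
  dir="${1:?usage: poke_sync <watched-dir>}"
  touch "$dir/.sync-poke"
  rm -f "$dir/.sync-poke"
}
# e.g. from cron, every 5 minutes:
#   */5 * * * * /usr/local/bin/poke_sync.sh "$HOME/org"
```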


If I have a shared dropbox folder with someone, as soon as they put the file in, I get a copy. There is no way to get a notification that would provide a similar experience without the real dropbox client, afaik.


Consider using git or another SCM instead. You don't need a centralized server; you can push/pull between any two machines with ssh (or a handful of other protocols). Encrypted end to end. Supports branches, useful for stuff like dotfiles on different machines. You don't need commit messages; just make an alias to add all changes and commit with an empty message. You have a full history, can revert back to any state, inspect any old version, diff versions, etc. There are git clients for mobile. Free as in beer and freedom. Check-summed. Puts you in control of when to sync, supports offline. Future-proof, acquisition-proof, VC-proof. Familiar.
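For the "alias to add all changes and commit with an empty message" idea, one possible incantation (the alias name `save` is arbitrary):

```shell
# Define a git alias that stages everything, commits with an empty
# message, then syncs with the remote in one step.
git config --global alias.save \
  '!git add -A && git commit --allow-empty-message -m "" && git pull --rebase && git push'
# Usage: run `git save` in the repo whenever you want to sync.
```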


I do this, but 90% of the time I forget to pull on the other machine and have to deal with conflicts.


Yes, you'd have to automate the pull for this to work, and then you'd have a way to notify on conflicts so that they are resolved ASAP.

Come to think of it, Dropbox isn't really so great at the notification aspect of this either. It creates a 'conflicted' file but there is no notification that I'm aware of. I have to remind myself to periodically check for conflicts.


Is there a standard-ish git merge driver that resolves all merges with a Dropbox-style "conflicted copy", so you can unconditionally git pull in a cronjob and it never leaves the repo in an unmerged state?


Check out Sparkleshare.


Ooh this looks good!


Yes, I also found that git is too much hassle for notes.


Doesn't work for the non-text, non-code files that people normally use Dropbox for, e.g. Word docs and Excel spreadsheets.


Of course it does. There's no such restriction in git, you can track binary files just fine.


Not really the same, but this reminds me of the top comment of the Dropbox announcement: https://news.ycombinator.com/item?id=8863


Why does someone have to bring up this eleven-year-old comment any time file synchronisation or Dropbox is mentioned?

Dropbox is now broken for certain Linux configurations, and someone had a bit of fun writing a shell one-liner to replicate some of its functionality. Meanwhile, the original eleven-year-old comment was partly a throwaway remark that it didn't do anything he could not do already, which means it wasn't interesting to him (plus two other points, but nobody links to it for those).

That's not even "not really the same", it's "and since you mentioned Dropbox, let's make fun of BrandonM some more". It comes across as just mean-spirited and shitty to me.


> any time file synchronisation or Dropbox is mentioned

This is one of the most relevant topics you could quote it on, really. The post is tackling one-way sync and calling it a rough equivalent to Dropbox.


My thought exactly. My one-liner dropbox client looks a lot like `rsync foo bar`.


In that setup, entr only watches files that existed when the systemd service was first started, so new files won't trigger a backup. Something like this should work:

  while true; do find $ORG_DIR | entr -d -r rclone sync -v $ORG_DIR $REMOTE:org; done


That's constantly polling the filesystem, isn't it?

I've never used it myself, but I know systemd has file watching built in. I think using `PathModified=/path/to/org` would also eliminate the need for entr [1].

[1] https://www.freedesktop.org/software/systemd/man/systemd.pat...
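A minimal pair of user units along those lines (the unit name `org-sync`, the `%h/org` path, and the remote name are hypothetical; the path unit activates the service of the same name):

```ini
# ~/.config/systemd/user/org-sync.path
[Path]
PathModified=%h/org

[Install]
WantedBy=default.target

# ~/.config/systemd/user/org-sync.service
[Service]
Type=oneshot
ExecStart=/usr/bin/rclone sync -v %h/org remote:org
```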


It should re-poll only when a new file is added. In OP's usage scenario that won't happen very often.

The systemd solution may be even better, though it seems there is no way to ignore changes to the temporary files that emacs likes to create.


It looks like there might be a bug. What happens when you add a new file? A quick scan of the entr help text suggests you should be using the -d flag.


Ha! I took too long to format my response :). The entr examples from [1] seem to do the same (ls *.rb for example).

[1] http://eradman.com/entrproject/


Also, doesn't entr invoke its command in parallel when there are multiple changes in quick succession? And wouldn't multiple parallel instances of rclone potentially mess up the backup?

(I didn't read any of the man pages)


Yep... specifically, use the -d flag in a loop, since all it does is exit the process when a new file is added:

  while true; do ls -d src/*.c | entr -d <cmd>; done


The true workhorse behind this one-liner is the inotify (Linux) or FSEvents (Mac) API. Not sure what the Windows equivalent is.

If your favorite language has access to those, then you can basically build your own file syncing client.
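As a sketch of that idea in plain shell (assumes the inotify-tools package for `inotifywait`; directory and remote names are placeholders):

```shell
# Re-run rclone whenever anything changes under the watched directory.
watch_and_sync() {
  dir="$1" remote="$2"
  # inotifywait blocks until an event fires, then the loop body syncs
  while inotifywait -r -e modify,create,delete,move "$dir"; do
    rclone sync -v "$dir" "$remote"
  done
}
# Usage: watch_and_sync "$HOME/org" remote:org
```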


rclone is great; I have been using it to back up Google Photos. I had tried so many different solutions before achieving this. Great to see it here on HN.


rclone is such a great tool indeed. This is almost the only viable approach for transferring your data from one cloud to another. Huge thanks to the creator and contributors!


It's kind of funny to replace a proprietary client for a proprietary service with FOSS tools when you could completely replace Dropbox with FOSS alternatives instead and enjoy freedom: Syncthing works very well, and although syncing is not its main feature, Nextcloud is another option.


If only there was a platform agnostic, standards-based, cloud platform that you could point any old SSH tool at.

Imagine if such a provider existed and was running on top of ZFS and maintained the current, stable version of 'borg[1]' on the server side ...

Probably too much to ask.

[1] https://www.stavros.io/posts/holy-grail-backups/


Given the username, I’m assuming this is tongue-in-cheek because rsync.net is pretty much that.


Seedbox server hosting might be usable here. They are basically shared Linux hosts with loads of disk space and network bandwidth.


I've got one too, but it syncs to ipfs/ipld and uses gpg for encryption. https://github.com/peerparty/fs2ipld/blob/master/fs2ipld.sh



If you did want to run the original Dropbox client, you could likely make a blob image on your encrypted partition, loopmount it and then sync to that.


I did something similar actually. I have my 1Password vault on Dropbox so I can get it on my Linux machines.

Dropbox has some nice python modules/libraries to support me, so I’m able to download the directory as a zip and extract it in memory.

It's stupid and unidirectional, but I'm sure more talented people could build a bidirectional Python Dropbox client in no time, with inotify, since Python has bindings for that too.


rclone is great, but I find syncing manually in intervals to be ideal since you get to see what is changing from your destination/remote. I use CrashPlan Pro to back up everything automatically, and then a few times a week I'll run rclone on my data drive so that I can see what has changed.


You might be interested in this free and open-source software: duplicati.com


Why does the client care which filesystem you use???


Different supported features make it easier to maintain such a client. Example: Some filesystems notify observers on file change, some don't.


This shouldn't affect other local filesystems, though. Inotify won't work with NFS, but it should be OK with filesystems owned by the local system.


The mistake is assuming that there's something special about 'local' filesystems. They have as much feature disparity as networked ones.


This is cool. Thanks for sharing.


Wow, this is better than getting an FTP account, mounting it locally with curlftpfs, and then using SVN or CVS on the mounted filesystem.



This is _VERY_ cool but I stopped using dropbox and keep everything on my self-hosted Nextcloud.


Dropbox is a no-no for one simple reason: there is no end-to-end encryption. Unless I don't know something? I wouldn't want a disgruntled employee to fiddle with my files.


Is there any Dropbox-like service that lets you control your own private keys, without resorting to the ugliness of uploading an encrypted image to Dropbox?




It's important to note though that their client is not open-source, so if one goes through all that trouble to use end-to-end encryption, it seems a bit unsatisfactory to me to then trust this company to actually keep the private keys on my machines (and encrypt things correctly).

Personally I used syncthing which doesn't do encryption but also only uses my own devices, so I can keep the data on my machines at all times.

In the past I used seafile which does support encryption (and it's self-hostable): https://www.seafile.com/en/home/


Tresorit is nice, but seems kind of pricy. Boxcryptor works well on top of Dropbox. Personally, I've switched to Nextcloud which keeps everything in my control, and it also supports e2e (though the e2e UX is still rough around the edges).


I think syncthing wins here. Not at rest, mind, but since you control the endpoints that should be okay.


Even though it's proprietary, Resilio Sync does encrypted peer-to-peer sync, plus it allows encryption-only nodes. These contribute bandwidth, but cannot decrypt the data.


I use a cryfs mount stored in Dropbox. It's been quite painless.


There is restic


Just sync an encrypted image if your threat model includes disgruntled employees motivated enough to mess with your account.


Could be problematic if the image is huge.


Anyone have experience with Boxcryptor? Haven't tried it in a few years (https://www.boxcryptor.com/en/)


You could use something like gocryptfs that encrypts at the file level
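A sketch of that gocryptfs setup (directory names are assumptions): the ciphertext directory lives inside Dropbox and gets synced; the plaintext mount stays local.

```shell
# Keep only gocryptfs ciphertext in the synced folder; work in the
# plaintext mount, which Dropbox never sees.
mount_encrypted_folder() {
  cipher="$HOME/Dropbox/encrypted"   # synced: names and contents encrypted
  plain="$HOME/Private"              # local plaintext view
  mkdir -p "$cipher" "$plain"
  [ -f "$cipher/gocryptfs.conf" ] || gocryptfs -init "$cipher"  # one-time init
  gocryptfs "$cipher" "$plain"       # mount the decrypted view
}
```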


Cryptomator is another cross-platform alternative for file-level encryption: https://cryptomator.org

Also includes first-class support for cloud syncing apps.


If you trust https://www.dropbox.com/en_GB/security#files, then they have encryption on transport and at rest in the DC, and implement forward secrecy and certificate pinning.


End-to-end encryption means that they don't see your encrypted files at all, even if they want to. Importantly it means it is impossible by design to make mistakes like accidentally not checking passwords on login https://techcrunch.com/2011/06/20/dropbox-security-bug-made-... .

Transport security and at-rest security is very important (and is the best you can do if you want Dropbox's ability to access your files, e.g., so their servers can show your files in a web interface), but it's not the same sort of thing as end-to-end encryption.


Actually, hm, you could point Dropbox at the layer under eCryptFS and have it work (for some value of work) and get e2e that way, right?


After e2e encryption, my second-biggest feature request is a .dropboxignore file.

I don't understand why they don't have it even after all these years.


Do you want something different than the Dropbox exclusion list ("dropbox exclude add ...")?

That only supports excluding directories, not individual files, and the actual list of exclusions is buried in some local binary config; both are moderate annoyances. Perhaps those are your qualms?


If you run a project out of a Dropbox directory, you would want .env and similar files excluded. You don't want someone getting your AWS keys.




