Dropbox ignore file or folder in beta (dropbox.com)
121 points by spectaclepiece 25 days ago | 136 comments



It's unfortunate that as startups get larger, the speed of improvement on their product often gets slower.

It's strange to me that Dropbox has thousands of employees, people have wanted this for a _long_ time, and yet this hasn't been built.

You'd think that with more engineering/PM/design talent the product would get better, and faster.

Anyone have any insight into why this happens? I've never worked at an early-stage startup, but here are some hypotheses:

- Maybe this is a good thing. After a product is "done", adding more functionality makes it worse, not better.

- Maybe the company leadership's focus shifts from building a great product to scaling as fast as possible. And doing both at once isn't possible.

- Maybe the engineering division grows substantially, but the number of people actually working on the product doesn't change much. Instead the new engineers work on important but auxiliary things, like dev tooling, security, infra, ops, etc.

- Maybe developing features takes longer because there's more process: security/legal/ops needs to review it, several layers of management need to approve it, it needs to work in multiple countries, etc

- Maybe the urgency to keep improving your product disappears after you feel that you've made it

- Maybe it's more important to take longer to build stable, complete features instead of shipping as fast as possible


The larger a product gets, the greater the risk and customer impact of regressions, so even "simple" changes require significant engineering and QA effort. There are also so many feature requests that prioritizing them is a full-time job.


That, and, as people move on, their code remains... so part of adding new features means sifting through the codebase to learn what everything does and being very careful not to add new bugs.


I'd say customer impact, even without regressions. As an example of something that I'm currently working on:

- there is a need to add a single link to a page

- some percentage of the customers will not like this change (perhaps 5%), because it links to a non-whitelabeled area of the product

- as a result, a list of customers who will be upset has to be assembled, and then each one has to be contacted, which falls to a different department that doesn't really care whether the link is there or not

I'd say that doing the above will take a month or so. All for a link.


Overhiring. Individual productivity goes down the more the company hires. At some point, big software companies are only big because they can afford to be big. I wrote about it here: https://medium.com/@franz.enzenhofer/overhiring-b966a6ff383d...


I read this somewhere on HN: if your company has $1 million in ARR with 3 people, you'll likely feel the need to hire more people, even if the 3 people get the job done.

Now repeat this with $1 billion in ARR...you get the picture.

I've seen "Bullshit Jobs" recommended here on HN [1].

[1] https://en.wikipedia.org/wiki/Bullshit_Jobs


I know "we" in the dev community are always eager for improvement, better tooling and often specific features. Many non-dev folks will also sometimes express the need for certain functionality.

But there's one factor that plays a big role in this, I think: most people don't like it when somebody changes "their stuff", regardless of possible benefits.

So once a product is well established, tweaking and changing it puts you at quite a risk of annoying or losing some of your acquired userbase, so further development proceeds much more prudently and slowly.


One thing I’ve seen: feature priority driven by what is expected to drive sales. This is a feature that I could see, justified or not, as not driving adoption.


The most requested feature, allowing a file or folder to be ignored by Dropbox sync without using Selective Sync, is finally in beta.

The community requested a .dropboxignore file but they chose another solution which I’m sure is reasonable for making the feature more user friendly to non-devs.

This will be immensely helpful for node_modules or build target directories.


I can imagine a world where an arbitrary directory is filled with gitignore, dropboxignore, googledriveignore, backblazeignore, s3ignore, rsyncignore, dotignore, ipfsignore . . .

Goddamn, stop the world, I want to get off.


I think the problem here is poor tooling that contaminates your working directory.

If you look at something like Bazel, all the build artifacts end up in ~/.cache (or similar). Thus there are no artifacts to gitignore. OCI container builds ("docker images") are done by simply adding artifacts controlled by a build rule into the image (rather than starting a vm/container, copying a working directory into the container, and running random shell commands).

To summarize, I think the problem is that node puts packages in your working directory instead of some other location, forcing you to ignore them. It is reasonable to check in your dependencies, in which case node's strategy is fine. But compare this to something like Go, which puts all modules in $GOPATH/pkg/mod; if you want to check in your modules, you run "go mod vendor" and it creates a vendor/ directory in your working directory to check in. All of these ignore files exist to work around other tools; it's not git's or Docker's or Dropbox's fault that some tool you use contaminates your working directory. Plenty of designs exist for those types of tools that don't.
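For illustration, a quick sketch with recent Go versions:

  # dependencies live outside the working tree by default
  go env GOMODCACHE        # typically $HOME/go/pkg/mod
  # opt in to checking them in: copies dependencies into ./vendor
  go mod vendor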


Personally I prefer that build artifacts end up somewhere under my current working directory. That means if I want to clear everything out related to a project, it's easy to do so without interfering with anything else.


Yes, as frustrating as it is to need on occasion to rm -rf ./node_modules and start again to fix things, at least that's an option.


Of course it would still be an option if these were stored out of tree, but it means I'd need to know where that is. I'd also have the potential to affect other unrelated projects.


I suppose this could be addressed by the tooling, if there were an easy way to get it to echo the target, and it made sure to always keep things separate.


Don't forget the git rm -r --cached ./node_modules!


> If you look at something like Bazel, all the build artifacts end up in ~/.cache (or similar).

That is a terrible design, as it makes parallel builds error-prone.


Output directories are unique per user and per source workspace directory. Output artifacts don't clobber each other, even though they are all under ~/.cache.


Other reasons to avoid the home directory:

1. It might be NFS mounted.

2. Space might be an issue and more difficult to plan for if build outputs are going there.


The location is not hard-coded and can be configured with the --output_base flag.
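For example (a sketch; --output_base is a startup option, so it goes before the command):

  # put Bazel's output base somewhere project-specific
  bazel --output_base="$HOME/.cache/bazel/myproject" build //...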


Bazel is the king of parallel builds, though. At work we can spread out 100 actions wide across a cluster of machines, and the only reason it's that small is that we have no real need to go wider...


That's probably a clue that something needs to be done at the OS level regarding FS integration with cloud syncing systems and permissions.


I really don't think this should be handled at FS level.

robots.txt didn't need to know about who the robot was or what the hosting server was.


Windows (and some other OSs) have/had an "archive bit" ¹. It used to be visible in the file properties, next to the read-only and hidden attributes, and was labelled "Archive" ²

¹ https://en.wikipedia.org/wiki/Archive_bit

² https://www.samba.org/samba/docs/using_samba/figs/sam2_0801....


Some file-system level property could work, but the semantics of the archive bit don't match. For files I want to be ignored, I don't think it makes sense to mark them as backed up, which is what the archive bit was designed for.


All major OSes already have a permissions system. It's on Dropbox & co to honor it.

On Linux you could create a user group named "cloud" and allow Dropbox to only sync files that belong to that group.
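Hypothetically, that setup could look like this (purely illustrative; no current client actually filters by group ownership):

  # create the group and mark the files that should sync
  sudo groupadd cloud
  sudo usermod -aG cloud "$USER"
  chgrp -R cloud ~/Dropbox/work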


Except Dropbox will be totally unaware and will try to access it.

What I envisioned was more like a "control center" app for all the outgoing (and incoming) pipes from your FS to syncing systems, showing last sync time, configuration, etc. A little bit like what already happens for mail and calendar integration in iOS applications, only for files and on the desktop.


Sorry, I edited my post before reading yours.

Yes, right now Dropbox would probably crash, but that's their problem. I really don't think it's something that should be handled by the OS when they already have a solution for it.


Depending on whether you're a Linux, Windows, or Mac user, you'll probably have different expectations about what the OS is supposed to provide by default.


This isn't really a great solution when I have files that are autogenerated. Either I make the default group "cloud" and then have to go back and chgrp generated files so they don't sync, or I make the default group something else and inevitably forget to chgrp files that I do want backed up.


What's missing is a .folderinfo/ directory to hold those things, akin to HFS resource forks or file attributes.


We clearly need a common standard for this:

https://xkcd.com/927/


Now they just need the other much-requested inverse feature: a reliable way to sync files outside of the Dropbox root folder. This is useful for app-specific config files that often don't let you choose where they're stored. Using symlinks on Mac sort of works, but it's kind of fragile and a pain to remember to set up on a new machine, and on Windows there's no workaround at all.

It's kind of crazy that these low-hanging-fruit advanced syncing use cases, which would help keep Dropbox's position as a leader in this crowded space, are still unaddressed. They are a multi-billion-dollar company, yet they have dragged their feet on this for so long. I know they want to keep cranking out mediocre me-too productivity tools that nobody is asking for, in case they get lucky and one catches on, but they should really be able to do this too.
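The Mac symlink trick is roughly this (paths are just examples): keep the real file in Dropbox and link it where the app expects it:

  ln -s ~/Dropbox/configs/app.plist ~/Library/Preferences/app.plist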


You can use the mklink command to link files since Windows 7:

https://www.tenforums.com/tutorials/131182-create-soft-hard-...
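A sketch (cmd.exe syntax; paths are examples, and file symlinks need admin rights or Developer Mode):

  REM file symlink: link lives where the app looks, real file in Dropbox
  mklink "C:\Users\me\AppData\Roaming\app\config.ini" "C:\Users\me\Dropbox\config.ini"
  REM directory junction (no admin needed)
  mklink /J "C:\Users\me\Dropbox\saves" "C:\Users\me\Documents\saves"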


Linking tricks are often defeated if the program in question writes to the file. Especially on Windows, it's common practice to write out a new file first and then, if nothing crashed, remove the old file and rename the new one into place. This breaks the link. I'm not sure how common this is for config files, but I would not be surprised.


Just link the parent folder into Dropbox; I do it for savegames all the time.

Or switch to Resilio Sync; it's great and self-hosted, and you can use unlimited folders.


The problem with this solution seems to me that you can't easily recursively ignore e.g. `__pycache__` or `node_modules`. Sure, it's less complexity than defining and parsing an oddball text file with potentially weird glob expressions, but it's also less powerful.

Still, I'll definitely be making use of this!
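In the meantime, you can approximate a recursive ignore by setting the attribute with find (a macOS sketch; the com.dropbox.ignored attribute name is assumed from the beta instructions):

  # mark every node_modules dir as ignored, without descending into them
  find ~/Dropbox -type d -name node_modules \
    -prune -exec xattr -w com.dropbox.ignored 1 {} \;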


Who needs a VCS when you could just use Dropbox?


Even better, Dropbox as your VCS and database!

https://www.reddit.com/r/sysadmin/comments/eaphr8/a_dropbox_...


That story makes me want to become a goat farmer every time I read it. But Dropbox must have intended that usage if they had a publicly available API, right?


According to TFA in the parent, Dropbox happily had them on a custom plan, charging them more and more as their data grew.


Dropbox as a VCS for separate documents intended to be accessed by non-technical users is not a terrible solution. Certainly there are better options, but I think something built on Dropbox could be workable in this case.


Why use Dropbox if you could just use FTP + SVN/CVS? [1]

[1] https://news.ycombinator.com/item?id=9224


I did exactly that: https://github.com/mickael-kerjean/filestash As a side effect, it even supports .gitignore!


Anybody who might want to back up uncommitted changes and files you can't check in to your repository?

I don't do it personally, but I know people who do and it's not entirely illogical.


For Dropbox, which was mainly a user-focused solution for sharing files, this seems like a very unfriendly way to ignore files?


> This will be immensely helpful for node_modules or build target directories.

Why would anyone put a node project in Dropbox to begin with?


I put everything in Dropbox, including all of my projects. In my experience it works pretty well, even though node_modules couldn't be ignored until now.

Putting every project in Dropbox along with its .git is the most awesome feature of Dropbox. With Dropbox's history and time machine features, you are almost never able to permanently delete or overwrite your work, even intentionally.


Why not just use GitHub, GitLab, or another VCS host? Personally, I enjoy using Dropbox for music projects (which VCSes are not well suited for), but for code...

(I apologize if I sound as if I'm telling you what you should be doing; I'm really just trying to understand the motivation behind such a decision. Even if something seems bizarre to me, it's just my personal, humble, uninformed opinion.)


I don't do as the parent does, but I've thought about it. The main difference is that a remote repo still requires an action to achieve backup parity. With Dropbox, your changes start syncing the second you type :w. So even if you don't finish the current development thought-line and commit upstream, you're still backed up.

Still not sure if it’s a good idea or not, but not being able to ignore node_modules was the real blocker for me before.


They're completely different tools with different benefits, and both can be used together. VCS remembers only what you commit and push, per-project. Dropbox remembers your entire workspace as a whole, automatically. It remembers every time you save any file (more detail than you usually need, but can be handy in an emergency). It also remembers which projects you have on the go, and how they're organised. It remembers all the unpushed stuff within each project, including uncommitted changes, stashed changes, experimental branches, gitignored notes and ideas, even the knowledge of which branch you are currently on. Your entire workspace. And by 'remembers', I mean it automatically syncs all that stuff to all your devices, so your workspace is always exactly as you left it, whichever device you're now using.


> Why not just use GitHub, GitLab, or another VCS host?

You can only revert to states that you have explicitly stored and uploaded. Dropbox does that automatically any time you save a file. That simplicity makes it more reliable: you will never lose anything you saved, and with rewind you can go back to any point in time. The advantage of git comes from its branch/merge features, but for a single developer on a small project who doesn't know git well, that's not worth the complexity.


I moved my notes folder to git after having conflicted files for the umpteenth time from Dropbox.

Two things that keep everything in sync:

1. Every two minutes my notesync.sh script runs, which basically does git add --all; git commit -am "New commit"; git pull; git push.

2. Vim with AutoSave.vim which saves the document as I write.

Yes, there are a lot of commits, but I don't have to worry about losing anything.
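A fleshed-out sketch of such a script (the repo path and commit message here are my own assumptions):

  #!/bin/sh
  # notesync.sh: auto-commit and sync the notes repo
  cd "$HOME/notes" || exit 1
  git add --all
  # commit only when something changed; '|| true' keeps cron quiet
  git commit -m "Auto-commit $(date -u +%FT%TZ)" || true
  git pull --rebase   # replay local auto-commits on top of the remote
  git push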


Sharing your work with other companies whose sales/marketing people need access, since they have no clue how to use Git.


After an office break-in and having my laptop stolen, my company gave me Dropbox to back up my home folder.

Dropbox actually managed to completely hose every single node_modules folder by the end of the week. It got confused somehow and ended up splitting all the files up with dates appended, as if there were a conflict across hundreds of files.

Stopped using it after that.


Your company didn't have backups until it had a break-in?


I actually do this with my hobby .NET project and OneDrive. It's really a lot nicer when bouncing between two desktops and a laptop when I only have a half hour or an hour to work on something - everything is just there, and I don't have to futz around and check if I need to git pull, or worry if I get interrupted and don't get a chance to push whatever I'm working on.

Of course, NuGet is far less promiscuous than node, and I'm playing with a desktop app, so there's a lot less ephemeral stuff to have to sync.


> but they chose another solution which I’m sure is reasonable for making the feature more user friendly to non-devs.

But this begs the question: why would the "ignore file/folder" feature itself be useful to non-devs?


More people can use it, including non-devs.

Seems like a win on their side.


The Dropbox CLI already lets you ignore folders. I have had the following script for many years to ignore node_modules:

  # find node_modules dirs, skipping ones nested inside another node_modules
  exclude_folders=$(find . -type d -name "node_modules" | grep -v "node_modules/")
  echo "Excluding $exclude_folders"
  # left unquoted on purpose so each path is passed as a separate argument
  # (this breaks on paths containing spaces)
  dropbox exclude add $exclude_folders
  dropbox exclude list
The feature I want is pattern-based ignore in a .dropboxignore file.


I work on a major Dropbox competitor. We have the ability to do regex-based ignores, but it's deliberately hidden (and not part of any regression suite), because as we've optimized the product, regexes would cause a huge performance problem.

Why? Part of our optimization involves representing paths in an optimized data structure. To process a regex, each path would need to be converted back to its full string, which is computationally "expensive" when working with 20,000 or more files and folders.


This is also true of the GUI; it's called Selective Sync.


I’m finding it hard to understand how this new feature differs from Selective Sync.

Is there something this lets one do that couldn’t be done before? Or is it just adding convenience?


This is kind of the opposite. With Selective Sync, the file exists on Dropbox.com and 0 or more client devices, and each device can choose whether or not to sync the file. The new ignore feature allows a file to exist on one device without being uploaded or synced to other devices.


Try unselecting node_modules in Selective Sync to ignore it. Uh-oh, you just deleted node_modules on your machine.


For those using Google Drive or OneDrive, we have ignore rules which support gitignore syntax: https://help.insynchq.com/en/articles/3045421-ignore-rules

Note: I'm a co-founder.


To clarify, Insync is a client used in place of Backup and Sync / Drive File Stream?

Do you have a Linux CLI client? I have a headless Ubuntu server at home that I sync personal files on. Currently using Dropbox but in the process of switching to Google Drive (both a personal and business account) and it seems like there's no good CLI only client for Linux.


We have a headless client for Insync 1.5.7, but we are also adding one to our latest version, Insync 3.

We are asking for feedback here: https://forums.insynchq.com/t/feedback-wanted-insync-3-headl...


That seems like an arcane solution, although I'm happy they finally added it.


I think this is a perfectly good technical solution that they haven't built a modern UI for, is all. It's great that under the hood the feature will be easily scriptable using standard OS utilities.


Yeah, I'm just confused about why there is no UI integration. It's not like Dropbox is light on UI otherwise.


I'm assuming since this is beta they probably guessed enough of the users who want this are technically capable of managing without a UI during testing.


OK, this is probably a very dumb question, but I'm trying to understand the use case.

I put things that I want to sync with Dropbox in /Dropbox and take out things that I don't want to sync.

Why would I want to leave things in /Dropbox that I don't want on Dropbox?


If you're a Node/JavaScript developer, you might want to keep code in Dropbox, but putting node_modules in there will tank it.


I don't use Dropbox, nor am I a Node dev. Why would a node_modules directory tank Dropbox?


Because NPM's first operation is to pull in half the internet as modules so you can write 'Hello World' on your webpage. That's too many files for the Dropbox client to handle.


Does it not suggest that node_modules in the source directory isn't a good idea to begin with?


The project directory would contain everything needed for that project, including node_modules (which is generated from the package.json).

Every Node.js project is structured this way... node_modules is always sitting next to your src...


It is usually git-ignored, which means it isn't part of the "real" source directory (the one that is shared with other devs). It's a temporary output of the build process, in essence.


This is the same question as why .gitignore files exist. You probably want to leave build files unsynced in your project folder.


The primary purpose of my project folder is to contain my project, including source, dependencies, build output, etc - so it makes sense to me that tools like Git allow you to exclude/ignore parts of it.

But the primary purpose of the Dropbox folder is to sync stuff to Dropbox. The idea that I would put something in my Dropbox folder that I didn't want to sync is bizarre.

It sounds like it's a common case where people put their project folders inside their Dropbox folder. Before today it never occurred to me that anybody would do that.

I do have some projects I periodically Git Bundle into a Dropbox folder though.

Not criticizing anybody's workflow, just surprised it's so common.


It's so that if my computer burns down, I can set up another one, install Dropbox, and continue working uninterrupted.

Hey! This person has a way: https://superuser.com/questions/469776/how-to-exclude-files-...


I don't use Dropbox, but in OneDrive this is a per-computer setting, so I may have some large files that I want to sync on my desktop but not on my laptop, where I only have a 256 GB SSD.


I worked on a sync product that had this feature "forever". The problem with the "ignore" feature is that it creates corner cases in almost every use case we add. A huge amount of engineering resources goes into this feature, even though it's only used by a minority of users.

A big problem comes with un-ignoring a file/folder, specifically if someone else has added the same file/folder on another computer or on the web. The only way to make that use case work smoothly is to basically read minds, because there's no way to know which version of the file/folder is the right one.


Sync conflicts are never fun, but from personal experience programs that attempt to auto-resolve them aren’t exactly letting the user in on what’s going on.

The best way to make that work would be to explicitly inform the user of the conflict when unignoring and ask which version to keep (showing metadata, etc.). A quick "retain other version" option would help if the user is unsure.


All I can say is: that's easier said than done. Dropbox's implementation has no UI; building such a wizard is probably much more complicated than they anticipated!

I've always advocated for wizards like that, but everyone gets enamored with building the next shiny feature.

Some days I wonder if I should just quit and make an open source sync product... And if I'd actually make a living doing it!


Could they make it work like git? Then all of the templates hosted by https://gitignore.io would work!


While the instructions go into a level of detail that assumes the person typing these commands may not be familiar with them, I don't get why there are no instructions on this page for reverting the ignore on a file or folder so that it starts syncing to Dropbox again.
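For what it's worth, the beta works by setting an extended attribute, so reverting appears to be just removing it; a sketch, assuming the com.dropbox.ignored name from the beta instructions:

  # macOS: ignore, then un-ignore
  xattr -w com.dropbox.ignored 1 ~/Dropbox/project/node_modules
  xattr -d com.dropbox.ignored ~/Dropbox/project/node_modules
  # Linux equivalents
  attr -s com.dropbox.ignored -V 1 ~/Dropbox/project/node_modules
  attr -r com.dropbox.ignored ~/Dropbox/project/node_modules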


Interesting. This is the first time I've seen extended file attributes used on Linux. What are the common use cases of file attributes?

Also, I just noticed that every file in my Dropbox folder has a com.dropbox.attributes key with an unreadable binary value. Does anyone have an idea what this field is?


> What are the common use cases of file attributes?

CephFS (a distributed filesystem) uses them to let users control the layout of directories and files, i.e. how they are split into chunks and what pool they are stored on; each pool has its own replication settings.

https://docs.ceph.com/docs/mimic/cephfs/file-layouts/
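For example (a sketch on a CephFS mount, per the linked docs; the paths and pool name are examples):

  # read a file's layout
  getfattr -n ceph.file.layout /mnt/cephfs/file
  # direct new files in a directory to a specific pool
  setfattr -n ceph.dir.layout.pool -v ssd_pool /mnt/cephfs/dir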


It's a Dropbox ID for each file. That's useful when you move files around: the Dropbox client can check the IDs against the data on the server and decide whether or not to upload the file...


That is a clever approach to tracking files. Thanks for the info.


Obligatory mention of Syncthing, which has had an .stignore file for a while [0]. It's open source and you can self-host it, but the downside compared to Dropbox is that you won't be able to access your files if your computers are off.

[0] https://docs.syncthing.net/users/ignoring.html
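A minimal .stignore sketch (syntax per the linked docs):

  // build output; the (?d) prefix lets Syncthing delete it if its parent dir is deleted
  (?d)node_modules
  *.tmp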


Recommending Syncthing kind of misses the point for many Dropbox users.

I want cloud storage that's hosted on infrastructure that's not mine, because I don't trust myself to keep it redundant and up to date; it's a perpetual security risk. Even if I could set up a local LAN storage device at home and keep it always on and available, it's a nuisance and I don't want it.

Even more, when accessing files, I don't necessarily want to download them locally. If I have photos or documents, I want to search them or browse them without downloading entire folders locally. And I browse files on my iPhone quite often too, I need a good iPhone app.

My only problem with Dropbox is one of security, as it lacks end-to-end encryption. But then again, the files wouldn't be as accessible if they did that, and personally I think encryption should be local and app-specific for it to be reliable, so I'm encrypting (via GPG, mostly) the sensitive files that need it.


I recently wrote a blog post about another feature I have been wishing Dropbox would add. Looks like it won't be happening anytime soon...

"Hey Dropbox, why can't I compare file versions like this?" http://web.eecs.utk.edu/~azh/blog/whycanticomparefileversion...


Are these settings persisted on their server?

Does Dropbox know to not sync those files on a new computer, or do you have to set those settings everywhere?

If not, and this works like Selective Sync (which is a local setting), then you can end up accidentally syncing those files from new computers, and it is thus not equivalent to a .gitignore.

Also does it support glob patterns?

Why couldn't they do a .gitignore that syncs along with your other Dropbox files and be done with it?


On a side note, you really are true to your name!


FYI, I tried this feature out a few months ago and found it to be useless for build folders. Extended attributes (xattrs) get lost every time you recreate a file/folder (think 'build', 'node_modules', etc.). Additionally, Dropbox broke Selective Sync, which is even more infuriating. All this combined means my hard drive is running all the time.


Will an ignored file be ignored on all devices or just my device?

That is, what if I have a file/directory in my Dropbox on one device and then another device "ignores" it? Would it be ignored everywhere or just on that device?

Either way seems like a recipe for a lot of problems. No wonder it took so long to get to this (and it's still in beta).


Paragraph #2 answers your question:

> Once ignored, the file or folder remains where it is in your Dropbox folder and is synced to your computer’s hard drive, but it’s deleted from the Dropbox server and your other devices, can’t be accessed on dropbox.com, and won’t sync to your Dropbox account.


This is good to know! On Linux, Dropbox's CLI implements an 'exclude' command that you can apply to certain files/folders, but it doesn't always work. I've had issues especially with Docker volumes that I set to ignore, where Dropbox gets stalled in an endless syncing status... Hope this fixes it!


Fantastic! I will definitely go back to Dropbox when this feature is available.

Recently I switched to OneDrive, which also has no ignore feature that I can find, but I'm only putting extra copies of stuff in there that I want backed up to the cloud.


Did OneDrive finally lift the 20,000 hard limit on the total number of files?


It would be cool to have a file that would be globally ignored by all syncing services so it couldn't be uploaded. Though I don't think this is possible.


I have never understood why Dropbox refuses to allow folders other than "Dropbox" to be synced. It mystifies me and keeps me from using it.


Kinda feels like a dark pattern in order to get Dropbox to be your "filesystem".

Can't lie, I've been a Dropbox user since nearly the very beginning, and a paid user for longer than I can remember, and it does kinda fill this purpose for me. Application- and platform-specific things live outside of my Dropbox directory, but everything personal, work, and school related, document-wise, lives in it, as well as a mostly unsynced folder that holds the backlog archive of my entire digital life, to the tune of ~400 GB.


Well, better late than never.


Can I do a pattern-based ignore, to ignore a specific folder in all subfolders?

Dropbox/**/node_modules/

or similar?


Mazal tov


Genuine question: why do people here use Dropbox or Box when Google Drive is far superior?


The Dropbox client is just vastly more reliable.

I never had any sync issues with it, it is still the only one that supports both delta sync and LAN sync, and its automatic "Smart Sync" still works better than what Google offers, without the need to awkwardly mount it as external/network storage.

The Backup and Sync client from Google is unreliable and imo just broken: it moves old versions of edited files into the local recycle bin, and once it never recognized a file deletion I made on another computer. Only some days later, after I manually restarted the client, did it notice the deletion.

Google Drive File Stream has severe performance issues. I have an unlimited G Suite account and use it mainly for Arq Backup. While ~5 TiB of data is not THAT much, Arq does create thousands of small files in its destination folder. For the initial sync, the GDFS client took over a week while using more than 60 GiB of RAM! Afterwards its memory usage settles down, but this was just for enumerating the remote files, without anything marked as 'available offline'. On my MacBook Air, with less RAM, the initial sync never finishes without GDFS crashing first.

While I still use the official Dropbox client, I replaced all other sync clients with Mountain Duck. It doesn't show any deal-breaker bugs and its resource usage is also manageable.


Genuine answer: because it is not superior, certainly not "far superior", and it's a Google "I'll probably kill you tomorrow" product, made by a company (Google) whose ethics I no longer trust.


I first thought you meant Google "I'll probably kill your account tomorrow."

I constantly hear about people breaking the ToS on one of many Google services and having them all instantly disabled, no warning (usually losing their email account).


There is that problem too! Using Google as a backup solution, even in a "belt and suspenders" scenario, is too dangerous.


Because, to this day, Dropbox is the only cloud storage service that is capable of syncing only the part of a file that was modified. If you have ISOs, zips, etc., GDrive will re-upload the entire file, while Dropbox will only upload the changed parts.

This alone is why I keep using Dropbox.


I have to use Google Drive at work and it's a complete dumpster fire.


Because I'd like to support competition.

I also prefer buying services from companies whose only purpose is solving the problem I'm having.

The bigger a company gets, I feel, the more untrustworthy its "we do everything" becomes.

I'd bet it is less likely that Dropbox will pivot away from its main file-sync product than that Drive will break functionality I depend on.


In my case because Dropbox support for Linux is far superior.


Fair enough. I guess I should have asked why Mac users prefer Dropbox or Box.


Always between Mac and Windows. I have had a better experience with Dropbox than Drive. In fact, I have all three: OneDrive, Dropbox, and Drive. Drive is my least favorite...


The Google Drive app is terrible: it eats your battery, is limited to three accounts per device, and does not work on Linux.

HOWEVER, there is Insync, which is cheap, works on all the OSes I care about, and supports pattern-based sync exclusions. Does anyone here have experience with it?


Bigger install base and it's not Google (which has become quite a huge argument for technical as well as non-technical users)


Because it is a much more seamless experience, at least on Mac, where it is properly integrated with Finder?


Take a look at Alfred (1).

(1) https://www.alfredapp.com/


Out of Dropbox, OneDrive, and Google Drive, Google Drive is by far the worst and most unreliable one.


What's so superior about Google Drive?

I've used both and it feels like Dropbox is more polished and user friendly in every way...


Google Drive doesn't have an official Linux client.


In what ways is Google Drive superior would you say?


Rejoice!


Is there any reason to use Dropbox over OneDrive? I believe this feature has been in OneDrive since the Windows 10 release. (OneDrive has Linux clients too; I use it on Linux itself, and the client by abraunegg supports this.)


I have had lots of sync problems with OneDrive (files excluded because of characters in their names; frequent, heavy resyncs).

Perhaps you don't see these problems if you use Windows. I wanted to use it as the marginal cost was 0 (already had to have a Word license) but I couldn't rely on it.

I don't like Dropbox spraying shrapnel through my Mac's UI even though I told it not to, so don't take this as a defense of Dropbox; it simply sucks less and does the baseline (save my files) better.

Which is also my justification for using a Mac (sucks the least of the options available to me).


YMMV, but Dropbox used to be much, much faster (by at least an order of magnitude) than OneDrive in my region. I don't know the status quo, though.


> OneDrive has Linux clients too

But there is a huge gap between official Linux support from Dropbox and a third-party app.

In my experience, Dropbox's syncing experience is generally superior to the others, especially when it comes to file stability. Its history and time machine (or folder rewind) features are great.


Oh god... OneDrive.

First of all, this was about 2-3 years ago, so I'm not sure if things have changed.

(1) OneDrive "for Business" was completely different from OneDrive Personal. It still worked to some degree under some archaic Office/SharePoint thing, so when I first tried to use it there was a terrible path/filename limitation (stricter than normal Windows) and it didn't like some special symbols.

So let's ignore that nightmare above; maybe it's changed.

(2) Even with OneDrive Personal and the business version I used, it COULD NOT manage a lot of files. I mean, with even 50k-100k files it would basically just stop working. It would just sit there stuck and never do anything.

On Dropbox we have well over a million files and it works fine. Some chunk of files changes all the time and it still works.

tl;dr: if you have a LOT of files, Dropbox is the way to go; nothing else I have tried comes even close to it.


Dropbox is the nicest experience of the three.


LOL, I think Nextcloud has had this feature for years already. Such a superior experience compared to Dropbox! I will never look back! Especially now with the next big update, Nextcloud Hub: https://nextcloud.com/



