It's strange to me that Dropbox has thousands of employees, people have wanted this for a _long_ time, and yet this hasn't been built.
You'd think that with more engineering/PM/design talent the product would get better, faster.
Anyone have any insight into why this happens? I've never worked at an early-stage startup, but here are some hypotheses:
- Maybe this is a good thing. After a product is "done", adding more functionality makes it worse, not better.
- Maybe the company leadership's focus shifts from building a great product to scaling as fast as possible. And doing both at once isn't possible.
- Maybe the engineering division grows substantially, but the number of people actually working on the product doesn't change much. Instead the new engineers work on important, but auxiliary things, like dev tooling, security, infra, ops, etc
- Maybe developing features takes longer because there's more process: security/legal/ops needs to review it, several layers of management need to approve it, it needs to work in multiple countries, etc
- Maybe the urgency to keep improving your product disappears after you feel that you've made it
- Maybe it's more important to take longer to build stable, complete features instead of shipping as fast as possible
- there is a need to add a single link to a page
- some percentage of the customers (perhaps 5%) will not like this change, because it links to a non-white-labeled area of the product
- as a result, someone has to assemble a list of the customers who will be upset and reach out to each one, a job that falls to a different department that doesn't really care whether the link is there or not
I'd say that doing the above will take a month or so. All for a link.
Now repeat this with $1 billion in ARR...you get the picture.
I've seen "Bullshit Jobs" recommended here on HN.
But there's one factor that I think plays a big role in this: most people don't like it when somebody changes "their stuff", regardless of possible benefits.
So once a product is well established, tweaking and changing it puts you at real risk of annoying or losing some of your acquired user base, and further development therefore proceeds much more prudently and slowly.
The community requested a .dropboxignore file but they chose another solution which I’m sure is reasonable for making the feature more user friendly to non-devs.
This will be immensely helpful for node_modules or build target directories.
Goddamn, stop the world I want to get off
If you look at something like Bazel, all the build artifacts end up in ~/.cache (or similar). Thus there are no artifacts to gitignore. OCI container builds ("docker images") are done by simply adding artifacts controlled by a build rule into the image (rather than starting a vm/container, copying a working directory into the container, and running random shell commands).
To summarize, I think the problem is that node puts packages in your working directory instead of some other location, forcing you to ignore them. Checking in your dependencies is reasonable, and in that case node's strategy is fine. But compare this to something like Go, which puts all modules in $GOPATH/pkg/mod; if you want to check in your modules, you run "go mod vendor" and it creates a vendor/ directory in your working directory to check in. All of these ignore files exist to work around tools that contaminate your working directory; it's not git's or docker's or dropbox's fault that some tool you use does that. Plenty of designs exist for those types of tools that don't.
That is a terrible design, as it makes parallel builds error prone.
1. It might be NFS mounted.
2. Space might be an issue and more difficult to plan for if build outputs are going there.
robots.txt didn't need to know about who the robot was or what the hosting server was.
On Linux you can create a user group named "cloud" and allow Dropbox to sync only the files that belong to that group.
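A minimal sketch of the building block for that idea: walking a folder and collecting only the files owned by a given Unix group. The wiring into a sync tool's exclude list is left out, and the group name "cloud" is an assumption from the comment, not anything Dropbox supports natively.

```python
import grp
import os

def files_in_group(root, group_name):
    """Return paths under `root` whose Unix group matches `group_name`.

    Hypothetical helper for the "only sync files in the 'cloud' group"
    idea; a real tool would feed the complement of this list to its
    exclude mechanism.
    """
    gid = grp.getgrnam(group_name).gr_gid
    matches = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if os.stat(path).st_gid == gid:
                matches.append(path)
    return matches
```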
What I envisioned was more like a "control center" app for all the outgoing (and incoming) pipes from your FS to syncing systems, showing last sync time, configuration, etc. A little like what already happens for mail and calendar integration in iOS applications, only for files and on the desktop.
Yes, right now Dropbox would probably crash, but that's their problem. I really don't think it's something that should be handled by the OS when they already have a solution for it.
It’s kind of crazy that there are still low-hanging-fruit advanced syncing use cases like this that would help keep Dropbox’s position as a leader in this crowded space. They are a multi-billion-dollar company, yet they dragged their feet on it for so long. I know they want to keep cranking out mediocre me-too productivity tools that nobody is asking for, in case they get lucky and one catches on, but they should really be able to do this too.
Or switch to Resilio Sync; it’s great, self-hosted, and you can sync unlimited folders.
Still, I'll definitely be making use of this!
I don't do it personally, but I know people who do and it's not entirely illogical.
Why would anyone put a node project in Dropbox to begin with?
Putting every project in Dropbox with its .git is the most awesome feature of Dropbox. With Dropbox's history and time machine features, you will almost never be able to delete or overwrite your work permanently, even intentionally.
(I apologize if I sound as if I'm telling you what you should be doing; I am really just trying to understand the motivation behind such a decision, but I fully understand that even if something seems bizarre to me, it's just my personal humble uninformed opinion).
Still not sure if it’s a good idea or not, but not being able to ignore node_modules was the real blocker for me before.
You can only revert changes to states that you have explicitly stored and uploaded. Dropbox does that automatically any time you save a file. That simplicity makes it more reliable. You will never lose anything you saved. And with Rewind you can go back to any point in time. The advantage of git comes from its branch/merge features. But for a single developer on a small project who doesn't know git well, that's not worth the complexity.
Two things that keep everything in sync:
1. Every two minutes my notesync.sh script runs, which basically does git add --all; git commit -am "New commit"; git pull; git push.
2. Vim with AutoSave.vim which saves the document as I write.
Yes, there are a lot of commits, but I don't have to worry about losing anything.
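For reference, the step-1 one-liner could be sketched as a small Python helper. This mirrors the commands described above rather than being the author's actual script; `repo_dir` and a configured remote are assumptions.

```python
import subprocess

def notesync(repo_dir):
    """Mirror of: git add --all; git commit -am "New commit"; git pull; git push."""
    def git(*args, check=True):
        return subprocess.run(["git", "-C", repo_dir, *args], check=check)

    git("add", "--all")
    # check=False: committing "fails" harmlessly when there is nothing new,
    # and pull/push fail harmlessly when the remote is unreachable
    git("commit", "-m", "New commit", check=False)
    git("pull", "--rebase", check=False)
    git("push", check=False)
```

Run from cron (or a loop) every two minutes, this gives the same "commit everything, all the time" behavior as the shell version.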
Dropbox actually managed to completely hose every single node_modules folder by the end of the week. It got confused somehow then ended up just splitting all the files up with dates on them as if there was a conflict across hundreds of files.
Stopped using it after that.
Of course, NuGet is far less promiscuous than node, and I'm playing with a desktop app, so there's a lot less ephemeral stuff to have to sync.
But this raises the question: why would the "ignore file/folder" feature itself be useful to non-devs?
Seems like a win on their side.
# find node_modules directories, skipping ones nested inside another node_modules
exclude_folders=$(find . -type d -name "node_modules" -not -path "*/node_modules/*")
echo "Excluding $exclude_folders"
# note: the unquoted expansion below breaks on paths containing spaces
dropbox exclude add $exclude_folders
dropbox exclude list
Why? Part of our optimization includes representing paths in an optimized data structure. In order to process the regex, each path would need to be converted back to its full string, which is computationally "expensive" when working with 20,000 or more files and folders.
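The tradeoff is easy to see with a toy version of such a structure. The trie below is a guess at the general shape, not Dropbox's actual code: matching a pattern against individual path components never has to materialize most full paths, whereas a full-path regex forces joining the components back into a string for every stored entry before the regex can even run.

```python
import fnmatch
import re

class PathTrie:
    """Toy path store: one node per path component (a guess at the kind of
    'optimized data structure' described above, not Dropbox's actual code)."""

    def __init__(self):
        self.root = {}

    def insert(self, path):
        node = self.root
        for part in path.strip("/").split("/"):
            node = node.setdefault(part, {})

    def _walk(self, node=None, parts=()):
        node = self.root if node is None else node
        for part, child in node.items():
            yield parts + (part,)
            yield from self._walk(child, parts + (part,))

    def match_component(self, pattern):
        # Cheap: each component is tested on its own; a full path string is
        # only built for the entries that actually match.
        return ["/".join(parts) for parts in self._walk()
                if fnmatch.fnmatch(parts[-1], pattern)]

    def match_regex(self, regex):
        # Expensive: every stored path must be converted back to its full
        # string before the regex can look at it.
        rx = re.compile(regex)
        return [p for p in ("/".join(parts) for parts in self._walk())
                if rx.search(p)]
```

With tens of thousands of entries, `match_regex` pays the string-reconstruction cost on every path, which matches the "computationally expensive" explanation above.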
Is there something this lets one do that couldn’t be done before? Or is it just adding convenience?
note: i'm a co-founder
Do you have a Linux CLI client? I have a headless Ubuntu server at home that I sync personal files on. Currently using Dropbox but in the process of switching to Google Drive (both a personal and business account) and it seems like there's no good CLI only client for Linux.
we are asking for feedback here -- https://forums.insynchq.com/t/feedback-wanted-insync-3-headl...
I put things that I want to sync with Dropbox in /Dropbox and take out things that I don't want to sync.
Why would I want to leave things in /Dropbox that I don't want on Dropbox?
Every nodejs project is structured this way... node_modules is always sitting next to your src...
But the primary purpose of the Dropbox folder is to sync stuff to Dropbox. The idea that I would put something in my Dropbox folder that I didn't want to sync is bizarre.
It sounds like it's a common case where people put their project folders inside their Dropbox folder. Before today it never occurred to me that anybody would do that.
I do have some projects I periodically Git Bundle into a Dropbox folder though.
Not criticizing anybody's workflow, just surprised it's so common.
Hey! This person has a way:
A big problem comes with un-ignoring a file / folder, specifically if someone else has gone and added the same file / folder on another computer or in the web. The only way to make that use case work smoothly is to basically read minds, because there's no way to know which version of the file / folder is the right one.
The best way to handle that case would be to explicitly inform the user of the conflict when unignoring and ask which version to keep (showing metadata, etc.). A quick “retain other version” option would help if the user is unsure.
I've always advocated for wizards like that, but everyone gets enamored with building the next shiny feature.
Some days I wonder if I should just quit and make an open source sync product... And if I'd actually make a living doing it!
Also, I just noticed that every file in my Dropbox folder has a com.dropbox.attributes key with an unreadable binary value. Does anyone have an idea what this field is?
CephFS (a distributed filesystem) uses them to let users control the layout of directories and files, i.e. how they are split into chunks and which pool they are stored on; each pool has its own replication settings.
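Both the com.dropbox.attributes key mentioned above and the CephFS layout controls ride on the same mechanism: filesystem extended attributes (xattrs). A quick way to poke at them from Python, using the Linux-only os.setxattr/os.getxattr; the attribute name "user.demo" is made up for illustration, and the Dropbox key's actual value is opaque and undocumented.

```python
import os
import tempfile

# Scratch file in the current directory (some tmpfs mounts lack user.* xattrs)
fd, path = tempfile.mkstemp(dir=".")
os.close(fd)

# user.* is the namespace unprivileged processes may write to;
# "user.demo" is a made-up attribute name for illustration
os.setxattr(path, b"user.demo", b"hello")
value = os.getxattr(path, b"user.demo")   # the stored bytes, b'hello'
names = os.listxattr(path)               # all attribute names on the file
os.remove(path)
```

On macOS the equivalent inspection is done with the `xattr` command-line tool, which is how the com.dropbox.attributes key shows up in the first place.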
I want cloud storage that's hosted on infrastructure that's not mine, because I don't trust myself to keep it redundant and up to date; it would be a perpetual security risk. Even if I could set up a local LAN storage device at home and keep it always on and available, it's a nuisance and I don't want it.
Even more, when accessing files, I don't necessarily want to download them locally. If I have photos or documents, I want to search them or browse them without downloading entire folders locally. And I browse files on my iPhone quite often too, I need a good iPhone app.
My only problem with Dropbox is security: it lacks end-to-end encryption. But then again, the files wouldn't be as accessible if they did that, and personally I think encryption should be local and app-specific to be reliable, so I encrypt (via GPG, mostly) the sensitive files that need it.
"Hey Dropbox, why can't I compare file versions like this?"
Does Dropbox know to not sync those files on a new computer, or do you have to set those settings everywhere?
If not, i.e. if this works like Selective Sync (which is a local setting), then you can end up accidentally syncing those files from new computers, and it is thus not equivalent to a .gitignore.
Also does it support glob patterns?
Why couldn't they do a .gitignore that syncs along with your other Dropbox files and be done with it?
That is, what if I have a file/directory in my dropbox on one device and then another device "ignores" it? would it be ignored everywhere or just in my device?
Either way seems like a recipe for a lot of problems. No wonder it took so long to get to it (and it's still in Beta).
> Once ignored, the file or folder remains where it is in your Dropbox folder and is synced to your computer’s hard drive, but it’s deleted from the Dropbox server and your other devices, can’t be accessed on dropbox.com, and won’t sync to your Dropbox account.
Recently I switched to OneDrive, also no ignore that I can find, but I'm only making extra copies of stuff in there that I want backed up to the cloud.
Can't lie, I've been a Dropbox user since nearly the very beginning, and a paid user for longer than I can remember, and it does kinda fill this purpose for me. Application- and platform-specific things live outside of my Dropbox directory, but everything personal, work, and school related, document-wise, lives in it, as well as a mostly unsynced folder that holds the backlog archive of my entire digital life, to the tune of ~400 GB.
I never had any sync issues with it and it is still the only one which supports both Delta sync and LAN sync and their automatic "Smart Sync" still works better than what Google offers without the need to awkwardly mount it as external/network storage.
The Backup and Sync client from Google is unreliable and, imo, just broken: it moves old versions of edited files into the local recycle bin, and once it failed to recognize a file deletion I made on another computer. Only some days later, after I manually restarted the client, did it notice the deletion.
Google Drive File Stream has severe performance issues: I have an unlimited G Suite account and use it mainly for Arq Backup. While ~5 TiB of data is not THAT much, Arq does create thousands of small files in its destination folder. For the initial sync the GDFS client took over a week while using more than 60 GiB of RAM! Afterwards its memory usage settles down, but this was just for enumerating the remote files, without anything marked as 'available offline'. On my MacBook Air with less RAM this initial sync never finishes without GDFS crashing first.
While I still use the official Dropbox client, I replaced all other sync clients with Mountain Duck. It doesn't show any deal-breaker bugs and its resource usage is also manageable.
I constantly hear about people breaking the ToS on one of many Google services and having them all instantly disabled with no warning (usually losing their email account).
This alone is why I keep using Dropbox.
I also prefer buying services from companies whose only purpose is solving the problem I'm having.
The bigger a company gets, the less trustworthy its "we do everything" promise feels to me.
I'd bet it's less likely that Dropbox pivots away from its main file-sync product than that Drive breaks a feature I depend on.
HOWEVER, there is Insync, which is cheap, works on all the OSes I care about, and supports pattern-based sync exclusions. Does anyone here have experience with it?
I've used both and it feels like Dropbox is more polished and user friendly in every way...
Perhaps you don't see these problems if you use Windows. I wanted to use it as the marginal cost was 0 (already had to have a Word license) but I couldn't rely on it.
I don't like Dropbox spraying shrapnel through my mac's UI even though I told it not to, so don't take this as a defense of DB -- it simply sucks less and does the baseline (save my files) better.
Which is also my justification for using a Mac (sucks the least of the options available to me).
But there is a huge gap between official Linux support by Dropbox and a third-party app.
In my experience, Dropbox's syncing is generally superior to the others', especially when it comes to file stability. Its history and time machine feature (folder Rewind) is great.
First of all, this was about 2-3 years ago.. not sure if things have changed.
(1) OneDrive "for business" was completely different from OneDrive personal. It still ran, to some degree, on some archaic Office/SharePoint thing, so when I first tried to use it there was a terrible path/filename limitation (lower than normal Windows) and it didn't like some special symbols.
So let's ignore that nightmare above, maybe it's changed.
(2) Even with OneDrive personal and the business version I used, it COULD NOT manage a lot of files. Even at 50k-100k files it would basically just stop working: it would sit there stuck and never do anything.
On Dropbox we have well over a million files and it works fine. Some chunk of files change all the time and it still works.
tl;dr: if you have a LOT of files, Dropbox is the way to go; nothing else I have tried comes even close.