All this talk of efficiency, yet the Dropbox Windows client is such a bloated, multi-process memory hog of a mess that I ended up uninstalling it and rigging my own sync with a command-line tool (dbxcli), at about 1000x less resource usage.
For all this attention to saving their server's resources they sure don't seem to care much about wasting their customers'.
The Electron app wouldn't be an issue if it didn't churn away uselessly even when Dropbox is in the background. The company doesn't care, though; their advice was to open a topic in the support forum, to which their response is "This idea will need some more support before we can share it with the team." You can lead a horse to water.
I had a paid subscription but canceled because of their attitude on this. When dropbox.exe can't sync a file because of some file permission or other access problem, it just consumes huge amounts of CPU trying over and over. There is no notification of a problem. There is no log. There is no place in the interface where you can look to see what file is causing the problem or even that there is a problem. You have to just notice the CPU usage.
When I told the support person that this was a problem, he flat out refused to open a ticket or file a bug report. He denied that this was a problem!
That was a couple of years ago, so maybe they have improved since then, but I'll never find out.
I really wish this idea had become more widespread. Microsoft Research published this 15 years ago. It shipped as part of Windows, but the paper describes it in enough detail to implement it, I think. I got it running years ago and it seemed to work really well.
> Rolling out the above changes went relatively smoothly until one of the curious engineers on our network team found that compression was actually a bottleneck on high bandwidth connections
Back in the early-to-mid-aughts there was a push to start gzip-ing server responses.
It was my general experience at the time that this actually often made the browsing experience worse. Without compression, the page would begin to render as the HTML trickled in (very slow internet back then); when compressed, you had to wait for the entire document to download and be decompressed first.
Uncompressed you could often decide if the page was worth waiting for entirely before it finished downloading.
Gzip is streamable (windowed), so perhaps it was the extra CPU cycles that caused the slowdown, a misconfigured gzip, or just incomplete gzip implementations?
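To illustrate the "gzip is streamable" point: here's a quick sketch using Python's stdlib zlib (the payload and 256-byte feed size are arbitrary choices for the demo). Feeding the compressed stream to the decompressor in small pieces yields plaintext long before the stream is complete, so in principle a browser could start rendering a gzipped page as it arrives.

```python
import gzip
import zlib

# A moderately compressible payload whose gzip stream spans many
# network-sized reads (arbitrary demo data).
payload = ", ".join(f"record-{i}" for i in range(5000)).encode()
compressed = gzip.compress(payload)

# Feed the compressed stream to the decompressor 256 bytes at a time,
# roughly as a browser would receive it, and note when plaintext
# first becomes available.
d = zlib.decompressobj(wbits=31)  # wbits=31 selects the gzip container
recovered = b""
first_output_at = None
for i in range(0, len(compressed), 256):
    recovered += d.decompress(compressed[i:i + 256])
    if first_output_at is None and recovered:
        first_output_at = i + 256
recovered += d.flush()

assert recovered == payload
print(f"first plaintext after {first_output_at} of {len(compressed)} compressed bytes")
```

So the format itself doesn't force whole-document buffering; if browsers waited for the full download, that was an implementation choice (or limitation) rather than a property of gzip.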
Not sure how compression was a bottleneck on high-bandwidth connections? I understand @donatj's comments about having to wait for the entire file to be downloaded before rendering, but that would be a user-experience issue; I'm not sure how the network team sees it as a bottleneck on the network.
HTTP/1.x (absent chunked encoding) requires that the length of the content be sent before the content, so that pipelined connections know where the data ends and the next headers start. If you're sending a file directly off disk, you know the size before you've even looked at the data, so there's no delay. If you're running everything through gzip, you have to compress the entire file before you know the output length, so the critical "time to first byte" metric could get much worse.

There are similar issues with dynamic content (CGI scripts, PHP, etc.) where both the server and browser would end up buffering large amounts of content before compressing/decompressing it, which also affected perceived speed. If the connection bandwidth was high enough, skipping all of this and just sending the uncompressed file would appear faster to the user, despite transferring more.
This was later improved with things like chunked encoding and caching the compressed output on the server side, but they came later and weren't always supported or desirable.
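A minimal sketch of how chunked encoding sidesteps the Content-Length problem, using Python's zlib (the `gzip_chunked` helper name and the per-chunk framing are my own illustration, not any particular server's code): each compressed piece is flushed and framed as an HTTP/1.1 chunk as soon as it's ready, so nothing needs to know the total compressed size up front.

```python
import zlib


def gzip_chunked(chunks):
    """Compress a response body piece by piece and frame it with
    HTTP/1.1 chunked transfer encoding. Because each chunk carries
    its own length, the server never needs a total Content-Length,
    so compressed bytes can go on the wire immediately."""
    c = zlib.compressobj(wbits=31)  # wbits=31 produces a gzip container
    for chunk in chunks:
        # Z_SYNC_FLUSH forces pending compressed bytes out now,
        # at the cost of a slightly worse compression ratio.
        data = c.compress(chunk) + c.flush(zlib.Z_SYNC_FLUSH)
        if data:
            # HTTP chunk framing: hex size, CRLF, payload, CRLF.
            yield b"%x\r\n%s\r\n" % (len(data), data)
    tail = c.flush()  # finish the gzip stream (final block + trailer)
    if tail:
        yield b"%x\r\n%s\r\n" % (len(tail), tail)
    yield b"0\r\n\r\n"  # zero-length chunk terminates the body


# Example: three body pieces leave as independent wire chunks.
wire = b"".join(gzip_chunked([b"hello ", b"chunked ", b"world"]))
```

This is essentially the trade-off the thread describes: you give up a little compression ratio (each sync flush resets to a byte boundary) in exchange for a much better time to first byte.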
And yet the Mac client still wakes up whenever I touch files outside my Dropbox folder, and has become a monstrosity full of “UX” I never use (so I switched to other things, and most of my friends to https://github.com/SamSchott/maestral).