Improve Docker performance on macOS by 20x (pen.so)
54 points by fabienpenso on Sept 2, 2021 | 17 comments



This has been known and grudgingly tolerated by every team I’ve been on that uses Docker. Occasionally someone will get sick of it and introduce something like docker-sync [1], essentially just to rsync a directory instead of using a volume, then later someone else will find an issue with it and rip it out… usually on a 2-4 year cadence. It’s amazing this is still an issue for Docker Desktop on both Mac and Windows.

With their new nagging un-skippable updates, the future isn’t looking bright for this product.

[1] - http://docker-sync.io/
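For context, docker-sync is driven by a docker-sync.yml next to the compose file. A minimal sketch (the sync name "app-sync" and the ./app path are placeholders, not anything from the thread):

```shell
# Write a hypothetical minimal docker-sync.yml; rsync is one of the
# supported sync strategies (unison and native_osx are others).
cat > docker-sync.yml <<'EOF'
version: "2"
syncs:
  app-sync:
    src: './app'
    sync_strategy: 'rsync'
EOF
```

A container can then reference app-sync as an external volume, and `docker-sync start` runs the sync loop alongside it.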


Agreed, and now they're charging up to $21 per user per month for large enterprises to use Docker Desktop for Mac after years of ignoring the community on issues like this.

Granted, you can still build and install the docker cli and use it to interface with the daemon running inside of a VM for free, but I doubt it'll be long until they close that loophole. My guess is they'll either move the docker cli under the same licensing umbrella or simply make it too difficult for most users to build and run in a standalone fashion on macOS.


In my experience, the gRPC FUSE [1] volume mounts improve performance quite a bit, and we observe only up to a 10% performance decrease when using volume mounts instead of keeping the code on the Docker for Mac VM.

However, I do wonder whether Docker Desktop performance is worse than running Docker in a VM under multipass/VirtualBox/VMware, even when there is no file sharing from the host OS.

[1] - https://www.docker.com/blog/deep-dive-into-new-docker-deskto...


Hate its popularity at my job. It's fine if you want to use it for a tech stack in the background, like a db or cloud emulation, but call me old-fashioned: I think if you're walking around calling yourself a software engineer, loading up software and tools is part of the job.


+1. That's not a thing at my job, but I have given up on using Docker to replace the native dev environment. The idea of having each dev environment isolated, and therefore not polluting the host system, seems appealing but is impractical. The performance penalty is a big issue if you are not on a Linux desktop. Docker can be handy at times for things like bootstrapping an integration test environment, but I prefer a native dev environment where I can run project commands, run unit tests locally, and debug easily, all of which require extra hoops to jump through when done in Docker.


Can you retest with experimental virtualization enabled? At least on my M1 this boosted performance a lot (in addition to using gRPC FUSE mounts, as another commenter also posted).


I just tried it on the repos linked in the post; I had hopes when I read your comment, but it makes no difference...


I’m going to try this later. I noticed the slowdown switching from a relatively weak Linux server to my M1 MacBook and had already accepted the disappointment of the M1 not being so great after all.


We use NFS for volume sharing with Docker Desktop on macOS and it's really fast (edit: really fast compared to without it, still not as good as Docker on Linux).

See: https://medium.com/@sean.handley/how-to-set-up-docker-for-ma...
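The gist of the linked article, as a sketch: export the directory over NFS to the local VM. The export path and the uid/gid mapping (501:20 is the default macOS user and staff group) are assumptions to adapt, not a drop-in config.

```shell
# Export /Users over NFS to the Docker VM (adjust path and uid:gid):
echo "/Users -alldirs -mapall=501:20 localhost" | sudo tee -a /etc/exports
# Docker's VM connects from a non-reserved port, so allow that:
echo "nfs.server.mount.require_resv_port = 0" | sudo tee -a /etc/nfs.conf
# Restart the NFS daemon so it picks up the new export:
sudo nfsd restart
```

Containers then mount the share as an NFS volume instead of using the default bind mount.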


Maybe a dumb question, but are they running M1 with the ARM version of Docker & Ruby? I imagine it’s probably difficult to accidentally get the AMD64 version, but I didn’t see that detail in the benchmark


Author here. The M1 was running ARM images, but as far as I know Docker Desktop on M1 uses QEMU, which is super slow anyway.


Uh, why? All the projects I saw in the early days of the M1 (first a script with a modded version, then UTM) were using QEMU to demonstrate the power of Virtualization.Framework (with near-native performance). What is Docker doing wrong?


It's not really an improvement if you have to run your dev environment inside a VM, with the inconvenience of not having access to the files on the host.


I use ssh and vim so it works for me, but agreed it doesn't work for everyone.


From looking over your Vagrantfile, it looks like the project files are mounted from the host into the VM, so ssh would only be required for tunneling the db connection? (Although that can also be avoided by exposing a port or using bridged networking.)

The biggest issue I've found with this kind of setup is that file change events from the host don't cascade to the virtual machine when using VirtualBox's shared folders. NFS handles this better (still a delay of a few seconds, depending on cache values), but that is then a problem for Windows users.

It's been a few years since I switched to using Linux natively, so my knowledge of Docker Desktop for Mac mount strategies may be out of date, but at one point Docker introduced options such as :cached and :delegated which can considerably improve container file access speeds. Additionally, I've heard of people using mutagen and NFS for significant improvements, but I've never tried that setup. Here's a long discussion about it: https://github.com/docker/for-mac/issues/1592#issuecomment-6...
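For reference, those flags are appended to the bind-mount spec. The image and paths below are placeholders, not the article's setup:

```shell
# Default (consistent): every read/write is synced host<->container
docker run --rm -v "$(pwd)":/app alpine true
# :cached - container reads may lag the host (faster reads in container)
docker run --rm -v "$(pwd)":/app:cached alpine true
# :delegated - container writes may lag the host (faster writes in container)
docker run --rm -v "$(pwd)":/app:delegated alpine true
```

Reads-heavy workloads (e.g. a Rails app loading thousands of source files) tend to benefit from :cached; build-artifact directories from :delegated.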


I've tried it all, and the only real solution is a Linux VM without shared folders, which I'm fine with. The project I booted is used by other colleagues, so I tried to make it "generic", and then yes, Vagrant copies the files.

As you said (and others in the comments), bidirectional syncing with docker-sync, or using NFS, is one way to keep files synchronised, but it always ends up being a pain.

I did use cached and delegated, but it's nowhere near the ideal solution: removing macOS from the equation. At least until Docker finds a real fix for those performance issues, if ever.


You can improve performance tremendously by limiting the number of files within the directories that are shared as volumes, under settings.
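One way to apply that idea from the CLI side, sketched with placeholder names (myimage, node_modules): bind-mount only the subtree the container needs, and keep heavy directories in a named volume that stays inside the Linux VM.

```shell
# Slow: the whole repo (deps, .git, logs) crosses the osxfs/gRPC-FUSE boundary
docker run -v "$(pwd)":/app myimage
# Faster: bind-mount only the source; node_modules lives in a named volume
# inside the VM and never touches the host filesystem
docker run -v "$(pwd)/src":/app/src -v node_modules:/app/node_modules myimage
```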



