Hacker News
Docker on MacOS is slow and how to fix it (paolomainardi.com)
193 points by riccardomc on Dec 22, 2022 | 203 comments



Funny that this came up — shameless plug: I've actually been working on a new Linux+Docker+Kubernetes solution for macOS recently! Already added quite a few improvements over existing apps including Docker Desktop, Rancher, Colima, etc:

- Fast networking: 30 Gbps! vs. 150 Mbps with Docker VPNKit. Full VPN compatibility, IPv6, ping, ICMP and UDP traceroute, and half-open TCP connections.

- Bidirectional filesystem sharing: fast VirtioFS to access macOS from Linux, but also a mount to access the Linux filesystem from macOS. This setup can help with performance: for example, you could store code in Linux and edit it from macOS with VS Code (which can take the performance hit of sharing), so the container runs with native FS speed.

- Not limited to Docker or Kubernetes. You can run multiple full Linux distros as system containers (like WSL) so they share resources.

- Fast x86 emulation with Rosetta

- Much lower background CPU usage. Only ~0.05% CPU usage and 2-5 idle wakeups per second — less than most apps, while Docker wakes up ~120 times per second. Made possible with low-level kernel optimizations. Also, no Electron!

- Better solutions to other problems that can occur on macOS: clock drift is corrected monotonically, dynamic disk size, and more I'm working on now. Will look into memory usage too, although I can't guarantee a good fix for that.

- No root needed.

Planning to release this as a paid app in January. Not OSS, but I think the value proposition is pretty good and there will be a free trial. Not sure about pricing yet.

If anyone is interested, drop me an email (see bio) and I'll let you know when this is ready for testing :)

Also, feel free to ask questions here or let me know if there are other warts you'd like to see fixed.


Biggest question: is it backwards compatible with Docker? The Docker CLI and docker-compose are used in tons of scripts. To have any chance of this being adopted in a team setting, it needs to be a drop-in replacement.


Yes, Docker CLI will be configured to talk to the VM.
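(For context: the stock Docker CLI can already target any remote engine socket, so a drop-in setup like this typically boils down to a docker context or DOCKER_HOST entry. A sketch — the socket path below is hypothetical, each app forwards its own:)

```shell
# Point the stock docker CLI at an engine running inside a VM.
# The socket path is a made-up example; the real one depends on the app.
docker context create my-vm \
  --docker "host=unix:///Users/me/.my-vm/docker.sock"
docker context use my-vm

# Or, equivalently, via the environment:
export DOCKER_HOST=unix:///Users/me/.my-vm/docker.sock
docker ps   # now talks to the engine in the VM
```

Either way, existing scripts that shell out to docker or docker-compose keep working unchanged.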


What are the tradeoffs?


In general, I don't expect anything to be worse than existing solutions, but not everything will be better.

Enabling Rosetta can have a minor performance hit on memory-intensive workloads in the VM (not only x86 ones) because of TSO memory ordering, so it'll be optional. Hypervisor.framework doesn't have an API for third-party VMMs to set this and doesn't seem to let the VM modify ACTLR_EL1 either, so unless I can find a private API for it, I'm stuck with Virtualization.framework's limitation of Rosetta being either on or off for the entire VM at boot time.

Memory usage is probably the biggest uncertainty right now. It should be at least slightly better, but I'm not sure if I can improve it much more due to Virtualization.framework limitations. Still looking into it.

Networking is implemented with my custom userspace proxy for VPN compatibility. Servers are forwarded to localhost automatically, but you can't connect to the VM by IP because the network doesn't exist from the host's perspective. I've run into too many issues with Apple's NAT setup, and host-only networking is a private API, so this is postponed for now. Should be able to do better with root.

Graphics won't be supported at launch, but I could look into it later if there's interest. Not sure how feasible acceleration will be if I can't find a way around having to use Virtualization.framework.

Let me know if there's anything specific that I missed!


That it costs money sounds like the big one


Even after building and selling developer tools for a decade, it always surprises and enrages me to see how miserly developers are.


I'm not surprised. Software is an attractive hobby to the miserly because it requires little investment. And eventually those hobbies become careers.

If spending was our thing perhaps we'd have gotten into woodworking, or photography, or whatever it is that can take a good chunk of change to get into deeply instead.


It's also our employers. They don't like to spend money on dev tools because the suits don't see the benefit. There's no glossy Gartner magic quadrant BS for every niche development use case, so they think they're throwing money into the fire.


If you think you'd get more productive with the $10/month or so that most dev tools cost, why don't you buy it on your own? After all, even if it increases your salary by 1%, it will be worth it. I have seen designers buying Adobe tools, which cost much more than dev tools, and their salary is on average lower.


I don't know where you work but in our place using tools without internal approval or certification is really frowned upon. And in the case of this particular product we were talking about, it's really an infrastructure thing. You can't implement this on your own because then your colleagues' work won't connect to yours anymore.

But also, the company should just supply the tools.


> glossy Gartner magic quadrant BS

The phrase is gold. I understand from this that companies which provide tools and infrastructure for professional developers must not only attract the devs themselves, but also (and maybe more importantly) market/advertise and sell to the "suits", their managers and employers.

Not sure if that's what you intended, but I'm now seeing the value of glossy Gartner magic quadrant BS, and thinking how to apply it in my own projects.


I am calling it that because I often see solutions ranked at the top that are actually among the worst in capability. I don't understand how they make these quadrants, but I guess money has a lot more to do with it than technical capability.


Also, there are some tools where you could convince someone up the chain to spend money, until they see the offer from the vendor.

I would have loved to have HashiCorp Vault Enterprise for instance, but the math just wasn't working out to get a feature you can get by... just running more of them.


Anything but a FOSS license makes my life as an employed software engineer harder if I don't want to completely disregard compliance rules. So usually I don't bother.

Also, some developer tools want outrageous prices that are in no way proportionate to their value if you compare them to a standard paid tool (e.g. a JetBrains IDE)


That's fair, but I think the value proposition is there for some :)

I'm honestly not sure how pricing and licensing will work yet, but there will be some way to try it for free. Maybe something like Docker Desktop: free for personal use, license required for companies? That seems like a risky bet as an indie dev.

There's also the whole question of one-time purchases vs. subscriptions. Subscriptions seem like the optimal model for this, so I'm not sure how to accommodate people who just don't like them.

Would love to hear if you have any thoughts on how it could be done to reach as many users as possible.


> There's also the whole question of one-time purchases vs. subscriptions. Subscriptions seem like the optimal model for this, so I'm not sure how to accommodate people who just don't like them.

My company is just large enough to require Docker Desktop licensing, and a per-seat continuous drip is too much for us. So, if you're looking to differentiate, having a buy-out option that gives permanent access to at least a range of versions would be big.


The way I've seen this choice done that makes it easy (for me as a director of software dev) to buy into is a hybrid. You can pay $x/mo or $10x/yr. If you pay for the year, you don't have to do it as a subscription, but if you don't renew then you're stuck on the last version released during that year.

FontAwesome, TablePlus, and some others I've paid for multiple seats on do this and it's great. Some we just paid for the one year, and others we were able to see enough ongoing value to keep paying on the subscriptions.


Oh, I'll certainly pay for it. I'm excited to do so; and if it's substantially better than Docker we'd consider moving Notion eng over to it.


I’m not on macOS so I’m not in your intended audience but I’ve paid for stuff on the JetBrains model where the subscription also gets you permanent access to some previous version.


> Maybe something like Docker Desktop: free for personal use, license required for companies? That seems like a risky bet as an indie dev.

That's how it works currently.


Yes, but I'm guessing kdragOn's legal budget is a whole lot smaller than Docker, Inc's, so the perceived risk of stealing the product would be a whole lot smaller.


Most businesses are honest, and once you get to a certain size it's all about CYA. So sure, some people might plunder it, but if you provide something a lot of people want, you could make plenty of money from the legitimate buyers, or from people afraid of getting sued at some point.

Then once the business is viable, the legal budget will be bigger...


Also seems like they put in a shit ton of work


> Docker VPNKit, fast VirtioFS, much lower background CPU usage

Are there some new Linux drivers involved, or is this "just" a better tuned VM?


No new drivers, but I did make some changes to the Linux kernel. It's mostly a better tuned VM and services on both sides, e.g. a custom fast networking stack in place of Docker's VPNKit.

(Also, by "fast VirtioFS", I meant the same VirtioFS implementation tested in the article because it's faster than other solutions — sorry if it wasn't clear.)


Blog post author here, cannot wait to see the solution out in the wild. Will it be open-source? Can you let me try it?


It won't be open-source, sorry.

> Planning to release this as a paid app in January. Not OSS, but I think the value proposition is pretty good and there will be a free trial. Not sure about pricing yet.

> If anyone is interested, drop me an email (see bio) and I'll let you know when this is ready for testing :)


Is there any way to join the waitlist for the software?


No formal waitlist yet, but drop me an email (see bio) and I'll let you know when there is!


Often overlooked: there is also podman machine and Podman Desktop (for Windows and macOS). It is not as fancy as Docker, but fully free and open source.

It provides Docker compatibility to some extent, you don’t need a license, and it’s much less heavy than Docker Desktop. If you need Kubernetes, there is also minikube, which provides a lot of options.

Most of the things discussed in this article still apply for podman machine and minikube.
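For anyone wanting to try it, a minimal podman machine setup on macOS looks roughly like this (resource sizes are just examples):

```shell
# Install podman and create/start its Linux VM
brew install podman
podman machine init --cpus 4 --memory 4096 --disk-size 60
podman machine start

# Sanity check: run a container inside the machine
podman run --rm quay.io/podman/hello

# Optional: let existing docker-CLI habits keep working
alias docker=podman
```

The podman CLI mirrors most docker commands, which is what makes the alias workable for day-to-day use.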


There's also Rancher Desktop. I don't know if it's less heavy, but it doesn't require a license and also includes k8s.

https://rancherdesktop.io/


I haven’t tried it out yet, but usually things from Rancher are awesome.


Last time I tried it, it was using sshfs mounts, way slower than whatever Docker Desktop is using. It looks like sshfs is now unmaintained, and I can't find what they're using now.


Podman has been a massive disappointment for me. I stupidly gave up my docker desktop because the company was trying to make cost savings and it was supposedly a drop-in replacement.

Some short-lived containers, like our repo’s linter, easily take 4x as long to run in podman as they did with Docker. Immediately I have lost productivity.

It’s incredibly unreliable, every time I start my computer I have to podman machine stop then podman machine start because there’s something broken about how it gets initialised at startup. I’ve spent ages debugging random broken functionality.

It doesn’t support docker-compose. There’s a community project called podman-compose, but it’s not great: it won’t do stuff like build containers concurrently, and it has random weird quirks about volumes already existing when you do "podman-compose up --build", whereas docker doesn’t complain for the same compose file.

Overall podman has been a massive regret for me, and I wish I hadn’t given up my docker desktop just to save a minuscule amount of money.


When did you last use it? It _does_ support docker compose. Has for a while now actually.

I love it on Linux. The Mac version is not as smooth yet, but, for my use case, still works a hell of a lot better than docker desktop. There is something deeply wrong with docker desktop's networking, and I literally have to restart it almost every time I make a change to one of our services. Not an issue with podman.


Huh, it was literally 2 weeks ago that I looked into this and ended up using a third-party community podman docker-compose substitute. I will have another look.


https://www.redhat.com/sysadmin/podman-docker-compose

Podman listens on the docker socket, and the official docker-compose just works
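For reference, the setup described above amounts to pointing DOCKER_HOST at podman's socket (the exact path varies by podman version and machine name — `podman machine start` prints the right one):

```shell
# Start the machine; podman prints the Docker-compatible API socket
podman machine start

# Example socket path on macOS -- substitute the one from the output above
export DOCKER_HOST="unix://$HOME/.local/share/containers/podman/machine/podman.sock"

# The official docker-compose now talks to podman's Docker-compatible API
docker-compose up -d
```

No podman-specific compose tool needed, since compose only speaks to the API socket.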


Oh wow I wasn’t expecting it to work that way. I was expecting to have to use some kind of podman aliased compose alternative that copies the CLI of docker compose.


It does, but last time I tried "docker-compose build" on podman it just failed. So it doesn’t seem to just work as a drop-in replacement.


I haven’t used it much I have to admit. But I also experienced the issues you are talking about.

I thought it was because I also have docker desktop on the same machine, but probably that’s not an issue at all and it’s podman remote that is unreliable.


I want podman to be a drop-in replacement, but it just isn’t there. I’ve wasted more time running down random errors in podman, so much so that I’ve switched back. Maybe in another year or two…


I've recently tried podman on a Mac and `podman machine start` failed with an “unknown error”. It was a disappointment.


I had similar issues when podman desktop first came out. Working much better on a re-try this week.


Docker is cripplingly slow on MacOS. I have a maxed out 16" mbp... starting rspec on our app takes 55-60 seconds. Compare that to my coworkers on Linux and Windows, who both see sub 10 second boots, and it's absolutely impossible to be ok with those numbers.


I have a nearly maxed out Mac Studio and my experience was the same, until I got my work to fork out a license for parallels and I just installed docker in an ubuntu VM and configured my DOCKER_HOST on the host to talk to the vm. Now it's crazy fast.
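The DOCKER_HOST part of this setup is just Docker's standard remote-engine mechanism; a sketch (VM hostname and user are examples):

```shell
# Inside the Ubuntu VM: install docker normally, e.g.
#   sudo apt-get install docker.io && sudo usermod -aG docker $USER

# On the macOS host: point the CLI at the VM's engine over SSH
# (supported by the docker CLI since 18.09; no TCP port needs opening)
export DOCKER_HOST=ssh://me@ubuntu-vm.local
docker info   # runs against the engine inside the VM
```

All builds and volumes then live on the VM's native Linux filesystem, which is where the speedup comes from.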


You are maintaining all your files in the VM though, right? I'm playing with setting up a sync from vm to host so I can do that but still use my local tools.


IntelliJ autosyncs all my files to the VM seamlessly using rsync, yes. I've considered switching to use the remote development features in intellij and just managing everything in the VM and not having anything on the host but I haven't had any trouble with my current setup so I haven't bothered tbh.


I use k8s in Docker Desktop, so I tried using ksync, but it’s just broken; there are lots of reports of it not working. I’m gonna play with rolling my own syncing solution, either host -> vm or the other way around.


How does file sharing performance compare?


I automatically sync my workspace to the VM using IntelliJ and rsync, so I don't mount any host filesystems in the VM for Docker. I just use Linux-native volumes inside the VM itself and it works fine for me so far. It takes a bit of configuration on the IDE side, but it works pretty seamlessly once it's set up.
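A rough hand-rolled equivalent of that IDE sync, using rsync plus fswatch (paths and hostname are hypothetical examples):

```shell
# One-way sync of the workspace into the VM
rsync -az --delete --exclude .git ~/src/myapp/ me@dev-vm:/home/me/myapp/

# Re-run the sync on every local change (fswatch: brew install fswatch)
fswatch -o ~/src/myapp | while read -r _; do
  rsync -az --delete --exclude .git ~/src/myapp/ me@dev-vm:/home/me/myapp/
done
```

The container then mounts the VM-side copy, so file I/O runs at native Linux speed while editing happens with local tools.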


Might be worth digging into why your rspec startup times are so slow. I recently dockerized our dev setup with separate containers for our Rails backend, MySQL, localstack, and RabbitMQ. With Docker Desktop configured to use VirtioFS and the native virtualization framework (which I think is now the default), speeds are great. I've left my non-M1 coworkers in the dust.


It's not just rspec, any rails boot is slow. We have a pretty large project with a significant number of files and a significant number of gems, so the initial boot that loads all the constants etc is what does it.


This is very true. Not to mention its networking is very slow... I gave up on Docker on macOS long ago. If I had to use Docker, I'd switch to my Debian laptop. Way, way faster.


Happy to give up Docker, but still using MacOS as a daily driver?


Easy.

Just do whatever you're doing in docker natively in macos. Python, Nodejs, Ruby, Rust, Postgres, etc can all run as native macos processes.

The big advantage (aside from performance) is that you gain access to all the OS-native debugging capabilities. You can just look at files, open multiple terminal sessions in the same folder, use the profiler, click the debug button without special configuration in your IDE, and so on. All without needing to think about VM images, docker containers, networking and all that rubbish.

The downside is you need to set up a second build environment (which might not match your deployment environment). Unless you're doing something truly special, setting up a macos-native build environment is usually pretty easy. It's normally just a few "brew install" / "npm" / "gem install" / "cargo build" etc commands away from working.
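For a typical Rails-style stack, that might look something like this (package names are the usual Homebrew formulas; the Postgres version is just an example):

```shell
# Replace the docker-compose services with native macOS processes
brew install postgresql@15 redis
brew services start postgresql@15   # launchd-managed background service
brew services start redis

# The app itself runs directly on macOS
gem install bundler
bundle install
bin/rails server   # debuggable with native tools, no VM in the way
```

brew services handles starting the databases at login, which covers most of what compose was doing for local dev.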


So because the macOS stack is janky, a solution is to ditch the standardization layer and run things directly in the janky stack?

That seems like a road to bringing in an entire new set of environment-specific bugs and hacks.


Yes. It's not that I like macOS, but I need its Xcode.


I use Docker inside of Parallels running Linux and get far better performance than Docker Desktop.


My preferred fix: don't develop in Docker.


My preferred fix: don't develop in Mac.


For many, Mac is the only sensible option. There's one developer in our company that uses Linux and it's a lot of pain to set up.

Mac has the best balance between coding, utility tools and "other work stuff".

Windows probably on par if not more for "work stuff" but falls badly in the coding & tooling department.

Linux is OK-ish for coding and utility but falls behind for "other work stuff", and it's certainly a pain to just keep it updated.

So in our company, everyone in the development & support team uses a Mac (except this one guy who insisted on Linux), and most in the sales & marketing team use Windows.


> everyone in the development & support team uses Mac (except this one guy who insisted in Linux),

> certainly a pain to just keep it updated.

> Linux and it's a lot of pain to setup.

If you're not using Linux, how are you justifying these claims?

> certainly a pain to just keep it updated.

Excluding the boot time for both, macOS takes between 15 and 45 minutes to update. Linux takes a few seconds. I suspect that the only OS with a more ridiculous update process than macOS is Gentoo.


My read on it was that this person heard what their coworker has to go through, and has formed an opinion based on that. I understand that climbing Mt. Everest is fairly difficult, despite not having done it myself.


I find it pretty contradictory that this specific coworker complains about Linux while simultaneously demanding to use it. That's why I am interested in more details.


I think it's fair to say that everyone in a company using very similar hardware and software makes maintenance much easier. But I also think any dev should be supported in at least _some_ Linux environment.


One can be using something that isn’t Linux for development and still be familiar with it. Perhaps they did in the past? Perhaps working in the same team as someone exposes them to the alleged headaches?

This is a thread on Hacker News, not an academic article. You aren’t owed a massive amount of “justification” for these “claims”.


> You aren’t owed a massive amount of “justification” for these “claims”.

I was using Mac for development due to company policy (compliance). It is by far the worst development experience I have ever had. Brew is terrible. Updates are agonizing. The user interface is awful. Containers are a shitshow.


Yeah, I used a Mac for years and felt the same way. Linux has been the only half-reasonable OS for development for me.


How do I get my laptop to properly sleep when using Linux?


Personally, I close the lid of my laptop, though the Fn-key shortcut is sometimes better.

Sleep is one of those things that has always just worked for me, at least on Linux; on Windows, I've had to deal with random wakeups after some kind of Windows update. That said, with less than 30 seconds from off to IDE, I don't tend to use sleep all that often.


One solution is to buy a System76 laptop. They have good support for this and good power management too.


I'm not sure I understand the question? I close the lid. How do you do it?


It's a jab at Linux/PCs.

Apple's completely vertical stack means that their machines never have issues sleeping. The huge variety of machines that Linux (and Windows) is expected to run on can cause problems entering S-states reliably. This issue is pretty uncommon (excluding the Windows modern sleep disaster), but that doesn't prevent comments like the GP throwaway.


AMD Cezanne chips were released in early 2021. Patches are still landing to fix suspend resume bugs in the Linux kernel: https://www.phoronix.com/news/Ryzen-5000-Laptop-Linux-6.1

This is a real issue with Linux. If you want the latest hardware, expect basic features like suspend resume not to work.


That's not a bug in the Linux kernel, it's a bug in the AMD hardware.

You can generally spot this immediately, even without reading the code, by the way it's described as a "workaround".


Does it matter if I want my laptop to sleep properly? Somehow it works properly on Windows.


Apple laptops definitely have issues sleeping. https://forums.macrumors.com/threads/macbook-air-m1-doesnt-s...

There's a litany of bugs in Apple's software. This often gets hidden because developers use workarounds of various sorts, but if you know, you know. It's honestly embarrassing how buggy Apple software is given the vertical integration you describe.

The worst part is that almost all of it is closed source so it's harder to debug, and the bits that are open source (and have bugs in plain sight) you can't just submit a patch to. You have to file an rdar and hope it gets prioritized, which for one of the bugs I'm aware of hasn't been in many years. And so the workarounds keep getting written.


> There's a litany of bugs in Apple's software.

Personally, the only problems I've only ever had getting macos to sleep properly has been when I've been running VMs. (Hello docker!)

When doing purely OS-local development (which is all I ever do these days, because I value my sanity) macos works great. (So long as you have a recent mac. New macos + old laptop is awful.)

But as nice as macos is, XCode is an absolute mess. Earlier today I was trying to import a swift package into xcode. The package looked fine, but XCode for some reason was only importing it as a "Folder Reference". Stackoverflow suggested quitting xcode and running "xcodebuild -resolvePackageDependencies", then relaunching xcode. And that fixed it! Why was that necessary? Why couldn't xcode figure that out on its own? What did that even do?? I have no idea. And I hate it.

These days developing in apple's ecosystem feels less like developing in a walled garden and more like developing in a swamp. They're truly lovely developer machines - just so long as you can stay away from xcode.


Again, the reason it seems great to you is because developers have papered over so many bugs and other deficiencies. Linux has its own issues but it overall is at a completely different quality level from macOS.


Linux's bugs are just in a different place. Macos's bugs are all deep technical problems that Apple doesn't have enough senior engineers to bother fixing. (Eg the FS watch APIs). Or weird bloaty preinstalled processes that eat up all your CPU when nothing is happening on your computer. Or... anything that the light of XCode touches.

Linux's bugs are things like the fact that every program has a slightly different set of keyboard shortcuts. Is copy Ctrl+C? Or Shift+Ctrl+C? Can I make it Meta+C (like on macos)? Not everywhere! Only some linux applications let you treat the meta- key as a modifier. (Eg intellij doesn't let you do that.)

Smooth scrolling (if you have hardware to support it) works in all native GTK applications. But not Firefox or IntelliJ. Normal mouse scrolling works everywhere, but scroll distance is wildly inconsistent between applications.

On macos I have homebrew. On linux I have apt. And snaps. And flatpak. But I think we're at war with snaps? I'm lost.

My bluetooth keyboard and mouse are both broken on linux. I'm not sure if the problem is my bluetooth chipset driver, or if the devices both have terrible implementations of bluetooth and they didn't bother testing on linux. Either way, I bet they both work fine on macos.

> Linux has its own issues but it overall is at a completely different quality level from macOS.

Its different alright. But its certainly not uniformly better.


Meh. Apple's software has silly UX deficiencies on top of the deep technical bugs. You can't even have opposite scrolling directions for mice and touchpads unless you install a third-party app. Some animations are extremely long, and on top of that can't be cancelled, resulting in atrocious UX for power user use cases. Apple no longer does subpixel antialiasing, making text look quite a bit worse than Linux on many screens. Etc.


Thanks for the tip. Hello docker!


> The worst part is that […]

The actual worst part is that it is impossible to draw a meaningful conclusion from a problem report like this one:

> Mine also shines from the holes when closed. M1 Air. Noticed happens when browser is open and i closed the lid.

Yeah, it shines from the holes. Also develops a halo. Sometimes two. Maybe three, dunno. Sometimes the laptop can see stars.

Yes, it can indeed be an OS-related problem, as the OS is not bug-free. Or it could be a faulty laptop specimen, or – most likely – the user has installed something on their laptop that meddles with sleep activation. For instance, OS X has a Power Nap feature that allows the OS to wake up briefly from slumber, quickly do something (e.g. check mail, synchronise messages, etc.) and go back to slumbering again. Any application can register with the operating system to be woken during a Power Nap.

And this is where the problem occurs. Users install tons of jackshite on their laptops, and that stuff tends to run a myriad of pre- and post-installation scripts that crap all over the file system and the launch services database, and install user- and system-wide agents and all sorts of other unthinkable things – including adding themselves to the Power Nap wakeup list without any real need for it. At best, the app drains the battery, and it usually does. At worst, the app bundles a system-level extension that is buggy and crashes the system when invoked. And oftentimes such nonsense does not yield its slot in the Power Nap run queue easily, which interferes with the laptop going back to sleep and results in all sorts of bizarre problems.

Enterprise software is the worst offender. Nearly every. single. enterprise app installs (or runs) untold amounts of crap all over the file system, and nearly every app nowadays comes with its own software update daemon that is forcibly installed into the launchd database and runs 24x7, including at Power Nap time. Is there a need to run an update daemon 24x7 to check for updates multiple times a day instead of adding a crontab entry to wake it up once a day? No. Is there a need to install a «helper» daemon that does nothing but phone home non-stop, sending undisclosed, non-consensual telemetry (likely PII too, and more)? Absolutely not.

Microsoft Office is a prime example of such an invasive pest, with Citrix Workspace being one of the worst offenders I have encountered in a while. Pick apart the installation bundles for either and take a look at their respective pre- and postinstall scripts. The faeces that both slap onto a working OS install have to be thoroughly scrubbed off afterwards, and they can only be scrubbed off with a hardened spatula. I really wish all downloadable app installs could be sandboxed by default and contained to their sandboxes without ever being able to break out of them. APFS supports COW snapshots, so perhaps the installer could create a sandbox and give each app a private copy of the system configuration files and databases that only that app could crap into, without having an effect on the wider system installation.


Many of the bugs I'm talking about can be reproduced on a completely vanilla system.


You have to be a little bit more specific than that and quantify and itemise the «many» part. Otherwise it is pure speculation, generalisation, armchair theories and hand-waving.

What is a «vanilla system»? A brand new laptop? Then the hardware is likely faulty, and Apple will replace the faulty computing contraption or refund the purchase. A brand new install on an existing laptop? Likely a hardware fault, a compatibility problem (less likely) or, indeed, a specific or unspecified defect in the software.

I reboot my laptop approximately once a quarter – when a new update or upgrade is released. The average uptime is 3 months. I have run an uninterrupted succession of OS X updates and upgrades dating back to 2009 – when I begrudgingly switched away from my Sony Vaio Z17 business laptop running Linux, due to the X server randomly crashing on and bricking Vaio Z series laptops in-flight or upon wake, with no available recourse. I have not had to reinstall OS X from scratch even once, and still occasionally come across non-OS files on the file system intact since the original 2009 install. Different kernel versions between 2009 and now have crashed on me fewer than ten times. The laptop goes to sleep and then wakes up daily. Zero maintenance. I used to have to reboot it once a year to disable the CSR mode, but not anymore – the stuff just works (except when I need to run dtrace / dtruss, which is less than a once-in-two-years activity now). Clearly your vague «many bugs» does not apply to me.

I can't say the same about the said Vaio laptop, which I still happen to have around and which is in perfect working condition (sans the obsolete 32-bit CPU and maxed-out 4 GB of RAM). It runs a version of Ubuntu, but every upgrade is still a gamble. An upgrade to Ubuntu 20 bricked the laptop for the umpteenth time, and it now requires an autopsy and a full reinstall by booting it from an external eSATA drive via a PCI Express card.


This isn't handwaving. I'm talking about bugs in the software (not hardware) with rdars attached. For some of them the bug dates back to like 10.5 or so, and the fix is straightforward, but it just hasn't been done. For others the problem is harder (but still possible) to reproduce, but the buggy component is managed by what has been described to me as "one of the somewhat less competent teams" at Apple, with a track record of "extremely stupid design decisions".

Describing them in more detail will deanonymize me so I'm not going to do that.


I mean, it's the same reason most corporate Windows laptops suck also. Not sure why Apple gets a free pass but Windows cops it, when similar issues only happen on corporate devices loaded with junk.


It’s uncommon? It still happens to me every time I’ve used a different Windows laptop, and I haven’t even been on a MacBook for more than 6 months, so it’s not legacy hardware.

That being said, it happened on my m1 MacBook a few weeks ago - so seems like Apple is no better ;)


Good question :)

Assuming this is a real question and not just an excellent ruse, the answer, in terms of the simplest setup, would be:

1) Install a vanilla default, such as Ubuntu

2) If that fails, buy a vanilla default laptop

To be fair a reason why this is easier on MacOS is they have like 3 laptop options rather than 3000. If your problem still persists, then there is a step 3

3) Become an expert in cli foo

After that fix, you're welcome to either pretend the cli foo was incredibly easy all along, or weep and dream of a gui with two glowing buttons [Cancel] [OK], providing all the configurability you might ever need


For me 'systemctl hybrid-sleep' does the trick.


Everyone in my company just uses Windows if they don't want to bother with Ubuntu. Excellent Linux toolchain support, excellent driver support, it just works.

WSL 2 is a game changer because it makes all the Linux centric dev tools available to Windows without setting up virtual machines or other such nonsense, even running graphical applications these days. The only major pain point I've run into (that isn't "I prefer Linux") is the lack of IPv6 support within WSL 2.

If you avoid buying Nvidia hardware, Linux generally "just works", unless you use Windows-only software (which macOS also suffers from) or choose to make your life harder by installing Arch or Gentoo. Ubuntu's snap is a pain for power users who want to hack on their Linux system but if all you want to do is develop or do work stuff, it just works out of the box.


I centralize all my work through Windows these days (used to be a "Mac person"), but I was pleasantly surprised by Linux Mint recently. I use Ubuntu all the time through WSL2, but I am liking Mint so far as a little Linux GUI / server machine.


> There's one developer in our company that uses Linux and it's a lot of pain to setup.

There was one developer in our team using a Mac. Everyone else (>20 people) was using Linux. The Mac was a lot of pain to set up.

Linux is the best balance between coding, utility tools and "other work stuff".

Windows is not even considered for development, it falls badly in the coding & tooling department.

So in our company almost every dev uses Linux (except this one hipster using a Mac) and the marketing team uses Windows.


And your production environment is Linux, so you will be using Linux anyway, and all the testing you do on a Mac has differences from production, unless you do it in a VM, where everything is slower.

Linux is not a pain to setup.


Seriously, of all the things you can say about linux, you need to mention that it's a pain to just keep it updated?

I am a .NET developer and have been running Debian Linux on my work laptop for ages. Keeping the OS and most of the software up to date is just "apt-get update; apt-get dist-upgrade". Microsoft has Debian packages for Teams, Skype, PowerShell Core, .NET (Core), azure-cli. Google has Debian packages for Chrome. I use a lot of JetBrains' tools and keep those up to date using the JetBrains Toolbox. Where I work, we use Google Workspace and Slack, and I use those through Chrome.

Just to be clear, most of my development is currently done on Linux using Rider, but I do have a Windows VM (on KVM) for older projects that run on the full .NET Framework.

The issues I come across are related to our customers, for example when I am on location and need to connect to external hardware. One example is connecting wirelessly to a WiFi Direct display: this does not work for me, and I did not investigate whether there are drivers available or not. Another example is DisplayLink, to use an external display through a dock: this I checked, and there are drivers available, and I did have it working at some point, but it broke after a kernel upgrade and it's too much bother to fix again. Also, for some customers we can connect remotely to their systems over VPN, but not all VPN solutions are available (or work out of the box) on Linux.

In any case, for my day-to-day work, I don't have any issues at all on Linux and I believe it's very very capable for coding, tooling and other work stuff. I definitely prefer it above Windows and Mac.


> Mac has the best balance between coding, utility tools and "other work stuff".

That maybe used to be true. But today, Windows is that OS. In one OS, I can freely develop in Windows, Docker, and WSL2, including near-seamless integration of apps, browsers, and even GUIs. And with VS Code, I basically have any OS except macOS (but who cares?) at my fingertips in a single interface. Between Windows and macOS, the Windows dev experience is by far the lower-friction one.

And Windows has superior support for external hardware. macOS refuses to work well with anything that doesn't have Apple on the box.


> Windows probably on par if not more for "work stuff" but falls badly in the coding & tooling department.

Given the option, I'd rather spend my day on VS than Xcode, and on C#/F#/C++ than Swift/Objective-C, but to each their own I guess.

And then there is the whole thing about where macOS Server ended up.


> but falls badly in the coding & tooling department.

That's more perception than reality, often from people who simply don't know how to use Windows.

Visual Studio, Visual Studio Code, and IntelliJ IDEA blow any Linux text editor out of the water for developer productivity.

For Linux workloads there is the Windows Subsystem for Linux (WSL 2), which now even supports GUIs with GPU acceleration!

Visual Studio Code can even operate in "remote" mode where it tunnels into a Docker container or Linux server and acts as-if the remote target was the local machine.

On Windows, x86 and x64 Linux Docker containers run in process isolation at full speed, unlike on Macs, where CPU emulation is required.


This is just really false. As someone who used Windows as my daily driver and migrated to Linux many years ago: the tooling on Linux is vastly superior. Most codebases and extensions also work trivially easily from the shell, and command-line operations in general are much less of a pain point.

I’ve since migrated to Mac because I was spending too much time making Linux work properly, but I do miss the control I had. I have Windows on Parallels and I can’t believe how much of a disaster it’s become. I wouldn’t encourage anyone to use it.

For most people migrating off of Windows, I’d recommend Kubuntu. It’s Ubuntu with KDE, and it feels like Windows did during its golden age. Also, it’s free. All major IDEs work on Linux, so the shift is pretty painless. Really recommend migrating.


So how do you single-step GPU shaders on your superior tooling?


Here is an extension for you to debug GLSL/HLSL shaders in VSCode on every platform: https://marketplace.visualstudio.com/items?itemName=dfranx.s...

NVIDIA also has a multiplatform GPU debugging sdk. I’m sure if you look at extensions for whatever IDE you’re using, you’d find what you are looking for in under 5 minutes.

You’d be surprised how good all the tooling is nowadays. It also isn’t my tooling, it’s everyone’s. That is the key difference. With Linux, you aren’t the product. People work on and maintain it for the betterment of mankind and for personal satisfaction / freedom. So people care, and they are passionate, and there is a very involved worldwide community. It is more than a paycheck to them.


I’ve known GNU/Linux since Slackware 2.0; I know pretty well how it goes.


I’m sure you do, not questioning your competence nor intellect nor experience. There is just a lot of stuff to keep track of, so sharing this info for others who might happen upon our small exchange.

We’re all in this together, and there are always new things to discover for our finite selves interfacing with an infinite pool of knowledge.


The same way you develop iOS applications on your Windows laptop.


Completely unrelated to how wonderful Linux tooling supposedly is above anything else.

By the way, for iOS I single step shaders using Metal tooling on XCode and Instruments, no need for Windows.


Nice! Now run any 32-bit program.


That wasn't what we were discussing, goalkeeper.


The goal is so large though! You can't score on something as simple as running old software or using whichever graphics API you prefer?


Apparently having tooling "greater than the universe" doesn't fit into that goal.


> Visual Studio Code, and IntelliJ IDEA blow any Linux text editor out of the water

But those tools themselves work on Linux.


Depends a lot on what you are trying to accomplish.

The inability to get something akin to VT220 terminal access and shell scripts on a POSIX-based system without resorting to a virtual machine (a la WSL2) is a deal-breaker for me. The steps for using scripting languages (e.g. Node, Python, Ruby) are typically entirely different on Windows than on all other server and desktop platforms.

For those who can spend nearly 100% of their time inside an Electron-based or Java-based IDE, however, it matters a lot less whether that IDE is running on Windows, macOS or Linux.

> On Windows, x86 and x64 Linux Docker containers run in process isolation at full speed, unlike on Macs where there is CPU emulation required.

You do realize they didn't require us to all burn our old Intel-based Macintosh computers, right? Apple even still sells Intel-based Macs.

Windows requires emulation to run aarch64-based containers. Except on Windows for ARM of course, where presumably they run full speed but those x86/x64 containers above require CPU emulation.


WSL2 is not a virtual machine, it's a subsystem. It's basically the opposite of WINE.

You can run all of your favourite languages nearly identically to Linux on Windows either via Docker or WSL2.

> Windows requires emulation to run aarch64-based containers. Except on Windows for ARM

Making Windows and Linux the only platforms with ongoing support for both ARM and Intel CPUs, unlike MacOS where Intel support will eventually expire.


Technically, WSL version 2 _is_ a VM based technology:

"WSL 2 ... uses the latest and greatest in virtualization technology to run a Linux kernel inside of a lightweight utility virtual machine (VM)."

https://learn.microsoft.com/en-us/windows/wsl/compare-versio...

WSL1 is not a VM design, but that design reached its limits.

A VM doesn't make it bad - I don't do a lot with WSL but I like it a lot. It's not slow or limited.


> Visual Studio Code can even operate in "remote" mode where it tunnels into a Docker container or Linux server and acts as-if the remote target was the local machine.

Of all things in this comment this one is the funniest, because Emacs was able to do it for years (decades?).


If you're talking about tramp, its design is quite a bit worse than vscode's or intellij's remote support.


Probably, but point is it’s not something revolutionary. I’ve been using tramp for quite some time and despite some quirks it’s a working solution.


The big issue with tramp is that code doesn't run remotely. Running LSPs is impractical over a network connection, no matter how close the remote is.


false. i used windows and migrated to linux (starting with ubuntu) 10 years ago. i know how it feels to help a co-worker set up a ruby on rails project on windows. we spent a full day on it.

on linux, we spent no more than one hour.

those editors you mentioned are also available on linux nowadays.

if someone needs to test something on windows, there's "windows on aws" (i used it once)


If you code in Windows and deploy on Windows, I think it's fine. Java/C#/Python/Microsoft C++ are well supported.

If you're coding in Windows and deploying on Linux, it's not ideal. Sure, there's WSL2, which kinda sorta helps for a lot of day-to-day stuff. But often weird errors creep in, my favorite being the MS-DOS EOL instead of the Unix EOL, which breaks Bash scripts, leading to developers saying: "But it worked fine in MSYS!"
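The CRLF failure mode is easy to demonstrate from any POSIX shell (filenames here are just for illustration):

```shell
# Simulate a script saved with Windows (CRLF) line endings.
printf 'msg="hello"\r\necho "$msg"\r\n' > crlf.sh

# Running it "works", but the carriage returns leak into the values:
sh crlf.sh            # output ends with invisible \r characters

# Strip the carriage returns (dos2unix does the same thing):
tr -d '\r' < crlf.sh > lf.sh
sh lf.sh              # prints: hello
```

Git's `core.autocrlf` setting is usually the culprit for how the `\r` got there in the first place.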


I have come across all of "Java/C#/Python" recently, and all of them were "deploy on Linux" most of the time. Yes, including modern C#.


> Visual Studio, Visual Studio Code, and IntelliJ IDEA blow any Linux text editor out of the water for developer productivity.

I used to love Visual Studio but Jetbrains has caught up to it and then surpassed it years ago. I generally run IDEA and VSCode for code editing and going back to Visual Studio is a real shock, especially with the pleasant memories I've had of using it.

VSCode and IDEA run great on Linux, though, perhaps even better because Windows isn't great with tons of tiny files.

WSL2 is great if you prefer the Windows GUI. It fixed almost every issue I've had developing on Windows outside of Microsoft's data hunger and terrible UI design. Whatever Linux centric tool you can think of, it just runs on Windows now.

If I could use the comfortable and stable Windows 7 UI with the Windows 11 kernel, I'd actually consider going back to Windows. In terms of usability, Windows just lacks polish these days. That said, my attempts to try macOS didn't fare much better, I just couldn't get over the primitive window management and the bad integration with my home/end/page up/page down keys.


People who keep repeating that mantra really don't do serious Windows development.

GPU debugging, DDK support, ETW debugging, SQL Server integration, GUI designers for Forms, WPF, UWP, MFC, mixed-mode debugging across .NET languages and C++, COM/WinRT IDE tooling, IIS integration, ...


Yes, because this niche is so niche now it's not really worth talking about. Just as you're not talking about the Mac as a developer laptop for Objective-C or Cocoa development, but about what 99% of programmers do.


Where in the world are those 99% developers?!?

Quite curious, I guess, if we focus on the US market while disregarding game consoles and 80% of the desktop market.


But the remote stuff is blocked in the FOSS version :(

https://github.com/VSCodium/vscodium/blob/master/DOCS.md#pro...

Typical Microsoft. Pretending to be FOSS but there's always some sneaky strings and telemetry attached.


> Windows probably on par if not more for "work stuff" but falls badly in the coding & tooling department.

This is absolutely false. Windows is a perfectly fine development environment and has perfectly fine tooling. You just need to embrace PowerShell, Windows tooling, and cross-platform tools. Too many devs paint themselves into a corner by relying on POSIX shells or POSIX-only tooling.

If you do user space programming your host OS should never matter. In my 10 years of programming the only time the host OS mattered was when I was writing Linux drivers.


My previous 2 companies have both switched to OSX for everyone. There are some teething problems - OSX is not really meant for a domain environment and JAMF Connect is necessary glue to work properly with Active Directory sorts of stuff, and it's still not quite perfect.

But overall it's actually worked out surprisingly well because there's something for everyone - developers get *Nix On The Desktop but with an actual support story, and the non-technical users get a happy bubble OS that holds their hand.

Linux code churn and distro fragmentation make it fundamentally unsupportable in the vast majority of workplaces (outside very controlled server environments/etc - talking desktop use here) and for the vast majority of users. The code churn makes the support story (polish and documentation) impossible, and the distro fragmentation means that there are 50 different solutions to the same problem. "The Cathedral and the Bazaar" doesn't mean the bazaar is better in all situations: a random non-technical business analyst is never going to learn how to build Arch or install Gentoo, and a really good streamlined, polished Cathedral Experience is much more suitable to the business environment. That's the fundamental lesson from Linux and Windows, and OSX now takes its place in that too. You can keep the good things about Unix-y environments and opt out of the terrible parts of the Linux ecosystem.

Unfortunately, like BSDs, that's not what Docker is built around. Docker assumes a Linux kernel, and Linux kernel ABI is not the same as Unix kernel ABI. That's the biggest problem. Same as FreeBSD Jails or Solaris Zones... they're a decade ahead of docker in terms of capability, security, performance, and polish, but Docker is where the mindshare is. I can't install a jail from a registry with a single command and that's not where the support/development time is going even for the people who have engineered those alternative docker-registry solutions for jails.

The only "fast" option for non-Linux kernels besides full virtualization is to thunk the calls to your own kernel and patch around the differences. Obviously that didn't work out with the Windows kernel, it's just too different, but FreeBSD/Solaris have implemented this functionality for a long time as part of "Branded Zones". But everyone is enthusiastically reinventing the wheel around Ubuntu (specifically - not even Linux generally), so that's not going to happen.

https://wiki.freebsd.org/LinuxJails

https://docs.freebsd.org/en/books/handbook/jails/

(the freebsd handbook is a great example of the kinds of documentation that rarely gets written for linux distros - other than commercial ones - because of the overwhelming code churn and the inevitable bit-rot that entails in the rest of the user experience. It's way more fun to write a new audio pipeline or init system than to document it fully, everyone knows it.)

https://www.oracle.com/technical-resources/articles/it-infra...

https://docs.oracle.com/cd/E19455-01/817-1592/gchhy/index.ht...

https://en.wikipedia.org/wiki/Solaris_Containers#Branded_zon...

(and note the Solaris stuff almost entirely applies to OpenSolaris/Illumos as well, you don't have to use commercial solaris to get Branded Zones.)

Anyway, apropos of nothing, but with the newfound attention on OS X from developers and power-users, it'd be really nice if Apple released a M1/M2-based "toughbook". Completely against their design aesthetic but I think a lot of people don't really like the idea of wafer-thin apple laptops and would like something that can take some bumps without shattering. Power users are becoming a more core demographic for macbooks and it'd be nice to see them cater a little more.


very true. i saw this as well when i was still working in the office ten years ago. in Linux, what you can do is mostly coding and browsing.


Unfortunately Mac is the best in terms of hardware and portability and for compliance stuff


yeah, I really wanted to buy a Linux laptop. I highly value battery life though and the m1 chip doesn't really have a competitor in that department.


> the m1 chip doesn't really have a competitor in that department

the m2 chip :)


I agree, but my company is in a very regulated area, so they standardised on Macs because of the very strong controls built into them at a hardware level.

I do miss my thinkpad and Fedora.


this is hard. what if you're on a project with

- backend (docker) - needs a linux-based machine
- client app (iOS) - needs xcode on macOS

both are in one repository.


Have fun managing OSX and Linux dependencies then, when you could just maintain one set.


That really hasn’t been an issue for me since Homebrew came out a decade ago – using Python, Rust, Node, Java, etc., all of which have mature stories for cross-platform development. The main area where I’ve run into problems is legacy projects where it’s not “Linux” but more like “one old Linux distribution with a 32-bit binary nobody can reproduce”, and Docker is really the least troublesome part of those projects.


If you're not IT (godspeed, y'all), then it's not all that bad honestly. I set up my stuff with Ansible and 90% of the "porting" work was the mapping between rpms and brew packages.

The only headache I get sometimes is because I have the GNU utils first in the path which makes compilation scripts mad sometimes.


Indeed, one can maintain only a macOS application.


And also there is some additional fun dealing with broken or incompatible packages whenever you upgrade OSX


In the languages I work with (primarily JavaScript and Rust), cross-platform compatibility tends to be pretty much a non-issue. I develop on macOS, deploy on Linux, and it Just Works. No extra work required.


You forgot to mention that all backend services (db, redis, etc.) are managed, and that you need an internet connection and cloud credentials for even a development workflow.


I certainly do not use backend services that need to be hosted (if I can at all help it, which I usually can). Open source databases like Postgres and Redis are easy to install on macOS with Homebrew. You can even have multiple versions installed simultaneously, although I've found that Postgres tends to be backwards compatible enough that I just run everything on the latest version.


When I used to use a mac I just put all my "Linux" (GNU really) dependencies in a prefix and that worked pretty well. Docker is kind of overkill for what people use it for IMO.


I used to be on the side of relying on native tools/libs, and managing them in a similar way to what you describe, but it all became too much to handle, with dependencies across projects breaking with regularity.

Maybe I wasn't doing it right, but switching to Docker to sequester my projects and their dependencies has saved me so much time and hassle, especially with the amount of repos I work on throughout the year.

My biggest weakness today is that I still don't reach for Docker right away when starting work on a new project or when evaluating a new tool. Old habits…


My point was that you can do this without docker. On Linux you can use separate chroots for each project; on OSX you could probably do that too, but since prebuilt tools for generating a darwin rootfs are rarer, prefixes are easier.


I know it’s becoming a bit of a meme now that modern software development is just rediscovering approaches from the 70s…

but it has really hit me recently how so many “indispensable” projects are just something that’s existed for decades re-written at a higher abstraction-level with a nicer UX.

I wish I had more classic Unix sysadmin experience. But unfortunately the gains are real, and I can’t justify investing time into learning the older tech (let’s say, using chroot instead of docker), when the newer is so much faster to learn, and has (comparative to modern hardware) such small overhead (if you’re not on MacOS :P)

I am excited about the rust-rewrite movement for shell though - precisely because it’s a chance to bring all the ux lessons we’ve learned in the last decades to cli-tools.

Stuff like charm.sh is super exciting.


This is an unhelpful and useless comment


And so is yours?

I do often find myself wondering whether Docker saved developers or system administrators any time. Is Docker really better than building an AMI and provisioning EC2 instances on-demand?


As a developer I can say that docker is saving me a lot of time that I would have otherwise spent on setting up different versions of postgres, redis, elastic search, etc. For the variety of client solutions we are building.

With a docker-compose file in place, all I need to do is run "docker-compose up" and everything is up and running.

It's such an upgrade over what we had before.
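A minimal sketch of such a compose file (service names, image tags, and credentials are all illustrative, not from the parent comment):

```yaml
# docker-compose.yml
services:
  db:
    image: postgres:15
    environment:
      POSTGRES_PASSWORD: devpassword
    ports:
      - "5432:5432"
  cache:
    image: redis:7
    ports:
      - "6379:6379"
```

`docker-compose up -d` brings the stack up in the background; `docker-compose down` tears it down, and named volumes can be added to persist the data between runs.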


Yes, I'm sure my wallet would be very happy with me running 20 different EC2 instances for small apps and databases.


You provide multiple cloud environments for every dev?


My preferred fix: pay for a Parallels Pro license and run Ubuntu on a VM, then run docker there. The VM is configured to start on login and run in the background.

I have the Remote SSH plugin set up in VSCode, a `vmlogin` alias set up in bash, and all container ports forwarded in the VM's config.
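For anyone copying this setup, the alias can be as simple as the following (hostname, user, and ports are placeholders, not from the parent comment):

```shell
# ~/.bashrc — log in to the Ubuntu VM; -L flags tunnel container ports
# locally if you prefer that over Parallels' own port forwarding
alias vmlogin='ssh -L 8080:localhost:8080 -L 5432:localhost:5432 dev@ubuntu-vm.local'
```

VS Code's Remote-SSH extension then just needs a matching `Host` entry in `~/.ssh/config`.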


Can vouch for this approach, been using it for rails development for the past 6 months after my work swapped my macbook pro for an M1 Max Mac Studio and it's been solid.


With VirtioFS on the scene I just don’t have this experience anymore. Docker for Mac is significantly faster than it used to be, particularly when using named volumes.

Mutagen also improved the experience but I prefer VirtioFS as it’s “built-in”


What sort of workloads are people doing where the filesystem access is limiting them? I develop python web apps on a mac and use dockerized postgres and a dockerized flask app. I don't seem to experience any noticeable issues. When I am developing I mount the source code directory as a volume so code edits are synced live into the running docker container.

I also develop frontends using vue, managed by npm. In my experience this doesn't need to be dockerized since npm installs everything in a subdirectory per project. Is there a benefit to running this as a dockerized app?


One issue I ran into at my previous employer was pylint on a large Python codebase. Pylint is slow on the best of days, but the difference on an M1 Mac running under docker (to standardize the version and settings across the team) was something like 10x as slow; several *minutes* to lint the codebase, which we absolutely required before code could be checked in. It finally got a lot better when VirtioFS came out, which, when enabled on an arm64 image, sped up filesystem access dramatically; suddenly my lints were taking seconds again.


Are your Docker images x86_64? On an M1 Mac, x86_64 images run under qemu, which is very slow. If an ARM64 image is available, it should run a bit faster.


Oh man x86 is so slow it's insane. If you're using a non-ARM base image Docker will happily run the x86 variant for you automatically. Unsurprisingly, running an x86 VM inside an ARM VM on a laptop is very very slow
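A quick sanity check, assuming the docker CLI is available (the image name is just an example):

```shell
# The kernel architecture on this machine: arm64/aarch64 on Apple Silicon,
# x86_64 on Intel — a mismatch with the image means emulation.
uname -m

# Illustrative: inspect what Docker pulled, or request the native variant
# explicitly (these need a running daemon, so they're commented out here):
# docker image inspect --format '{{.Architecture}}' python:3.11
# docker pull --platform=linux/arm64 python:3.11
```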


Are you linting the whole codebase, or just files changed in the commit?


I work on a _large_ C++ codebase on Linux. If I'm on my main (Linux) machine, things are fine, bind mounts are okay. If I'm stuck on my MacBook then compilation performance is...bad. I suspect it's due to heavy filesystem access from the compiler (reading source, writing object files, etc). At some point I need to confirm this by copying in my source directory.


A customer's Mac with a M1 is only 50% faster than my Intel laptop from 2014 at running Rails tests, because they run in a docker container: 50s vs 75s. The difference between the two machines should be much more than that (CPU, RAM, data bus, etc.)


You should try running your dependencies on docker but ruby on the host machine.


I'd be fine to run Ruby with asdf or rvm on my Linux laptop. I'm also fine to run it in a docker container. Performance is basically the same for me. The choice was made by my customer and it's them using Macs. They deploy in Linux containers though so that's probably why they accept not using all the performance of their hardware: same environment for production and development, no surprises.


I tried running our Golang mock generator through Docker on an M1 Mac and it was much slower than running it directly on the host. Probably since it reads every file in the codebase and writes out a file for every interface it finds.


mockery? That thing runs super slow just on its own.


Magento in docker on Mac is horrible for this reason.


Mentioned in the article, node packages that create tens of thousands, sometimes hundreds of thousands of files.


My company has a large Python repo. Anyone that develops in this repo on a Mac is considered a masochist because…

Running Pylint on a Linux machine in docker: 3 hours from no cache

Running Pylint on a Mac in docker: 9+ hours from no cache unless VirtioFS is used, which makes it closer to 4 hours.


Workloads with a lot of files, for example a large elixir web app with hot reload / fswatch enabled, have upwards of 20 second page load time. More than enough to mess up my flow.


nginx is pretty unusable via docker - I guess file system cache is the issue…


You can turn caching off, I believe.
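If the symptom is stale pages served from a shared volume, one well-known workaround (originally documented for VirtualBox shared folders, but worth trying here too) is disabling sendfile in the nginx config:

```nginx
# Fragment of nginx.conf — the directive is also valid at the
# server or location level
http {
    sendfile off;
}
```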


I have Apple Silicon but develop on x86 docker images. The game changer for me was macOS Ventura with Rosetta support for Linux VMs.

I use UTM to run Debian 11 ARM. The update-binfmts command is absolutely magical; docker images will happily run both arm and x86 binaries.

Battery lasts all day and the machine stays ice-cold.

https://docs.getutm.app/advanced/rosetta/


The most recent version of Colima supports both virtiofs and mac's native virtualization framework (macOS 12.5+). I get totally acceptable performance using it.


Isn't this the main way to speed up Docker on Mac: use a beefed-up Linux virtual machine (VirtualBox, UTM, tart) and run Docker inside that virtual machine?


The 'DOCKER_HOST' variable (and the fact that all SDKs seem to support it) is honestly the greatest bloody thing in the entire ecosystem.

My workflow for the past 3 years with Docker has been: set up some desktop machine somewhere, configure docker, configure ssh like normal: set DOCKER_HOST=ssh://<tailscale_ip> on my laptop.

Docker responds as if it's local, but I get absurd build/fetch speedup (since the wired connection is faster than Wifi) and it's not running inside a slow VM.

Recently I've been using colima on my Mac natively, but I keep reaching for the DOCKER_HOST option.
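The whole trick is one environment variable (the user and address below are placeholders for your own SSH login and Tailscale IP):

```shell
# Point the local docker CLI at a remote engine over SSH.
export DOCKER_HOST=ssh://dev@100.64.0.7

# From here, every docker / docker-compose command runs against the
# remote daemon (needs the CLI and SSH access, so commented out here):
# docker ps
```

`docker context create` is the newer, persistent way to do the same thing without exporting the variable in every shell.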


I assume port forwarding would be a pain in such a setup, right?


You just access everything on the docker machine's IP instead of localhost. Dev servers may need an additional parameter for that, but I don't see any big problems.


The above commenter claims they're using Tailscale, a zero-conf VPN, which negates any port forwarding issues.


This assumes you want a distinct storage drive within your VM.

Many developers prefer to code in their host OS but run the image via Docker for Mac. They also want instant real-time code changes to appear inside the running Docker image. I suppose you could have some of the disk live within the VM and the code portions be memory mapped or Rsynced. I haven’t thought through the downsides.


There was a project for that called docker-sync: it could use one of 3 mechanisms (one being rsync) to continually keep the files up to date in the container. The problem I found was that it would just randomly stop syncing, with no warnings or errors of any kind. It was very flaky.

Nowadays, make sure you use their new virtual machine thing in Docker for Mac and add :cached to any mounted volumes in your compose file; I found that alleviated my issues. It used to be really bad though.
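For reference, the flag goes on the bind mount in the compose file (paths are illustrative):

```yaml
services:
  app:
    volumes:
      # :cached — the host's view is authoritative; the container may see
      # changes with a slight delay, in exchange for faster reads
      - ./src:/app/src:cached
```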


This doesn’t address the root problem. If you want the FS features that the “””native””” Docker Desktop provides, you end up with the same drawbacks re performance.


I stopped using Docker desktop and just forwarded my Docker CLI to a VM’s TCP port the instant I found how it exported my Mac file system wholesale to the Docker VM. Never looked back, and these days I just use sshfs or VSCode to develop remotely (which works everywhere).


A project I made solved this by running docker on AWS and doing two-way file sync on changes. Runs quite nicely and transparently.

https://github.com/lime-green/remote-docker-aws

Lots of benefits: speed, battery, fan noise


And money


I use a shitbox of a thinkpad with my dev environment in the cloud, but they run Linux so I can just as easily run it locally if I have no connection. I buy them for $300-500 on eBay, upgrade them to at least 16gb of RAM and a 1TB Samsung SATA SSD. I have 2 cold spares ready to go, credentials loaded, just need to update and sync a few git repos. 14 inch 1080p IPS displays, i5 processor, 16GB of RAM.

The money I save not paying for Apple laptops could pay for a crazy overpowered dev VM until the end of time.

I used Apple laptops for about 10 years until about 5 keyboard replacements with the butterfly switch debacle.


A MacBook is not that expensive and much faster than AWS VMs in the same price range (assuming a few years of usage) in my experience. The network storage of cloud VMs is particularly frustrating for development, unless you use ephemeral storage perhaps.


I finally gave up on fighting the effects of Docker on performance and battery life and switched to Windows. I still don't have a long-lasting battery, but at least performance is better.


I have been working on a Citrix Workspace Windows machine for the past 6 months with an Ubuntu 20 WSL2 setup. I was very anxious about the experience before onboarding, but I have to say, I barely deal with the Windows side of things. Having said that, if I were to move away from Mac, I'd just go directly to Linux.


Gaming is my guilty pleasure. While the gaming ecosystem on Linux is getting better every day (thanks, Steam!), the limited time I spend on gaming doesn't allow me to tinker; I just want everything to run without limitations.


We gave up on Docker on macOS long ago. Wherever we absolutely need Docker, we just throw a Linux machine at it. Developer time is costlier than hardware.


If anyone has any questions about Mutagen (or integrating it into their Docker-based workflows), I'm happy to help.

Just one clarification on the article: Mutagen offers Docker Compose integration, not Composer integration (Composer is a PHP package manager). However, as mentioned, DDEV is a great option if you're looking to do containerized PHP development while using Mutagen for performance improvements.
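For reference, a Mutagen project file can describe such a sync session declaratively (paths and the container name "web" here are hypothetical): a two-way sync replaces the slow bind mount, so the container works against a native-speed copy of the code.

```yaml
# mutagen.yml (sketch) — start with "mutagen project start" next to this file.
sync:
  code:
    alpha: "./src"                 # local source on macOS
    beta: "docker://web/app/src"   # inside the running "web" container
    mode: "two-way-resolved"       # conflicts resolve in alpha's favor
```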


Have any of you tried to chase the microVM train instead of Docker on macOS? Thus far, it feels like this is a deeply lost cause with a passing hope that somehow you can hack a solution to nested virtualization and drop into a KVM-style experience on a guest VM and then go a layer deeper with microVMs on top of that guest. Oofff. What an absolute horror show.


> We had to abandon Docker because we had folks with macOS on the team. But, other tasks (email, conference calling, scanning, word, upgrading without breakage) came with more friction, and those tend to fill up ever larger shares of my day.

source: https://kvz.io/macos-install.html


I have a large, fairly complex Django application that I run in containers on a MBP with an M1 Pro, and it runs twice as fast as the AMD 16 core production server. It was a bit of a dog on an Intel Mac, but everything I use Docker for is blazing fast on Apple Silicon. Is there something about Python apps that makes this different?


I have "use virtualization" turned on in Docker Desktop for Mac, but I don't see different options for the file sharing implementation. I'm currently on version 4.14.1 (91661).

Separately, with "use virtualization" turned on, should I also enable "VirtioFS accelerated directory sharing"?


It’s under experimental features in 4.14. It’s the default where available in 4.15.

You might need to upgrade both docker desktop and macOS.


I’m interested to see how Finch works out compared to Docker on a Mac. It’s an open-source client for container development.

https://github.com/runfinch

When I tried Rancher Desktop it didn’t work so well.


I fixed this problem by not getting a MacBook Pro this year when my last one from 2018 had no battery and two letters falling off the keyboard.

Bonus: paid half what a similarly spec'd M2 Air would cost.


Cool blog theme, feel like I've seen it before ;) https://filippo-orru.com/


Funny thing is that I switched from VirtualBox for Vagrant on OS X over to Docker because VirtualBox file operations were so incredibly slow.


Why would you want to mount node_modules? It's useless on the host because the binaries are built for a different arch.
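A common Compose trick for exactly this (paths hypothetical): bind-mount the code but mask node_modules with an anonymous volume, so the container keeps its own Linux-built copy and the host copy is never shared.

```yaml
# docker-compose.yml fragment (sketch)
services:
  app:
    volumes:
      - ./:/app            # bind-mount the source tree
      - /app/node_modules  # anonymous volume shadows the host's node_modules
```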


This is one of the reasons I changed my OS to Linux, since Docker runs natively there. I believe there will always be a bottleneck for peak performance on macOS until we are able to run containers natively, not virtualized.


It’s no surprise that these replies have seen the usual jarring unfriendly personalities come out of the woodwork to advocate for the absolute use of desktop Linux.



