You can also just run the VMs using Apple's own APIs.
Their sample code includes a resizable display, shared drive, network access, etc. You can run x86 Linux on Apple Silicon (including running pre-built x86 binaries with Rosetta). On the latest release, you can save and restore the VM state to avoid boot initialization and application setup.
I keep debating whether to spend the time to implement an Apple Virtualization framework Vagrant provider. Maybe I'll get around to it after I unbury myself from the rest of my self-imposed side projects.
This would be very welcome. I had hopes that Veertu would provide this (they used Hypervisor.framework at the time) but they pivoted away from regular VM usage before it ever eventuated.
With the current state of VBox on ARM, it would be good to have a reliable free/cheap alternative to Parallels or VMware for Vagrant use.
I second multipass, it is pretty cool if your target is Ubuntu. I think it is closer to "Docker for Ubuntu VMs" than Vagrant though.
In the latest releases they are dropping support for native virtualization on Mac and moving to QEMU only. In my experience the QEMU backend has not been as stable as the hyperkit one, but YMMV as usual.
you know what i need this very minute? to run an m1 vm on an x86 mac. why? because there seems to be no god damn way to faithfully build m1 wheels on github's mac runners without such a contrivance. note, cibuildwheel claims it can pull off this trick but in fact it screws something up and pip installs x86 dependency wheels.
The problem is probably not cibuildwheel. That having been said, the maintainers are knowledgeable and responsive, so I would try to open an issue first. Speaking from personal experience, they were able to help me to solve my problem in a couple of hours.
I opened an issue but then also plowed through the source. I'm pretty sure the issue is that pip is basically intransigent and will not let you easily install packages for any arch other than the one it insists you have (which is almost certainly just the canonical platform.machine). You end up having to chain --platform and --only-binary=:all:. Okay, great, but I'm cross-compiling - this is exactly the time I do want source distributions.
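For anyone hitting the same wall, the flag chain mentioned above looks roughly like this (the package name is just an example):

```shell
# Force pip to fetch prebuilt arm64 wheels while running on x86.
# --platform only works together with --only-binary=:all: and a target dir.
pip download numpy \
    --platform macosx_11_0_arm64 \
    --only-binary=:all: \
    --dest ./arm64-wheels
```

This is exactly the trade-off complained about above: the moment you pass --platform, pip refuses source distributions entirely.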
People say it a lot but I never really felt it until I tried it - python packaging is infuriating and is beyond a shadow of a doubt proof that python is a shit professional language. Not because professional languages need to have bulletproof packaging systems (see C/C++) but because the fact that python can't make it work after so long means python is fundamentally flawed. And don't tell me "well you're implicitly talking about C extensions". Yes I am but that's a language feature and if you design a language feature that your language fundamentally precludes you from being able to support, then again that's on you.
An M1 Mac mini is about $400 used (sold by vain people). If you are worried it will come with pet hair stuck in the fan blades, pay an extra $100 to buy from a reputable reseller. If you are constrained to software, or work in a chair meant for eating or sunbathing, another option is Orka/Anka.
To be clear, you can't run an x86 Linux VM on Apple Silicon using Apple's virtualization framework. It says right in the docs,
>The kernel and RAM disk image must support the CPU architecture of your Mac.
You _can_ run x86 via Rosetta inside the VM. But that's not the same thing as running an x86 VM.
To answer your question, Lima supports two VM types - QEMU and vz. QEMU is the default and is what's used by other projects to run x86. If you select the vz VM type then it uses macOS's native APIs. I wouldn't expect a performance difference between a vz-typed VM in Lima and following Apple's own instructions directly.
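If you want to try the vz backend without hand-editing YAML, recent Lima releases expose it as flags (check `limactl start --help` for your version):

```shell
# Start an Ubuntu instance on the Virtualization.framework backend,
# with Rosetta enabled for running x86-64 binaries inside the guest.
limactl start --vm-type=vz --rosetta --mount-type=virtiofs template://ubuntu
```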
I've recently been enjoying OrbStack (https://orbstack.dev/), which I've found easier to get started with than Lima; it starts up faster and automatically mounts volumes so you can access things from Finder.
I thought I'd elaborate on a few specific ways OrbStack improves on apps and issues mentioned elsewhere in this thread.
- Network hangs and connection issues: I wrote a new virtual network stack in userspace and made sure to address issues plaguing other virtual networking solutions (VPN compat, DNS failures, etc.).
- Inexplicable errors: Can't say it's perfect, but I do take every issue seriously. For example, sending OOM kill notifications on macOS instead of silently killing processes.
- Running x86 containers: Built-in fixes and workarounds for many Rosetta bugs. Since the bugs are on Apple's side, they affect all other apps (as of writing) and often cause issues when running linux/amd64 containers. If you're used to slow QEMU emulation, then well, it should be a major improvement.
- UTM: OrbStack doesn't do graphics yet, so you'll have to stick to UTM for GUI, but the CLI and other integration is designed to be on WSL2 level. It also boots much faster: baseline in ~1 sec, each machine in ~250 ms, total ~1.2 sec.
- Bind mounts and file sharing: It uses VirtioFS which isn't affected by sshfs consistency issues, plus caching and optimizations to give it an edge.
Thank you for your work on OrbStack. Just tried it after reading about it in this thread and it looks really great so far, both as a Docker replacement and as an absolutely delightful way to launch full Linux VMs.
Noticed you are using a very recent kernel, Linux ubuntu 6.3.12-orbstack, which is great for testing the latest revisions of Linux system calls (e.g. io_uring) locally, compared to Docker's old 5.x kernels, which I gave up trying to figure out how to upgrade.
Any way to select a specific kernel version for VM or container? That would be a killer feature for regression testing.
There's currently no support for changing the kernel version. I think it may not be feasible to support many versions because a lot of OrbStack's improvements are very closely tied to the kernel, and maintaining the patches for multiple versions wouldn't be worth the work. Outside of regression testing, it's rare that anything breaks and in the event that such changes occur upstream, I try to hold off on updating until they're fixed.
Are you mainly interested in 5.15/6.1 LTS or other specific versions? Having an alternate LTS kernel (for a total of two options) might be a possibility in the (long term) future.
Having the option of an "older" LTS kernel (say 5.15 or 5.10) would be useful so as to match the kernel used in a lot of commonly used cloud images (including Ubuntu 22.04 LTS and Amazon Linux 2).
i love orbstack, like actually love it, to the point where now i have to complain about its issues a lot, because i can't use anything else
still has a lot of things to improve on in general, which are annoying when you encounter them suddenly. most frustrating for me is the "unlimited permission" setup. sometimes that's useful as hell. other times i would indeed like to run a service using the convenience of orbstack, but, you know, a little more sandboxed... I loved /Users/yyy/Documents being tied to my iCloud Drive. had to disable that once i started using orbstack for personal reasons. orbctl / orb really need much more explanation of a lot of the options, especially whatever the config options do.
or let me bind to other network interfaces not just the one :(
but i love orbstack. hard to go back after finding it. impossible actually. haven't been at the computer in like two weeks so maybe it's changed by now
Totally fair! There are definitely still limitations.
Support for "isolated" machines that don't have bind mounts is planned (https://github.com/orbstack/orbstack/issues/169). This is actually mostly implemented internally, but I'm not exposing it until a few remaining security gaps are plugged; otherwise it would just give a false sense of security.
If you meant binding servers to specific interface IPs, it might be possible one day but it's very challenging to implement as all of the host's IPs need to be assigned to an interface in the guest and managed accordingly. If you meant connecting machines directly to your LAN, it'll be supported eventually but it's low priority due to unavoidable compatibility issues. https://github.com/orbstack/orbstack/issues/342
Orbstack is nice to use but it’s not open source and who knows what they’re going to charge for it, once VC gets its dirty hands in there you know it’ll become expensive.
I honestly think this is a feature and not a bug. The FAQ shows an attention to detail for the trade-offs of various pricing models[0]. It's clear that Danny cares about monetizing the project in a thoughtful way.
I moved away from Docker Desktop to colima for a couple years and would not pay for Docker Desktop, but after a few weeks of swapping back to OrbStack now that it's public beta, I can definitely see myself paying. OrbStack just works and gets out of the way.
(dev here) Yes, this is still the proposal that I'm planning to move forward with. It's always possible that it doesn't work out and will need to be changed, but I think it's a pretty solid start.
Thanks for mentioning this, even though it’s not open source. I wanted to check the business model and pricing, and found the FAQ on pricing refreshing to see. [1] Unfortunately, a $5 per user per month kind of fee may be out of reach for personal use. So I hope the free plan for personal use stays (or they come up with an even cheaper plan).
The free plan for personal use should be here to stay. What's not set in stone yet is whether there will be a Pro plan for more advanced features, and if so, what said advanced features would be (likely cloud services / Kubernetes). But I'd expect the core Docker and Linux machines functionality to stay free.
I had a pretty bad experience with running a desktop in UTM. The UTM app itself freezes a lot and has to be force quit, and I think there are issues with the GPU acceleration.
Parallels was a night and day difference in both stability and responsiveness of my desktop. And copy/paste Just Worked as well. Definitely worth the $100/year subscription in my opinion.
colima pretty much solves the dev experience for docker and k8s on Mac, especially on Apple silicon (M1/M2), where you can build multi-arch containers with ease.
Some interesting caveats:
* By default, system packages don't persist, as the default alpine distribution runs on tmpfs and doesn't have an overlay. This is a reasonable default, as it keeps the default VM storage small.
* If you want to have additional system packages, you can turn on a ubuntu overlay that supports additional systemd services just fine. Of course, storage would balloon to a few GBs from a few hundred MBs.
Edit: typos.
BTW, the result of docker build is immediately available to the k8s (k3s) cluster without any insecure registry and/or side loading/caching steps, thanks to the seamless buildkit integration.
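A minimal sketch of that workflow (flag names per recent colima releases; check `colima start --help`):

```shell
colima start --kubernetes           # boots the Linux VM with k3s enabled
docker build -t myapp:dev .         # buildkit output lands in containerd,
                                    # which k3s shares - no registry needed
kubectl run myapp --image=myapp:dev --image-pull-policy=Never
```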
One of our tools runs in Docker just to ensure that it gets the right version of its dependencies, and that bug is a pretty huge bug for us, for that tool, as it basically broke things.
Still, we use colima; it is a decent workaround for the "Docker on macOS" problem otherwise.
In my mental map, yes, but in practice, they act a little differently than my intuition. Even on past non-colima docker usage, I came across surprises w/ `-v` vs `--mount` and so generally try both if I'm having problems.
Borrowing this thread to add additional context, Rancher Desktop on macOS also uses Lima to make VMs for running k8s (I think it's actually k3s?) on your workstation. I've been meaning to try out Colima, since, while nerdctl is pretty functional and things work, sometimes dealing with the nuances when I don't really need a real Kubernetes environment for most of my dev tasks is more overhead than I'd like. That said, if you do need a proper k8s environment on macOS, Rancher Desktop does work quite well, and makes a lot of sense especially if your shared k8s environments are managed by Rancher.
> Finch provides a simple client which is integrated with nerdctl. For the core build/run/push/pull commands, Finch depends upon nerdctl to handle the heavy lifting. It works with containerd for container management, and with BuildKit to handle Open Container Initiative (OCI) image builds. These components are all pulled together and run within a virtual machine managed by Lima.
Can't agree more. I've been using Docker for Mac and Colima alternately for the past few weeks on the same machine and the same projects. The number of times I needed to curse at Colima was zero, while Docker for Mac sadly is still a poor experience. Every now and then things just "don't work" and you need to reset or even reboot.
Colima is great, compared to Podman it's a lot more of a drop-in replacement for my use case. I've always had issues with Podman volumes but with Colima it was as simple as uninstalling Docker Desktop and running "colima start".
There's one issue I'm running into where it becomes unresponsive after a while and "docker ps" hangs forever though.
lima (linux on macos) is a VM management tool CLI frontend which can use QEMU or Virtualization.framework as a backend; colima (containers on linux on macos) leverages lima to set up a linux vm to handle linux containers straight from macos (including host-vm shares, port forwarding to the vm, etc...)
If you want to draw some very coarse comparisons with big names, lima is like VMware Fusion, colima is like the Docker for Mac app.
colima kind of fills one of the use cases of docker-machine which kind of died as this use case was handled by DfM and the other use case (handling machines for swarm) was folded into docker swarm and docker compose.
No, Lima just sets up a VM for you. Colima is a wrapper around Lima that can configure a Docker daemon and context for you. You still need the Docker CLI to use Docker.
Generally when using docker on a Mac you are actually running linux containers, so you need a linux virtual machine.
Colima is a low-configuration command line tool to spin up a linux VM (using Lima) which includes docker support, so you can run docker commands in the Mac terminal but the containers actually run in the linux VM.
You still have to install the actual docker CLI tool separately via Homebrew etc. Colima just provides everything else.
This is also what happens generally when you install and run Docker Desktop on Mac or Windows, I just like Colima because it’s a much lighter installation and doesn’t come with the commercial paid license requirement of Docker Desktop.
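The whole setup described above fits in a few commands (Homebrew formula names as of writing):

```shell
brew install colima docker    # "docker" here is just the CLI, no Desktop
colima start                  # boots the Linux VM, wires up a docker context
docker context ls             # "colima" should be the active context
docker run --rm hello-world   # runs inside the VM
```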
> on Mac you need to run containers inside a Linux VM anyway, so I’d rather use a VM directly and not introduce another unnecessary layer
I was long confused at how Docker functioned on macOS, until I learned that it's "just" running a Linux VM within which it runs the container images. There is no other magic happening to run a (linux-assuming) container on macOS.
Same question. I have been using multipass on my Mac (M1), and so far so good. The current limitation of multipass is that it only runs Ubuntu VMs. Also, setting up fixed IPs for multiple VMs is a bit tricky (if possible at all, I don't remember right now).
I have a bash script that uses multipass to setup a few VMs... but still it feels "primitive" compared to what I was using when I had an intel Mac (I was using Vagrant, but the Vagrant experience on M1 is awful: I have tried it with VMWare and it's not very stable in my experience).
Have you seen https://multipass.run/docs/configure-static-ips for configuring static IPs? It's kind of Linux-centric as far as setting up the bridge on the host is concerned, but as long as you create a bridge as you see fit on any host, then the rest of the instructions should work.
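Those instructions boil down to attaching the instance to an extra network in manual mode and assigning the address yourself; roughly (the bridge name `en0` is just an example - use whatever bridge you created on the host):

```shell
# Attach a second NIC in manual mode, then configure the IP inside the guest.
multipass launch --name devbox --network name=en0,mode=manual
multipass exec devbox -- ip -br addr   # find the new interface, then give it a static IP
```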
Would you mind expanding some more on your use case and what is missing in Multipass? If it's static IPs, then as mentioned in https://news.ycombinator.com/item?id=36693413, there are ways to accomplish this.
With my limited use cases, I've found multipass to be really comfortable. Was really easy to get into and make work. I'm not passionate about Linux distros, so Ubuntu is fine for me.
Multipass is fantastic, very easy to use, great for local k8s playgrounds and cases where docker doesn't fit (ie. tests that change system clock etc) or simply to have linux box at hand.
I just stumbled across multipass 2 days ago and it's been great for our local dev environment with a script to manipulate a bunch of things with multipass exec.
I just wish multiarch containers weren't such a pain to deal with.
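For anyone curious what such a script looks like, the core of it is just launch + exec (the VM name and resource sizes are arbitrary):

```shell
multipass launch --name dev --cpus 2 --memory 4G --disk 20G
multipass exec dev -- sudo apt-get update
multipass exec dev -- sudo apt-get install -y build-essential
multipass mount "$PWD" dev:/workspace   # share the project dir with the VM
```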
Multipass for me suffered from a bunch of macOS networking bugs when on managed Macs. Kernel panics and VMs that you couldn't connect to, etc. UTM also suffered from these. Apparently some have been fixed by now though.
Yes, the dreaded Apple firewall bug :( The cause is that somehow the firewall would block macOS's own bootpd process, and that is the process responsible for handing out IP addresses to the virtual machine instances. We could never find out why macOS chose to block bootpd though.
Anecdotally, the number of users mentioning this issue has been much lower lately, but I'm not sure if that is because the new Multipass release with a newer QEMU made it better, a macOS update fixed it, or users have just given up.
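If you suspect you're hitting that bug, the application firewall can be inspected and bootpd explicitly unblocked from the CLI (socketfilterfw has shipped at this path for years, but treat that as an assumption for your macOS version):

```shell
# Check whether bootpd is on the firewall's app list, then unblock it.
/usr/libexec/ApplicationFirewall/socketfilterfw --listapps | grep -i bootpd
sudo /usr/libexec/ApplicationFirewall/socketfilterfw --unblockapp /usr/libexec/bootpd
```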
I gave up on using VMs on my work Mac (Linux user at home). When I get round to trying the Ventura upgrade, I'll give it another go.
The annoying thing was that multipass and UTM would initially work before the firewall killed it a few days later. Makes me think it was learning from heuristics or something rather than being a defined policy.
Just discovered that the Multipass 1.12.0 update appears to migrate from HyperKit to QEMU; could this be a significant change in whether VMs can be backed up?
I am considering going pure iPad + ssh to an AWS instance as a dev environment. i already just use iPad + ssh to a local machine. would have as much automation as possible to spin things up/down when developing. I don't know how I'd set this up to be nice and seamless just yet.
Pay for 3 years of AWS savings plans to make it cheap. Perhaps run a permanent instance for small things, and make a setup that lets me spin up super powerful machines when i actually want to build.
Most of the time when I'm at my iPad remoting in, I'm not actually compiling, just editing. I can just edit locally and pay for the few seconds/minutes it takes to build.
because honestly after about 3 years i always want to upgrade
and i can get windows that are completely functional for me on remote monitors with the iPad now. just starting to wonder why i'm paying for physical compute power when 99% of the time i'm browsing the web and editing code but not building it.
It's interesting to compare how soon the VPS's total cost will reach the cost of an MBP. Even a relatively beefy VPS at $30/mo is only $360/year, like a cheap used Linux laptop.
On the other hand, it has much much less CPU than an MBP, and usually no GPU at all.
On the third hand, the cloud allows one to quickly start more VMs, and to shut them down as quickly, something that's possible but a bit more limited on a laptop / desktop.
Why pay for a whole month though? Depending on your work, spin it up and down as needed. If you're just building a binary you only need it up for minutes or seconds. Been on my mind lately. Lot more sandboxing too. And if you want to build fast you can just scale up your instance since its so short lived. Want to switch from x86 to ARM? no problem. Spin up 40 of your current app? no problem.
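A rough sketch of that spin-up/build/spin-down loop with the AWS CLI (the instance ID and host alias are placeholders):

```shell
ID=i-0123456789abcdef0                            # hypothetical instance ID
aws ec2 start-instances --instance-ids "$ID"
aws ec2 wait instance-running --instance-ids "$ID"
ssh build@my-build-box 'cd app && make release'   # hypothetical host alias
aws ec2 stop-instances --instance-ids "$ID"       # stop paying for compute
```

Stopped instances only accrue EBS storage charges, which is what makes the minutes-long billing window attractive.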
That's fair. But above I read about a dev box with Emacs running on it, etc. You likely don't want it to be too short-lived. You can switch it off during night, of course!
I especially like having a few VPSs on GCP, each with specific software installed. For example, I have one with a Clojure and Python bridge set up. I rarely spin these up, but the disk charges are cheap so I can just leave them in a halted state until I need them.
it's all sshfs.. afaik they're looking into changing that. i don't have the i/o issues but always back up any files in the vm as they tend to get lost at some point.
When I want to make sure my software works on macOS, it'd be nice if I could do that without having to have a whole other computer sitting in front of me.
I'll take a look at lima, but I've had nothing but problems using colima as a docker alternative on my macbook air m1. Could be user incompetence, but always got issues of images failing to pull and containers erroring out in mysterious ways.
Oh this is very nice. I spent about 2 hours over the weekend getting around some bullshit bug with Vagrant and VirtualBox on ARM macOS. This took 5 minutes to set up.
Anyone have good (preferably open source, but not required) tools for running Mac VMs on a Mac? Would love a way to programmatically control Mac VMs (create new from image, start, stop, etc.) as part of our Mac build server setup. GitHub Actions Mac CI minutes are so expensive that we run our own setup, and VM-level isolation seems to be the best way to keep the build processes from stepping on each other.
You can use qemu/libvirt/kvm on any Linux host to run macOS pretty easily these days[1]. I run Ventura on Unraid with Nvidia GPU passthrough (with a Ryzen CPU, even!) and it's been fairly painless.
You can also run macOS in docker, but it’s ultimately running through qemu/kvm as well[2]
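Assuming the project referenced is Docker-OSX (a common choice for this - an assumption, since the link isn't shown), usage looks roughly like:

```shell
# Needs a Linux host with KVM; macOS boots inside QEMU inside the container.
docker run -it \
    --device /dev/kvm \
    -p 50922:10022 \
    sickcodes/docker-osx:ventura
```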
I see that lima has an option to choose between qemu and vz. What are the pros/cons to each? Is vz performance better?
Update:
I edited the YAML file for the Lima VM and changed from qemu to vz, also made sure the mount was using virtiofs.
Observations - on the surface, no performance difference but I haven't really done much yet. I noticed that there is no longer a qemu process running (obviously), and I see that /System/Library/Frameworks/Virtualization.framework/Versions/A/XPCServices/com.apple.Virtualization.VirtualMachine.xpc/Contents/MacOS/com.apple.Virtualization.VirtualMachine is now running.
Strange - com.apple.Virtualization.VirtualMachine goes into 400% CPU and the Ubuntu VM freezes. I've now reproduced it twice. Not sure why this happens.
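For anyone wanting to repeat the experiment, the switch can be done through Lima itself rather than touching the YAML file by hand:

```shell
limactl stop default
limactl edit default    # set vmType: vz and mountType: virtiofs, then save
limactl start default
```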
I recommend having a look at Macpine [1] which allows you to run lightweight alpine VMs on macOS with easy port forwarding and file sharing; you can also easily run docker inside of it and use docker context to target it.
How do you pass files into/out of the VM? I know Virtualbox has the Guest Additions software which is very handy. I also know of things like the Spice Project[0]. Does Lima have its own solution?
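For Lima specifically: its solution is host-directory mounts declared in the instance config (the home directory is mounted read-only by default), e.g.:

```shell
limactl edit default
# In the instance YAML, something like:
#   mounts:
#   - location: "~/project"
#     writable: true
```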
It should check the current directory and go up each level, similar to .asdf or .rbenv, looking for a .virtconfig dir
Depending on the config, I want:
1: it running a foreground instance, so I don’t need ssh, and I know that it will shut down when I end the shell
2: I want to configure my shares/mounts, which by default don’t go up from the .virtconfig dir
3: I have to think about read only instances and multiple instances of the vm
The idea is that when you later cd into a project directory, .direnv (I think) can automatically drop you into a Linux shell, or another OS, which is also sandboxed.
I’d also want a single command that spawns a Linux instance with the current directory mounted (ro or rw) into Linux. This way you get some sandboxing when trying someone else’s code
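A minimal sketch of that single command, assuming Lima as the backend (the `devsh` name and template are made up; `limactl shell --workdir` exists in recent releases):

```shell
# Spawn (or reuse) a sandbox VM and drop into a shell in the current directory,
# which Lima mounts from the host by default.
devsh() {
  limactl start --name=sandbox template://default || return 1
  limactl shell --workdir "$PWD" sandbox
}
```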
You can use VMware Workstation Player and an unlocker like Auto-Unlocker (https://github.com/paolo-projects/auto-unlocker) to enable macOS as a guest. It works, but it's very slow because the Mac guest runs without GPU acceleration.
Would be cool if we see something come out that uses the Apple native Virtualization.framework so you can use the nested Rosetta extensions on M1. Dunno if that's been done yet.
Apple has kicked errbody out of kernel space over security concerns, so even companies with 25 years of experience writing network virtualization kernel extensions must now instead use Apple's vastly inferior macOS facility that can reduce performance by as much as 80%.
..never mind if it cost the equivalent of several M2 Ultras discovering what I wrote. -1, doh.
OTOH, I don’t believe much of what I read on here because of such direct negative experiences like this (being downvoted on stuff I have first hand knowledge of) —and that truly is priceless.
For the past twenty years I've bought thinkpads which during productive time have generally run (a) Firefox (b) some sort of system that lets me get full screen X11+xterm+ssh to do the actual work on remote development servers.
I've gone through VMs, colinux, cygwin, and currently WSL2+VcXsrv for (b) and so far it's basically worked out fine.
(I'm aware that desktop linux wrangling is far, far easier than it used to be but this setup very rarely annoys me and if I'm going to do some recreational geekery on a "because I can" basis it's far more likely to be something like playing with a new programming language than reinstalling my laptop)
You can install UTM via AltStore but you’ll be limited to slow battery-hungry emulation. You might be able to use the hypervisor for hardware virtualization support via an exploit like TrollStore running on an older iPadOS. The OS will kill UTM soon after backgrounding to free up memory and preserve battery life so you’ll need to keep it onscreen using Split View or Stage Manager. Overall I found it to be too much of a hassle compared to remote dev using Blink shell or vscode.dev
Relevant Apple docs and sample code:
Run headless linux https://developer.apple.com/documentation/virtualization/run...
Run GUI linux https://developer.apple.com/documentation/virtualization/run...
Run intel binaries in linux vms on appleSilicon with rosetta https://developer.apple.com/documentation/virtualization/run...
Configuring Virtio shared directory https://developer.apple.com/documentation/virtualization/vzv...
And you can run macOS VMs on Apple silicon:
https://developer.apple.com/documentation/virtualization/ins...
https://developer.apple.com/videos/play/wwdc2022/10002/