I use Virtualization.framework for OrbStack (https://orbstack.dev), which is a Docker Desktop and WSL alternative.
It's great overall, and fairly convenient, but it has its fair share of bugs and limitations. The Virtualization service crashes with various combinations of Linux kernel version, macOS version, and architecture (ARM/Intel), so making it stable in each setup takes quite a bit of work. Device support is limited (no USB, etc.), and workarounds have other limitations. Rosetta is really buggy.
The popular belief is that Virtualization.framework is inherently faster/lighter than other VMMs like QEMU, but I haven't found evidence to back that up. I spent a day prototyping a custom VMM (with an open-source base) and was able to get dynamic memory allocation working [1], as well as faster file sharing in some cases. There are a lot of other things I could do better (faster, simpler, more reliable) with a custom, tightly-integrated replacement.
But unfortunately, that's not an option. Apple doesn't allow third-party VMMs to set the necessary CPU flags for Rosetta. There are too many users relying on Rosetta for fast x86 emulation for me to ditch it, despite all its bugs.
So yes, I still use Virtualization.framework, but only because I have no choice.
Happy to answer other questions about the framework!

[1] https://twitter.com/kdrag0n/status/1645645284721184768
I've been looking into some of the options for containerized development environments, and what I observed was that Lima, even when using virtiofs, slowed to a crawl when building a project from a shared directory. So I installed OrbStack and gave it a try, and I was very pleasantly surprised to see that it was super fast, just as fast as if I were running on a local copy.
Needless to say, OrbStack earned its place in my setup, so thanks a lot!
Edit: sorry, I wasn't clear, what I'm using in OrbStack isn't a container instance but a full VM. I don't expect to find a huge difference with containers, but I figured I'd make it clear so I don't cause any confusion.
I would definitely buy OrbStack if it becomes non-free.
The only thing that I'm afraid of is this:
- I work at an enterprise company, and they don't want to pay for Docker. As a software engineer, I want to be able to buy an OrbStack Pro license with my own money and use it both at the company and in my personal environment. Is that possible? Or would your future licensing plan prevent me from using it in a professional environment and only allow personal use?
Your answer is still vague.
So suppose we have these 3 hypothetical plans:
- Professional: XXX $
- Business: YYY $
- Enterprise: ZZZ $
where XXX << YYY << ZZZ
I'll repeat my scenario:
- I'm a Backend Developer
- I don't have any shares in the company.
- I don't have any management position in the company.
- I'm not forcing anyone in the company to use OrbStack to advance my work at the company.
- I'm solely using it on my own device provided by the company. I don't have it installed on my colleagues' devices.
- This company device is only in my own control and is not shared with anybody else in the company.
- This company device is not used in any automated CI/CD pipeline in the company.
- I want to use it as a Docker GUI for the Docker images hosted in the company's Google Cloud account.
- I also occasionally want to use it for my personal toy projects on the same company laptop, in my spare time, under the same OrbStack license.
Based on these, are you implying that I'm not allowed to use the `professional` license and need to use the `business` license?
Or are you saying that it's okay for me to use my personal `professional` license in this scenario?
I've been using OrbStack for a few weeks now and, while initially skeptical that I would ever consider paying for it, now I think I would. I've found it to be less troublesome than Docker Desktop, it has some prettier terminal output, and it starts up and quits so fast. These might seem like small quality-of-life improvements, but added together it's just a lot nicer to use. Looking forward to seeing where it goes next.
I can't speak for the feasibility of running a macOS guest on Asahi, but yes, Linux could be in control of the guest's CPU flags. Same goes for the m1n1 hypervisor.
I can't find the exact post but I remember marcan saying that he doesn't want to poke the bear of using Apple's custom ISA extensions on Linux. So even though it's possible, I'm not sure whether it'll actually be done.
Been using vftool to host docker instead of Docker.app. It's been alright so far, got rosetta set up inside to run x86 images. All it takes is a little startup script and some up front work to prepare a disk image with Ubuntu server.
Every once in a while, the guest Ubuntu kernel will oops about SMP-related things. Every time it boots, its clock is reset to almost a month in the past and I have to force sntp to correct it. After sleeping the host, the guest clock will not advance, so it'll be behind by several hours.
I'm not sure where to assign these issues: Ubuntu kernel? virtualization framework? vftool? bad configuration?
Right, it uses Virtualization Framework. I forked vftool [0] and added direct Rosetta support, which was trivial to do according to the Apple documentation; I just had to convert Swift example code to Obj-C.
Have had a PR open for several months with no attention from the author. I almost forgot about it until seeing this HN headline.
The Rosetta support is surprisingly trivial to use. The virtualization framework exposes a mountable volume containing a single `rosetta` executable. In the Linux guest, you just register binfmt_misc support that recognizes an x86 ELF image and point it at the `rosetta` binary. It works for docker too so you can run arm64 and amd64 images.
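For reference, a rough sketch of what the host side looks like in Swift, assuming `config` is a VZVirtualMachineConfiguration you're already building (the "rosetta" tag and the guest mount point are conventions, not requirements):

    import Virtualization

    // Host side: expose the Rosetta share to the Linux guest (macOS 13+).
    let rosettaShare = try VZLinuxRosettaDirectoryShare()
    let shareDevice = VZVirtioFileSystemDeviceConfiguration(tag: "rosetta")
    shareDevice.share = rosettaShare
    config.directorySharingDevices.append(shareDevice)

    // Guest side, roughly as described above:
    //   mount -t virtiofs rosetta /media/rosetta
    //   then register a binfmt_misc rule whose magic matches x86-64 ELF headers
    //   and whose interpreter is /media/rosetta/rosetta.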
I've also been using the Apple Virtualization option with Linux VMs in UTM and have had zero issues whatsoever. The biggest benefit to doing so, in my opinion, isn't necessarily the noticeably better performance, but the RAM usage. I've noticed that Linux VMs in UTM that run using the Apple Virtualization Framework use significantly less RAM than they otherwise would, because the framework provides dynamic memory allocation. This makes it very well suited to running Linux VMs on systems that have less memory (e.g., an M1 MacBook with only 8GB of RAM).
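One mechanism the framework offers for this is a virtio memory balloon device; a minimal sketch, assuming `config` is the VZVirtualMachineConfiguration and `vm` the running VZVirtualMachine (the host still has to drive the target size itself):

    // Add a traditional virtio balloon device to the configuration.
    config.memoryBalloonDevices = [VZVirtioTraditionalMemoryBalloonDeviceConfiguration()]

    // Later, while the VM is running, shrink the memory available to the guest
    // to roughly 2 GiB; the guest's balloon driver returns the pages to the host.
    if let balloon = vm.memoryBalloonDevices.first as? VZVirtioTraditionalMemoryBalloonDevice {
        balloon.targetVirtualMachineMemorySize = 2 * 1024 * 1024 * 1024
    }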
The only thing I'd like to see added to the framework is the ability for nested virtualization (which is now available in the M2 chips, but isn't built into M1 chips).
> The only thing I'd like to see added to the framework is the ability for nested virtualization (which is now available in the M2 chips, but isn't built into M1 chips).
Is there an example of this working in any context? Until then I don't see how it can be considered available.
EDIT - Confirmed below that it does work in Linux on bare metal - props to the Asahi team!
And FYI, this is also available in Docker Desktop for macOS, which allows better performance when running x86_64 containers compared to the older solution where Docker was using qemu.
It is used for individual binaries. The Linux kernel is arm, but the binaries running on it can be x86 running on Rosetta. This works particularly well with containers, which come with their own libc.
Rosetta is a tool for running x86-64 binaries on an arm64 OS.
I think you're asking if Rosetta lets you run an x86 kernel, to which the answer is no - the whole point of this framework is to support virtualization, i.e. the OS is running directly on the hardware. The moment the OS can't do that, there is no point in doing anything other than emulation.
Yes, I've used it with Terraform plugins by setting the arch to x86 and running via docker. Needed to do this as some plugins don't have arm64 versions.
On the run, so apologies for not reading the docs you attached yet, but I've been having a helluva time trying to cross-compile Rust from an M1 for x86_64. Will this method help in this situation? I've tried running an x86_64 VM in emulation to compile Rust bits but it's excruciatingly slow.
Under the hood, UTM can choose to use Virtualization.framework or QEMU (IIRC there should be a setting for that). It seems that Apple is actively working on this framework.
At first I thought this was being done to pump us up for new Mac Pro hardware that might be on the horizon, but then I wasn't sure, but now I'm also still not sure it isn't .. ;)
But, regardless, using a Mac to host a plethora of different platforms (in my case, for build-server duties) is indeed a fascinating subject. If I can get my targets/builds set up on the next Mac Pro, whatever that is going to be, I'll be quite happy to do so .. especially if it isn't just doing macOS/iOS builds, but whatever else my framework of choice (JUCE/Tracktion) supports (Android, Linux .. Windows .. etc.)
I think this is being done because the Mach microkernel was designed to run multiple OS's at the same time. I think Apple finally has enough developers to start pushing into new areas, such as a framework that takes advantage of Mach.
We are obviously talking past each other. On macOS, Mach provides all the resources. Mach was designed to abstract operating systems from the underlying hardware. It could also run more than one operating system at a time, since that was a normal expectation.
"Being done", I'm not sure anything is happening here other than a happy user pointing out that Apple have some advances in this area, but .. your point is that this has been there all along.
Alas, if only there were hardware at scale which supported Apple's Mach implementation, at favourable economies.
One way this is favourable to this dev is if Apple finally gives me a machine which I can use to Build All the Things™, including iOS and macOS and Windows and Linux and Android, oh my!, virtually and/or with x86/ARM cores onboard, oh yeah!, without having to resort to the typically draconian bollocks on other platforms, or rent someone else's hardware, or whatever.
A new Mac Pro that can be loaded to the gills and host several competing OS's at once, for the purpose of software development? Sign me up, because I'm tired of build-box'ing rigs and fighting that darn license ..
When Mach was designed, it was envisioned for a centralized campus computer that could run all the needed operating systems on top of centralized memory and processing, using IPC.
Mach was designed to run multiple OS interfaces in the same way that Windows NT was - multiple compatibility layers on top of a common kernel (e.g. WSL1). What is being discussed here is virtualization, for which Mach is no more suited than Linux, Windows, Sys V Unix etc.
Mach was designed to make a clean separation between operating systems and hardware. Notice how easily Apple moves between hardware? The unsung hero is Mach and the design patterns that the NeXT/Apple frameworks built on top of it.
That is largely due to a mix of Apple's investment in emulation technologies, and Apple's ability to exert influence on third party developers and vendors to keep up with their latest tooling. Very little to do with the Mach kernel.
The ability to move between hardware platforms also has very little to do with the ability to run operating systems in virtualization. Case in point: Apple's is one of the last major OSes to include a hypervisor in the OS.
I'm certainly not an xnu specialist but from what I understand, you've basically got a BSD kernel (a "macrokernel") running in privileged mode that (roughly) just has a few services replaced by or connected to Mach ones. So I'm not really sure they get much portability benefit specifically from Mach versus "more traditional" kernels.
On the UTM page it says "Under the hood of UTM is QEMU, a decades old..." so are we talking about the same thing as the Apple Virtualization Framework? I am confused about the QEMU mention.
There are two frameworks: Hypervisor.framework, which is a low-level abstraction of the processor's virtualization functionality, and Virtualization.framework, which is basically a QEMU-like implementation.
I've got some Ubuntu VMs set up already, and as per [0] I cannot find an option called "Use Apple Virtualization" either in UTM or in the VM settings.
Hypervisor.framework provides only the bare necessities: the ability to manage vCPUs, stage 2 page tables, and handle hypervisor exits. It does not do any sort of virtualization except for the CPU itself (which, on ARMv8, is very little due to hardware providing good primitives already). Virtualization.framework is full operating system virtualization suitable for running modern Linux and macOS guests.
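As a rough sketch of how bare that starting point is on Apple Silicon (as I understand the C API bridged into Swift; the binary also needs the com.apple.security.hypervisor entitlement):

    import Hypervisor

    var vcpu: hv_vcpu_t = 0
    var exitInfo: UnsafeMutablePointer<hv_vcpu_exit_t>? = nil

    // Create the VM (a stage 2 address space) and one virtual CPU; 0 is HV_SUCCESS.
    precondition(hv_vm_create(nil) == 0)
    precondition(hv_vcpu_create(&vcpu, &exitInfo, nil) == 0)

    // Everything else is on you: hv_vm_map() guest RAM, set registers with
    // hv_vcpu_set_reg(), then loop on hv_vcpu_run(vcpu) and decode
    // exitInfo!.pointee.reason (MMIO, system registers, interrupts, ...) yourself.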
AFAIK Virtualization.framework is built on top of Hypervisor.framework. The former is the high-level one, allowing you to create a Linux VM with a few hundred lines of code. The latter lets you run whatever you want with a few tens of thousands of lines of code, but gives you absolute freedom when it comes to drivers and every low-level detail.
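As a sense of scale, a stripped-down Linux guest on the high-level side really is only about this much Swift (paths are placeholders, error handling is omitted, and the app needs the com.apple.security.virtualization entitlement):

    import Virtualization

    let config = VZVirtualMachineConfiguration()
    config.cpuCount = 4
    config.memorySize = 4 * 1024 * 1024 * 1024  // 4 GiB

    // Direct kernel boot; VZEFIBootLoader is an alternative on macOS 13+.
    let boot = VZLinuxBootLoader(kernelURL: URL(fileURLWithPath: "/path/to/vmlinux"))
    boot.initialRamdiskURL = URL(fileURLWithPath: "/path/to/initrd")
    boot.commandLine = "console=hvc0 root=/dev/vda"
    config.bootLoader = boot

    // Root disk (raw image), NAT networking, and a virtio console.
    let disk = try VZDiskImageStorageDeviceAttachment(
        url: URL(fileURLWithPath: "/path/to/root.img"), readOnly: false)
    config.storageDevices = [VZVirtioBlockDeviceConfiguration(attachment: disk)]
    let nic = VZVirtioNetworkDeviceConfiguration()
    nic.attachment = VZNATNetworkDeviceAttachment()
    config.networkDevices = [nic]
    config.serialPorts = [VZVirtioConsoleDeviceSerialPortConfiguration()]

    try config.validate()
    let vm = VZVirtualMachine(configuration: config)  // uses the main queue by default
    vm.start { result in print("start:", result) }
    RunLoop.main.run()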
It's kind of wild that Apple provides code samples you can use to directly create your own VMs. It has ended up being the most straightforward way to run virtualized macOS on a macOS host that I've found, and comes with the nice benefit that you can read the code yourself and fiddle with the knobs.
I can't upvote this enough: just take the sample code and build on it.
I tired of evaluating and learning various wrappers and helpful virtualization applications. Being able to configure directly in a normal language is great, and it eliminates software supply chain concerns.
Plus, I'm finding programs are running faster in my virtualized ubuntu than on macOS "bare metal" (still on Intel, where big memory is cheap)
Please God let them use this to virtualize macOS inside iPadOS. Not as a daily driver but for the occasional use of a full desktop OS (when plugged into a keyboard/mouse). Then I could stop traveling with two devices once and for all.
If we can't have native macOS on the iPad, this would be the next best thing. Imagine having Linux or macOS on the iPad, treated like any other app (sans memory limits, etc.). You could have a full-blown desktop at the ready when iPadOS isn't up to the task.
The iPad could finally utilize the M1 processor to its fullest.
FWIW this is how it works on ChromeOS today. From the Linux terminal app (a Crostini LXC container), QEMU launches macOS, Windows, and Linux VMs. They even support nested virtualization on my Framework Chromebook - so VMs can run inside those VMs.
I agree that if Apple implemented virtualization on iPad/iPhone this same way, it would be huge for unlocking their full usefulness and capabilities while still maintaining host isolation/sandboxing.
I'd be quite happy to see how a fat macOS machine could be used as a build server for multiple generations of macOS versions, as well as how it could be used (via Linux VMs) to also build for Android, Linux, Windows, etc.
Essentially, this subject is all about the build-server destination; there is a reward for how many different targets you can get from the same code-base, and being able to load up a fat Mac that approaches that nexus is certainly of interest to us developers.
VMs are key tools in modern build environments. Having a fat Mac running multiple VMs to solve the target/build infrastructure problem is fascinating on all fronts.
Fair warning: you might be better served by QEMU, since debug support in Virtualization.framework is not great. QEMU has support for debug stubs out of the box. QEMU does support Hypervisor.framework, so your performance should still be good through it, assuming you're using the same architecture as your host (i.e. don't build an x64 kernel for your ARM Mac).
I tried a bunch of different options and landed on multipass.
I'm curious whether someone has found a good option for running x86 images on M1 chips.
I'm generally happy with the Multipass VMs I'm creating, but they are ARM architecture, so if I use Docker images inside them I'm limited to ARM images. Maybe I just haven't explored how to run x86 correctly within Multipass?
I'm finding that ARM images are usually a step behind the x86 images if they're official, or else they're built by someone outside the organization, so I'm never sure I'm using a safe image.
Do any of the options here make it performant to run x86 on Apple Silicon? There are some interesting comments on Rosetta here that I don't fully understand.
The key is to ensure your x86 code is run with Rosetta instead of being emulated (with qemu for instance). By upgrading to Ventura and using the latest Docker Desktop which allows Rosetta translation, I was able to run x86 containers 6 times faster than before. Last time I checked, Multipass had no options for Rosetta translation yet (it's planned though).
Keep in mind Rosetta has a few limitations; for example, it can't translate AVX instructions.
I've had the best experience using Parallels. Not sure what exactly goes on under the hood but x86 performance on my M1 Mac is snappier than what I've experienced in VirtualBox (have never tried Multipass).
Regardless, performance on my beefy M1 Max MacBook Pro is pitiful compared to running natively on an x86 system.
You pay for Parallels, right? I always try to find something that's fully open so I'm able to control my destiny fully. That's a bit ironic when trying to run on Apple hardware, but if the entire stack above the bottom layer is open, then I can easily switch hardware, of course.
That's noble, but open source doesn't always deliver the same benefits as COTS products, such as stability and support. That trumps ideology for many people and businesses.
Definitely true. But I also feel like open-source stuff is more honest about its capabilities. That's important to me and often trumps the marketing push of a commercial product.
On that thread, many vendors will implement features and capabilities on request or together with their customers, often at their own expense if it increases their overall marketability.
Sure, you could put in a request to the project maintainer, fork it - or find a contribution yourself - but there will still be zero guarantees and support arrangements will be shaky at best.
Translating x86 -> ARM or vice versa is always going to be slow. No amount of software will ever overcome that. If you are using ARM you really need to embrace it. Running x86 on ARM is only a hack - whether it's Apple or some random GitHub project - it will only ever be a hack. It is one thing to dev on ARM and deploy to x86, but expecting x86 on ARM to perform remotely close to the native experience isn't something that's going to happen.
Having said that..
If you really want to run x86 images on ARM: while we're definitely not 1:1 with Docker (and don't want to be), we're interested in people's experiences with https://ops.city, as it has native hardware-accelerated ARM support and x86 on ARM. (I'm with the company behind it.) It's definitely going to be more performant than a Docker image, since docker/k8s not only rely on a full-blown Linux but also abuse iptables, et al. It also has support for bridging (outside of the VM) and 9pfs, so that would help alleviate your pain.
Virtualization Framework is awesome, and I'm slowly building my own VirtualBox-like app while learning Cocoa programming. Hopefully I'll release it some day.
One big thing that's missing is better storage support. Right now it's just disk images and that's about it. There's no way to implement something like qcow2 with snapshots and whatnot. It's missing a lot of opportunity for VMs.
Another big thing that's missing is USB pass-through.
Other than that, it does absolutely everything I need. It even provides some kind of hook for networking which should allow you to build a userspace router with any network architecture (I didn't try it myself, but I think it should work).
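The hook in question is presumably the file-handle network attachment; a hedged sketch of the idea, assuming `config` is the VZVirtualMachineConfiguration being built:

    import Foundation
    import Virtualization

    // A connected datagram socket pair: one end becomes the guest's NIC,
    // the other end is read/written by your own userspace router.
    var fds = [Int32](repeating: -1, count: 2)
    precondition(socketpair(AF_UNIX, SOCK_DGRAM, 0, &fds) == 0)
    let guestSide = FileHandle(fileDescriptor: fds[0], closeOnDealloc: true)
    let hostSide = FileHandle(fileDescriptor: fds[1], closeOnDealloc: true)

    let nic = VZVirtioNetworkDeviceConfiguration()
    nic.attachment = VZFileHandleNetworkDeviceAttachment(fileHandle: guestSide)
    config.networkDevices = [nic]

    // Each datagram read from `hostSide` is one raw Ethernet frame from the guest;
    // writing frames back injects them. Routing/NAT between VMs is then up to you.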
I was able to implement something I had missed for years on my M1 Mac. Basically, I run a Fedora VM, I installed the Rosetta helper inside, and Docker. And then automagically my x86 images started to work at a reasonable speed. I don't know if Docker Desktop supports it already; I don't want to touch it with the longest stick in the world. But for myself I solved this issue.
It was rock stable in my experience and just worked. QEMU, on the other hand, managed to crash my entire macOS (hopefully that bug has been fixed).
Another thing that I liked about it: the protocols are "modern" and open-source. Almost every driver is virtio. No ancient cruft. No proprietary things. Eventually other operating systems could be run on this solution.
I suppose this is something other than the Hypervisor framework (https://developer.apple.com/documentation/hypervisor) - its successor or a convenient complement? Does anyone know if there's a convenient command-line utility for wrangling VMs built on this framework?
The Apple Virtualization Framework is a nice tool (an option offered in particular by the UTM application, in addition to the QEMU option). I hope it will improve with the release of Ventura's successor and offer, for example, nested virtualization.
The only concern currently, besides the lack of iCloud account support for virtualized macOS, is that it contains a number of bugs which make it difficult to use as-is for non-English/US configurations.
For example, there is a bug with ISO keyboards, which the Apple Virtualization Framework recognizes as ANSI keyboards; this shifts or even loses a certain number of keys (which ones differ depending on the language: French, German, Danish, etc.).
Yeaahhh ... the secure enclave seems to have pooched the idea of a USB stick booter. Restore Mode works fine.
I used to use CCC (Carbon Copy Cloner) to make a mirror, but it doesn't like to boot from the mirror (it does, but I can't use any of the secure stuff). Also CCC destabilizes my system. If I have it running all the time, I crash -a lot-. Since I yanked it, I haven't had a hard system crash in months.
These days, I just make sure the Time Machine backups keep going, and I'll use CCC on a one-shot basis to make occasional disk clones (makes for a much faster restore).
For cloning systems and making them bootable, the official option is to use Apple Software Restore (asr in the Terminal). CCC uses that under the hood though, so it might crash your system as well.
I quickly glanced over the docs and didn't find anything new since 2022. So it seems there's nothing new indeed. Hopefully something new will appear in a few weeks.