Hacker News
OS X-KVM: Run macOS on QEMU/KVM (github.com/kholia)
166 points by kristianp 53 days ago | 70 comments



This looks new-ish, but I remember seeing something about OSX on KVM years ago. The author's name seems familiar for some reason, though.

Found it, the same repo linked in August 2018: https://news.ycombinator.com/item?id=17763855

And September 2016: https://news.ycombinator.com/item?id=12556609

Apparently the version control model of the repo is ‘scrap everything once in a while’.


"Removed history to reduce repository size"


I’d be careful about offering commercial support for this.


I was thinking along the same lines. There is a section in the linked GitHub repo titled "is this legal"[0]

[0] https://github.com/kholia/OSX-KVM#is-this-legal


That appears to be in reference to the act of running macOS on non-Apple hardware. The reference to paid commercial support is what will have the Apple lawyers knocking.


If the software is sold then he's in the clear. If it is actually leased then Apple needs to make that obvious and start charging recurring fees like a normal lease agreement.


Doesn't the quickemu project do some of this automagically? https://github.com/quickemu-project/quickemu


Will this project survive the ARM transition? E.g. will we be able to run macOS for ARM on x86 when Apple pulls the plug on macOS x86?


It's an insane amount of work, but people do try - https://twitter.com/JonathanAfek/status/1350000894784495617

I think we'll eventually get there, simply because the M1 Macs will grow the pool of people working on it.


It never ceases to amaze me how much free engineering effort is expended to work around Apple's resistance to open source and power user hostility.


I would argue that the fact that sort of work on reverse engineering Apple’s products happens so consistently suggests that in some capacity, current open options aren’t meeting the needs of some number of highly technical users, and that for these users having to expend the effort required to bend Apple products is more appealing than accepting the trade offs that come with more open options.


The only thing I've ever used stuff like this for is to support macOS users when I don't have Apple hardware to test on, or to bootstrap a Hackintosh (also used solely for supporting others who actually like macOS) when I don't have ready access to Apple hardware.

I imagine that similar is true for many users of this project, although some others use it just to daily drive macOS on a configuration Apple doesn't sell.


The ARM CPU would still have to be emulated on x86_64, which would be ungodly slow.


There's no reason to believe this has to be slower than x86 on ARM


Well, emulating† x86 on ARM is excruciatingly slow, just as emulating ARM on x86 is (turns out I do both, routinely, so no need to believe when I have data: it's about an order of magnitude slower than native)

† Rosetta 2 is not emulation.


Emulating x86 on ARM is slow because ARM is inherently a less complex instruction set. Emulating things the other way around is dead simple by comparison; it's what made it such a cakewalk to write Nintendo Switch emulators on x86. ARM has to swing up the abstraction chain to implement x86 instructions, whereas x86 can execute ARM instructions for cheap, free, or even faster than ARM natively in the case of SIMD calls.

Plus, that's ignoring how people will actually try to get MacOS running on these machines. Once x86 is finally considered "dead" and removed from the MacOS ecosystem, the solution is going to be kernel patching. It's hard to speculate on how difficult this might be, but I'd imagine that a small team could port binary patches from BSD to Nu-Mac fairly quickly. It's not a particularly complicated process, just a mite arduous.


This is generally not true. x86 has a bunch of CISCy instructions that translate to several ARM instructions, but ARM also has a bunch of things that encode strangely and split across several x86 instructions. In general, they're not that far apart in complexity. (There are instruction sets where this is true, e.g. older architectures like 6502 or whatever. But it's unlikely to be the case for anything moderately complex.)


Rosetta 2 is an emulator. It has a JIT compiler inside of it!


I’m hoping to run macOS with QEMU/KVM with a project like this or another. Does anyone know, though, if it is possible to pass through an Intel integrated GPU (in a CPU-only machine) to use for hardware acceleration?


I believe the most promising route is not to attempt passthrough but to use a "virtual" GPU, aka a paravirtualized device. Thanks to the contributions of an open-source developer [1], virtio-gpu is being ported to macOS (it already works on Linux, not yet on Windows, and soon on macOS). Apple is also reportedly working on bringing better support for macOS virtualized environments [2].

It means that users will be able to create any number of macOS instances with 3D acceleration, without having to care about trying to do complicated things (mediated devices or single-gpu passthrough or pci-passthrough of a second GPU).

As of now, the performance hit is high (more than 50%), but it will certainly improve over time, and many users might already find it acceptable (= people NOT playing recent video games).

Icing on the cake, thanks to the EGL-headless display (which is independent of virtio-gpu) and VNC, it is possible to connect to these instances remotely.

Shameless plug: I am working on Phyllome OS [3], a Fedora Remix with the goal to include the necessary plumbing to ease advanced virtualization techniques for end-users, and to offer curated virtual machine models.

I currently only use virtual machines as desktop machines, including locally, and even my poor x230 laptop can do it. I keep the host operating system intact (Phyllome OS) and use VMs as desktop machines. It has drawbacks (difficult USB access; display models that were not designed for local use; sound support), but I firmly believe it will get better over time and, perhaps, one day, end-users will be able to easily deploy any modern OS on any modern hardware...

[1] Sorry, I could not find the link to the repository.

[2] https://passthroughpo.st/mac-os-adds-early-support-for-virti...

[3] https://phyllo.me/

[edit] Delete mention of KVM/QEMU


> without having to care about trying to do complicated things ([...] pci-passthrough of a second GPU)

Actually, GPU passthrough is quite simple: configure host Linux to assign a virtual driver to a secondary GPU on boot and configure QEMU to use that driver on guest boot[1]. If you buy a GPU that is natively supported by macOS (e.g., Sapphire Radeon RX 580 Pulse 4GB), you get 99% performance and 100% stability of a native Mac (not a single macOS/QEMU crash in years for me).

Moreover, if you configure your QEMU mouse/keyboard to use evdev[2] (event devices), you can game under macOS/QEMU or Windows/QEMU without any input lag.

[1] https://github.com/kholia/OSX-KVM/blob/master/notes.md#gpu-p... [2] https://passthroughpo.st/using-evdev-passthrough-seamless-vm...
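To make the "quite simple" claim concrete, here is a rough sketch of the two steps (the PCI IDs, addresses, and input-device paths below are placeholders; substitute whatever `lspci -nn` and `/dev/input/by-id/` report on your machine):

```shell
# Step 1: on the host, bind the secondary GPU to the vfio-pci stub
# driver at boot so the host driver never claims it. Kernel cmdline:
#   intel_iommu=on vfio-pci.ids=1002:67df,1002:aaf0
# (vendor:device IDs of the GPU and its HDMI audio function)

# Step 2: hand both functions of that GPU to the guest, and share the
# physical keyboard/mouse via evdev (press both Ctrl keys to toggle
# which OS receives input):
qemu-system-x86_64 \
  -machine q35 -accel kvm -cpu host \
  -device vfio-pci,host=01:00.0,multifunction=on \
  -device vfio-pci,host=01:00.1 \
  -object input-linux,id=kbd0,evdev=/dev/input/by-id/MY-KEYBOARD-event-kbd,grab_all=on,repeat=on \
  -object input-linux,id=mouse0,evdev=/dev/input/by-id/MY-MOUSE-event-mouse
```

Both links above cover the details (IOMMU groups, OVMF, etc.); this is just the shape of the host config, not a complete invocation.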


By complicated, I mean that you have to modify the default behavior of your host OS (which by default will grab any hardware) and pick a GPU that is supported by the guest OS. People with Nvidia GPUs cannot do that for macOS. Ideally, out of the box, with a few clicks, most people, including non-tech-savvy ones, should be able to enjoy 3D acceleration in their VM, regardless of their host configuration.

Besides, in the long term, Apple is likely to drop support for AMD or Intel GPU in their OS (but yes, they might also drop support for virtio devices).

Yes, evdev is currently the best way to share your inputs devices to a VM, thank you for pointing that out (alas, it is only available in distributions with very recent virtualization-related packages).

In my (perhaps narrow) view, virtualization should be about making you less reliant on the underlying hardware. I wish that GPU manufacturers would agree on a common standard to allow users to split their beefy GPU into smaller parts. (Intel is apparently dropping support for vfio-mdev (Intel GVT-g), adopting SR-IOV instead for its latest offerings, but SR-IOV is only available on professional models from AMD and Nvidia. In summary, there won't be a standard any time soon, leaving virtio-gpu as the sole hardware-agnostic contender.)


> Besides, in the long term, Apple is likely to drop support for AMD or Intel GPU in their OS

Off topic, but I’m curious about this. On one hand I think it’s a very Apple-like thing to do, but on the other hand it would practically kill the Mac for pro use cases, which are barely hanging on by a thread right now. Could you imagine buying a $10,000+ workstation and Apple telling you one 8K display is enough?

I think it really hinges on what the next Mac Pro looks like. If it’s ARM-based and allows expansion, then it’ll likely support AMD and Intel GPUs, and support for those will stick around for years. If not, then the current Intel Macs and their eGPU support will be the end of the line.


You correlate an Apple GPU with a hard limit on the number of displays you can use, but honestly I don't see the point. Apple has bittersweet limits on the number of displays one can connect right now, but there is no guarantee that this will continue in the future.

With what they unveiled in terms of GPU performance with the Macbook Pro, the writing's on the wall; Apple is not going to support third-party GPUs going forward. The Mac Pro will only offer Apple CPUs and GPUs, I'm sure of it.


IME you’ll also need to use isolcpus and affinity on the KVM threads to get stable GPU performance (with macOS or other OSes). Plus there can be some other fun issues depending on your setup, like having to flash a different BIOS on your card or having to deal with multiple devices on the same IOMMU domain (I had to pass through a USB controller with my card).


"you get 99% performance and 100% stability of a native Mac (not a single macOS/QEMU crash in years for me)"

I am curious how this is so stable for you. Are you running this on Intel? Just last week I installed this on my AMD Linux workstation to test some stuff with Xcode and qemu would crash randomly while compiling LLVM/Clang.


Yes, I run this on Intel i7, under Ubuntu 20.04, with QEMU 4.x, as "-machine q35".


Is there some info on virtio-gpu for OSX development somewhere?

I knew virtio support was happening but hadn't seen any info on the GPU bit.


The port I had in mind is this one [1], and the use case is to make it possible to run 3D accelerated Linux guests on macOS hosts. I may have wrongly believed that it would be useful for macOS guests.

Besides, it is correct that whatever happens on the driver side of things, Apple will have to approve it first (as the other reply pointed out)

[1] https://mail.gnu.org/archive/html/qemu-devel/2021-02/msg0423...


macOS guests use Apple’s own ParavirtualizedGraphics framework, which comes with none of the downsides of virtio-gpu. It pipes through serialised Metal at native performance.


I'm quite sure virtio-gpu for OS X will _never_ happen unless Apple approves of it. 3D drivers are like the most closed things in OS X.


Wow, that's very surprising; Apple is trying to give better support for virtual environments instead of worse. Do you know if they're allowing their iOS dev tools to run in there, so you don't have to buy a Mac anymore?


I have been developing a toy iOS/macOS app running macOS with this. I am able to login with my dev program credentials and do everything I need. I haven't tried submitting anything to the store.


So the virtualization hooks aren't just in MacOS? I could compile your app on my M1 iPad and run full MacOS virtually?


No, Apple doesn't expose virtualization on iPad.


Apple is adding better support for virtual environments running on the Mac. Not virtual macs running on other machines. That is all coming from the community.


Yes, starting with CPUs circa 2014. You need CONFIG_DRM_I915_GVT from Device Drivers -> Graphics support -> Intel 8xx/9xx/G3x/G4x/HD Graphics -> Enable Intel GVT-g graphics virtualization host support.

There are a fixed number of partitions and a specific setup you need to do with /sys nodes; refer to the manual. Considering this, and that support only became possible some time after 2014, it's worth realizing it does not work like normal VFIO passthrough. Why? No idea, seems like an oversight. Need to stop providing lying interfaces to users.

https://serverfault.com/questions/1048811/vfio-igpu-passthro...
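For reference, the /sys dance mentioned above looks roughly like this (the type name `i915-GVTg_V5_4` and the PCI address are examples; the available types depend on your iGPU and are listed under `mdev_supported_types`):

```shell
# Host booted with: intel_iommu=on i915.enable_gvt=1
modprobe kvmgt

IGD=/sys/bus/pci/devices/0000:00:02.0

# Each vGPU "type" is a fixed-size partition; check how many are left:
cat "$IGD"/mdev_supported_types/i915-GVTg_V5_4/available_instances

# Create a mediated device with a fresh UUID...
UUID=$(uuidgen)
echo "$UUID" > "$IGD"/mdev_supported_types/i915-GVTg_V5_4/create

# ...then attach it to the guest as a VFIO mdev, not a plain PCI device:
qemu-system-x86_64 -machine q35 -accel kvm \
  -device vfio-pci,sysfsdev=/sys/bus/mdev/devices/"$UUID",display=on
```

Note the `sysfsdev=` form: that is what distinguishes a mediated device from normal VFIO passthrough of a whole PCI function.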


Not really pass through, no. If CONFIG_DRM_I915_GVT is enabled in your kernel, you can use Intel's graphics virtualization system... basically a virtio style virtual device that shares the GPU between VM and host. IMO this is way more convenient than real passthrough, where the device is only available either to the VM or the host. The downside is that you don't get full performance in the VM.

"Intel GVT-g is a full GPU virtualization solution with mediated pass-through (VFIO mediated device framework based), starting from 5th generation Intel Core(TM) processors with Intel Graphics processors. GVT-g supports both Xen and KVM (a.k.a XenGT & a.k.a KVMGT). A virtual GPU instance is maintained for each VM, with part of performance critical resources directly assigned. The capability of running native graphics driver inside a VM, without hypervisor intervention in performance critical paths, achieves a good balance among performance, feature, and sharing capability."

https://github.com/intel/gvt-linux/wiki/GVTg_Setup_Guide


GVT-g is pretty much dead. New hardware, e.g. Tiger Lake with Xe, isn’t supported, and it’s my understanding that Intel has no plans to support it.

See the following link for supported devices.

https://github.com/torvalds/linux/blob/bbf5c979011a099af5dc7...


New Intel hardware supports SR-IOV, the successor to GVT-g, but unfortunately driver support is apparently missing for Linux: https://www.reddit.com/r/VFIO/comments/o7l2zr/sriov_on_intel...


We are more than 2 years past the launch of Ice Lake now… so…


oh wow, I had no idea. Seems a shame... Are they offering anything to replace it? Or is it GVT-d or bust?


It didn't work when I tried to pass through a virtualized GVT instance in Catalina a couple years ago.

Although this all will not matter once macOS becomes exclusive to Apple Silicon.


See also quickemu, which supports launching macOS using information largely taken from this and similar repositories, as well as Windows with TPM, and enables all the good features: https://github.com/quickemu-project/quickemu

There is also another promising project called docker-osx, which has the best documentation around doing the serial number generation to make iMessage etc. work (you can use it in combination with most other methods, but the docs were often confusing when I looked about a year ago): https://github.com/sickcodes/Docker-OSX


Quickemu already does this and many more operating systems too. https://github.com/quickemu-project/quickemu


Quickemu is built on Kholia's work



This also works surprisingly well in a Docker image on Ubuntu. I use it as an iMessage relay for my Windows PC to use with BlueBubbles.


can you share more information about getting iMessage to work? I thought you needed a hardware id or serial number to make it work


How do you run a VM inside a docker image?


Is there any way to run macOS on Hyper-V?


Not that I've seen, but you can run it on VMware and VirtualBox on Windows.

Check YouTube, there are tons of videos of how to do it.


What's the argument that this is OK from an ethical or moral perspective? Or does everyone acknowledge it's a willful violation of Apple's intellectual property and doesn't actually care?


I'm perfectly fine with running Hackintoshes, as I have for more than a decade now, and I see it as morally superior to buying the stuff from Apple, or Microsoft, or Google, you name it. I believe that the world could be a better place if hardware and software were more fragmented and less integrated. This would lead to simpler, less opaque solutions that could be controlled by users and developers and not by 10 international corporations.

I wish all of them were destroyed by some EM storm and tech would return to older technologies that wouldn't allow systems to become too complex (due to the hardware becoming very slow). Having less convenience is not an issue when the alternative is having no control. Having to teach the population to use plain email, FTP and a text editor is better than being herded by "I know what is better for you"-type elitist groups. I know from experience that if the information crud were removed, most organisations could run on a C program with an SQLite db (or even a text file) and be perfectly fine.

So I'm happily stealing from both Apple and Microsoft, and I wish I could encourage more people to do likewise. "Why wouldn't you use Linux then?" Because I'm forced to use all that garbage by my workplace and the orgs I interact with. I would personally use none of it. My background is 30 years of programming.


Personally, I have just about zero interest in running an Apple OS, but maybe I could be interested in the tinkering to get it to run.

However, OS X/macOS is clearly a general-purpose operating system that could work on a general-purpose computer, but it is only sold bundled with Apple hardware. There's a potential quandary with respect to using commercial software without paying for it, but Apple doesn't exactly let you pay for it either, so, eh? A lot of people don't have a big problem with copying software that is generally unavailable, and I could see how "can't buy it without buying expensive hardware I don't want" is close enough to meet people's ethics. And some people's ethics allow them to use commercial software without paying, without remorse, so there's that.

I would have no problem with paying for software licensed for use on specific hardware and using that software on different hardware. That's not really a restriction I think a software seller should be able to make, so I'm not going to consider it unethical to circumvent (although in a business context when there's an enforcement mechanism, I might not circumvent it then). Just because my DVD is only licensed for use on a licensed DVD player doesn't mean I'll feel bad playing it with VLC; same with a video game that I manage to play on a system other than the one it was made for. Of course, pirating a movie or video game is different. That's what makes this issue a little touchy --- you can't pay for OS X separate from the hardware (you also can't pay for the hardware separate from the OS); if it's a no-cost OS that ships with the hardware, then it doesn't seem bad to use it otherwise, but if it's really a $100 OS that comes bundled, that's different.


I never read the EULA when I put MacOS on my Thinkpad, so I'd file it under "willful ignorance" if I were you.


Thanks for answering! I am not really asking about the legality, per se, which is where the EULA comes into this. I don't want to put any words into your mouth, so let me just imagine a couple of "inner monologues" about the ethical/moral aspect of that, just so you have an idea of what I'm trying to ask.

"I know Apple doesn't want me to use it this way, and I know I shouldn't, but I'm going to do it anyway even though it's wrong"

"I know Apple doesn't want me to use it this way, but they shouldn't have a say in it, so it's not really wrong"

Would be interested to know how you're thinking about your decision to run MacOS on your Thinkpad from that perspective.


It's a little of column A and column B. I don't feel bad for breaking their EULA though, Apple definitely has bigger fish to fry than chasing down hobbyists who want to toss their free-to-download software on a laptop without their blessing. The so-called "ethics" side of it isn't even on my radar, considering how opaque Apple is with their software and hardware distribution.

> Would be interested to know how you're thinking about your decision to run MacOS on your Thinkpad from that perspective.

Frankly, I just don't care or think about it for that matter. It took maybe 30 minutes to install, I marveled at how well it worked on unsupported hardware for about an hour, then booted back into Linux and forgot about it altogether. I don't see it as anything other than a cheap party trick, any hurt feelings that the world's largest company has towards it is just a complimentary bonus in my book.


We talking about the same Apple that is accused of child labor and other human rights abuses?


Yes, but the majority of HN browsers will downvote/flag you for sharing the sentiment.


You will be downvoted for the lack of reference. Here is a link for you, don't forget to add it next time: https://www.yahoo.com/news/apple-knew-supplier-using-child-2....


It is difficult to get a man to understand something, when his salary depends on his not understanding it.


Intellectual property is a social fiction devised to serve certain ends in society, and it's one that these days is largely shaped (i.e., laws have literally been rewritten) by huge corporations (e.g., Disney, Apple) to serve their own commercial interests.

EULA enforcement is not a matter of acknowledging some holy natural right, and onerous EULAs are not only ethically but often legally dubious themselves.

That projects like this are meaningfully harmful is the proposition requiring argument or justification, imo.


I'm not sure what you're trying to say here. There's a lot of railing against copyright law, EULAs, etc. which have nothing to do with what I'm asking.

I didn't ask "is this legal?", I asked "is this ethical or moral?".

Apple invests what I assume to be a huge effort into making the Mac and selling it, including MacOS. I think we all agree that MacOS is an important part of the "Mac experience" that Apple is selling.

I'm going out on a limb here to say that people who spend a bunch of effort to subvert the technical controls that stop MacOS running on other hardware know that those are in place because Apple sells the MacOS as part of Mac; regardless of the legal technicality of whether they actually clicked through the EULA, read it, or their opinions on whether that license is legally binding.

We have a situation where Apple, the creator of a piece of work - MacOS, only wants people who purchase its hardware to benefit from it. Other people understand that, and try to benefit from it anyway. What is the ethical and moral position you need to take to say "it just doesn't matter what Apple, the creator, thinks."

So what's your theory here? Apple is big enough and profitable enough that meaningful harm to Apple is the determining factor? So it's OK to infringe on IP rights of big profitable companies, or where you weren't going to buy the hardware product anyway?

Or is it that no concept of property exists outside of the physical realm - so that e.g. musicians should have no control over who records their performances, nor artists over who reproduces their work, nor developers over who uses their software and for what purposes?


> There's a lot of railing against copyright law, EULAs, etc. which have nothing to do with what I'm asking.

Intellectual property is a legal construct, and the thing being violated here is a EULA. (I'd also hardly describe what I wrote as ‘railing against’ anything. I just stated some facts about what IP is.)

> So what's your theory here? Apple is big enough and profitable enough that meaningful harm to Apple is the determining factor?

My ‘theory’ is that meaningful harm is a pretty good criterion for ethical objections in general, and that well-founded moral objections to things which are harmless are exceptional. This isn't a novel or controversial view; it's foundational to liberalism.

> Or is it that no concept of property exists outside of the physical realm - so that e.g. musicians should have no control over who records their performances, nor artists over who reproduces their work, nor developers over who uses their software and for what purposes?

I'd say that physical property is also an instrumental legal concept. Its existence, like the existence of intellectual property, is mostly grounded in histories of power and its exercise. But there are good arguments for its usefulness in certain contexts, and there are good moral arguments related to ownership. And the best arguments relating to ownership of physical things don't apply to ideas because ideas are not like physical things.

Your question, as posed, also overlooks the fact that intellectual property as it actually exists in fact does little to ensure that artists have control over their work. In the same way, Apple's policy on the use of macOS on non-Apple hardware is totally outside the control of actual developers who work at Apple.

If you'd like to tell me about how developers at Apple have an interest in ensuring that macOS is not used on non-Apple hardware... congratulations: you've just started outlining a case for how projects like the one in the OP are meaningfully harmful.


I haven't got a Linux box to run this on, I run Linux via emulation.


I'm running this on Windows through WSL 2. There's definitely a performance hit, but other than that it works surprisingly well.


Can you post the steps you had to go through to get it working?


Sorry for the late response, just saw this. When I did it last I just used the Docker-OSX project [1]. It should pretty much just work; the readme has a section on running it on WSL.

If you don't want to use Docker it should still be possible with the OSX-KVM, I've done it before, but I can't remember the exact steps I took to get it running, sorry.

Should also note that I've run into issues with AMD CPUs, Intel seems to work better, but I also haven't tried it since they first added nested virtualization support with AMD CPUs to Hyper-V, so those problems might be fixed now.

[1] https://github.com/sickcodes/Docker-OSX
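If it helps, the basic Docker-OSX invocation I'd expect to work under WSL 2 (per its readme; flags may have changed since I last tried) is just:

```shell
# Needs KVM exposed inside WSL 2 (nested virtualization enabled) and
# an X server reachable for the QEMU window; adjust DISPLAY to taste.
docker run -it \
  --device /dev/kvm \
  -p 50922:10022 \
  -v /tmp/.X11-unix:/tmp/.X11-unix \
  -e "DISPLAY=${DISPLAY:-:0.0}" \
  sickcodes/docker-osx:latest

# Once the guest is booted, SSH in through the forwarded port:
#   ssh user@localhost -p 50922
```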


Thank you! I'll give it a go right now.



