Linux M1 GPU driver passes 99% of the dEQP-GLES2 compliance tests (twitter.com/linaasahi)
545 points by tpush on Oct 21, 2022 | 187 comments



* clean-room M1 GPU driver created by (seemingly) one dev in a matter of months

* an excellent POC of Rust drivers on Linux

* the dev live-streamed almost every minute of development

* the dev is a virtual anime girl

So far I'm LOVING 2022.


> by (seemingly) one dev

I think Alyssa Rosenzweig and Asahi Lina both did pretty substantial work on this?


I believe Rosenzweig was responsible for the user-space bits; this post is about the kernel driver, which (as I understand it) was done entirely by Asahi Lina.

(Both of them deserve a ton of praise, of course.)


I'm pretty sure this post is about both bits. I don't think you can pass that test suite without both bits.


It was already passing that test suite as of their XDC talk 10 days ago, using only Alyssa's bit. This update shows that, with both bits combined, it now passes at the same level of conformance.

At least that's my understanding of it. You're right in that this is about both bits working together, but the person you're replying to is right in the sense that this particular update is showing specifically an update in Lina's bit.


Aren't they the same person, though? I've only seen small parts of a presentation; this vtuber stuff is not for me.

https://www.youtube.com/watch?v=0XSJG5xGZfU&t=21330s


You're literally linking to a YouTube video where, 10 seconds later, Alyssa says "this should all be sufficient proof that I'm not Lina"... and Lina engaged in an interactive Q&A while Alyssa stood around, demonstrably not the person responding.


IF Lina had to be an existing member of the Asahi Linux project, it is much more likely that she would be Hector (marcan); Lina and marcan had identical dev environments when she first started making videos, and Lina's introduction to the Asahi Linux YouTube community was her "hacking" one of Hector's streams. (https://m.youtube.com/watch?v=effHrj0qmwk)

Ultimately, though, does it matter at all?


Lina was first introduced as an April Fools joke by marcan, but after the positive response from the audience he (or now she?) probably just decided "why not" and stuck with the alternate identity. But as you've said, it really doesn't matter.

(Kudos to marcan for trying as much as possible to keep up the illusion /s: https://twitter.com/marcan42/status/1582930401118806016?s=46...)


They are not.


GPU driver is mostly by Alyssa, DCP (display controller) is mostly by Lina.


One "anime" girl, two "anime" girls, ten dev bros: I don't really care at this point. Color me impressed by the skillz required to get this to where it is in the time it took. Way past anything I might dream of doing myself. So many kudos to any/all involved; male/female/other doesn't matter in the slightest for the possibility of using the hardware for whatever I want.


It is impressive. I remember reading something about new abstractions at the chip level: most of the GPU driver actually runs on a specific chip and works the same way regardless of the OS, and kernel developers develop against this new abstraction layer rather than directly against the GPU. Does someone have more information about this, or was that nonsense?


It means that the driver likely relies on a giant 'firmware' which is really just a closed-source driver loaded onto the chip, but with a nicer API for the kernel to program against.


That's even acknowledged in the tweet thread.


> * the dev is a virtual anime girl

Who had "skilled Linux kernel vtuber writes reverse engineered drivers for Apple hardware" on their 2022 bingo cards?


Mesa lets OpenGL(ES) drivers share a lot of code. This doesn't take anything away from the achievement, however.


Having seen some of Rosenzweig's work and talks, I would say "twenty devs".


interesting! can you share some details?


I think they are implying Rosenzweig is as capable as 20 devs.


Even 10x engineers are getting hit by inflation!


I think more code with fewer devs is deflationary. "As productivity increases, the cost of goods decreases." [1]

[1]: https://en.wikipedia.org/wiki/Deflation


It's the value of the productivity itself that's decreasing (i.e. undergoing inflation)


> the dev live-streamed almost every minute of development

Is there an archive of this dev work?




I love the "it works!" rush!


Can someone explain the virtual thing?

Who's running this operation.


The dev is Asahi Lina (a pseudonym, I presume: https://twitter.com/LinaAsahi ). When she livestreams the coding sessions, instead of a video feed there's an avatar (an animated anime girl), and the voice is modulated to fit the part as well.

A random stream: https://www.youtube.com/watch?v=jbVQWo0kh9Q


> So far I'm LOVING 2022

Stoical irony.

Edit: Look: you can be excited by some tech advance, but no, 2022 remains part of an ongoing global disaster. The sentence quoted above is hardly tenable. The best it deserves is to be read as some twisted humour.


> 2022 remains part of an ongoing global disaster.

Which of the several dozen currently ongoing global disasters are you referring to, and, bluntly, why should we care?


Are you joking? The Russian war of conquest against Ukraine, obviously.


That is obviously terrible, and I have several personal ties to Ukraine and the conflict, but the arc of humanity is currently getting better. Russia’s invasion of its sovereign neighbor is bad, but I wouldn’t call it globally bad, unless this turns into a nuke war.


It already has global effects, plus we're on the brink of a very serious recession, plus China is not playing ball, plus this whole war is opening a Pandora's box that had been closed ever since WW2. So no, it takes an incredible amount of techie privilege to ignore it to such an extent.


> why should we care

Because you are living in it. (At least that.)

But what I wrote is not "that you should care" - it was that you would do better to avoid expressions that read as naïve and possibly uselessly provocative. When somebody communicates something like "Oh, sweet were the times under the iron fist", it had better be for a good, clear reason.

You should care about your audience - and avoid lacking respect.


> and avoid lacking respect.

Yeah, that's about what I figured it would come down to.

I normatively endorse disrespecting anyone naive, oblivious, or self-absorbed enough to think there's only one currently ongoing global disaster.


Ah yes, I sure do miss YEAR_IDENTIFIER where nothing went wrong.


It is not a matter of «[going] wrong», it is a matter of "how" "we are going", and where we are, and where we should have been. What is actually happening in chronicles is not an accident.


...And silent "snipers" are just a symptom of said illness of civilization.


Eh, it's not so bad in the US. Sure, the economy isn't doing too well and gas prices are still high, but most people are overall doing alright. If there were a literal disaster, 9mm wouldn't be selling for $0.20.


Where can I get 9mm for 20cpr (esp if it's actual brass)?


I meant including steel case and remanufactured: https://ammoseek.com/ammo/9mm-luger

Material doesn't really matter if you don't reload and just use it for practice.


IMO remans move the risk/reward curve pretty substantially, especially from no-name brands, and steel is banned anywhere within an hour of me :( Although I suppose in a true disaster people wouldn't be as concerned about following the BLM/Forest Service rules.

Apples to apples it looks like new brass ammo is still about 2x what it cost in 2019.


“Ongoing global disaster”

Life is good and only getting better. Not sure where this "global disaster" idea comes from, but then again I'm not a Doomer.


> Life is good and only getting better

Is it?


Yes. Unequivocally, yes. Get out of the doom echo chambers.


There is no need for echo chambers, Bear: you can just look directly. And no, our assessment does not conclude the same.

If one wants to be optimistic, good; celebratory with awareness, with some maturity, well good then; but "locked in a box", no: this is public.


Well, that's bad news for anyone hoping this driver would ever be upstreamed; Linux requires a real identity for participation.


[flagged]


Well of course nobody is an actual virtual anime girl.


Sure, but listening to even a minute is difficult. I don't know if there's such a thing as an audio version of the uncanny valley, but were such a concept to exist, this would fall right there for me. I don't have any issue with naturally high-pitched voices, but I just can't listen to this.


Yeah but it makes the voice too high to listen to comfortably.


Her video content is just unwatchable. Too bad as there are probably interesting bits inside.


This is such a westernized perspective. If you were Japanese, or if you just grew up watching anime, you'd find those videos perfectly watchable.


What a bizarre comment. Do you really think Japanese people and anime fans find heavily roboticized voices pleasant to listen to for long periods of time?


I grew up watching anime and that's not the case. It's far too cacophonous to me; anime does not sound like that.


I enjoy watching anime and I grew up watching anime. It doesn't have to be so syrupy.

Her vids look like parodies.


If I was Japanese I'd be watching content in Japanese.


[flagged]


I don't think it really matters who's behind Lina.


[flagged]


It doesn't matter whether Asahi Lina is an avatar of Marcan or not, and you're free to discourage people from speculating, especially since it does not seem that Asahi Lina wishes to disclose her real identity.

That does not mean that what you've provided is proof, however.


That isn't proof of anything. It is a denial.


From https://chromium.googlesource.com/angle/angle/+/main/doc/dEQ... it looks like drawElements (dEQP) is:

> drawElements (dEQP) is a very robust and comprehensive set of open-source tests for GLES2, GLES3+ and EGL. They provide a huge net of coverage for almost every GL API feature.

So this is something like the CSS Acid tests, but for OpenGL for Embedded Systems (GLES).


It's more than that; it's the official conformance test suite:

https://github.com/KhronosGroup/VK-GL-CTS/blob/main/external...


It was unnecessary to mention Acid. Acid was meant to test cherrypicked unimplemented features, not to test coverage.


Awesome work, glad to see such progress. Why is this person using such weird voice modulation? Or are they using a text-to-speech engine? It makes it difficult to understand.


Well, one of our best emulator writers in decades killed themselves over harassment for being non-binary via the Kiwi Farms murderers (not the right word, but neither is simply "trolls"), so why risk the attention if you don't want it?

It’s tough out there in the culture wars.

Rest in peace Near you magnificent person.


I understand the sentiment, but if anything this draws more attention.


I'd do that if I posted videos online, I hate hearing my voice.


Because they want to.

I agree that it makes the videos hard to watch for me, but it's their choice to use it and my choice whether I want to watch.


Can somebody explain how much of the work on the M1 GPU can be reused for the M2?


The vast majority of it. She and Alyssa discussed it during their talk at the Xorg conference.


How far along is running a regular Linux desktop OS with a working UI on an M1? Recently there were articles saying Linus had been using an M1 running some version of Linux, but some aspects of the system weren't working well; I believe it was the UI, and maybe he was even in text mode.


See https://github.com/AsahiLinux/docs/wiki/Feature-Support . The laptops are already very usable as daily drivers on Linux, but not 100% of the hardware is supported, and depending on your specific requirements some of the missing hardware support might be a deal breaker. You can very much run a normal Linux desktop with a working UI; that's been working for months. It just doesn't have hardware rendering acceleration (but the CPU is fast enough to make this tolerable).


Is the battery life when running Asahi Linux close to what it is with macOS?


No; aside from anything else, the missing GPU driver means that the system is running at ~60% all-core CPU even at idle (rendering a desktop). IIRC, it still lasts around 4 hours. We'll have a better idea of where they're at once the GPU driver lands.


That's not the word from the developer lead.

>That whole "optimized for macOS" thing is a myth. Heck, we don't even have CPU deep idle support yet and people are reporting 7-10h of battery runtime on Linux. With software rendering running a composited desktop. No GPU.

https://twitter.com/marcan42/status/1498923099169132545

Also, from a couple of days ago, they got basic suspend to work.

>WiFi S3 sleep works, which means s2idle suspend works!

https://twitter.com/marcan42/status/1582363763617181696


Software rendering simply stinks, doubly so if you're running a fancy composited desktop. Regardless of CPU speed or UI fluidity, your CPU and GPU processes are now fighting for the same cycles, bottlenecking one another in the worst way possible.

The M1 has a fairly good GPU, so there's hope that the battery life and overall experience will improve in the future. As of now, though, I'd reckon there are dozens of x86 Linux laptops that can outlast the M1 on Linux. Pitting a recent Ryzen laptop against an Asahi MacBook isn't even a fair fight.


>Software rendering simply stinks, doubly so if you're running a fancy composited desktop. Regardless of CPU speed or UI fluidity, your CPU and GPU processes are now fighting for the same cycles, bottlenecking one another in the worst way possible.

And yet, after using the first release, people have reported that this is the smoothest they've ever seen a Linux desktop run. That is, smoother even than Intel Linux with hardware GPU acceleration.


I use llvmpipe currently and I can assure you that it is nowhere near as smooth as a setup with hardware acceleration. The 3900X is probably even faster than the M1 at software rendering, and it still isn't fast enough to give a consistent 60fps with the browser using most of the screen (shrinking the browser window makes things much smoother).

Even extremely fast CPUs suck really badly at pushing pixels compared to even the weakest GPUs. It is very much usable, though!

Currently llvmpipe is able to use up to 8 cores at a time but not more, and it does use SIMD instructions when available, from my understanding. There is another software rendering system in Mesa that allegedly uses AVX instructions, but I have had a better experience with llvmpipe personally.


If your desktop idles at 60% CPU utilization, I should hope it's at least getting the frame timing right.


Where are you getting this 60% number from?


That number is absolute nonsense. Someone upthread posted it and it has no relation to reality.


llvmpipe and swpipe have been improved.


I'd hope that an idle desktop redraws ~nothing and so doesn't waste any CPU cycles. And the GPU not being used might even save power. So as long as it's idle it would ideally consume less power, not more.


CPUs use significantly more power to perform the same amount of computation that a GPU does, because they're optimized for different workloads.

GPU input programs can be expensive to switch, because they're expected to change relatively rarely. The vast majority of computations are pure or mostly-pure and are expected to be parallelized as part of the semantics. Memory layouts are generally constrained to make tasks extremely local, with a lot less unpredictable memory access than a CPU needs to deal with (almost no pointer chasing for instance, very little stack access, most access to large arrays by explicit stride). Where there is unpredictable access, the expectation is that there is a ton of batched work of the same job type, so it's okay if memory access is slow since the latency can be hidden by just switching between instances of the job really quickly (much faster than switching between OS threads, which can be totally different programs). Branching is expected to be rare and not required to run efficiently, loops generally assumed to terminate, almost no dynamic allocation, programs are expected to use lower precision operations most of the time, etc. etc.

Being able to assume all these things about the target program allows for a quite different hardware design that's highly optimized for running GPU workloads. The vast majority of GPU silicon is devoted to super wide vector instructions, with large numbers of registers and hardware threads to ensure that they can stay constantly fed. Very little is spent on things like speculation, instruction decoding, branch prediction, massively out of order execution, and all the other goodies we've come to expect from CPUs to make our predominantly single threaded programs faster.

i.e., the reason that GPUs end up being huge power drains isn't because they're energy inefficient (in most cases, anyway)--it's because they can often achieve really high utilization for their target workloads, something that's extremely difficult to achieve on CPUs.
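To make that concrete, here's a minimal, purely illustrative CUDA sketch (my own example, not code from this driver or thread) of the kind of workload described above: a huge batch of identical, branch-light jobs, each touching memory at a fixed stride, with latency hidden by swapping hardware threads rather than by speculation.

    #include <cstdio>
    #include <cuda_runtime.h>

    // Each thread computes one output element: pure, branch-light, strided
    // memory access, no pointer chasing, no dynamic allocation.
    __global__ void scale_add(const float* a, const float* b, float* out,
                              float k, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) {
            out[i] = k * a[i] + b[i];
        }
    }

    int main() {
        const int n = 1 << 20;              // ~1M elements: enough batched work to hide memory latency
        const size_t bytes = n * sizeof(float);

        float *a, *b, *out;
        cudaMallocManaged(&a, bytes);
        cudaMallocManaged(&b, bytes);
        cudaMallocManaged(&out, bytes);
        for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

        // One lightweight hardware thread per element; while loads are in
        // flight the scheduler swaps warps instead of speculating like a CPU.
        scale_add<<<(n + 255) / 256, 256>>>(a, b, out, 3.0f, n);
        cudaDeviceSynchronize();

        printf("out[0] = %f\n", out[0]);    // expect 5.0
        cudaFree(a); cudaFree(b); cudaFree(out);
        return 0;
    }

The silicon-budget point follows directly: because the program shape is so constrained, almost all of the die can go to wide ALUs and registers instead of the speculation machinery a CPU needs to run arbitrary single-threaded code fast.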


> it's because they can often achieve really high utilization for their target workloads, something that's extremely difficult to achieve on CPUs.

This part here 100x. It's worth noting that the SIMD performance of the M1's GPU at 3 W is probably better than the M1's CPU running at 15 W. It's simply because the GPU is accelerated for that workload, and a necessary component of a functioning computer (even on x86).

The particularly damning aspect here is that ARM is truly awful at GPU calculations. x86 is too, but most CPUs ship with hardware extensions that offer redundant hardware acceleration for the CPU. At least x86 can sorta hardware-accelerate a software-rendered desktop. ARM has to emulate GPU instructions using NEON, which yields truly pitiful results. The GPU is a critical piece of the M1 SOC, at least for full-resolution desktop usage.


You're not really responding to my argument? I was talking about an idle desktop where neither the CPU nor the GPU performs any work, since they don't have to redraw anything. With neither performing work, pure software rendering should let the GPU be turned off rather than put into a sleep state. Granted, those are mobile chips, so the power management is probably good and there isn't much of a difference between off and deep idle.


IIRC, modern graphics APIs pretty much require you to go through the GPU's present queue to update the screen, so the GPU likely has to be involved anyhow whenever a draw happens, whether or not it's actually drawn on the GPU. Given that, I'm not sure how you could turn the GPU off during CPU rendering except in circumstances when you would already have been able to turn it off with GPU rendering. But I am basing this on how APIs like Vulkan and Metal present themselves rather than the actual hardware, so maybe there's some direct CPU-rendering-to-screen API that they just don't expose.


On the M1 the framebuffer is a separate device and can be written to directly[0]. Whether that means the GPU-proper can be powered down I don't know.

[0] https://asahilinux.org/2021/08/progress-report-august-2021/#...


Interesting, didn't realize that. This explains some of the weirder present queue requirements, I guess (it doesn't really act like a regular queue). So maybe you really can power down the GPU. I still doubt it would be lower power overall, since IME my M1 GPU takes very little power when I'm not using it intensively, but it's at least possible.


If ARM had competitive SIMD performance, then we might be seeing an overall reduction in power usage. The base ARM ISA is excruciatingly bad at vectorized computation though, so eventual GPU support seems like a must-have to me.


In my experience the M1 does have competitive SIMD performance?

https://dougallj.wordpress.com/2022/04/01/converting-integer...

https://dougallj.wordpress.com/2022/05/22/faster-crc32-on-th...

https://lemire.me/blog/2020/12/13/arm-macbook-vs-intel-macbo... (I later optimised the slower benchmark in that post: https://github.com/simdjson/simdjson/pull/1708 )

Obviously the GPU will be better, but at one point I compared the M1 CPU to other ARM GPUs (in laptops at that time) and found it had both better memory bandwidth and compute throughput, which is quite funny.


That could have been true many years ago, but not anymore. The GPU is way more efficient at putting many bitmaps in the right place in the output. Even your mouse cursor compositing is hardware accelerated these days, because that's faster and more efficient. Doing it on the CPU is wasted power.


7-10 hours is good, but that's still around half of what these machines can do on macOS. And it's just common sense that battery life with software rendering won't be as good as when hardware acceleration is enabled.


And better than any windows laptop on the market with the same performance profile.


Ryzen 6800u is comparable in performance and power consumption, on an older process node.


The 4800u was used in performance comparisons when the M1 dropped. For the record, that was an 18-month-old chip fabbed on outdated, 7nm silicon that was shipping in ~$450 laptops at the time. Brilliant little Linux devices, I've been told.


Is it still as fast when 60% of its CPU time is being used to render a Retina desktop?


"Optimized for macOS" is definitely not a myth, but the reality is far more complicated than people believe. There's parts of the OS that are definitely incredibly optimized for the hardware they run on. Then there's some random app that has an O(n^3) loop in it. Operating systems are complex.


Where on Earth are you getting that 60% number? It’s absolutely not true. I just looked myself and it’s sitting at the default KDE desktop showing ‘top’ at literally 0.5% CPU usage.


Thanks. My use case is normal Linux use, but I occasionally need to run x86 code in a Linux VM. So (1) running Linux, (2) running a different-arch VM, (3) some GUI apps.

I have already been reading about the state of running x86 VMs on macOS on an M1; it is slow, but my friends claim it's usable. Add to that an early Linux implementation on native M1, then an x86 VM.

Why am I torturing myself? I want to get back to having a great fanless system like I used to have with a Google Pixel laptop, which had an underclocked x86 chip, but running without a fan was great.


If you need to run x86 Linux software on Apple Silicon, the better option going forward will be to run an ARM Linux VM and use Apple's Rosetta for Linux to run x86 software on the ARM Linux, rather than having the kernel also running under emulation.

Since Rosetta for Linux was coerced to run on non-Apple ARM Linux systems almost as soon as it shipped in developer preview builds of the upcoming macOS, it would not be surprising if Asahi ends up making use of it outside of a VM context (though that may not comply with the license).


Can you provide links to what you wrote here? Thanks!


Sibling comment, but this has links to Apple documentation: https://news.ycombinator.com/item?id=33290808

And this seems to be able to get it running: https://github.com/diddledani/macOS-Linux-VM-with-Rosetta

I haven't tested this because I'm at work, but I'll verify it when I get home!



macOS Ventura will let you run x86 code in an ARM Linux VM using a Linux version of Rosetta 2, which should fix the speed issues.

https://developer.apple.com/documentation/virtualization/run...


What makes Rosetta 2 superior to box86/64 or qemu-static-...?


It's at least an order of magnitude faster due to not using software emulation


Support matrix - https://github.com/AsahiLinux/docs/wiki/Feature-Support

You can run a GUI, but it would currently run on the CPU, not the GPU. Reports are that the UI is quite snappy running on the CPU. The basic Asahi install is Arch with KDE. Linus apparently put Fedora on his travel laptop. What Linus was complaining about is Chrome support, which is coming or maybe is already available - https://twitter.com/asahilinux/status/1507263017146552326?la...


I have been using Arch/Asahi Linux on a new MacBook Pro (M1 Pro) since July as a daily driver. I mainly use IntelliJ IDEA for Java and Scala development in KDE (which is installed by default if you choose the UI install). For IntelliJ you have to download the aarch64 JBR yourself and set it in a config file; however, the upcoming version 2022.3 will support aarch64/ARM OSes out of the box. So far it works very, very well. Rendering is done by the CPU, but the machine is so fast it's not really an issue (at least compared to my old Thinkpad). However, I can't wait for the GPU driver to arrive; looking at the posted tweet, I guess that should happen within the next few weeks.

For Zoom calls (which I don't have so often) I still have to switch to macOS right now, but AFAIK USB 3 support should also work soon, so hopefully I can use my Elgato Facecam soon. I can listen to music and watch YouTube with my Pixel Buds Pro via Bluetooth, no problem. Suspending/closing the lid doesn't freeze the userspace yet, but this is already fixed and will arrive soon: https://twitter.com/marcan42/status/1582363763617181696

In the last 4 months I have only heard the fan once or twice, when opening dozens of tabs that run aggressive ads (animations), but with the upcoming GPU driver that will not be an issue anymore. And by "hear" I mean like "what is this quiet noise...? oh, that's the MacBook's fans" - when I start my old Thinkpad, it's like an airplane on the runway preparing to take off. Battery life is quite good actually: this week I took my MacBook out for lunch and worked in a coffee shop for like 2.5-3 hours, doing Java/Scala development (but not much compiling actually, just hacking in the IDE), and afterwards I had like 69% left. Also it barely gets hot - it does a bit sometimes in the ads scenario above, but most of the time it stays cool. It does get a bit warm when charging (which is normal according to other users).

Personally, after a long time as a Thinkpad and Dell XPS user, I am so happy with my MacBook (which is my first) that they will have to try hard to win me back. And I am saying this as a non-Apple user (never had an iPad, MacBook, iPhone, whatever). Asahi + MacBook Pro is really, really great, and IMHO, even in its early stage, it will be the killer device for Linux enthusiasts in the upcoming years.

BTW: I bought the MacBook in April and gave macOS a serious chance (after being a pure Linux user for 15 years). I tried to set it up for my needs... but I gave up in July; I just can't handle it, for me it's like a toy. It's good for people (like my wife) who browse the internet and do some office work, but for software devs that need to get shit done fast and like to have an open system it's... kind of crap (just my personal opinion).


My experience is almost exactly the same. The M1 Air is my first Apple product. I don't like Mac OS after trying it (too dev unfriendly), but love the hardware. I was only willing to purchase the Air because of Asahi Linux (as a backup to giving Mac OS a shot).


Not Linux per se, but there was a post recently about OpenBSD support for M2. https://news.ycombinator.com/item?id=33274679


That's OpenBSD -stable; the -current edition (that is, the "rolling release") already had it.


It is based on Asahi.


OpenBSD is ISC-licensed and is not Linux.


Besides Asahi, I've done dev work on arm64 Ubuntu running under Parallels (GUI works fine).


> How far along is running a regular linux desktop os with working ui on an m1?

The total number of people who will be doing this will be in the hundreds. Not to diminish what an interesting achievement it is.


This seems like a remarkable achievement for proprietary hardware. Doesn’t this sort of thing take years for the open Nvidia drivers?


Here's the Freedreno story:

https://lwn.net/Articles/638908/

> So in mid-2012 he decided to do something about it. He was working for TI at the time, so PowerVR-based GPUs (as used by TI) were off-limits. He found some hardware that had an Adreno 220 and started to reverse engineer it. He began work on a Gallium driver in November 2012. By early 2013, he had most of the "normal stuff" working. He could run GNOME Shell and some games on the hardware.

So it's been done before, but of course Freedreno's Rob Clark is undoubtedly exceptionally good as well.

You have to wonder if the M1 situation would be different if he had been working for another company... (The M1 GPU is a PowerVR descendant, and there hasn't historically been an open source driver for that lineage.)

For me it's a bit surprising that there are not more people working on it, the M1 having been out for 2 years already and this having wider appeal than Freedreno's initial user audience.

But, maybe the 1 person way is better, since the Nouveau history seems to be people coming and going and work progressing in fits and starts: https://lwn.net/Articles/269558/

(Also, the nouveau people were first to start, so they cleared the way and paid a lot of dues that subsequent projects were saved from paying.)

edit: also nouveau history is somewhat chronicled in the early newsletters starting from the early 2006 days: https://people.freedesktop.org/~mslusarz/nouveau-wiki-dump/N...



Do they get help from Apple? Is there documentation of the M1? This is such a massive achievement!


>Do they get help from Apple?

Indirectly. Apple made some changes in the past that were likely geared towards helping them. It was on the kernel side, not the GPU, though, I think:

https://twitter.com/marcan42/status/1471799568807636994

Don't think they got active help in the "here's the docs" sense


In the context of the GP's question, I think this counts as a "no". Apple added a very small feature to make booting Linux easier, which we can safely assume was put there for the benefit of Asahi. Apple broke Asahi's old method in an update, so they provided an alternate stable interface.


The only help is from the bootloader side and from Apple not actively obscuring things.

Compared to other reverse engineering projects, the M1 Macs are not a moving target, and they don't have to contend with new lockdowns every time, unlike other reverse engineering scenarios like the cracking scene or emulators.


> not having Apple actively obscuring things.

This seems quite a big deal for Apple - and said as a long time Apple user.


It's such a minor contribution that I always get a good laugh watching people praise Apple for this. Microsoft, for all the evil they do, contributes millions of SLOC to open source along with generous funding for various non-selfish projects. Facebook, for all the societal ills they cause, also develops multiple projects in public even when it would make more sense to withhold them. Apple, for all their exploitation of the developer community, flips a switch on the Macbook that enables parity with traditional x86 BIOS functionality, and people start partying in the street crying "this is for us!"

And since then, it's been radio silence from Cupertino HQ. Nobody ever confirmed why they did this. Was it serendipity or loving compassion? Nobody can say for sure, but if it's the latter then Apple seems to be very conservative with their blessings upon the community.


Apple contributes quite a lot to open source (WebKit, llvm, clang in particular but there are other high volume projects), though not so much in the Linux kernel. There are some patches by Apple in Linux though and they've made quite a lot of strides in expediting postbacks internally over the last few years.

I left Apple partly over their poor oss policies, but even I won't sign off on the idea that they don't do any at all.


- WebKit is a co-opted KDE project that was open source before Apple forked it (it's also LGPL; it would be illegal for Apple to distribute WebKit if it weren't LGPL too)

- LLVM was also open source before Apple bought the core developers

- If Clang wasn't open source then literally nobody would ever use it

There are definitely a few spotty places where Apple tosses up a CUPS patch for Linux or fixes a cross-platform WebKit bug. Relative to the rest of FAANG, though, Apple is unprecedented in their posturing against open source.


I didn't say (or at least didn't mean to imply) that any of those were originated at apple, but that they are high volume ongoing contributions to open source from apple and they could do less on them.

I also don't know why you think apple would particularly care if "no one else used clang." Their goal with developing it was clearly also not exactly altruistic but their contribution of it into open source didn't really benefit them much in the short term in any way that isn't true of other major open source work at other FAANG. Never mind swift, which has basically failed to benefit apple much at all from being open sourced. But if clang were closed lots of people would still use it - everyone developing for macs and iOS. That's probably all that matters to the brass.

In the end I agree that apple is much worse than other FAANG. That is, like I said, part of why I left a pretty nice job there after many years of frustration at the pace of change.

But it's hard to pin down a position on all this that applies to all of apple. Some parts are very friendly (compilers), some parts are obligate mildly friendly (core Darwin/xnu and WebKit), some parts are bristling for change, but while change has been slow contributions have been ramping up as processes change (all of services, the growth area of the company). In the 2 years before I left it went from months to approve a minor patch to OSS to more like a week, with certain internal projects having regular unreviewed postback rights to designated high volume OSS projects and official champions of upstream efforts.

Where there seemed to be zero progress at all was personal projects, where apple has a catch-22 process that never progresses while insisting that you can't do it outside that process (regardless of the legality of that assertion).


CUPS was maintained by one single guy for several years.


To be fair, Apple actively participates in a long long list of open source projects, to give you some comparisons between Apple vs Microsoft -

* MS still refuses to open source its windows kernel to the public. Apple has its kernel accessible by the entire world.

* MS still refuses to open source its c++ compiler. Apple has swift built by the open source community and there are llvm/clang.

* MS still refuses to open source its sql server, when Apple has FoundationDB for the world and actively contributing to Apache Cassandra.

We are talking about a company that for many many years called open source a cancer.


1. The XNU kernel isn't novel, and isn't used by anyone other than Apple. Open-sourcing the NT kernel would effectively kill the project, since everyone would port the Win32 bindings to Linux (same thing would happen if Apple open-sourced their userland software)

2. Apple didn't get to choose whether they could make LLVM/Clang open source. They were forced to release it under GPL, and if you actually look at the license for those projects you'll see that Apple's contributions are dual-licensed.

3. What chance would FoundationDB have if it wasn't OSS? Why would Apple maintain a stale, internal fork of Apache Cassandra?


Neither llvm nor clang is GPL, dual licensed or otherwise. They're Apache 2.0[1] (or formerly a quasi-MIT/X11 license[2]), as is swift[3].

The binaries of the compilers they release with the platform are also built with internal forks that have some differences with the public llvm/clang/swift trees and that wouldn't be possible with gpl.

[1] https://github.com/llvm/llvm-project/blob/main/llvm/LICENSE....

[2] https://github.com/llvm/llvm-project/blob/7555c589af006c9c4d...

[3] https://github.com/apple/swift/blob/main/LICENSE.txt


1. NT kernel is neither novel nor used by anyone other than MS. Open sourcing the NT kernel would effectively kill the project as its security issues would be fully exploited with wide range impacts on literally everyone.

2. LLVM/Clang are not GPL licensed, they are apache 2.0 licensed. Open sourcing contribution is just open source contribution, it is not about whether you get to choose the license or not.

3. Well, SQL Server is a pretty good demonstration that proprietary database software can survive for decades.


Not true. Their list is short, and their definition of open source is open in name only. You're deliberately pretending MS' current form doesn't exist.


>Was it serendipity or loving compassion?

Yes.

Or, more accurately, it's historical inertia. Macs are "computers", iPhones and iPads are "devices".

The arguments that Cupertino used to defend against Epic in court were roughly that Macs were inherently less secure than their locked-down devices, and that this was a deliberate security trade-off that they were not willing to allow iOS users to make. In other words, something you carry around in your pocket is such a potent malware and spyware vector that users cannot be trusted to decide for themselves.

I personally think this is a half-truth. If Apple had started with portable devices first and then scaled up to the Mac, rather than starting with PCs[0], they absolutely would have tried to sell it locked down. Neither the tech nor the impetus to lock down the iPhone existed when the Mac was first made, and the iPad is evidence of this. While there are legitimate user-interface reasons for iPads not running macOS, there is no argument for why they can't have the same security policy as a Mac[1]. But the iPad was born from iOS[2] and intended to displace personal computers, and thus inherits all of the iPhone's baggage, including the App Store lockout with no owner override.

Since Macs were always open, Apple's management can't insist on locking the system down without pushback, both internal and external. Software developers expect to be able to ship their own software ecosystems, and users expect to have full control over their Macs. Locking down the Mac makes it a worse Mac, because the product is defined by its openness.

[0] The Mac and everything that came before it.

[1] Nor is there an argument for why Apple TV can't have an owner override, either. It's even less capable of spying on you than your Mac!

[2] This is actually backwards; the iPad came first as a tablet tech demo, and then its tech was lifted to make the iPod Phone work. But then Apple backported the finished phone OS onto the iPad, which is where all the iPhone-isms come from.


> Locking down the Mac makes it a worse Mac, because the product is defined by its openness.

s/mac/computer/ and you're right on the money. All computers are defined by their openness - even Apple products. A basic Turing-capable machine should have the capability to copy arbitrary software to and from the memory of the device; iOS doesn't allow this. Now, I've heard every philosophical waxing about "product segmentation" and "muh iPhone security", and all of that is fine. Nobody has to leave the App Store if they don't want to. But Apple's limitations on iPhone capabilities are entirely arbitrary, and there is quite literally nothing that stops them from adding a little toggle button in iPhone settings that says "Reduced Security Mode" or something. Nothing besides greed.

Apple's ideal of a smartphone is so far-removed from the ideals of general-purpose computing that they need to be litigated into submission, just for common-sense upgrades like a universal charger or upgrading to USB-3.0. They chose this state of affairs for themselves.


Apple generally doesn't get in the way when users want to boot something other than macOS on their Macs, and this has been true for decades. In the late 90s there were distros of Linux for PowerPC Macs (Yellow Dog mainly), and BeOS could run on several models of PPC Macs. And of course, Intel Macs had Boot Camp, which originally was mainly just a BIOS/CSM compatibility layer that sat on top of Apple's EFI implementation to allow OSes that didn't support EFI or supported it badly (at that point, mainly Windows) to boot on them.

The one hitch in that was the brief period where Macs used a chopped-down A-series SoC in place of traditional x86 motherboard components (the T1/T2 chips), but they provided Windows drivers for those, making the difficulties Linux had with them an issue of drivers rather than an inability to boot.


How likely is this to make some kind of stride in the macOS gaming world? Perhaps some kind of "headless" Linux VM that functions similarly to Parallels or Wine and can then essentially run a Proton layer to bring macOS up to par with Linux gaming. Can someone with more knowledge than me explain whether that is feasible in the coming months/years?


You might see a few games like Diablo 2 or DOOM running, but modern games are almost totally out of the question. The biggest roadblock is getting a complete Vulkan driver, which is at least a few years off. That would allow for DXVK to run, which enables the majority of Proton titles.

And then there's the issue of the stack itself. It's hard enough getting Wine and DXVK to run on hardware designed to run Windows/DirectX respectively, but running it through Rosetta/Box86, on Asahi, using half-finished video drivers, is probably not great for stability. Just my $0.02.

Edit: I thought you were talking about Asahi gaming. The state of Proton on MacOS is hopeless (Valve devs abandoned it after the Catalina announcement), but you can definitely run it in a VM if you're so-inclined. Wouldn't necessarily be any faster than using Windows in a VM, but it's still an option.


Box86, Box64, or FEX-Emu. The latter is quite promising.


I don't think that's at all the goal or even vague direction of the Asahi Linux project. It's to replace macOS, not to run as a VM within it.

It's already possible to do what you want, with Parallels and any modern arm64 Linux distro, as long as the games are compiled for arm64 / you're willing to take the performance hit of emulation.


Not really macOS gaming, but it could be a big thing for MacBook gaming. My only laptop is a MacBook, and it's powerful enough to run many of the games I play, but they only run on Windows/Linux.


> How likely is this to make some kind of stride in the macOS gaming world?

It doesn't make any strides unfortunately.

> Perhaps some kind of "headless" Linux VM

Linux narrowly supports the Steam Deck's goals. It doesn't make sense to use on a device like a Mac.

> Can someone with more knowledge than me explain if that is feasible in the coming months/years?

An end user is best served by Parallels. Overall, it doesn't make sense to run Windows-only games on a Mac.


Well, this could be done this way:

1. A Linux VM which has Steam installed on it.

2. A project like virglrenderer[1] for macOS, which processes Vulkan commands from the VM via virtio-gpu and sends data back. (I don't know exactly how it's done.)

This would allow games in the Linux VM's Steam to run at near-native GPU performance. On the CPU side, there is still a need for instruction emulation like QEMU. Or, Apple posted somewhere that you can run x86_64 Linux binaries inside the VM with the help of Rosetta, or an ARM64 Linux VM with Steam running via FEX-Emu.

[1]: https://gitlab.freedesktop.org/virgl/virglrenderer


Impressive! Is this the same test suite that was already run on OSX using Alyssa's user-space driver, but now on Linux using the Rust kernel driver?

If so that seems to be shaping up REALLY well!


I can't wait till we get a Linux iPad Pro


I'm afraid most classic Linux applications were not designed with multitouch input in mind. It may improve eventually as newer software is created.


It's open; we will add the gestures.


But with a Bluetooth mouse and keyboard, it could be a compelling form factor.


If you count the size of all the devices, it's not that compelling anymore.



iPads don't have the open bootloader of the Macbooks that would allow that.


That's an obstruction, but usually someone eventually finds a way around that part. However, we never get a usable OS for the hardware even if we can run arbitrary code. We have been able to run Linux on older iPhones for a while now, but I'm not sure it's ever been a useful thing to do.


Also, another drive-by question, but I was surprised that the Apple TVs have A-series chips. Has anyone worked with Asahi on the A-series? I assume the performance isn't great, but I'm curious about the technical challenges.


AFAIK, the latest A-series chip to boot Linux is the A7/A8 (so, iPhone 7-ish): https://konradybcio.pl/linuxona7/


A7-A11 via checkm8.


The A-series don't support booting a third-party OS; unless someone finds a security issue that can be exploited to make them boot something else, it won't be possible to run Linux on them.


If you pick up an older device running on an A11 SoC or earlier, they have an unpatchable vulnerability in the bootloader code, so those devices could be repurposed without worrying about a locked bootloader.

https://arstechnica.com/information-technology/2019/09/devel...


I know, but my mind automatically thought about the latest which contains an A15 for some reason :|


Does this mean there’s a decent chance to get this working in UTM? How much is this stuff transferrable to QEMU?


I'm not an expert on this, but my understanding is that GPU acceleration in virtual machines uses a "passthrough" method where the graphics calls on the guest system are simply translated to public APIs on the host system, and go through macOS's GPU drivers as usual. VMWare Fusion has had this capability on M1 since about a month or two ago.

This project is designed to provide an open-source replacement for macOS's GPU drivers in order to boot Linux bare-metal, so it solves an entirely different problem. Maybe some of the knowledge will carry over -- especially the userspace side of things Alyssa was working on -- but as far as I know this is not automatically transferrable.


GPU passthrough is the opposite: there the hypervisor exposes the hardware to a guest, and the guest runs the bare-metal driver.


For UTM the interesting tech is Venus / virgl (https://www.collabora.com/news-and-blog/blog/2021/11/26/venu...). Like the other comments say, this work is not really applicable to QEMU.


Sorry for the off-topic question, but can a person dual-boot macOS and a native Linux install on the M1???


Apple designed a new bootloader that enforces security settings per partition rather than per machine, so not only can you run a third-party OS, doing so does not degrade security when you boot the macOS partition.


I had to read that three times to make sure I was reading right. That's very...open...of them.


It really doesn't look like Apple has any intention of locking the MacBooks down. They just don't care all that much about building stuff that's incompatible with other OSes; if someone wants to put the hard work into making Linux work, there are no unreasonable obstructions.


Yes, it's very un-Apple of them, which some are taking as Apple being accepting and welcoming of other OSes, in particular Linux, on Macbooks. Which is an optimistic take, IMO, even if not unfounded.


>it's very un-Apple of them

Apple has never locked down Macs.


It's not un-Apple of them; people just make up the idea that Macs are going to be locked down next year, and have continually made it up for the last 10 years (or more).


Don't assign to Apple what you can blame on Intel.

I sincerely believe the per-partition security to be an improvement Apple wants, but couldn’t deliver with EFI. It allows developers to have unsigned Beta versions of macOS without degrading security for the other OSes.


Yes. That's what happens by default when you run the Asahi installer


The correct, straightforward answer is: not yet. (The installer is neither reliable nor stable.)

This is why everyone else (non-developers) is waiting for a reliable release first rather than trying it out: something goes wrong, another person finds out, and they say "it's not ready for general use."


Your first sentence is not accurate as, by default, the Asahi installer sets up a separate partition for dual-boot purposes. The Asahi developers have even recommended keeping this configuration in order to receive future firmware updates from Apple.


Are we any closer to eGPU support for Apple Silicon?


I remember seeing that this wasn't possible because the CPU uses 16KB pages but GPUs only know how to work with 4KB, though I can't find the reference anymore. I don't know if this just requires a change in the VBIOS or a whole new GPU, but it would mean it's up to Nvidia, AMD, or Intel to make a Mac-specific GPU, even though no Apple Silicon devices have PCIe slots, so it would only be for the small minority of Mac users inside the small minority of eGPU users.


Under macOS? Not until Apple acts, I guess.


Presumably on linux this would "just work" (assuming drivers for the IO ports are written)?


Thunderbolt drivers are still WIP for Asahi so I’d assume eGPU support is still in the future.


I am skeptical that we will ever see that.


Given that Thunderbolt 3 is basically PCIe, the only far-fetched part would be having to write a third-party driver for the card. Given the high-end businesses that exist in this world to work with video (e.g. DaVinci on the iPad, which was recently on the front page), combined with a demand for absolute highest-end GPU performance, I think there's a business case for it. I wouldn't hold my breath waiting for it to be open source, though.


AMD gpu drivers are open source and already work on multiple platforms thanks to being part of the kernel and Mesa.


The Linux kernel, yes. I was thinking of Darwin when I wrote that


Considering how little commercial support there is for basic GPU functionality on regular Linux, it's wishful thinking to expect a mysterious stranger to write this code for you. Especially for black-box hardware.



I don't think this is generic eGPU support. AFAIK, Nvidia's render offloading is baked into their driver. Not sure if it has an AMD/Intel equivalent.


The mysterious strangers I have in mind are Blackmagic Design, who write DaVinci Resolve for both Mac and Linux and have the economic incentive to. They made $576 million in revenue last year, so it's not that far-fetched.



