Linux Running on Apple M1 (twitter.com/cmwdotme)
378 points by ig0r0 on Jan 20, 2021 | 275 comments



Linux might run, but good luck with the drivers. And good luck with it keeping working with future upgrades of both, hardware and software. Heck, even today it is pretty difficult to get things working reliably unless it is a thinkpad or some other Linux friendly brand.

A Mac with subpar support for a webcam, energy saving, suspend/resume, the trackpad, brightness/volume controls, etc is not a laptop, it's an expensive paperweight.

In my opinion, other than doing it for the sake of learning and the challenge of it, this will lead nowhere in the long term unless you get any kind of commitment or support from Apple. And I hope I'm wrong but that's very, very unlikely to happen any time soon.


Funny, that's how I feel with Windows and Mac.

I grab a machine and install Linux, and it works more or less out of the box. Maybe a few fixable quirks. And I don't use Thinkpads.

I try to use Windows or macOS, and it's a coin toss. Windows handles USB like hot garbage (hub balancing/buffer sizes leading to devices that won't activate, webcams glitching, input lagging at random intervals), there are display issues (macOS doesn't support DisplayPort MST), dock connectivity problems (macOS freezing or crashing when connected, except when it suddenly works for a day; Windows playing disconnect/reconnect sounds in a loop when the machine goes idle), and more, including, just today, inexplicable crashing.

Install Linux or plug all the "made for Windows/macOS" hardware into a Linux box, and all the problems go away.

My conclusion, supported by having been a proprietary kernel driver developer for Windows, FreeBSD and Linux, is that any hardware and driver combo tends to be a coin toss, irrespective of platform.

But with Linux, drivers that would otherwise be abandoned after a project was shipped, get a chance at being fixed and improved that their proprietary counterparts can never even dream of.


The thing is, many more people are using Windows than Macs, and many more people are using Macs than desktop Linuxes. So your anecdata goes against a mountain of anecdata from people using those platforms.


You are incorrectly assigning platform popularity as equal to driver maintenance.

First, non-desktop uses of Linux significantly outnumber uses of Windows and macOS in general, and a large majority of drivers and functionality is shared there.

Second, drivers not made by the usual giants are, as I mentioned earlier and experienced first hand, written and shipped once, maybe with a few updates for obvious issues, but this shit is harder than you think, with obscure uncaught bugs littered all over the place. Drivers need pretty much perpetual maintenance for all but the simplest blinkenlights.

Third, even "desktop" drivers are run through their places in server environments, through e.g. workloads like Stadia.

Fourth, proprietary drivers are not subject to any scrutiny whatsoever, whereas you won't get past the gatekeepers if you don't pull your shit together for the open source kernels.

Popularity does not fix unmaintained code. Giving even just one annoyed and skilled person the ability to change the code does. And sometimes, external companies like Collabora will decide to overhaul things, which they again can't do for a proprietary driver.


I know all of that, but it does not square up with real life.

Go to your favorite computer store. Randomly pick 5 laptops, regardless of the price. So no cherry picking.

Use these laptops with Windows installed on them, for a reasonably long period, for example 1 month, as your daily driver. Perform varied tasks on them, such as printing, connecting to external displays, to projectors, other peripherals, playing modern 3D games on them, etc.

Then do the same with Linux.

I'm willing to bet $100 that on average Windows will run better on them, have a longer lasting battery life, better network connectivity, etc.

And if you're trying to tell me that on average Linux runs better than MacOS on Apple laptops and desktops, then this discussion is not worth continuing, we both probably have better things to do with our time.


You will pay me just $100 to buy, out of my own pocket, five random laptops regardless of price, some of which will be several thousand dollars, from the local electronics store.

You're very right that there is no value in continuing a discussion if that's the level of debate you're presenting. I'm off.


There are many people like me, however, who choose to run Linux in a VM under Windows so they don't have to deal with bugs.

Linux as a personal operating system got WAY better over the last years, sure, but you can't seriously argue that it got Windows AND MacOS beat.


I am arguing just that, with a reasonably large and varied sample-size, although heavily biased towards macs over windows-powered machines. I get a lot of machines from clients.

Now, a mac on its own tends to work quite well out of the box, but this does not hold for peripherals, and I feel like the machines always end up developing... quirks.

I always felt that the path you picked just gives you the sum of all problems with few of the benefits. I could maybe do it the other way around for compatibility, but the laptop wouldn't be able to stay on this side of the balcony railing for long if I did it the way suggested... :|


hmmm ... yes. anecdata.


The thing is, many more people are using phones and tablets than Windows, and many more people are using Windows than Macs, and many more people are using Macs than desktop Linuxes, and many more people are using desktop Linuxes than desktop BSDs. So the anecdata goes against your mountain of anecdata, which goes against a world of anecdata from people using those platforms.


Yeah, because when I use a phone or tablet I have to worry about installing my own drivers :-)

Apples, oranges.


When was the last time you installed your own drivers on a desktop Linux system, and for what?


2018, for keyboard and touchpad support on a 2016 MacBook Pro. (That driver was eventually upstreamed, but wasn't at the time.)

But I guess that's kinda in-line with this thread, that Macs are a pain to run Linux on.


I don't remember installing any drivers in Linux in last... I don't even know how many... years. For the hardware I happened to use, it was Mac-like experience.

And actually, Windows 10 is also approaching this state, but it is not there yet.


The great thing about Linux is that almost every driver is included with the kernel itself, so you don't need to worry about installing drivers. Of course, there are vendors who don't like to cooperate with Linux developers, and release kernel-tainting drivers outside of the mainline kernel.


I last installed drivers on Linux... in 2007? 2008?

I last installed drivers on Windows late last year.


I've had to on Linux for a very common wireless networking chip. Also for a scanner from a very common printer brand (that ironically has excellent support for its printers on Linux). And this is Debian on a ThinkPad--a very common combination. Almost as good as Windows, but given I had to load my wireless drivers via USB, I still consider Windows the gold standard when it comes to built-in drivers. It at least does well with networking drivers, which are the most important and essential drivers to have. Everything else Windows can easily download and install automatically over the network. I last did a manual install for drivers on Windows years ago cos it just does it by itself now.


Hm? Debian is famous for not including nonfree wireless firmware on purpose (drivers are an entirely different thing). They also provide an optional installer with these included.

So your example seems to be completely unrelated to the question.


There is no choice in OS for mobile. It's strictly tied to the hardware, so I don't see how that could show any kind of user preference without being dominated by hardware preference.


And smartphones mostly run Android, which is the Linux kernel.


>Funny, that's how I feel with Windows and Mac.

Based on...? Apple is pretty straightforward: they support hardware until they don't. And they're quite explicit about ending support and it's almost always a major release. You might be able to hack support after that, but I've literally never had it be a "toss-up" about when Apple was or wasn't supporting their own hardware.

As for Windows... I've got a 10 year old 2600k based desktop that runs the latest version of windows flawlessly. I guess if you go back 20? 30? 40? years you might find something that can't run the latest version of windows, but you're going to be down a really, REALLY obscure rabbit hole. I can't say I ever recall it being a coin toss, it was about 10 seconds on google of finding or not finding a driver.

Linux on the other hand... the support of hardware is awesome, but determining if something is or isn't supported is generally an afternoon of reading mailing lists.


I'm a huge apple fanboy, but there are still issues that crop up on macOS, like it or not.

My favourite example is one that the OP mentioned - not supporting multiple stream transport on Displayport. For those who aren't familiar, Displayport MST is a feature that allows multiple streams over one Displayport cable. Some monitors support this directly, meaning you can have

    Macbook -> Display1 -> Display2
rather than

    Macbook -> Display1
    Macbook -> Display2
This is great, and it really helps clean up your desk in multi-monitor setups and maintain that "one cable" philosophy that I, personally, love. And macOS supports MST too, which is great.

Except they don't support it for this.

What they support MST for is to allow vastly-higher-resolution displays on macOS, such as 5K displays, by splitting the display image over multiple stream transports to bypass the limit on resolutions and refresh rates that Displayport provides (or provided at the time).

For some reason, they just haven't bothered to implement MST to allow for multiple displays; it exists and is supported, but only for high-resolution displays. This is great if you're googling around and see that macOS supports MST, then you buy monitors which support MST and hey surprise it doesn't work and there's literally no indication why.


Huh, I've only ever heard of MST in the context of too-high-res displays o_0 Never heard of raw DP supporting daisy chaining, I thought only Thunderbolt does that.


Thunderbolt/USB-C use the DisplayPort signal to drive monitors. I have an Apple "Thunderbolt" display which happily runs from laptops that don't do Thunderbolt.

Dell makes some Displayport monitors which support daisy chaining.


I run all three here. On Windows 10, if I stay on the "happy path" everything "just works". I have a Lenovo laptop, and our desktops are Xeon boxes with Supermicro motherboards. No weird USB problems, etc.

Linux, however, is another story. Bad sleep support, forget about printing, scanning, GPGPU computing, etc. We use it when we have to.


For fixing sleep look for an option in Lenovo's BIOS. I had it set to windows sleep mode by default. Switched it to Linux mode and it has been working reliably on my x13 AMD.

I upgraded from a ThinkPad X240, on which sleep worked perfectly too.


MacOS a coin toss and Linux being robust regarding drivers/hardware support on desktop? Are you talking about Hackintosh, or do we not live on the same planet?


It's the smaller things. Obviously MacOS won't have trouble with mac hardware, but my work macbook can't wake up my monitor through HDMI, or chain DP displays, or connect to my phone's storage through USB, etc...


I’ve never experienced the wake issue, but I always use usb-c to DP or HDMI and apparently those aren’t affected? Assuming it’s the same issue, a little googling shows the problem was fixed a year ago.

What phone are you having issues with? Every android phone I’ve used will communicate with adb. iPhone has never used USB mass storage, and support for that has nothing to do with MacOS.

Can’t comment on the daisy chain issue, I just learned that was a thing.


My 2019 Macbook Pro has consistent issues waking from sleep. Nothing plugged in except the included power supply.

Screen will remain black. Or, screen will power on, display nothing. Or, screen will power on, mouse cursor will appear, but no password prompt.


You should return it and get a free replacement. It’s clearly defective.


No, this is a software problem, not a hardware problem. The replacement would exhibit the same behavior.


Things like printers/scanners can be a problem. With the Mac, I cannot scan in color on a Samsung M2070W MFP (with the Samsung/HP driver installed, which must be done manually).

No such problem with Linux.


I've had far fewer problems with Linux. Just an hour ago I was dealing with someone whose Roboteq motor controller wouldn't work on Windows or Mac without additional drivers, but it's basically plug-and-play on Linux.

Yeah there are things Linux doesn't support but just don't buy them. The things that it does support generally don't require any driver installation.


I miss the golden days of the Hackintosh! Would love to see someone bring this back somehow. Running Linux on a machine like an M1 MacBook Pro would be a dream.


I'll do you one better:

macOS does support MST! It actually does! But it only supports it for splitting one display image over multiple streams, to overcome bandwidth limits on Displayport streams.

For providing a signal to very-high-pixel-count displays, macOS uses MST.

For providing multiple displayport signals to multiple displays? Nope, not implemented.

Imagine my frustration after a day of googling and finding out that macOS supports the feature we wanted, but not the use that we wanted.


You're mostly wrong; MST does not increase bandwidth, it splits the bandwidth of a single DisplayPort link. To increase raw bandwidth for 5k/6k they combine multiple HBR2/HBR3 links (over Thunderbolt or with multiple cables), which is the opposite of MST.

MST is supported specifically for early 4k displays that had scalers that couldn't handle 4k60, but could handle half the resolution, so they sent two streams. But that wasn't a bandwidth limitation; old MST 4k displays and modern SST 4k displays both used a single HBR2 link.


I believe they meant increases bandwidth utilization.

MST's primary use case today is multiple displays, by "daisy-chaining" through built-in MST bridges, dual-DP dongles or docks.

Support for hacky displays is less interesting, and hopefully not relevant today.


That is basically what happens when you don't support MST.

The screens light up from the same DP stream, which is also why you can have, say, 2x 4k@60Hz monitors in this configuration, where true MST would need to drop them to 30Hz or a lower resolution.


> having been a proprietary kernel driver developer for Windows, FreeBSD and Linux

May I ask how you learned how to do this? I'd like to learn too! I had a great experience developing a Linux user space driver for my own laptop's LEDs. Couldn't figure out how to control the fans though.


1. Accept that printf (preferably over serial) is The One True Debugger. It is the tool you always have - if you can't get a print over serial, you're in too deep to use a debugger anyway.

2. Play around with embedded. You can use an Arduino, but get rid of the Arduino IDE. Once you've rid yourself of their weird environment and code in C, you're pretty close to what kernel programming is: direct hardware control, debugging over a serial console, and if you mess up you don't get saved by a segfault.

You can upgrade to playing with ARM boards later if you want. Things like a Raspberry Pi can also be useful to boot random kernels you've built later on for HW stuff, otherwise you can use VMs. QEMU can boot a kernel file directly, which makes debugging easier.

3. Look at one of the tutorials for writing hello-world kernel modules (see the sketch at the end of this comment). There are also usually smaller cleanup tasks you can do to get started submitting work. Looking at both Linux and FreeBSD can be useful, and things like Plan 9 have very small kernels that can be used as reference. Linux and FreeBSD are not that different. (Windows is a pain with really weird interfaces, but it can be made to work.)

4. Find something you want to do with the kernel or fix in it.

Kernel developers aren't that common, so I imagine a lot of places are willing to train people. The first job I had doing kernel work was pretty open, and just threw minor stuff at me to begin with, e.g. "things stopped working after kernel X.Y, figure out what happened". Bisecting, testing in VMs, printk'ing a lot to compare state, stuff like that. I later ended up being the owner of the kernel drivers for all our platforms, so I guess I did okay. :)
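For reference, here's roughly what the hello-world module from step 3 looks like (a minimal sketch, not tied to any particular kernel version; the names are just placeholders):

    /* hello.c - bare-bones module sketch (names are placeholders) */
    #include <linux/init.h>
    #include <linux/kernel.h>
    #include <linux/module.h>

    static int __init hello_init(void)
    {
            pr_info("hello: loaded\n");   /* printk at info level - see point 1 */
            return 0;                     /* nonzero here aborts the load */
    }

    static void __exit hello_exit(void)
    {
            pr_info("hello: unloaded\n");
    }

    module_init(hello_init);
    module_exit(hello_exit);
    MODULE_LICENSE("GPL");

Build it against your running kernel's headers with the usual obj-m makefile, insmod it, and check dmesg; you can also drop it into a kernel you boot directly under QEMU, as in step 2.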


I got to build a driver once, making an NDIS LWF encap/decap driver for Windows. I found it extremely soothing, and kind of old school - I had to use a real machine in my office with firewire debugging, and use windbg like a greybeard.

But not having the right documentation was a challenge. MSDN is okay, but the weird mechanics of MDL chains don't really get discussed on Stack Overflow.


> But not having the right documentation was a challenge.

How do people overcome this? I managed to reverse my laptop's LED commands: they were implemented via USB so I used wireshark to intercept and analyze the data sent by the proprietary vendor software. What if it's some ACPI thing though? Or some memory mapped I/O chip? How do people figure out how it works?


Then tell me why my Elantech touchpad keeps freezing randomly on my Huawei Matebook 14d running Linux, but works fine on Windows.


Agreed. Check out the state of Linux support on Intel Macs that were released in and after 2016[1], it's abysmal. Once Apple started adding non-standard hardware, Linux support never caught up.

It isn't like the lack of support is Linux developers' fault. Apple doesn't provide datasheets for their hardware, and they don't cooperate with developers writing drivers for their custom hardware.

There are hundreds, if not thousands, of ARM SoCs that "run Linux", but that doesn't mean much because they're actually running Linux forks, and someone needs to maintain those forks, and build and release custom images for each SoC. I don't see M1 Macs diverting from that fate without significant support from Apple themselves.

[1] https://github.com/Dunedan/mbp-2016-linux/


> that doesn't mean much because they're actually running Linux forks

Many of these Arm SoCs are running Linux forks because there are some terrible baked-in drivers, and no spec associated with them. Still, that is a step ahead of the M1, as there are at least drivers that don't have to be decompiled and reverse engineered.


These drivers are baked in, even if there are specs for them.

The thing is that these SoCs have no PCIe or other enumerable bus. You (=your kernel) must know what hardware it is running on and which drivers to load, without being able to ask the hardware what is really present. One wrong POKE and the entire system can hang.

And that is on top of the problem of how to boot in the first place. There is no such thing as UEFI on these SoCs. Every single one is a special snowflake with its own special way to boot.

Hence, kernels built for specific systems, even if it is just Device Tree.


Things have gotten a lot better in ARM-land recently. It's possible to build a generic ARM kernel that will boot on a lot of hardware, as long as the bootloader passes the right device tree blob.

Most device vendors don't really have a need or incentive to use this, though. A generic kernel is much larger and wastes flash space, and most vendors don't care about running anything but their OS image on it, so they of course just build a kernel specific to their hardware.

But in theory it should be possible at some point to use a generic ARM Debian installer to install your own OS on a random ARM device, assuming the relevant drivers and device tree have been upstreamed.


This is a little out of date. Device tree exists explicitly to avoid needing a machine file for your board. Bootloaders like uboot will load your kernel and dtb together, and the kernel uses the dtb to enumerate the hardware on the system.

Again, the problem isn't that the mainline kernel _couldn't_ support these boards, it's that no one wants to put in the work to bring their implementations up to the standards of the mainline kernel.
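To make that concrete, here's a rough sketch of what the kernel side looks like: the bootloader hands over a dtb, and a driver declares which nodes it can handle via a compatible string (the "vendor,foo" string and the names below are made up for illustration):

    /* Sketch of a DT-matched platform driver; "vendor,foo" is a made-up
       compatible string, not a real device. */
    #include <linux/module.h>
    #include <linux/of.h>
    #include <linux/platform_device.h>

    static int foo_probe(struct platform_device *pdev)
    {
            /* Runs when a dtb node with compatible = "vendor,foo" is found. */
            dev_info(&pdev->dev, "probed from device tree\n");
            return 0;
    }

    static const struct of_device_id foo_of_match[] = {
            { .compatible = "vendor,foo" },
            { /* sentinel */ }
    };
    MODULE_DEVICE_TABLE(of, foo_of_match);

    static struct platform_driver foo_driver = {
            .probe  = foo_probe,
            .driver = {
                    .name           = "foo",
                    .of_match_table = foo_of_match,
            },
    };
    module_platform_driver(foo_driver);
    MODULE_LICENSE("GPL");

So the same generic kernel image can run on different boards; what varies is the dtb the bootloader passes, which is why upstreaming the drivers and device trees is the real work.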


> Every single one is a special snowflake with its own special way to boot

Eh, the majority of embedded crap comes with U-Boot (sometimes built without EFI support but always with Linux image support, at least).


I've run into more than a few ARM SoCs that use custom U-Boot forks, too.


I don't think I have used an Arm SoC that _didn't_ use either a custom uboot, or some custom stage-one loader (often a stripped-down uboot).


Apple is a hardware vendor. How are the parts they use "non-standard"? What are the "standard" parts that Linux uses, and further, what consumer or business hardware has Linux indicated is standard?

It appears what should be being asked for is Apple to commit resources to natively support things like: Docker, Kubernetes, Virtualization and Linux drivers on their M1 Macs going forward.


Interfaces; it is mentioned in the article

> Apple designed their own interrupt controller, the Apple Interrupt Controller (AIC), not compatible with either of the major ARM GIC standards. And not only that: the timer interrupts - normally connected to a regular per-CPU interrupt on ARM - are instead routed to the FIQ, an abstruse architectural feature, seen more frequently in the old 32-bit ARM days. Naturally, Linux kernel did not support delivering any interrupts via the FIQ path, so we had to add that.

and many others

https://corellium.com/blog/linux-m1

It is on top of HN now https://news.ycombinator.com/item?id=25859907


The T2 chip is non-standard and unique to Macs. It affects the boot process and disk access in non-standard and proprietary ways. Apple's SMC is also non-standard, as is their UEFI implementation.

You can see a list of differences between Macs and standard PCs here: https://en.wikipedia.org/wiki/Apple%E2%80%93Intel_architectu...


Not many people want to run Linux on an Intel-based Mac because you can run Linux on a much cheaper Intel machine.


No one should be purchasing a brand new M1 Mac with the expectation of perfect Linux support any time soon.

However, I’m optimistic that these will be mostly usable on Linux before they’re too obsolete or outdated. The platform is so popular and iconic that it’s drawing a lot of attention from Linux devs and reverse engineering crowds.


> No one should be purchasing a brand new M1 Mac with the expectation of perfect Linux support any time soon.

That's sound advice.

As someone who doesn't use Linux as their daily driver, can I expect Linux to run perfectly in a VM on an M1 Mac?


I was able to install an ARM build of Ubuntu Server using the Parallels beta, and it works fine.


Good to hear this. I am waiting for my M1 Mac, and if I can get Linux arm64 virtual machines running for development purposes, I can hack the rest.


It doesn't work fine: the UI resolution is restricted, there are sticking-cursor issues, and no Parallels Tools are available.


I'm also interested in this. I usually run Ubuntu via Vagrant + VirtualBox on my Mac.


I'm not at all as optimistic as you are. You still need out-of-tree drivers to run Linux on a 2017 MacBook Pro, and various things don't work at all on Intel Macs from 2016 (possibly earlier) to the present.

Sure the M1 is the new shiny, and people will be attracted to it in the short term, which might boost reverse-engineering efforts. But I expect that to die down as people get frustrated, and we'll have the same (or worse) situation as we do running Linux on Intel Macs.


Right. And by the time there is halfway decent support for first gen M1s a new generation will come out with a ton of new breakages.


So the advice is to buy a second hand one 5 years from now?


Maybe. But consider that a non-touchbar 2016 MacBook Pro, which is turning 5 years old this year, still doesn't flawlessly run Linux. And that doesn't even have the custom Apple T1/T2 chip. The ARM hardware will be much harder to deal with.


MacBooks almost never have a seamless Wi-Fi experience on Linux. At best you will have to manually install a Wi-Fi blob, and for the newer ones the advice is to just get a USB Wi-Fi adapter.


Even on old and modern ThinkPads it feels like a lottery which hardware feature will stop working at a particular time, with the distros I've worked my way through.

Ethernet, waking up from sleep mode, brightness adjustment... take your pick.


I discovered last week, to my chagrin, that the RAM on my X1 Carbon Stinkpad was starting to fail--Memtest86+ confirmed it. Memory corruption is insidious, so I rushed to the internet to buy new DIMMs, opened the sucker up... and there's no DIMM. The RAM is just soldered onto the motherboard. No upgrade, no fix. This is now a pile of unreliable junk that randomly flips bits in memory. So data evac commences, with MD5 hashes and double- and triple-checking everything.

A reminder that not only software bitrots, but hardware too. Make sure not to buy a Stinkpad (or any hardware) with non-replaceable parts!

(and I just bought an M1 Macbook Pro...)


Buying a mac after complaining about how you can't replace ram seems a bit odd.


Feelings, not reason, probably applied there. At least the logic was consistent in both cases.


It was a decision I was considering for months; the memory corruption just forced it. I didn't expect software to be this far behind. I was aware that the RAM and everything is even less upgradeable than a Stinkpad, but I am hoping this machine is more reliable overall and will give me 5+ years of service. (My last MacBook Pro from 2010 was still kicking up until 2018, when I switched to the Stinkpad because of the keyboard.)


Yeah, I know. But the chip kicks butt :)

Software will catch up with the M1, I hope!


> Software will catch up with the M1, I hope!

Can't wait for [Your least favourite Electron App] to use even more CPU cycles!


I'm not sure how buying an M1 MacBook Pro will help, considering it also has soldered RAM?


> Make sure not to buy a Stinkpad (or any hardware) with non-replaceable parts!

Correct - that is why I bought a ThinkPad X1 Extreme - both SSD and RAM are user-replaceable. Granted, it's not as thin as the X1 Carbon, but I don't need to carry it around much, so that is not a big issue for me.


I don't really understand why modern devs can't carry a 5-pound laptop. It's not exactly a burden.


5 pounds is pretty heavy.


If you can pinpoint which pages are affected (in most cases you can), you can just add that range to the blacklist.

Can work in the meantime, before you have a new machine.


There were dozens of ranges, too many to blacklist. Which made me suspect that it might actually be the memory controller. In which case, it's totally F'd. But given this I went for total data evac.


Couldn’t you just move the disk to another device and read it from there instead of risking data corruption or going through all the hassle of comparing checksums?


Perhaps. It has an SSD, so I'd need a thingie like this adapter to do that: https://www.amazon.com/SATA-Adapter-Thinkpad-Lenovo-Carbon/d...

But I don't have a SATA reader either, so I'd need one of those :-)


I was more so referring to the Linux experience; I haven't had any troubles with ThinkPad hardware as of yet, but I know that there are complaints around it as of late :)

Though, if you want a serviceable system, I'd stay away from Apple - the RAM in your new M1 mac is actually included in the SoC, not sure how much you'd like to get in there!


This is a depressing thought to hear on a site titled Hacker News. Looking back at the past 20 years, I am quite confident it is just a matter of time before people get a couple drivers working fine on M1 systems (esp. considering Apple does not seem to be actively fighting this). I am sure folks will make it work considering the relative popularity of the device compared to all the esoteric devices that no one uses and Linux still supports.

Sure, documentation helps, but it is overrated--people get lots of things working by reverse engineering.


On the contrary, this could be fantastic for a headless server running on an M1 Mini to build and test ARM code before deploying to AWS Graviton. It doesn't all have to be about laptops.


For about the price of a Mac mini:

https://shop.solid-run.com/product-category/embedded-compute...

You get upgradable ram, a PCIe slot for your favorite GPU/etc, and actual mainline linux support sufficient to boot most random linux distros (ok the onboard 10G+ nic might not work without patches).

It's not the fastest machine around (A72s), but there are 16 of them, so it does a decent job of building ARM software and running VMs.


Well... for the price of a Mac mini, and then some, you get a bare motherboard with a small amount of onboard storage (64G eMMC). No RAM, no video, no power supply or case. And it's not like you can just throw an off-the-shelf GPU in, either; neither Nvidia nor AMD graphics cards work on ARM devices -- so if you're expecting something you can use as a desktop, you're in for a hard time.


Both amdgpu and nouveau work absolutely fine on ARM devices, why wouldn't they?!

In fact I made the FreeBSD port of amdgpu work on my Macchiatobin :) Absolutely smooth experience btw, video output works even in UEFI (it actually runs the GOP driver from the card in QEMU), amdgpu works perfectly (played vkquake, supertuxkart, openmw, etc.)


The results reported at [1] had led me to believe that the AMD/NV ARM drivers were still unready. Am I mistaken? Are those issues specific to BCM283x?

[1]: https://www.jeffgeerling.com/blog/2020/external-gpus-and-ras...


Yes, the BCMwhatever's weird PCIe controller literally returns corrupted data when the driver tries to read from the card, possibly due to not supporting some kind of 64 bit read.


That board is pretty close to the same experience you would get with a random x86 mITX board (once you flash the UEFI firmware on it). It has an M.2 NVMe slot, SATA, etc. It's one of the lower-cost standards-based Arm boards.

So yes, you have to bring your own storage, RAM, PSU and case. But one can cheap out and probably get all of them for <$200, or go crazy and put in 64G of ECC and a few TB of storage. The cheaper off-the-shelf RAM/storage makes it a lot cheaper than a loaded Mini with similar specifications.

And as another commenter points out, yes, fairly common GPUs tend to work in it. It's not perfect; for that you have to spend more, but the time you will spend fighting with the Mac mini + Linux is going to be worth the difference.


While a valid use case, it feels like overkill. There are much less expensive and more stable ways to accomplish this.


This is how a lot of people used Linux on PowerPC macs in the early aughts. It set them up for larger projects on IBM hardware and the goofy risc engineers got to walk around looking cool because they had a powerbook.


Agree, this could be a good use case.


Well, about the hardware support: the Ubuntu Desktop certified hardware list is quite long [1], and I'm quite sure there are a lot more perfectly working brands/models that just don't appear there. I've been using a Dell Inspiron model that doesn't appear in that list as a daily driver for 8 years now, and every major distro worked perfectly out of the box; some tweaks were required from time to time, but that's it.

[1] https://certification.ubuntu.com/desktop


I have confidence in the parallel efforts being spearheaded by Marcan. He knows what he's doing and he's working on it full time.


And that's why this has no long term future. There are not that many people able to achieve this. One single person can't do it forever.


Presumably Apple isn't going to redesign their GPU architecture every generation? How much maintenance do you see this needing?


Uhh... Linux supports a lot of hardware. Probably a lot of in-tree drivers that still get use were started as reverse engineering efforts by a small number of people.

Perhaps you're meaning to say Apple will iterate the hardware faster than people can add Linux support? That's plausible. It also was the status quo of Linux on PCs for many years. (Still the case with some hardware I guess.) That hasn't killed Linux yet.


Maybe someone else will also pick up some skills from this. He is streaming all his reversing sessions on Youtube, it is quite interesting.


Yup. The last Mac I ran Linux on was a 2016 MBP. And even then, audio and suspend/resume didn't work (well, suspend was fine, resume... not so much), and battery life was terrible. When I first installed it, the keyboard and touchpad required an out-of-tree driver.

That driver has been upstreamed, but I hear the 2017-and-onward touchbar MBPs still require an out-of-tree driver, and a similar set of things doesn't work.

In 2019 I finally gave up and got a Dell XPS13. It's... basically perfect? The keyboard doesn't suck like recent MacBooks, though the touchpad isn't quite as nice (though really, it's perfectly fine). The only thing that doesn't work is the fingerprint reader, but I knew that going in and didn't care. And subjectively the hardware isn't as pretty, but... oh well. On the plus side, I really do like the "soft touch" palm rest better than the harsh aluminum of the MacBook.

I'm definitely interested in the Linux-on-M1 project's progress from an intellectual curiosity standpoint, but I'm expecting long-term as a user it'll be just as -- if not more -- frustrating as running Linux on any Intel MacBook from the last 5 years, which means I won't bother.


My last Mac running Linux was a 2011 MacBook Air 13". My Linux (work) laptop is an XPS 15. Optimus on Linux is now completely transparent (merged into Xorg in April '20). Sleep doesn't seem to work well though: fans keep spinning and it drains about 10% of the battery per hour. If anyone knows how to get around that "active sleep" mode I'd appreciate it.

My personal laptop is an HP Envy X360 with a Ryzen 5 2500U. That was rough when it came out. Windows Update updated the BIOS on the laptop, which installed new GPU firmware that wasn't supported by the Linux driver for a few months. It's been solid since kernel 5.0 came out.

IMHO, the Linux experience on the AMD APUs is better than the Windows experience due to how rough the AMD windows drivers can be for the APUs. The fans are always shrieking at me under Windows but at idle they are off in Linux and the battery lasts significantly longer.


> And even then, audio and suspend/resume didn't work (well, suspend was fine, resume... not so much)

The computer never woke and needed to be hard-rebooted if it was ever allowed to suspend? I ran into this after installing Linux on a MacBook, downgrading the kernel to an old LTS release eventually worked.


Yep. The issue was that the NVMe controller wouldn't wake up properly. Downgrading the kernel wouldn't have helped, as support for that NVMe controller itself had just recently been added.


Yeah, other than as an experimental exercise I don't get why anyone would ever want to do this. You will spend WAY more money for the Apple hardware, and you will struggle to get things to stay consistent. If your goal is to run Linux then don't buy a Mac.


> You will spend WAY more money for the Apple hardware

I think right now the new M1 machines are actually priced pretty reasonably. If they'd run Linux, I would buy one in an instant.

I mean, try to match the Air with a Thinkpad, especially if you consider the screen. I think there is not even one recent AMD Thinkpad with a > FHD screen. And who wants a 2020 Intel machine?

Edit: To make this clear: No Linux, no Mac for me (as I don't believe in MacOS's future at all). But even considering the horrible keyboard, I think the M1 macs are totally worth it just for the hardware. That's a first. And I am the kind of person who feels all warm and fuzzy over ejecting an ultrabay hard drive, or changing RAM/display in < 5 minutes in a laptop.


If you care about display, trackpad, and battery life, the M1 macs blow everything else out of the water, and for pretty cheap.


> A Mac with subpar support for a webcam, energy saving, suspend/resume, the trackpad, brightness/volume controls, etc is not a laptop, it's an expensive paperweight.

There are Mac desktops as well. That said, I'd say the list of mandatory components on a laptop is:

- Decent GPU Drivers

- Workable power management with sleep/ awake

- Trackpad (this should be straightforward)

The webcam and brightness controls are stretch goals. I can't imagine the trackpad is vastly different from currently shipping ones. The webcam might cause some significant headaches since Apple secures that fairly tightly.

The onboard Wi-Fi modem is likely worthless; from what I recall, many of them are, even in Windows laptops, due to proprietary drivers.


Most Windows laptops thankfully use Intel Wi-Fi. Apple is unfortunately completely married to Broadcom. There are open drivers for Broadcom Wi-Fi, at least not-the-newest ones, but they do suck.


Good, sounds like things have improved a lot since I was using Linux. Sucky drivers are a starting point; the last time I was on Linux as a daily driver, I had to use a PCMCIA card with a binary blob from Broadcom because the internal wireless card was dead.

Hopefully with a little bit of money and some enthusiasm from the team, they'll be able to make more progress.

A lot of people bitching about this being a waste of effort, but all the things they do here have follow on effects. If they improve the Broadcom drivers for the M1, much of that work will directly benefit everyone else on Broadcom.


I'm not sure why you are so pessimistic. There are many community supported linux devices that work fine. The Microsoft Surface line being a prominent example. There are patches available for all recent kernels. And afaik all generations are supported. Here custom drivers have also been written.


Microsoft Surfaces don't use special hardware as much as Apple's laptops do. Especially with the M1, Apple uses its own GPU, boot system, power management, etc., etc...


>subpar support for a webcam, energy saving, suspend/resume, the trackpad, brightness/volume controls

Honestly this sounds exactly like my experience with linux on any laptop, including three thinkpads.

Linux is a great desktop and server OS but the UX of a laptop actually demands a certain level of polish/fit-and-finish that linux desktop environments just don't have, otherwise it's constantly getting in your way.


Surprisingly to me even, all these work fine on Debian. Debian needed me to side-load the Intel wireless drivers (a disappointing aspect, I grant), but otherwise everything just worked right out of the box. Lenovo X1 Carbon 5th Generation.


I haven't run into that problem in years. As long as you are running a mainstream laptop (Asus,Acer,Dell,HP etc...) you will likely not have a problem. I will grant that you should still do a review of the hardware against supported drivers (especially around Wifi)... 9.8/10 it will just work.


Apple's poor support for Linux on Mac hardware may reverse if the company gains data center ambition for the M1. Partnering with a cloud provider (maybe Microsoft?) to deliver seamless Linux deployment on M1 would be significant...


Microsoft already has a good partner in Ampere, a company fully dedicated to servers and standards.

Apple clearly doesn't seem interested. The M1 is just a scaled-up version of the iDevice SoCs. From the reverse engineering we've seen so far, it is extremely clear that not even a single step towards any standardization was taken. It's very much the ad-hoc hodgepodge of embedded crap companies build when they only care about their own complete product and don't allocate any budget towards any "refactoring" that doesn't directly benefit the end product.

They use some ancient Samsung UART for debug (because the first iPhones used a Samsung SoC?), old P.A. Semi I2C controller, the Synopsys DesignWare USB 3 controller (just like some random cheap Allwinner/Rockchip/etc), and here's the fucking kicker, a custom Apple interrupt controller and a custom IOMMU too. These probably predate GICv2/3 at least. But there was no reason for them to switch to the standard Arm GIC and SMMU so they didn't >_<


Interesting details. Is using Synopsys DesignWare a problem?


Not necessarily – well, it does work :D and at least it mostly speaks XHCI – but it does require non-standard code for reset and initialization.


I don't see any reason to do it from practical standpoint, but to do it for fun - why not.

If you want something working, I'd also not recommend using hardware from someone like Apple.


Not all M1 chips are in laptops. I’m kind of intrigued by the potential of a cool and quiet linux server.


The first week that XNU ran on ARM, it didn't have very good hardware support either.


Apple has all the documentation needed to write drivers for all the custom hardware found in an iPhone (the original XNU-on-ARM device) or in an M1 Mac. The Asahi volunteers and Corellium don’t, they will need to do much more work, including reverse-engineering. So, what’s your point?


My point is that throwing up one's hands and saying "it's hopeless and it sucks!" literally the first day linux has been brought up on a new board is... premature.

Yes, it will have to be RE'd. Yes, that takes time.


Linux does run, and it also runs on the M1.


There's a bit of competition now going between asahi and corellium. While most of the spicy tweets have been removed, there's a summary https://twitter.com/AsahiLinux/status/1350547056679477250

Essentially, while this project may be quicker to get visible results, they may not be able to release all the code kosher for merging upstream. It will be interesting to see the next steps.


Competition is fine, but the "spiciness" was really just drama+pettiness on both sides :/ I'm hoping they're both past that now, as they should know better, but having two competing projects unable to assume good faith from each other is generally not healthy at all.


The more spicy and competitive things are, the better the overall results. We all like to think that collaboration, harmony and love drive innovation, but much of innovation is built around intense rivalry and competition as well. Don't be idealistic to a fault; many of the qualities we view as petty exist because they passed millions of years of natural selection. Those who compete, thrive.

Winner takes all.


I think that's a bit zero sum. The world needs both to advance.


Obviously.

When teams of people compete intensely, the collaboration within the teams themselves must be just as intense.

I find it slightly offensive that someone would accuse me of discounting collaboration when 1. I never discounted it, 2. It's obvious that society is full of people who collaborate. In what universe are the words "zero sum" apt for my response? None. I never described a zero sum game. Obviously, the replier added the description with his biased imagination.

I'm just saying spiciness and intense rivalry and competition can lead to results beneficial to society. There are tons of examples of intense competition and rivalry leading to great results in science. The decoding of the human genome for one.


> I find it slightly offensive that someone would accuse me of discounting collaboration when 1. I never discounted it, 2. It's obvious that society is full of people who collaborate. In what universe are the words "zero sum" apt for my response? None. I never described a zero sum game. Obviously, the replier added the description with his biased imagination.

You wrote, "Winner takes all" which seems pretty zero sum to me, hence my comment.


Your post implies that I discounted collaboration which I obviously did not. Look at the original post again, it deliberately says that competition is important as well as collaboration. Hence why your reply is categorically baseless.

As for “winner takes all”, why don't you look up the definition of a “zero sum game”? A zero sum game usually applies to simplistic games like chess or an island with limited resources, aka things that have measurable gains and losses. Complex situations like the one described are rarely zero sum.

When I say winner takes all, it's more of an “expression” symbolizing the intensity of competition. I think it's quite obvious that the situation here is not some contest set up so that a single winner takes everything. There's no need to make yourself sound smart and use the words "zero sum game" redundantly. Only certain types of people use the words "zero sum game" colloquially for the purpose of sounding smart, even though the majority of situations in nature aren't actually artificial games set up to be zero sum.

The term is also used negatively, as if zero sum games can't ever exist, like it's obviously wrong if you're describing a zero sum game. It's rare, but zero sum games do exist, so stating that something is a zero sum game doesn't move the conversation forward. Like, so what? Yeah, I could be describing a zero sum game; it doesn't make me wrong, so what's your point?

Case in point: the credit for being the first person or team to get Linux running on the M1 IS a zero sum game, and there already is a winner for that “game.”


> The more spiciness and completive things are the better the overall results.

That line made you sound all-in on zero sum competition. But you’ve clarified now and softened it to say it can lead to beneficial results.


I never edited any of my posts. That line is just a fragment of the post, which specifically has this line:

“ We all like to think that collaboration, harmony and love drives innovation but much of innovation is built around intense rivalry and competition as well.”

The keyword here is “as well”. If you feel the need to respond or vote someone down, please read the post carefully rather than respond or vote baselessly.

More clarification is necessary. Competition is the driver of natural selection. Your entire biological form exists as an evolution of winning traits, because your ancestors out-competed and defeated others who fought to reproduce so someone else could take your place.

Competition is therefore a primary driver of your existence, while collaboration is secondary. It's not that competition can lead to benefits; the phenomenon that occurs is that collaboration can actually work, but only as a tertiary driver behind competition.

See communism if you want to know the results of a society formed with collaboration over competition as the primary driver.


I take it that you are wholly unfamiliar with the jailbreaking scene?


^ Precisely what I was thinking of. Some friendly competition is absolutely fine, but when it gets toxic, it drives out talented people!

These things can escalate surprisingly quickly, so you really need to be careful. I've watched it happen.


I'm familiar. The jail breaking scene and other examples are one offs though. You just need to take a holistic view of life and civilization to know how critical competition is to success.

All of society, from the development of capitalism to evolutionary biology, is founded on competition. Competition is, in fact, the primary success story and collaboration is the side story. Citing the failure of one community discounts the view of the entire world. Competition works, and it works better than collaboration. See communism if you want an example of a community founded on collaboration as the primary driver.


Putting politics aside, which I probably don't want to discuss in a thread about Apple silicon: competition where you argue about licensing and code sourcing for stuff that you are vying to upstream to the same open source upstream does not really seem healthy.


We're not talking about health. Competition and the cred received for being first to market will drive people to compete and therefore innovate. Whether that's healthy or not is not only a big, convoluted topic, but a separate topic.

Either way you're citing singular examples and calling it "seemingly unhealthy." It's a weak argument against my example of the entire modern world as a competitive arena.


We are talking about health, because I started this thread with a discussion about the healthiness of the situation. Bringing up jailbreaking is an extremely strong argument because many of the same people are involved, rather than whatever vague "competition in the real world" example you have. And I know that the current situation has already turned off some very capable people from contributing to either "side" when they were perfectly willing to do so before, or made them waste their time arguing about who is in the right here.


But of course many people who are working on this are driven by the competition as well. Both collaboration and competition have their place, but make no mistake, the world itself is proof that competition is likely the primary driver behind why Linux was up and running on the M1 so quickly.


Haven't followed this at all so I have no idea how far off I am, but I could easily imagine that the type of person who dives deep to take on the M1 Linux challenge is exactly the kind of person who could enjoy, after noticing another team taking on the same project, celebrating each other by not only agreeing to disagree, but agreeing to disagree *with all the drama they can muster*. "Hey, you're cool, let's publicly sling mud at each other!"

I wouldn't dare considering it the most likely scenario, but it's surely the most lovely.


Publicly slinging mud to cause drama is childish, not "agreeing to disagree".


But patches have already been submitted upstream

https://lore.kernel.org/linux-arm-kernel/20210120132717.3958...


Everyone can submit, doesn't mean it'll be included.


This is from the Asahi project though, not the corellium one linked here. Also, it's only a tiny part of what's needed.


It is not.


The patch submission is from Marcan who started asahi. This (HN) post links to a tweet from the CTO of corellium.


Hello,

marcan is CCed on that set of patches as a courtesy, and he can help with figuring out better approaches before it's merged. Because it's set in stone forever after that.

- someone


You're right! I should've paid more attention to the headers. Still, this is only very basic support for the CPU. Much more work is needed.


Some more explanations about the choices taken for the first submission: https://threedots.ovh/blog/2021/01/linux-on-apple-silicon-ma...


Reading the comment thread that arose from your comment reminded me, and maybe you'll appreciate this...

I like to view competition as a form of collaboration, for example:

When companies compete, they compete in a collaboration we call the market.

When sports teams compete, whether as groups or individuals, they compete in a collaboration we call the game, or even a tournament.

When the other side is beating you, you might look at their strengths and weaknesses, their knowledge and ability, and work out if there's anything you can learn, how to adapt your behaviour to strengthen your position.

Employees, or players, can move from one company to another, through hiring or acquisitions, bringing skills and experience with them. Here we have companies competing for employees, and then those employees collaborating anew.

Or a company might license certain IP from another, this is a form of collaboration too.

In this sense, collaboration and competition aren't necessarily opposing paradigms, they can be tools to apply to the situation.


marcan, who is working on AsahiLinux, is streaming at this time of writing and just got the framebuffer working, this is one of the most interesting live-code-streams I have seen.

https://www.youtube.com/watch?v=GxnWuXgj3JI


Saving this for later. This is where the rubber meets the road. Super interested in seeing the workstream these super-technically-skilled devs use.


> Started streaming 8 hours ago

WTF.


> Started streaming 9 hours ago

Still going. I don't think he's even stopped to eat, drink, or take a bio break. I'm also surprised his computer hasn't crashed or something! Really fun to watch him work.


Yeah, fascinating and a bit morbid. It's 10h now.

I really think this is getting into unhealthy hyperfixation territory, and I really hope this is unusual for marcan, as this comes at a cost you can never make up for again.

Btw. if you enjoy these extended coding streams, you may like the Scanlime livestreams: https://www.youtube.com/c/scanlimeinprogress/videos. She's also a fascinating person to observe in her natural environment. The videos are also low key trippy and artsy.

Main channel here: https://www.youtube.com/c/scanlime/videos


Love to see the competition between the projects trying to get linux running on the m1. Though I think they will hit a wall trying to enable GPU support.

I think opening up just enough to enable this effort also serves as some good nerd marketing for Apple. This race keeps the m1 in the news regularly, giving hope to those who want a low-power and performant ARM machine that wouldn't otherwise consider an apple machine.


Someone has begun digging into the GPU.

https://rosenzweig.io/blog/asahi-gpu-part-1.html


It's not just someone, she's responsible for the Panfrost drivers so there's hope :)


Ha nice, she expects to find an eldritch horror in there somewhere.


Nice, that was a HN worthy article by itself.


You might like the discussion. It was posted 13 days ago, so many people were a little preoccupied with US events.

https://news.ycombinator.com/item?id=25673631


heh, between my clicky keyboard and his, I'm losing my mind a bit lol


Does it actually require any serious development, or is it mostly just tweaking and changing things here and there? I am sorry if that sounds ignorant, but I thought these were just low-hanging fruits to harvest. Also, why wouldn't Apple do that themselves? To me it is the opposite of "nerd marketing": a big middle finger to Linux users, essentially "you are on your own".


A middle finger from Apple would be locking the bootloader–keeping it open, plus providing minimal tooling and telling people to figure it out, is about as close to "we'd love to see what you'll do with it" as Apple could possibly give.

(Dealing with the GPU is going to be the majority of the work, I'd think.)


> "we'd love to see what you'll do with it"

...without the documentation that would help you.

When Broadcom act like this they're considered villains and we're recommended to stay away from their hardware. But when Apple do it, they're being benevolent?

And that's ignoring the fact that I could actually get Broadcom documentation in exchange for dollars and NDA.


Broadcom is selling hardware with the intention to run Linux on it, I guess that is the main difference.


Apple unlocked the bootloader which is certainly signaling intent.


Maybe. Certainly people within Apple would have thought about Linux for this. But Apple would need to provide some form of mechanism for unlocked bootloaders anyway to facilitate kernel/driver developers and security researchers, so I'm pretty sure other OSes is not the main reason they do this.

It does work out for Apple in the end. Their current standard 10 years' support will look quite short now Moore's law is dead and their hardware has barely any moving parts. But they'll shush some complaints if up to date third party OSes are available in 2030.


> ...without the documentation that would help you.

Well, it's not really worse than their usual documentation on products they officially support: https://www.caseyliss.com/2020/11/10/on-apples-pisspoor-docu...


While Corellium is a bit of a minefield for obvious reasons, I don't understand why Apple hasn't blessed Marcan's work. I wouldn't expect them to commit any development resources of their own, but I'd think it would be in their interest to (A) provide Marcan with some documentation and (B) make an engineer available to answer occasional questions.

Apple makes money selling hardware, and Linux support will sell more hardware. Perhaps not much more, but for a commensurately low amount of effort. What does Apple gain by forcing Marcan to reverse engineer everything?


I can't see why they would? Marcan is capable, to be sure, but he's also a random guy with a Patreon. Why would Apple ever officially bless his work? I'd sooner see them collaborating with Corellium, because that at least gives them a corporate entity to interact with. Plus, like, releasing documentation without giving away the stuff they want to keep to themselves, and without promising too much and having it break later, is work in and of itself that Apple is really not getting anything from. I mean, this is the company that still FairPlays apps, so…


Hypothetical question: If Microsoft wanted to port Windows to the M1 (as Phil Schiller said was their choice), would you expect Microsoft to have to reverse engineer everything? Or would Apple share documentation and expertise, under the logic that Windows support will increase Mac hardware sales, if only a small amount?

I realize that Marcan isn't Microsoft—but he's not quite "a random guy with a Patreon" either. He's a professional freelancer, and I'm sure he has an LLC† and a set of professional references he can point to.

Put another way—on a scale between "Marcan" and "Microsoft", where is the threshold in which Apple would be helpful? I don't personally see a huge difference between a one-person LLC and a 100,000-person company in this regard. If anything, the 100,000 person company offers more opportunities for things to get leaked.

---

† Or something similar.


It always comes down to "it doesn't benefit them" eh…

There are some interesting situations though. Like in the big GPU world it's very common to document the ISA. Even nvidia does. I suppose that's because it benefits the vendor when games and GPGPU compute programs optimize for their GPUs, down to the assembly level. It's sad that Apple's approach is "just use Metal" rather than fully enabling developers to get to the actual… well, "metal"


That would definitely be crossing the line. But I wonder if this behaviour of Apple's is another avenue for stifling competition - that is, how many smart people will this draw in who could otherwise have worked on a competing product? Then look at how many resources any company dealing with Apple has to commit just to make sure their software keeps up with Apple updates - that inhibits their growth and thus keeps Apple on top.


> Then you can see how much resources any company dealing with Apple has to commit just to make sure their software keep up with Apple updates

Apple’s competitors have sufficient cash flows to pay whatever they need to get the best people. They just don’t want to.


To be honest I’m amazed Apple even cares enough one way or another that they mentioned Linux virtualization in the M1 announcement. But it’s not a middle finger, this was more or less exactly how they handled multiboot on Intel: let the community figure out a solution, see if it gets uptake, support it with a first party solution if it does. That’s how we got Boot Camp, as there was a lot of interest in booting Windows at the time.

It’s a good sign that the latest betas (11.2 IIRC) officially support multiboot in the UI. That’s a good indication that Apple sees the level of interest in Linux on M1 and intends to at least let it happen.

I’d say it’s still up in the air whether they’ll go for full first party support with drivers or an open spec, but it’s definitely not out of the question. And they may even have direct interest in it, as I’m sure they’d like to get the benefits of their hardware in their data centers.


So essentially Apple is exploiting its consumers and getting free research and development without having to pay salaries and tax? Probably that's why they are worth so much. I am only amazed that there are so many people willing to do this job for Apple for free - it would be a different ball game if macOS were open source, but sacrificing your own time and resources to enhance a commercial product... people are weird.


That’s... an incredibly bold take, especially on a forum operated by VCs, who certainly are familiar with the concept of finding product-market fit. Apple is observing the interests of people who use their products to help prioritize product development decisions.

Maybe the people doing this for free are just interested in benefitting from the result? As many people who work for free on open source do.

As far as I’m aware, there is no free (as in beer) hardware that runs Linux. Someone has put the effort into running Linux on every single for-profit/for-pay platform it runs on.

Are you under the mistaken impression I was suggesting that Apple waits for a community solution to be developed then packages that as a product? As far as I’m aware they didn’t do that with Boot Camp, but instead offered in-house drivers and blessed boot loaders and proprietary UI/UX for accessing both.


Well, that's why all the big companies open source stuff.

They're hoping to get something out of it for themselves: increased adoption of their internal tools outside of the company (easier recruiting, plus purely internal tools are notorious for rotting quickly) and... free labor.


Just from a skim it seems like there is a new interrupt controller driver. The copyright header credits Linus Torvalds so they probably got started based on copying an existing one, but that sounds like substantial work and ongoing maintenance.
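(For a sense of the shape of that work: a Linux interrupt controller driver typically registers an irq_domain plus an irq_chip whose mask/unmask callbacks poke the controller's registers. A rough sketch of that skeleton - the register offsets and the "example,aic" compatible string are made up here, not Apple's actual AIC layout:)

    // Minimal irqchip skeleton sketch; all offsets and names are hypothetical.
    #include <linux/bits.h>
    #include <linux/io.h>
    #include <linux/irq.h>
    #include <linux/irqchip.h>
    #include <linux/irqdomain.h>
    #include <linux/of.h>
    #include <linux/of_address.h>

    static void __iomem *aic_base;

    static void example_aic_mask(struct irq_data *d)
    {
        /* Set the mask bit for this hardware interrupt (placeholder layout). */
        writel_relaxed(BIT(d->hwirq % 32), aic_base + 0x100 + 4 * (d->hwirq / 32));
    }

    static void example_aic_unmask(struct irq_data *d)
    {
        /* Clear the mask bit via a separate "unmask" register bank. */
        writel_relaxed(BIT(d->hwirq % 32), aic_base + 0x180 + 4 * (d->hwirq / 32));
    }

    static struct irq_chip example_aic_chip = {
        .name       = "example-aic",
        .irq_mask   = example_aic_mask,
        .irq_unmask = example_aic_unmask,
    };

    static int example_aic_map(struct irq_domain *domain, unsigned int virq,
                               irq_hw_number_t hwirq)
    {
        /* Hook every hardware line up to the chip and a level-type flow handler. */
        irq_set_chip_and_handler(virq, &example_aic_chip, handle_level_irq);
        return 0;
    }

    static const struct irq_domain_ops example_aic_domain_ops = {
        .map   = example_aic_map,
        .xlate = irq_domain_xlate_onecell,
    };

    static int __init example_aic_init(struct device_node *node,
                                       struct device_node *parent)
    {
        aic_base = of_iomap(node, 0);   /* register base comes from the .dts node */
        if (!aic_base)
            return -ENOMEM;
        /* 256 lines is an arbitrary size for the sketch. */
        irq_domain_add_linear(node, 256, &example_aic_domain_ops, NULL);
        return 0;
    }
    IRQCHIP_DECLARE(example_aic, "example,aic", example_aic_init);

Even a skeleton like this needs someone to keep it compiling and behaving correctly as the kernel's irq APIs evolve, which is where the ongoing maintenance comes in.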


The interesting thing here is that now, when they own the CPU and GPU, and when macOS is free, they might be more open to letting anyone install any OS on it. You want Linux/BSD/Windows on the M1, and you're not hurting any of their possible revenue streams, so why the hell not let you buy their hardware and throw whatever OS on it.

Somebody wants to copy the hardware over and sell it for half price? Yeah, good luck reproducing the M1.

Opening the boot loader to allow for Linux is quite the opposite of a middle finger, tbh. I don't know if they will divert some guys from working on macOS towards Linux support, but this is already looking much better than before.


Were they ever opposed to running a different OS on their hardware? They released bootcamp, to help you do it for windows at least.

They don’t want you installing their OS on other hardware, not the reverse.


FreeBSD, which didn't have a trillion-dollar company behind it, never got such treatment. They _might_ get it now.


I know it’s entering (dark?) gray territory, license wise, but has anybody ever attempted to wrap a Mac OS driver in a Linux compatibility layer?


Oh, like the old ndiswrapper approach for various windows XP NIC drivers. Which worked reasonably well...I remember using it for some laptop with a Realtek chip. https://en.m.wikipedia.org/wiki/NDISwrapper


marcan answered this in one of the live streams - his opinion was that it was a last-ditch effort that shouldn't be required for most use cases.

Perhaps for some of the peripheral stuff (such as the touchbar), but the GPU ought not to need it.


Even Nvidia publish stuff for the Nouveau drivers now.

I wouldn't bet on it. Apple are very insular on matters like this, it's their toy, end of.


What will be interesting, though, is to measure the raw performance of this initial CPU-only rendering. For now it's just a still image, so we don't know if it's sluggish as hell or actually pretty decent even without a GPU.

If the latter is true, that could prove to be another prestige point for the M1.


A good number of comments here mention dealing with the GPU is going to be a major hurdle. What makes porting GPU drivers significantly more challenging than everything else?


They're very complex, very stateful devices which also run their own compiled shader code. Not to mention auxiliary DSPs like video decoders (not sure if M1 has it as part of GPU or a separate block), power gating control and many many more.

They may have on the order of 100 registers to talk to them, and they're horribly proprietary with pretty much no standardisation.

Reverse engineering that is hellish at best - you can see projects like nouveau which barely manages to get nVidia cores up and working without help from the manufacturer. And that's after years and years of development.


The hurdle Nouveau is facing is that some things, like reclocking, need firmware loaded onto the card. The firmware is not in non-volatile memory on the hardware but a file shipped with the drivers; the one shipped with the proprietary drivers is not redistributable, and if you wanted to make your own, it would need to be signed by Nvidia anyway.

That's pretty much game over for Nouveau, and it is not due to difficulties in figuring out registers and the NV ISA.


The reclocking never really worked well on chips that predate this signing requirement either.

But that's a bit beside the point :)


Why would you make your FW blobs non-redistributable? An FW blob has only one purpose: running your hardware for your customers.

I can understand preventing FW patching, maybe. But redistributing the manufacturer's genuine FW, why not?


> What makes porting GPU drivers significantly more challenging than everything else?

Multiple reasons:

1) GPU manufacturers are notorious for not publishing documentation out of IP/patent concerns. Worst offender is NVIDIA here.

2) For embedded GPUs there isn't much interest in open source drivers... the big customers (think Samsung and the like) have direct support from the chip design vendor and get drop-in drivers as part of the board support package (BSP, basically a conglomerate of bootloader, kernel+modules+initrd, and firmware blobs for components such as wifi), so they don't need OSS drivers

3) The mobile GPU space is... splintered. With desktops you have the three major players AMD/ATI, NVIDIA and Intel's built-in Iris; in the mobile space there are more.


> GPU manufacturers are notorious for not publishing documentation out of IP/patent concerns. Worst offender is NVIDIA here.

I think Apple easily takes the cake from nVidia - they don't even provide drivers for anything but their own platforms (that is, for their proprietary GPU core). The GPU core that's actually in the M1.


I don't understand how a lot of this comment applies to the Apple M1. I'm not saying it doesn't. I'm completely ignorant of these things. Am I just missing it?


Apple's M1 chip has a custom GPU built into it. There is no documentation on how that GPU works and Apple hasn't released any.

Making any modern GPU work is a lot of work because of how complicated they are. That's even with the full documentation.

In the Apple M1 case, the GPU will have to be reverse engineered to understand how it works, then a driver will need to be written for Linux that supports it.


Great progress already - thanks to Corellium for the M1 Linux projects [0][1], to the PongoOS project, and to all others involved.

Getting GPU acceleration is now the real challenge.

[0] https://github.com/corellium/preloader-m1

[1] https://github.com/corellium/linux-m1


Well. GPU, networking, power management, disk... There's more than one huge challenge.


Networking and disk will be relatively trivial I'd expect. GPU and power management will be very hard.


Disk isn't all that complicated. Once people figured out that Apple's NVMe implementation was custom, writing a driver for it wasn't all that hard, even with the T2.


Can someone explain what a .dts file is?
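(Partially answering my own question from what I understand: a .dts - Device Tree Source - file is a plain-text description of the machine's hardware: which devices exist, at which addresses, with which interrupts and clocks. It gets compiled by dtc into a binary .dtb that the kernel parses at boot, because ARM SoCs like the M1 don't enumerate their hardware the way a PC does with ACPI. Drivers then match on "compatible" strings from the tree and read the properties declared there. A rough sketch of the driver side, with placeholder names rather than anything M1-specific:)

    // Sketch of how a kernel driver consumes device tree data; the
    // "example,thing" compatible string and "clock-frequency" property
    // are generic placeholders, not taken from the M1 port.
    #include <linux/module.h>
    #include <linux/of.h>
    #include <linux/platform_device.h>

    static int example_probe(struct platform_device *pdev)
    {
        u32 freq = 0;

        /* Read a property the .dts declared for this node,
         * e.g. clock-frequency = <24000000>; */
        if (of_property_read_u32(pdev->dev.of_node, "clock-frequency", &freq))
            dev_warn(&pdev->dev, "no clock-frequency property\n");
        else
            dev_info(&pdev->dev, "clock-frequency = %u\n", freq);
        return 0;
    }

    static const struct of_device_id example_of_match[] = {
        { .compatible = "example,thing" },   /* matched against the .dts */
        { /* sentinel */ }
    };
    MODULE_DEVICE_TABLE(of, example_of_match);

    static struct platform_driver example_driver = {
        .probe  = example_probe,
        .driver = {
            .name           = "example-thing",
            .of_match_table = example_of_match,
        },
    };
    module_platform_driver(example_driver);
    MODULE_LICENSE("GPL");

So the .dts in a port like this is, in effect, the written-down answer to "what hardware is in this machine and where does it live".)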



(Waiting for page to load) "Nice, so they finally got a kernel to boot. I wonder what fancy way they'll be showing the dmesg infor--"

"Oh."

I think I need to recalibrate my idea of a 10x developer...

(The standard would seem to have ratcheted up somewhat as the years have gone by!)


When the kernel boots, and you have a framebuffer working, all of userspace and gnome will probably just work with no modifications.

You might have to play shenanigans like copying a complete filesystem into a ramdisk from the bootloader if your kernel doesn't have support for any IO/networking/storage devices. But you'll still be able to get this screenshot!

Having said that, they have USB working, which is quite an effort, although I'd guess it's an IPCore that a driver already existed for, so it was a simple matter of figuring out memory mappings etc. With USB working, you can make a very usable system, because pretty much any peripheral will work over USB.


Yes, USB is the same Synopsys DesignWare piece of..hardware that you can find on all the Allwinners/Rockchips/Amlogics/whatever. They wrote a driver for the "binding" crap around this to make the controller work, yeah. And some Type-C stuff (Apple handles it in software)


Note that this team already ported Linux to the iPhone, so what they did recently was update their iPhone port to support M1.


I see. The hardware history stories/narratives are getting absolutely hilarious, IMO... :)


A Raspberry Pi powered by something similar to the M1 or A14 Bionic processor would be something. The Broadcom 1.5 GHz 64-bit quad-core ARM Cortex-A72 that powers the latest Raspberry Pi is nothing compared to the new processor beasts from Apple.


An Ampere or other Arm server chip would fit the bill. Aside from Apple, no one is making Arm chips to serve the laptop/desktop segment. So there's this HUGE gap between the endless sea of embedded/mobile Arm chips and a handful of mega-many-core server chips.

Apple is finally filling in one of the Arm gaps with a mid-tier chip that can handle a desktop load efficiently while an efficient GPU handles graphics, without the two thermally clobbering each other.


> Aside from Apple, no one is making arm chips to serve the laptop/desktop segment.

You can argue that their chips in this segment are not very good, but Qualcomm is actually specifically addressing the laptop market, and they're not "no one".

https://www.anandtech.com/show/15210/qualcomm-expands-lineup...


While Raspberry Pi may not be as powerful as an M1, I'd not discount any of these small boards.

I have an OrangePi Zero w/ 512MB RAM at home and it handles a lot of stuff (Syncthing, dnsmasq, rsync-based backups and more) without a glitch.

The only thing it doesn't like is SFTP encryption at high speeds. The processor gets visibly strained and overheats above 4MB/s or so.


We already have the Mac Mini, but an A14-based SBC would be less of a stretch than an M1. A picoMac, so to speak? That would almost inevitably run macOS by default rather than Linux, though.

The GPU is make or break for Linux. No GPU means it's just another server in a different form factor.


That's basically the Apple TV.

Let that thing have more standard IO, and run Linux, and it'd be a heck of a fanless machine.


The latest Apple TV has a fan :(


This was quite fast considering that Apple opened up booting custom objects sometime last week…it seems to still be software rendering for now, but it's good to see this progressing so quickly. It seems that there is effort being put into upstreaming this too (despite a couple of unfortunate hitches :/) so we might be seeing this ready for general-purpose use quite soon!


Apple officially opened up booting custom images on the M1? Do you have any more information about this? I had a quick search and couldn’t find any news stories.


It’s probably not something that would get reported widely, but the bputil command (which manages boot policy, as the name suggests) and kmutil now work as of the second beta of 11.2 and allow you to provide your own code to run.


This is probably what was being referenced: https://news.ycombinator.com/item?id=25772462

"macOS Big Sur 11.2 beta 2 is out with full custom kernel support" ... "The OS now finally includes the firmware and bootloaders and tools necessary to replace Big Sur with not-Big-Sur. That was previously not possible."


I understand there is a thing with trying to run Linux everywhere, and it’s a fun exercise.

Even if Apple does do some open source support, and even if that hardware isn’t quite as nice, I prefer to buy/support hardware and vendors that support Linux at this point.


It’s also just a more enjoyable experience to run Linux on hardware that’s actually supported...I’ve been using Macs since I was a kid in 1993, and have at some point or another run Linux on each of them. It has pretty much always been easy to get up and running but frustrating to actually use. There is a wide chasm between “it runs” and “I want to use this every day”, usually involving sleep/wake issues, wifi problems or driver issues for other internal hardware (backlights, camera etc).

Perhaps the M1 is sufficiently compelling to muster the engineering resources needed to get every piece working nicely, we’ll see!


Understandable, and I was ready to buy a Dell XPS as I like the "small and light" form factor. I was just waiting for the COVID buying spike to subside and to see how my financial situation would evolve.

But I have pushed the decision back to see how Linux will be running on the M1. From first reports, it is simply tailor-made for a laptop computer that doesn't get burning hot or dog slow because of thermal throttling.

I hope intel and to a lesser extent AMD are looking at this thing red-faced. But they will need some time to play catch-up.


If you're planning on that device being your daily driver, I'd definitely get the XPS. A stable Linux distro running on the M1 is probably 3-5 years away, at least.


Thanks. I do understand that. I'll probably be running Linux in a VM for the time being and switch to a native installation once it is ready.

I wish Apple would help Linux on their hardware as it won't take away a lot from their software business and they do have the resources, but yeah, they are just a big company like others in that regard.


>I hope intel and to a lesser extent AMD are looking at this thing red-faced. But they will need some time to play catch-up.

I think it raised their eyebrows a bit, but I don't agree that they need some time to play catch-up. The bottom line is the vast majority of Intel and AMD chips are running Windows. You can't even buy Windows 10 for an ARM processor. Microsoft flirted with the idea at one point, but has pretty much abandoned it. So why would Intel or AMD market ARM chips at all when Microsoft isn't supporting them at all?

If you are talking about servers, that is a different story (since lots of servers run Linux). AMD already made a server with an ARM processor; the latest "Opteron" series was ARM: https://www.amd.com/en/amd-opteron-a1100 These came out in 2016 and I don't think they sold nearly as well as Epyc has. Taking a second look at offering ARM for the datacenter in the future might not be a bad idea for AMD though.


I wasn't talking about ARM chips in particular, but chips with great power management, chips that are able to pull great performance if it is required without hitting the thermal envelope instantly. Chips with a reasonably good GPU that doesn't exacerbate the thermal problems.

The M1 isn't a Xeon in disguise, it has its limitations. But for the sub-notebook form factor, it's in its own class and freaking intel can't match it. And they've been treading water for about 6 years now.

Also: me personally, I don't care about Windows. I want to run Linux. I understand that's a niche market, but we are writing in a Linux thread, so...


Their catch-up need not be in the form of an ARM chip. If they can get an x86-compatible chip to match the M1's performance with similar thermals/battery, that's even better.


I have a fairly recent XPS and indeed it is a shame how poorly it runs in terms of throttling and getting hot. On Linux, when you launch Teams, forget about being able to keep it on your lap for too long. Teams eats the CPU for breakfast, and you could probably fry an egg on it if you closed the lid. Of course it has to be plugged in at all times, as battery life is a joke after a few months.


I hadn't thought about Yellow Dog Linux in years until seeing this. Discontinued in 2012, which is understandable after Apple's switch to Intel.


Ah the memories of yaboot blessing /dev/hda1 with holy penguin pee...!


Their great contribution to the rpm world is yum


For some reason it was the distro that ran best on the ps3, I seem to recall.


The reason is that Apple used PPC CPUs in their Macs before Intel.

And this was a distro geared towards PowerPC hardware.


These days it would be T2

https://t2sde.org/


I wonder whether Linux could benefit from the specific architecture changes of the M1. Apple has made a big technical claim about hardware acceleration of reference counting - another OS and toolchain optimizing around it to any extent would test those claims.

This reads a little overly optimistic to me, knowing how proprietary things are - but Linux and FOSS software have a history of running on all manner of platforms. Arguments like "you can only get a client machine with at max 16 gig of ram" or "you can't plunk an M1 into a server chassis with all the hardware you need" range from preliminary (we only have one chip from them yet, and it's clearly a 'first out the door' type near-beta) to actually relevant.
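(If anyone wants to poke at the reference-counting claim from the Linux side, the usual reading of it is that the M1's uncontended atomic increments/decrements are unusually cheap, which is exactly what retain/release-style refcounting hammers. A crude single-core userspace sketch - my own throwaway code, not any official benchmark, and the numbers only mean anything relative to other machines:)

    /* Rough microbenchmark of uncontended atomic inc/dec pairs, the core
     * operation of retain/release-style reference counting. Hypothetical
     * throwaway code, not a rigorous benchmark. */
    #include <stdatomic.h>
    #include <stdio.h>
    #include <time.h>

    #define ITERS 100000000UL

    int main(void)
    {
        atomic_ulong refcount = 1;
        struct timespec start, end;

        clock_gettime(CLOCK_MONOTONIC, &start);
        for (unsigned long i = 0; i < ITERS; i++) {
            atomic_fetch_add_explicit(&refcount, 1, memory_order_relaxed);
            atomic_fetch_sub_explicit(&refcount, 1, memory_order_relaxed);
        }
        clock_gettime(CLOCK_MONOTONIC, &end);

        double secs = (end.tv_sec - start.tv_sec) +
                      (end.tv_nsec - start.tv_nsec) / 1e9;
        printf("%.2f ns per retain/release pair (refcount=%lu)\n",
               secs * 1e9 / ITERS, (unsigned long)atomic_load(&refcount));
        return 0;
    }

Build the same source on an x86 laptop and on an M1 and the per-pair latency gap should say something about whether the claim holds up for non-Apple toolchains too.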


That is some insanely fast progress. Well done and hats off to all involved.


This is cool but technically I'd say it would need to use the disk in the Mac to be running "on" the Mac.


Why's that? If I netboot Linux on my desktop PC, is it not running "on" my desktop PC?

Is the CPU/SoC not the thing on which the operating system is running?


Technically yes. But people want to have an M1 laptop on which they can run Linux. Not a laptop + external disk + other baggage.


I agree with that. But given the context -- an experimental build targeting a new SoC -- I wouldn't personally dispute the claim that it's running on an M1 machine.

If the tweet was claiming that it was ready to go for general use and that everyone should install it and use it as their daily driver, then I could see that the external boot disk caveat would be more significant... but it seems kind of irrelevant in the context of what they've achieved so far.


I think running it on disk is probably the least of their worries. As I understand it, the GPU is the biggest hurdle to jump.


Until I have a post-EOL OS option for an M1 mac, I'm not buying.


Can I have a Beowulf cluster of those? (yes, I am a grey haired BOFH)


One thing I've always wondered is... what did one do with a Beowulf cluster back when that was a thing?


Compile linux kernels and boast about the speed?


I'm excited for Linux support on the M1. I don't even have a MacBook, but more incentive for Linux apps to support ARM is a good thing.


Most distros have had ARMv8 (AArch64) support for a while now, Apple didn't invent RISC/ARM this year.

The most common AArch64 distro usage is on Raspberry Pis.


Yeah, but there wasn't consistent ARM support like there was for x86. Until a few months ago, the desktop release of Ubuntu didn't support ARM.

