
Linux might run, but good luck with the drivers. And good luck with keeping it working through future upgrades of both hardware and software. Heck, even today it is pretty difficult to get things working reliably unless it is a ThinkPad or some other Linux-friendly brand.

A Mac with subpar support for the webcam, power saving, suspend/resume, the trackpad, brightness/volume controls, etc. is not a laptop; it's an expensive paperweight.

In my opinion, other than doing it for the sake of learning and the challenge of it, this will lead nowhere in the long term unless you get some kind of commitment or support from Apple. And I hope I'm wrong, but that's very, very unlikely to happen any time soon.




Funny, that's how I feel with Windows and Mac.

I grab a machine and install Linux, and it works more or less out of the box. Maybe a few fixable quirks. And I don't use Thinkpads.

I try to use Windows or macOS, and it's a coin toss: Windows handling USB like hot garbage (hub balancing and buffer sizes leaving devices unable to activate, webcams glitching, input lagging at random intervals), display issues (macOS doesn't support DisplayPort MST), dock connectivity problems (macOS freezing or crashing when a dock is connected, except when it suddenly works for a day; Windows playing disconnect/reconnect sounds in a loop when the machine goes idle), and more, including inexplicable crashes just today.

Install Linux, or plug all the "made for Windows/macOS" hardware into a Linux box, and all the problems go away.

My conclusion, supported by having been a proprietary kernel driver developer for Windows, FreeBSD and Linux, is that any hardware and driver combo tends to be a coin toss, irrespective of platform.

But with Linux, drivers that would otherwise be abandoned after a project was shipped, get a chance at being fixed and improved that their proprietary counterparts can never even dream of.


The thing is, many more people use Windows than Macs, and many more people use Macs than desktop Linux. So your anecdata goes against a mountain of anecdata from people using those platforms.


You are incorrectly equating platform popularity with driver maintenance.

First, non-desktop uses of Linux significantly outnumber uses of Windows and macOS in general, and a large majority of drivers and functionality is shared there.

Second, drivers not made by the usual giants are, as I mentioned earlier and experienced first hand, written and shipped once, maybe with a few updates for obvious issues, but this shit is harder than you think, with obscure uncaught bugs littered all over the place. Drivers need pretty much perpetual maintenance for all but the simplest blinkenlights.

Third, even "desktop" drivers are put through their paces in server environments, e.g. through workloads like Stadia.

Fourth, proprietary drivers are not subject to any scrutiny whatsoever, whereas with the open-source kernels you won't get past the gatekeepers if you don't pull your shit together.

Popularity does not fix unmaintained code. Giving even just one annoyed and skilled person the ability to change the code does. And sometimes, external companies like Collabora will decide to overhaul things, which they again can't do for a proprietary driver.


I know all of that, but it does not square up with real life.

Go to your favorite computer store. Randomly pick 5 laptops, regardless of the price. So no cherry picking.

Use these laptops with Windows installed on them, for a reasonably long period, for example 1 month, as your daily driver. Perform varied tasks on them, such as printing, connecting to external displays, to projectors, other peripherals, playing modern 3D games on them, etc.

Then do the same with Linux.

I'm willing to bet $100 that on average Windows will run better on them, have a longer lasting battery life, better network connectivity, etc.

And if you're trying to tell me that on average Linux runs better than macOS on Apple laptops and desktops, then this discussion is not worth continuing; we both probably have better things to do with our time.


You will pay me just $100 to buy, out of my own pocket, five random laptops regardless of price (some of which will be several thousand dollars) from the local electronics store?

You're very right that there is no value in continuing a discussion if that's the level of debate you're presenting. I'm off.


There are many people like me, however, who choose to run Linux in a VM under Windows so they don't have to deal with bugs.

Linux as a personal operating system got WAY better over the last few years, sure, but you can't seriously argue that it has both Windows AND macOS beat.


I am arguing just that, with a reasonably large and varied sample size, although heavily biased towards Macs over Windows-powered machines. I get a lot of machines from clients.

Now, a mac on its own tends to work quite well out of the box, but this does not hold for peripherals, and I feel like the machines always end up developing... quirks.

I always felt that the path you picked just gives you the sum of all problems with few of the benefits. I could maybe do the other way around for compat, but the laptop wouldn't be able to stay on this side of the balcony railing for long if I did it the way suggested... :|


hmmm ... yes. anecdata.


The thing is, many more people use phones and tablets than Windows, and many more people use Windows than Macs, and many more people use Macs than desktop Linux, and many more people use desktop Linux than desktop BSDs. So the anecdata goes against your mountain of anecdata, which goes against a world of anecdata from people using those platforms.


Yeah, because when I use a phone or tablet I have to worry about installing my own drivers :-)

Apples, oranges.


When was the last time you installed your own drivers on a desktop Linux system, and for what?


2018, for keyboard and touchpad support on a 2016 MacBook Pro. (That driver was eventually upstreamed, but wasn't at the time.)

But I guess that's kinda in-line with this thread, that Macs are a pain to run Linux on.


I don't remember installing any drivers on Linux in the last... I don't even know how many... years. For the hardware I happened to use, it was a Mac-like experience.

And actually, Windows 10 is also approaching this state, but it is not there yet.


The great thing about Linux is that almost every driver is included with the kernel itself, so you don't need to worry about installing drivers. Of course, there are vendors who don't like to cooperate with Linux developers, and release kernel-tainting drivers outside of the mainline kernel.
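If you're ever curious whether a given device is handled by an in-tree driver, the kernel will tell you directly (the device and `iwlwifi` module below are just examples; yours will differ):

    $ lspci -k | grep -A 3 Network
    # ...
    #     Kernel driver in use: iwlwifi
    $ modinfo iwlwifi | grep -E '^(filename|intree)'
    # filename: .../kernel/drivers/net/wireless/intel/iwlwifi/iwlwifi.ko
    # intree:   Y

`intree: Y` means the module ships with the mainline kernel; out-of-tree vendor modules lack it and taint the kernel when loaded.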


I last installed drivers on Linux... in 2007? 2008?

I last installed drivers on Windows late last year.


I've had to on Linux, for a very common wireless networking chip. Also for a scanner from a very common printer brand (one that, ironically, has excellent Linux support for its printers). And this is Debian on a ThinkPad, a very common combination. Almost as good as Windows, but given that I had to load my wireless drivers via USB, I still consider Windows the gold standard when it comes to built-in drivers. At the very least it does well with networking drivers, which are the most important and essential drivers to have; everything else Windows can easily download and install automatically over the network. I last did a manual driver install on Windows years ago, because it just does it by itself now.


Hm? Debian is famous for not including nonfree wireless firmware on purpose (drivers are an entirely different thing). They also provide an optional installer with these included.

So your example seems to be completely unrelated to the question.
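For completeness, on Debian the usual fix is enabling the non-free archive component and installing the right firmware package. The package name depends on the chip; `firmware-iwlwifi` below is only an example:

    # after adding "non-free" to the components in /etc/apt/sources.list:
    $ sudo apt update
    $ sudo apt install firmware-iwlwifi   # example; pick the package for your chip

No USB-stick firmware loading needed after that; the driver finds the blob in /lib/firmware on the next boot or module reload.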


There is no choice of OS on mobile. It's strictly tied to the hardware, so I don't see how that could show any kind of user preference without being dominated by hardware preference.


And smartphones mostly run Android, which is built on the Linux kernel.


>Funny, that's how I feel with Windows and Mac.

Based on...? Apple is pretty straightforward: they support hardware until they don't. And they're quite explicit about ending support and it's almost always a major release. You might be able to hack support after that, but I've literally never had it be a "toss-up" about when Apple was or wasn't supporting their own hardware.

As for Windows... I've got a 10-year-old 2600K-based desktop that runs the latest version of Windows flawlessly. I guess if you go back 20? 30? 40? years you might find something that can't run the latest version of Windows, but you're going to be down a really, REALLY obscure rabbit hole. I can't say I ever recall it being a coin toss; it was about 10 seconds on Google finding or not finding a driver.

Linux on the other hand... the support of hardware is awesome, but determining if something is or isn't supported is generally an afternoon of reading mailing lists.


I'm a huge apple fanboy, but there are still issues that crop up on macOS, like it or not.

My favourite example is one that the OP mentioned: not supporting Multi-Stream Transport on DisplayPort. For those who aren't familiar, DisplayPort MST is a feature that allows multiple video streams over one DisplayPort cable. Some monitors support this directly, meaning you can have

    Macbook -> Display1 -> Display2
rather than

    Macbook -> Display1
    Macbook -> Display2
This is great, and it really helps clean up your desk in multi-monitor setups and maintain that "one cable" philosophy that I, personally, love. And macOS supports MST too, which is great.

Except they don't support it for this.

What they support MST for is allowing vastly-higher-resolution displays on macOS, such as 5K displays, by splitting the display image over multiple stream transports to bypass the limits on resolution and refresh rate that DisplayPort imposes (or imposed at the time).

For some reason, they just haven't bothered to implement MST to allow for multiple displays; it exists and is supported, but only for high-resolution displays. This is great if you're googling around and see that macOS supports MST, then you buy monitors which support MST and, surprise, it doesn't work and there's literally no indication why.


Huh, I've only ever heard of MST in the context of too-high-res displays o_0 Never heard of raw DP supporting daisy chaining, I thought only Thunderbolt does that.


Thunderbolt/USB-C uses DisplayPort signaling to drive monitors. I have an Apple "Thunderbolt" display which happily runs from laptops that don't do Thunderbolt.

Dell makes some DisplayPort monitors which support daisy chaining.


I run all three here. On Windows 10, if I stay on the "happy path" everything "just works". I have a Lenovo laptop, and our desktops are Xeon boxes with Supermicro motherboards. No weird USB problems, etc.

Linux, however, is another story. Bad sleep support, forget about printing, scanning, GPGPU computing, etc. We use it when we have to.


For fixing sleep, look for an option in Lenovo's BIOS. Mine was set to the Windows sleep mode by default. I switched it to Linux mode and it has been working reliably on my X13 AMD.

I upgraded from a ThinkPad X240, on which sleep worked perfectly too.


macOS a coin toss and Linux robust regarding drivers/hardware support on the desktop? Are you talking about Hackintosh, or do we not live on the same planet?


It's the smaller things. Obviously macOS won't have trouble with Mac hardware, but my work MacBook can't wake up my monitor through HDMI, or chain DP displays, or connect to my phone's storage through USB, etc...


I’ve never experienced the wake issue, but I always use usb-c to DP or HDMI and apparently those aren’t affected? Assuming it’s the same issue, a little googling shows the problem was fixed a year ago.

What phone are you having issues with? Every android phone I’ve used will communicate with adb. iPhone has never used USB mass storage, and support for that has nothing to do with MacOS.

Can’t comment on the daisy chain issue, I just learned that was a thing.


My 2019 MacBook Pro has consistent issues waking from sleep. Nothing plugged in except the included power supply.

Screen will remain black. Or, screen will power on, display nothing. Or, screen will power on, mouse cursor will appear, but no password prompt.


You should return it and get a free replacement. It’s clearly defective.


No, this is a software problem, not a hardware problem. The replacement would exhibit the same behavior.


Things like printers/scanners can be a problem. With a Mac, I cannot scan in color on a Samsung M2070W MFP (even with the Samsung/HP driver installed, which must be done manually).

No such problem with Linux.


I've had far fewer problems with Linux. Just an hour ago I was dealing with someone whose Roboteq motor controller wouldn't work on Windows or Mac without additional drivers, but it's basically plug-and-play on Linux.

Yeah there are things Linux doesn't support but just don't buy them. The things that it does support generally don't require any driver installation.


I miss the golden days of the Hackintosh! Would love to see someone bring this back somehow. Running Linux on a machine like an M1 MacBook Pro would be a dream.


I'll do you one better:

macOS does support MST! It actually does! But it only supports it for splitting one display image over multiple streams, to overcome bandwidth limits on DisplayPort streams.

For providing a signal to very-high-pixel-count displays, macOS uses MST.

For providing multiple displayport signals to multiple displays? Nope, not implemented.

Imagine my frustration after a day of googling and finding out that macOS supports the feature we wanted, but not the use that we wanted.


You're mostly wrong; MST does not increase bandwidth, it splits the bandwidth of a single DisplayPort link. To increase raw bandwidth for 5k/6k they combine multiple HBR2/HBR3 links (over Thunderbolt or with multiple cables), which is the opposite of MST.

MST is supported specifically for early 4k displays that had scalers that couldn't handle 4k60, but could handle half the resolution, so they sent two streams. But that wasn't a bandwidth limitation; old MST 4k displays and modern SST 4k displays both used a single HBR2 link.


I believe they meant it increases bandwidth utilization.

MST's primary use case today is multiple displays, "daisy-chained" through built-in MST bridges, dual-DP dongles, or docks.

Support for hacky displays is less interesting, and hopefully not relevant today.


That is basically what happens when you don't support MST.

The screens light up from the same DP stream, which is also why you can have, say, 2x 4k@60hz monitors in this configuration where true MST would need to drop it to at least 30Hz, or lower res.


> having been a proprietary kernel driver developer for Windows, FreeBSD and Linux

May I ask how you learned how to do this? I'd like to learn too! I had a great experience developing a Linux user space driver for my own laptop's LEDs. Couldn't figure out how to control the fans though.


1. Accept that printf (preferably over serial) is The One True Debugger. It is the tool you always have - if you can't get a print over serial, you're in too deep to use a debugger anyway.

2. Play around with embedded. You can use an Arduino, but get rid of the Arduino IDE. Once you've rid yourself of their weird environment and code in C, you're pretty close to what kernel programming is: direct hardware control, debugging over a serial console, and if you mess up you don't get saved by a segfault.

You can upgrade to playing with ARM boards later if you want. Things like a Raspberry Pi can also be useful to boot random kernels you've built later on for HW stuff, otherwise you can use VMs. QEMU can boot a kernel file directly, which makes debugging easier.

3. Look at one of the tutorials for writing hello-world kernel modules. There are also usually smaller cleanup tasks you can do to get started submitting work. Looking at both Linux and FreeBSD can be useful, and things like Plan 9 have very small kernels that can be used as reference. Linux and FreeBSD are not that different. (Windows is a pain with really weird interfaces, but it can be made to work.)

4. Find something you want to do with the kernel or fix in it.

Kernel developers aren't that common, so I imagine a lot of places are willing to train people. The first job I had doing kernel work was pretty open, and threw minor stuff at me to begin with, e.g. "things stopped working after kernel X.Y, figure out what happened". Bisecting, testing in VMs, printk'ing a lot to compare state, stuff like that. I later ended up being the owner of the kernel drivers of all our platforms, so I guess I did okay. :)
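A hello-world module of the kind mentioned in step 3 is genuinely tiny. Here is a minimal sketch (file and module names are my own invention); it builds against the running kernel's headers with a one-line Makefile containing `obj-m += hello.o`:

    /* hello.c - minimal "hello world" Linux kernel module.
     * Build: make -C /lib/modules/$(uname -r)/build M=$(pwd) modules
     * Load/unload with insmod/rmmod; output lands in the kernel log.
     */
    #include <linux/init.h>
    #include <linux/module.h>

    MODULE_LICENSE("GPL");
    MODULE_DESCRIPTION("Hello-world module sketch");

    static int __init hello_init(void)
    {
            pr_info("hello: module loaded\n");
            return 0;  /* a nonzero return would abort the module load */
    }

    static void __exit hello_exit(void)
    {
            pr_info("hello: module unloaded\n");
    }

    module_init(hello_init);
    module_exit(hello_exit);

The pr_info() calls are exactly the printk-over-serial debugging from step 1; `dmesg` reads the same ring buffer, so you can watch your module's output without a serial console while experimenting in a VM.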


I got to build a driver once, making an NDIS LWF encap/decap driver for Windows. I found it extremely soothing, and kind of old school - I had to use a real machine in my office with firewire debugging, and use windbg like a greybeard.

But not having the right documentation was a challenge. MSDN is okay, but the weird mechanics of MDL chains don't really get discussed on Stack Overflow.


> But not having the right documentation was a challenge.

How do people overcome this? I managed to reverse-engineer my laptop's LED commands: they were implemented over USB, so I used Wireshark to intercept and analyze the data sent by the proprietary vendor software. What if it's some ACPI thing, though? Or some memory-mapped I/O chip? How do people figure out how it works?


Then tell me why my Elantech touchpad keeps freezing randomly on my Huawei Matebook 14d running Linux, but works fine on Windows.


Agreed. Check out the state of Linux support on Intel Macs that were released in and after 2016[1], it's abysmal. Once Apple started adding non-standard hardware, Linux support never caught up.

It isn't like the lack of support is Linux developers' fault. Apple doesn't provide datasheets for their hardware, and they don't cooperate with developers writing drivers for their custom hardware.

There are hundreds, if not thousands, of ARM SoCs that "run Linux", but that doesn't mean much because they're actually running Linux forks, and someone needs to maintain those forks, and build and release custom images for each SoC. I don't see M1 Macs diverting from that fate without significant support from Apple themselves.

[1] https://github.com/Dunedan/mbp-2016-linux/


> that doesn't mean much because they're actually running Linux forks

Many of these Arm SoCs are running Linux forks because there are some terrible baked-in drivers, and no spec associated with them. Still, that is a step ahead of the M1, as there are at least drivers that don't have to be decompiled and reverse engineered.


These drivers are baked in, even if there are specs for them.

The thing is that these SoCs have no PCIe or other enumerable bus. You (that is, your kernel) must know what hardware it is running on and which drivers to load, without being able to ask the hardware what is really present. One wrong POKE and the entire system can hang.

And that is on top of the problem of how to boot in the first place. There is no such thing as UEFI on these SoCs. Every single one is a special snowflake with its own special way to boot.

Hence, kernels built for specific systems, even if it is just Device Tree.


Things have gotten a lot better in ARM-land recently. It's possible to build a generic ARM kernel that will boot on a lot of hardware, as long as the bootloader passes the right device tree blob.

Most device vendors don't really have a need or incentive to use this, though. A generic kernel is much larger and wastes flash space, and most vendors don't care about running anything but their OS image on it, so they of course just build a kernel specific to their hardware.

But in theory it should be possible at some point to use a generic ARM Debian installer to install your own OS on a random ARM device, assuming the relevant drivers and device tree have been upstreamed.


This is a little out of date. Device tree exists explicitly to avoid needing a machine file for your board. Bootloaders like U-Boot will load your kernel and DTB together, and the kernel uses the DTB to enumerate the hardware on the system.
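For illustration, a device tree describes the non-discoverable hardware declaratively, and the kernel binds drivers by matching `compatible` strings. A made-up fragment (the board name is invented; "ns16550a" is a real UART binding) looks like this:

    /dts-v1/;
    / {
        compatible = "acme,example-board";

        /* A UART the kernel could never probe for on its own: */
        serial@10000000 {
            compatible = "ns16550a";
            reg = <0x10000000 0x100>;   /* MMIO base address and size */
            interrupts = <10>;
        };
    };

This gets compiled to a binary blob with `dtc` and handed to the kernel by the bootloader, which is what lets one generic kernel image serve many boards.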

Again, the problem isn't that the mainline kernel _couldn't_ support these boards; it's that no one wants to put in the work to bring their implementations up to the standards of the mainline kernel.


> Every single one is a special snowflake with its own special way to boot

Eh, the majority of embedded crap comes with U-Boot (sometimes built without EFI support but always with Linux image support, at least).


I've run into more than a few ARM SoCs that use custom U-Boot forks, too.


I don't think I have used an Arm SoC that _didn't_ use either a custom U-Boot or some custom stage-one loader (often a stripped-down U-Boot).


Apple is a hardware vendor. How are the parts they use "non-standard"? What are the "standard" parts that Linux uses and further what consumer or business hardware has Linux indicated is standard?

It appears what should be being asked for is Apple to commit resources to natively support things like: Docker, Kubernetes, Virtualization and Linux drivers on their M1 Macs going forward.


Interfaces; it is mentioned in the article:

> Apple designed their own interrupt controller, the Apple Interrupt Controller (AIC), not compatible with either of the major ARM GIC standards. And not only that: the timer interrupts - normally connected to a regular per-CPU interrupt on ARM - are instead routed to the FIQ, an abstruse architectural feature, seen more frequently in the old 32-bit ARM days. Naturally, Linux kernel did not support delivering any interrupts via the FIQ path, so we had to add that.

and many others

https://corellium.com/blog/linux-m1

It is at the top of HN now: https://news.ycombinator.com/item?id=25859907


The T2 chip is non-standard and unique to Macs. It affects the boot process and disk access in non-standard and proprietary ways. Apple's SMC is also non-standard, as is their UEFI implementation.

You can see a list of differences between Macs and standard PCs here: https://en.wikipedia.org/wiki/Apple%E2%80%93Intel_architectu...


Not many people want to run Linux on an Intel-based Mac, because you can run Linux on a much cheaper Intel machine.


No one should be purchasing a brand new M1 Mac with the expectation of perfect Linux support any time soon.

However, I’m optimistic that these will be mostly usable on Linux before they’re too obsolete or outdated. The platform is so popular and iconic that it’s drawing a lot of attention from Linux devs and reverse engineering crowds.


> No one should be purchasing a brand new M1 Mac with the expectation of perfect Linux support any time soon.

That's sound advice.

As someone who doesn't use Linux as their daily driver, can I expect Linux to run perfectly in a VM on an M1 Mac?


I was able to install an ARM build of Ubuntu Server using the Parallels beta, and it works fine.


Good to hear this. I am waiting for my M1 Mac, and if I can get Linux arm64 virtual machines running for development purposes, I can hack the rest.


It doesn’t work fine: the UI resolution is restricted, there are sticking cursor issues, and no Parallels Tools are available.


I'm also interested in this. I usually run Ubuntu via Vagrant + VirtualBox on my Mac.


I'm not at all as optimistic as you are. You still need out-of-tree drivers to run Linux on a 2017 MacBook Pro, and various things don't work at all on Intel Macs from 2016 (possibly earlier) to the present.

Sure the M1 is the new shiny, and people will be attracted to it in the short term, which might boost reverse-engineering efforts. But I expect that to die down as people get frustrated, and we'll have the same (or worse) situation as we do running Linux on Intel Macs.


Right. And by the time there is halfway decent support for first gen M1s a new generation will come out with a ton of new breakages.


So the advice is to buy a second hand one 5 years from now?


Maybe. But consider that a non-touchbar 2016 MacBook Pro, which is turning 5 years old this year, still doesn't flawlessly run Linux. And that doesn't even have the custom Apple T1/T2 chip. The ARM hardware will be much harder to deal with.


MacBooks almost never have a seamless WiFi experience on Linux. At best you will have to manually install a WiFi firmware blob, and for the newer ones the advice is to just get a USB WiFi adapter.


Even on old and modern ThinkPads it feels like a lottery which hardware feature will stop working at a particular time, across the distros I've worked my way through.

Ethernet, waking up from sleep mode, brightness adjustment... take your pick.


I discovered last week, to my chagrin, that the RAM on my X1 Carbon Stinkpad was starting to fail; Memtest86+ confirmed it. Memory corruption is insidious, so I rushed to the internet to buy new DIMMs, opened the sucker up... and there's no DIMM. The RAM is just soldered onto the motherboard. No upgrade, no fix. This is now a pile of unreliable junk that randomly flips bits in memory. So data evac commences, with MD5 hashes and double- and triple-checking of everything.

A reminder that not only software bitrots, but hardware too. Make sure not to buy a Stinkpad (or any hardware) with non-replaceable parts!

(and I just bought an M1 Macbook Pro...)


Buying a mac after complaining about how you can't replace ram seems a bit odd.


Feelings, not reason, probably applied there. At least the logic was consistent in both cases.


It was a decision I had been considering for months; the memory corruption just forced it. I didn't expect software to be this far behind. I was aware that the RAM and everything is even less upgradeable than on a Stinkpad, but I am hoping this machine is more reliable overall and will give me 5+ years of service. (My last MacBook Pro, from 2010, was still kicking up until 2018, when I switched to the Stinkpad because of the keyboard.)


Yeah, I know. But the chip kicks butt :)

Software will catch up with the M1, I hope!


> Software will catch up with the M1, I hope!

Can't wait for [Your least favourite Electron App] to use even more CPU cycles!


I'm not sure how buying an M1 MacBook Pro will help, considering it also has soldered RAM?


> Make sure not to buy a Stinkpad (or any hardware) with non-replaceable parts!

Correct - that is why I bought a ThinkPad X1 Extreme - both the SSD and RAM are user-replaceable. Granted, it's not as thin as the X1 Carbon, but I don't need to carry it around much, so that is not a big issue for me.


I don't really understand why modern devs can't carry a 5-pound laptop. It's not exactly a burden.


5 pounds is pretty heavy.


If you can pinpoint which pages are affected (in most cases you can), you can just add that range to the blacklist.

That can work in the meantime, until you have a new machine.
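For reference, on a GRUB-booted system this is just a config line; the address/mask pair below is a placeholder, not a real value — substitute the ranges Memtest86+ reports:

    # /etc/default/grub
    # Each pair is a base address and a mask covering the bad region.
    GRUB_BADRAM="0x7ddf0000,0xffffc000"
    # Then regenerate the config, e.g. on Debian/Ubuntu:
    #   sudo update-grub

GRUB then marks those regions reserved in the memory map it hands to the kernel, so Linux never allocates from them.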


There were dozens of ranges, too many to blacklist. Which made me suspect that it might actually be the memory controller. In which case, it's totally F'd. But given this I went for total data evac.


Couldn’t you just move the disk to another device and read it from there instead of risking data corruption or going through all the hassle of comparing checksums?


Perhaps. It has an SSD, so I'd need a thingie like this adapter to do that: https://www.amazon.com/SATA-Adapter-Thinkpad-Lenovo-Carbon/d...

But I don't have a SATA reader either, so I'd need one of those :-)


I was more so referring to the Linux experience, haven't had any troubles with ThinkPad hardware as of yet but I know that there are complaints around it as of late :)

Though, if you want a serviceable system, I'd stay away from Apple - the RAM in your new M1 mac is actually included in the SoC, not sure how much you'd like to get in there!


This is a depressing thought to hear on a site titled Hacker News. Looking back at the past 20 years, I am quite confident it is just a matter of time before people get a couple drivers working fine on M1 systems (esp. considering Apple does not seem to be actively fighting this). I am sure folks will make it work considering the relative popularity of the device compared to all the esoteric devices that no one uses and Linux still supports.

Sure, documentation helps, but it is overrated--people get lots of things working by reverse engineering.


On the contrary, this could be fantastic for a headless server running on an M1 Mini to build and test ARM code before deploying to AWS Graviton. It doesn't all have to be about laptops.


For about the price of a Mac mini:

https://shop.solid-run.com/product-category/embedded-compute...

You get upgradable ram, a PCIe slot for your favorite GPU/etc, and actual mainline linux support sufficient to boot most random linux distros (ok the onboard 10G+ nic might not work without patches).

Its not the fastest machine around (A72s) but there are 16 of them, so it does a decent job of building ARM software, and running VMs.


Well... for the price of a Mac Mini, and some, you get a bare motherboard with a small amount of onboard storage (64G eMMC). No RAM, no video, no power supply or case. And it's not like you can just throw an off-the-shelf GPU in, either; neither nVidia nor AMD graphics cards work on ARM devices -- so if you're expecting something you can use as a desktop, you're in for a hard time.


Both amdgpu and nouveau work absolutely fine on ARM devices, why wouldn't they?!

In fact I made the FreeBSD port of amdgpu work on my Macchiatobin :) Absolutely smooth experience btw, video output works even in UEFI (it actually runs the GOP driver from the card in QEMU), amdgpu works perfectly (played vkquake, supertuxkart, openmw, etc.)


The results reported at [1] had led me to believe that the AMD/NV ARM drivers were still unready. Am I mistaken? Are those issues specific to BCM283x?

[1]: https://www.jeffgeerling.com/blog/2020/external-gpus-and-ras...


Yes, the BCMwhatever's weird PCIe controller literally returns corrupted data when the driver tries to read from the card, possibly due to not supporting some kind of 64 bit read.


That board is pretty close to the same experience you would get with a random x86 mITX board (once you flash the UEFI firmware onto it). It has an M.2 NVMe slot, SATA, etc. It's one of the lower-cost standards-based Arm boards.

So, yes, you have to bring your own storage, RAM, power supply, and case. But one can cheap out and probably get all of them for <$200, or go crazy and put in 64G of ECC and a few TB of storage. The cheaper off-the-shelf RAM/storage makes it a lot cheaper than a loaded mini with similar specifications.

And as another commenter points out, yes, fairly common GPUs tend to work in it. It's not perfect (for that you have to spend more), but the time you'd otherwise spend fighting with the Mac mini + Linux makes the difference worth it.


While a valid use case, it feels like overkill. There are much less expensive and more stable ways to accomplish this.


This is how a lot of people used Linux on PowerPC macs in the early aughts. It set them up for larger projects on IBM hardware and the goofy risc engineers got to walk around looking cool because they had a powerbook.


Agree, this could be a good use case.


Well, about the hardware support: the Ubuntu Desktop certified hardware list, for example, is quite long [1], and I'm quite sure there are a lot more perfectly working brands/models that just don't appear there. I've been using a Dell Inspiron model not on that list as a daily driver for 8 years now, and every major distro has worked perfectly out of the box; some tweak is required from time to time, but that's it.

[1] https://certification.ubuntu.com/desktop


I have confidence in the parallel efforts being spearheaded by Marcan. He knows what he's doing and he's working on it full time.


And that's why this has no long term future. There are not that many people able to achieve this. One single person can't do it forever.


Presumably Apple isn't going to redesign their GPU architecture every generation? How much maintenance do you see this needing?


Uhh... Linux supports a lot of hardware. Probably a lot of in-tree drivers that still get use were started as reverse engineering efforts by a small number of people.

Perhaps you're meaning to say Apple will iterate the hardware faster than people can add Linux support? That's plausible. It also was the status quo of Linux on PCs for many years. (Still the case with some hardware I guess.) That hasn't killed Linux yet.


Maybe someone else will also pick up some skills from this. He is streaming all his reversing sessions on Youtube, it is quite interesting.


Yup. The last Mac I ran Linux on was a 2016 MBP. And even then, audio and suspend/resume didn't work (well, suspend was fine, resume... not so much), and battery life was terrible. When I first installed it, the keyboard and touchpad required an out-of-tree driver.

That driver has been upstreamed, but I hear the 2017-and-onward touchbar MBPs still require an out-of-tree driver, and a similar set of things don't work.

In 2019 I finally gave up and got a Dell XPS13. It's... basically perfect? The keyboard doesn't suck like recent MacBooks, though the touchpad isn't quite as nice (though really, it's perfectly fine). The only thing that doesn't work is the fingerprint reader, but I knew that going in and didn't care. And subjectively the hardware isn't as pretty, but... oh well. On the plus side, I really do like the "soft touch" palm rest better than the harsh aluminum of the MacBook.

I'm definitely interested in the Linux-on-M1 project's progress from an intellectual curiosity standpoint, but I'm expecting long-term as a user it'll be just as -- if not more -- frustrating as running Linux on any Intel MacBook from the last 5 years, which means I won't bother.


My last Mac running Linux was a 2011 MacBook Air 13". My Linux (work) laptop is an XPS15. Optimus on Linux is now completely transparent (merged into Xorg in April '20). Sleep doesn't seem to work well, though: the fans keep spinning and it drains about 10% of the battery per hour. If anyone knows how to get around that "active sleep" mode, I'd appreciate it.
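For what it's worth, that "active sleep" behavior is often the kernel defaulting to s2idle instead of deep (S3) suspend. A sketch of checking and switching it, assuming your firmware still exposes S3 at all (some recent machines don't):

```shell
# Show which suspend modes are available; the bracketed one is active,
# e.g. "[s2idle] deep" means shallow suspend-to-idle is in use.
cat /sys/power/mem_sleep

# Switch to deep (S3) suspend for this boot (run as root):
echo deep > /sys/power/mem_sleep

# To make it persistent, add mem_sleep_default=deep to the kernel
# command line (e.g. GRUB_CMDLINE_LINUX in /etc/default/grub).
```

No guarantee it helps on every XPS BIOS revision, but it's the usual first thing to try.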

My personal laptop is an HP Envy x360 with a Ryzen 5 2500U. That was rough when it came out. Windows Update updated the BIOS on the laptop, which installed new GPU firmware that wasn't supported by the Linux driver for a few months. It's been solid since kernel 5.0 came out.

IMHO, the Linux experience on the AMD APUs is better than the Windows experience due to how rough the AMD windows drivers can be for the APUs. The fans are always shrieking at me under Windows but at idle they are off in Linux and the battery lasts significantly longer.


> And even then, audio and suspend/resume didn't work (well, suspend was fine, resume... not so much)

The computer never woke and needed to be hard-rebooted if it was ever allowed to suspend? I ran into this after installing Linux on a MacBook, downgrading the kernel to an old LTS release eventually worked.


Yep. The issue was that the NVMe controller wouldn't wake up properly. Downgrading the kernel wouldn't have helped, as support for that NVMe controller itself had just recently been added.


Yeah, other than as an experimental exercise, I don't get why anyone would ever want to do this. You will spend WAY more money on the Apple hardware, and you will struggle to keep things consistent. If your goal is to run Linux, then don't buy a Mac.


> You will spend WAY more money for the Apple hardware

I think right now the new M1 machines are actually priced pretty reasonably. If they'd run Linux, I would buy one in an instant.

I mean, try to match the Air with a Thinkpad, especially if you consider the screen. I think there is not even one recent AMD Thinkpad with a > FHD screen. And who wants a 2020 Intel machine?

Edit: To make this clear: No Linux, no Mac for me (as I don't believe in MacOS's future at all). But even considering the horrible keyboard, I think the M1 macs are totally worth it just for the hardware. That's a first. And I am the kind of person who feels all warm and fuzzy over ejecting an ultrabay hard drive, or changing RAM/display in < 5 minutes in a laptop.


If you care about display, trackpad, and battery life, the M1 macs blow everything else out of the water, and for pretty cheap.


> A Mac with subpar support for a webcam, energy saving, suspend/resume, the trackpad, brightness/volume controls, etc is not a laptop, it's an expensive paperweight.

There are Mac desktops as well. That said, I'd say the list of mandatory components on a laptop is:

- Decent GPU Drivers

- Workable power management with sleep/ awake

- Trackpad (this should be straightforward)

The webcam and brightness controls are stretch goals. I can't imagine the trackpad is vastly different from currently shipping ones. The webcam might cause some significant headaches, since Apple secures that fairly tightly.

The onboard Wi-Fi chip is likely worthless; from what I recall, many of them are, even in Windows laptops, due to proprietary drivers.


Most Windows laptops thankfully use Intel Wi-Fi. Apple is unfortunately completely married to Broadcom. There are open drivers for Broadcom Wi-Fi, at least not-the-newest ones, but they do suck.


Good, sounds like things have improved a lot since I was using Linux. Sucky drivers are a starting point, last time I was on Linux as a daily driver, I had to use a PCMCIA card with a binary blob from Broadcom because the internal wireless card was dead.

Hopefully with a little bit of money and some enthusiasm from the team, they'll be able to make more progress.

A lot of people bitching about this being a waste of effort, but all the things they do here have follow on effects. If they improve the Broadcom drivers for the M1, much of that work will directly benefit everyone else on Broadcom.


I'm not sure why you are so pessimistic. There are many community-supported Linux devices that work fine, the Microsoft Surface line being a prominent example. There are patches available for all recent kernels, and AFAIK all generations are supported. There, custom drivers have also been written.


The Microsoft Surface doesn't use special hardware nearly as much as Apple's laptops do. The M1 especially uses a custom GPU, boot system, power management, etc.


>subpar support for a webcam, energy saving, suspend/resume, the trackpad, brightness/volume controls

Honestly this sounds exactly like my experience with linux on any laptop, including three thinkpads.

Linux is a great desktop and server OS but the UX of a laptop actually demands a certain level of polish/fit-and-finish that linux desktop environments just don't have, otherwise it's constantly getting in your way.


Surprisingly to me, even all of these work fine on Debian. Debian needed me to side-load the Intel wireless drivers (a disappointing aspect, I grant), but otherwise everything just worked right out of the box. Lenovo X1 Carbon 5th generation.


I haven't run into that problem in years. As long as you are running a mainstream laptop (Asus, Acer, Dell, HP, etc.) you will likely not have a problem. I will grant that you should still review the hardware against supported drivers (especially around Wi-Fi)... but 9.8/10 it will just work.


Apple's poor support for Linux on Mac hardware may reverse if the company develops data center ambitions for the M1. Partnering with a cloud provider (maybe Microsoft?) to deliver seamless Linux deployment on M1 would be significant...


Microsoft already has a good partner in Ampere, a company fully dedicated to servers and standards.

Apple clearly doesn't seem interested. The M1 is just a scaled-up version of the iDevice SoCs. From the reverse engineering we've seen so far, it is extremely clear that not even a single step towards any standardization was taken. It's very much the ad-hoc hodgepodge of embedded crap companies build when they only care about their own complete product and don't allocate any budget towards any "refactoring" that doesn't directly benefit the end product.

They use some ancient Samsung UART for debug (because the first iPhones used a Samsung SoC?), old P.A. Semi I2C controller, the Synopsys DesignWare USB 3 controller (just like some random cheap Allwinner/Rockchip/etc), and here's the fucking kicker, a custom Apple interrupt controller and a custom IOMMU too. These probably predate GICv2/3 at least. But there was no reason for them to switch to the standard Arm GIC and SMMU so they didn't >_<


Interesting details. Is using Synopsys DesignWare a problem?


Not necessarily – well, it does work :D and at least it mostly speaks XHCI – but it does require non-standard code for reset and initialization.


I don't see any reason to do it from a practical standpoint, but to do it for fun, why not?

If you want something working, I'd also not recommend using hardware from someone like Apple.


Not all M1 chips are in laptops. I’m kind of intrigued by the potential of a cool and quiet linux server.


The first week that XNU ran on ARM, it didn't have very good hardware support either.


Apple has all the documentation needed to write drivers for all the custom hardware found in an iPhone (the original XNU-on-ARM device) or in an M1 Mac. The Asahi volunteers and Corellium don’t, they will need to do much more work, including reverse-engineering. So, what’s your point?


My point is that throwing up one's hands and saying "it's hopeless and it sucks!" literally the first day linux has been brought up on a new board is... premature.

Yes, it will have to be RE'd. Yes, that takes time.


Linux does run, and it also runs on the M1.



