Asahi Linux for M1 Macs: progress report for September 2021 (asahilinux.org)
474 points by fanf2 52 days ago | 186 comments



> The NVMe hardware in the M1 is quite peculiar: it breaks the spec in multiple ways, requiring patches to the core NVMe support in Linux, and it also is exposed as a platform device instead of PCIe. In addition, it is managed by an ASC, the “ANS”, which needs to be brought up before NVMe can work, and that also relies on a companion “SART” driver, which is like a minimal IOMMU.

Stuff like this makes me wonder: why does Apple do this? If I try to give them the benefit of the doubt, I can assume that these changes are done for performance, cost, power-saving, or maybe even security reasons. Otherwise it just seems like Apple does these things in order to make it harder for other OSes to run on their hardware. Which is certainly their prerogative, but it just makes me think less of them.

On the other hand, this is really cool:

> However, Apple is unique in putting emphasis in keeping hardware interfaces compatible across SoC generations – the UART hardware in the M1 dates back to the original iPhone! This means we are in a unique position to be able to try writing drivers that will not only work for the M1, but may work –unchanged– on future chips as well.

... but cynically (or perhaps just realistically), I can easily believe that this isn't done for reasons of openness, but because this makes maintenance of macOS itself easier for Apple.


Everything Apple does they do because it makes sense for them. They are neither trying to help nor hinder third-party OSes. They just don't care.

The NVMe design makes sense in the context of their common ASC architecture and how their SoC works. Various quirks are due to things like supporting their storage encryption. Even for things which we can't quite explain, I have no trouble believing that it made sense for them for whatever reason.


Thanks for the even-handed reply, which I imagine is much easier to have after diving into all this stuff first-hand (I assume you're the same 'marcan' who wrote the progress report).

As much as many of us want to attribute positive or negative reasons or motivations to things companies do -- especially secretive companies like Apple -- it's a nice reminder that most decisions are made without malice, because they make the most sense based on the requirements at hand.

Having control over the entire hardware and software ecosystem means that there's no reason to follow standards to the letter when those standards get in your way and make things harder, more expensive, or even just not possible.

Anyhow, just wanted to finish by saying all of this work is truly impressive. Obviously there's still a lot to be done to support everything to the same level as, say, a Dell XPS machine, but the progress made so far is pretty amazing, and even though I have no plans to buy an M1 Mac any time soon, I'm always excited to read these updates.


>As much as many of us want to attribute positive or negative reasons or motivations to things companies do -- especially secretive companies like Apple -- it's a nice reminder that most decisions are made without malice, because they make the most sense based on the requirements at hand.

Keep in mind too that "requirements at hand" can include a significant degree of path dependency, and that it can be a mistake to read too much reasoning into something too. Sometimes there just isn't any master plan: some decision made years earlier has created a dilemma due to the dependencies built on it since, and there just aren't the resources (or ROI, particularly without any spec compatibility to worry about) to deal with it right then. For a hardware company with necessarily very long lead times (they can't exactly start having chips fabbed and making electronics two weeks before launch), Apple runs a pretty hectic schedule with major new launches every single year. Even a vertically integrated company on their scale is not completely immune to the challenges of tight coupling, nor does it think of everything ahead of time, and there are definitely decisions Apple has made that they regret but can't easily get out from under. For example, just a few weeks ago HN had a sizable thread on "Swift Regrets" [0] by Jordan Rose:

>"I worked on Swift at Apple from pre-release to Swift 5.1. I’m at least partly responsible for many things people like about Swift and many things people hate about Swift. This list is something I started collecting around when I left Apple, and I’m putting them up so other language designers can learn from our mistakes. These are all things that would be hard to change in Swift today, because they’d break tons of people’s code. That’s what happens with real-world languages and libraries: the more users you have, the fewer breaking changes you can make."

For Apple the same definitely turns up in hardware once in a while too, though of course they try to be careful and have a lot of institutional knowledge about pitfalls by this point. You can't always look at something they're doing now and assume that if they were greenfielding it they'd do the exact same thing again.

That's part of why they've long been (even before iOS) such hardasses about stuff like 3rd party use of private APIs under development. They certainly don't have any active incentive to break existing stuff (if everything could magically work forever, that'd be awesome), but simultaneously they know they fuck up sometimes and want to be able to make changes. It's hard to balance those in software without some kind of pushback against devs putting experimental OS frameworks into production software or spraying files all over the drive during installs. Users will inevitably come to depend on it anyway, and then you're stuck.

For hardware I think it's obscure enough that they're not worried about it; as absolutely awesome as it is, Asahi Linux is never going to pinch them there.

----

0: https://news.ycombinator.com/item?id=28603794


> Keep in mind too that "requirements at hand" can include a significant degree of path dependency, and that it can be a mistake to read too much reasoning into something too.

This is why I'd say there's some value in sticking to spec even if it doesn't 100% meet your needs or do things the way you think they ought to be done. Tacking away is as likely to screw you in the long run as it is to benefit you. The benefits would be much more obvious, of course, in a world that's almost an unimaginable utopia from our perspective: one without intellectual property, in which major advances like Apple's ARM silicon were widely shared and put to the good of humankind in general, rather than held privately for the profit of a single corporation.


There's also the fact that the cost of those decisions would have quite an impact on R&D investment, which is already bonkers as it is.

A lot of people try to attribute some human aspect to a large multinational (though they are mostly just sociopaths controlled by shareholders, in a sense), but technology-wise it's often just "it made sense for us at our scale".

Same with the weird USB-C PD controllers that are 'almost normal' but then specialised for Apple; it's not that they want to make it hard to repair, use custom software with, or patent it and screw with other companies... it just makes sense to change that part instead of changing another part. At scale, that is a choice you have, but also a choice you must make in such an implementation. This is of course not exclusive to Apple; a lot of really high-quantity, mass-produced electronics contain slightly modified versions of existing parts, because that turns out to be cheaper, more reliable, or a better fit to spec than modifying the rest of the design around an existing part.

This is one of the baffling things about calculators or toy electronics (for light and sound effects) in the same vein: modifying an ancient chip with very crude software, on single-sided, extra-thin, older-style PCB material with no silk screen and only partial solder mask, with a die-on-PCB with wire bonding and an epoxy blob... all to make the assembly 3 cents instead of 4 cents, and to reduce the number of connections so that the 3 cent assembly has fewer opportunities for defects and triggers fewer multi-thousand-cent service processes (a swap in a store, call center, website, email, etc.). Every customer who doesn't call about a crappy firetruck whose lights stopped flashing in a 3 cent assembly is a huge win. This is of course an extreme example; more complicated hardware and software extrapolates quite extremely from there.

Oddly enough, some of those practices are implemented at a much less impactful scale in HP and Dell desktops, where they might have one large non-standard PCB hosting all the normal PC components but also all the DC-DC converters, front I/O, and on-board WiFi. It makes almost no sense at all to do that, but the reduced number of connectors apparently makes the products cheaper to support and longer-lasting, and since people don't upgrade them anyway they are often just 'used up' and thrown away. That last part is bad of course, but it's something a design-for-manufacture department is unlikely to care about or mitigate with a trade-in or recycling program (which often just ends up meaning: ship the trash to China or Africa and let them deal with it).

The amount of details and their impact at this scale is astounding. Add custom silicon and you're almost in a new dimension.


> Everything Apple does they do because it makes sense for them. They are neither trying to help nor hinder third-party OSes. They just don't care.

That's my read on this as well. They designed it so it would work well with their other hardware/software and this is what they landed on. I seriously doubt they paid even a second of thought to other OSs running on their hardware. Some people will call that "user-hostile" but it's much closer to apathy IMHO and as a macOS and Apple hardware user I'm fine with that. It's cool that Linux can/will run on the M1 but I'll never be doing that myself. It reminds me of that scene in Mad Men "I don't think about you at all" [0] but Apple's feeling isn't even as sinister or hostile as the comment made by Don.

[0] https://youtu.be/LlOSdRMSG_k?t=40


> They just don't care.

Exactly this. I can easily imagine somewhere in some OS slack channel in Apple:

hardware dev: hey dear @channel, we're breaking the NVMe spec in a couple of places and we're kinda running out of time to fix it, would you be so kind as to address it in your drivers if it's not too hard?

os guy: yeah, no worries those seem trivial, we can easily adjust.

hardware guy: cool, thanks a bunch.


> They are neither trying to help nor hinder third-party OSes.

That's naive.

It is absolutely part of their design goals to plan for obsolescence and to prevent third-party OSes from running perfectly on their devices. The move to soldering SSDs and RAM on laptops and desktops is designed to prevent users from extending the life of the device. Mac Minis experience a delay of 2-3 minutes before even starting to boot an alternate OS, to frustrate users. The T2 chips even prevent booting a lot of other OSes. There are umpteen examples like these.

And that's all ok when you consider the business goals of Apple. But please don't pretend that it's all an "accident" or Apple doesn't care if a user wants to break free from the stranglehold of the Apple ecosystem.


Perhaps that's their stance on competing software, but certainly not competing hardware. I recall an internal memo about intentionally being poorly compatible with other brands when implementing standards, so that it could be written off as a programming error.


> it makes sense for them. They are neither trying to help nor hinder third-party OSes. They just don't care.

That sounds eerily similar to a paperclip maximizer (https://wiki.lesswrong.com/wiki/Paperclip_maximizer). I suppose we all know what Apple is maximizing, but I never drew this parallel before.



> Stuff like this makes me wonder: why does Apple do this? If I try to give them the benefit of the doubt, I can assume that these changes are done for performance, cost, power-saving, or maybe even security reasons. Otherwise it just seems like Apple does these things in order to make it harder for other OSes to run on their hardware. Which is certainly their prerogative, but it just makes me think less of them.

The reason for most of these things is most likely that Apple didn't start from scratch with the M1. M1 Macs in many respects appear to be an exercise in "how can we make our existing iPhone / iPad system architecture into a general purpose computer", and so comes with a surprising amount of legacy idiosyncrasies. For example, Apple started using NVMe-like storage all the way back in the 2015 iPhone 6S, and therefore was 1) very concerned with fitting everything into a small, power efficient package and 2) not very concerned with being standards compliant at all.

If Apple had started from scratch with the M1, it would most likely have been more standards compliant. Out of Apple's self-interest and ease of maintenance, not to help the community. If there's anything Apple has shown - on Macs only - it's that they're not really hostile with respect to standards compliance and helping third party OS support. It's just that they don't care _at all_ about supporting that, and prefer supporting their own internal processes every step of the way.

(Not an expert on this topic at all, just echoing my impression of the system architecture choices)


> If there's anything Apple has shown - on Macs only - it's that they're not really hostile with respect to standards compliance and helping third party OS support. It's just that they don't care _at all_ about supporting that, and prefer supporting their own internal processes every step of the way.

Apple wrote Boot Camp drivers and the Boot Camp Installer for Windows on Intel Macs. The Boot Camp Assistant itself also paves the way for users to install Windows, for example, by partitioning storage for Windows installation.


That's fair. I suppose Apple does care about third party OS support as long as it sells them a bunch more Macs with comparatively low engineering effort. The ability to run Windows probably helped them sell a significant percentage more Macs, probably especially so when the Mac market was smaller just after they switched to Intel.

The Linux community had to reverse engineer Apple T2 drivers itself though. Apple didn’t restrict anything there, but the community had to figure it out on its own.


I believe this has come up multiple times on HN but the reason Apple breaks NVMe is because NVMe command structure isn't large enough to hold all the crypto state information they need to shove down to the device. The spec would not support what they are doing, so their choice was to either tweak NVMe to suit, or abandon their feature.


Why are they even doing crypto at the NVMe level?! Why is it not good enough for them to do it at the filesystem level (like ZFS native encryption) or at least the block device level (GELI/dm-crypt/etc.)?


Because that way the OS does not need to ever see the low level keys. It allows them to feed them straight from the Secure Enclave, which means an OS compromise can never result in compromise of storage encryption. Plus they do the crypto in a hardware accelerator, so it's free for the CPU.


What's the threat model here? If you install malware that runs as your user, it can see and edit any of your files regardless of whether or not the OS knows decryption keys for them. If you require a password to cold boot the device, then stealing someone's powered-off laptop doesn't get the data. (That's without any special hardware.)

So the scenarios that remain are: fingerprint to unlock the device to boot (you need to do some crypto before the OS, unless you want the fingerprint to just flat-out give up the key to anything sniffing the SPI bus), or somehow resisting data modification without requiring the user to type a password. I feel like BitLocker tries to do the latter, but I don't know what attacks they are trying to protect against. (It's on by default on new laptops, but you can just sniff the key over SPI while the OS is booting, so what security does it actually provide?)


I'd guess the threat model is imaging the encrypted storage, which can be done even if the computer is turned off, in the hopes of acquiring the keys later to decrypt the image. If you install malware and the encryption is not visible to the OS, all you get is the data on the machine at the time of implant, since storage image decryption is coupled to the hardware. I also imagine it is much easier to exfiltrate encryption keys stolen from the OS undetected than to rummage around and exfiltrate the contents of a hard drive.


> What's the threat model here?

I'm guessing they're protecting devices against APTs: state-level actors with lots of funding, competitors intent on discrediting their ecosystem, NSO Group, etc.

As a side benefit for users and Apple it makes the entire chain more difficult to introspect/attack.


> If you install malware that runs as your user, it can see and edit any of your files regardless of whether or not the OS knows decryption keys for them.

The OS now prompts to grant file access permissions to applications.


Also gives them a bigger lock-in opportunity.


… what lock-in opportunity? Who’s stopping anyone from copying files off a Mac?


Their evil plan is to make their platform so good nobody will want to use anything else. Dastardly!


The ability to swap the SSD.


Are you asking why they don't expose the plain-text keys to the kernel software? That's your answer.


> UART hardware in the M1 dates back to the original iPhone!

UART as in a simple serial debug interface?

I thought they were fairly simple - isn't that like celebrating Apple for having the same transistor design since some date?!


They are simple, and yet Samsung managed to come up with about 4 incompatible variants across their other SoC lines, which Linux has to support explicitly. Apple stuck to the same one they had from the start.


Asahi Linux progress reports rank up there with Dolphin[1] progress reports for being excellent examples of technical writing as well as simply reminding me why I fell in love with computers in the first place. Love reading this stuff even if I may never use it (although I would love to run Linux on M1 hardware someday!).

[1] https://dolphin-emu.org/blog/


Still a bit sad that in both cases it's effectively a righting of wrongs caused by closed-down, undocumented computing platforms.

On the other hand if the respective authors like accepting such challenges and working on these projects, there is nothing wrong with that. :)


Can't write a good detective story without a mystery, after all.


How about running Dolphin in Linux on M1 hardware?


Should work just fine. It already runs on macOS on M1, and on ARM64 on Linux in general. Just needs a GPU driver to perform well :)


I wonder if Alyssa (or other devs that took a deeper look at the DCP) have any idea why the builtin HDMI port of the Mac Mini would not send DDC messages to the connected monitor.

I’ve been struggling with this in Lunar (https://lunar.fyi) for a long time and while I tried my best by comparing ioreg dumps and looking at the DCP driver in IDA, I couldn’t find any obvious logic would block this communication voluntarily.

I should mention that on M1, the DDC messages are sent to the monitor by calling IOAVServiceWriteI2C on the DCPAVServiceProxy of the monitor as seen here: https://gist.github.com/alin23/b476a02a8cd298436848e28476aed...

I’m thinking that this logic might either exist in the DCP firmware which is not accessible from userspace, or it might just be a side effect of some out of spec behavior that the HDMI port might have.
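For anyone following along, the bytes involved here are just generic DDC/CI framing (VESA MCCS); below is a Python sketch of the packet I build before handing it to IOAVServiceWriteI2C. To be clear, the framing comes from the generic DDC/CI spec, not anything Apple documents, and whether the DCP endpoint wants exactly this layout is my assumption:

```python
# Sketch of the generic DDC/CI "Set VCP Feature" framing (VESA MCCS) that a
# tool like Lunar hands to IOAVServiceWriteI2C on the DCPAVServiceProxy.
# Assumption: the DCP side expects the standard spec layout shown here.

DDC_DISPLAY_WRITE_ADDR = 0x6E  # 7-bit I2C address 0x37, shifted for write
HOST_SOURCE_ADDR = 0x51        # "host" source address per DDC/CI

def set_vcp_packet(vcp_code: int, value: int) -> bytes:
    """Build a Set VCP Feature packet (the address byte itself not included)."""
    payload = bytes([
        HOST_SOURCE_ADDR,
        0x84,                  # 0x80 | length of the 4 data bytes below
        0x03,                  # Set VCP Feature opcode
        vcp_code,
        (value >> 8) & 0xFF,   # value, big-endian
        value & 0xFF,
    ])
    checksum = DDC_DISPLAY_WRITE_ADDR
    for b in payload:          # checksum XORs the address byte and payload
        checksum ^= b
    return payload + bytes([checksum])

# VCP code 0x10 is luminance (brightness): set it to 50
print(set_vcp_packet(0x10, 50).hex())  # -> 5184031000329a
```

If a packet like this silently goes nowhere on the HDMI port, the channel is being cut somewhere below this layer, which is what makes me suspect the DCP firmware or the port itself.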


The DCP firmware has multiple endpoints. Currently, we only use and understand the main endpoint, which lacks raw I2C/DDC/EDID interfaces. Presumably this is available on another endpoint, but we haven't looked at this yet. Your gist gives me hope.

The HDMI port on the Mini is funny. The M1 supports exactly one internal display (the panel on a MacBook or iThing) and one external display (over DisplayPort/Thunderbolt). This is why M1 MacBooks can only drive a single external monitor.

For the Mini, the "internal" display is an internal DisplayPort connection, converted to HDMI with a mcdp29xx chip, and stuck on the HDMI port. Expect weirdness.


I thought it must be something related to the DisplayPort transport. I noticed in ioreg that the HDMI AppleCLCD2 had downstream=HDMI upstream=DP as its transport params.

So it's possible that it isn't DCP related after all; the mcdp29xx chip might not be implementing the DDC part at all, like most USB-C hubs that I have to deal with.

Thanks for the insight!


It's worth noting that the MCDP29xx has its own DCP endpoint and proxy driver, so it likely implements its own DDC channel that way. Obviously, DCP itself needs to get the EDID one way or another. We'll see once we start looking into those add-on endpoints :)


I’ll keep a close eye on your work then!

I don’t have an M1 Mac Mini to do more thorough tests, but one Lunar user reported that the DDC set brightness command worked for a short second on the initial connection of the monitor.

So from that I’m guessing that the MCDP29xx keeps the DDC channel open until it gets the EDID and does whatever handshake is needed, and then... maybe it closes the channel?

Another weirdness anecdote is a user who reported that DDC commands sent to the Thunderbolt-connected monitor were also sent to the HDMI monitor at the same time, but (as usual) writing directly to the HDMI monitor service did not do anything.

Could you maybe point me to where I should start looking for the MCDP29xx assembly? I previously looked at `/System/Library/Extensions/AppleMobileDispH13G-DCP.kext` but I’m thinking that what you’re talking about might not be a kext.


Thank you for making Lunar. I installed it right away and it's great.


This looks super cool! I just got an M1 MBA, and it's fast as lightning. It's nice knowing a project like this is percolating, and that in a few years, when this daily driver becomes an extra machine, there will be fun Linux alternatives to try.


If your MacBook is still working by then. Looks like there is a lawsuit against Apple because there are hardware issues with the screens of the M1 Air and Pro. Before that there was the USB-C PD bricking issue, though that may have been fixed by now. Overall, not the best track record for longevity.


Observing the achievements of this group of young talented devs only increases my imposter syndrome.

Awesome job, love the enthusiasm.


Agreed, I have a M1 mini sitting around doing mostly nothing and I'd love to use it to contribute to the project but I'm not a developer so I don't know where to start. I am definitely looking forward to the mentioned installer release so I can try it out myself though. I can believe the hype about how blazing fast it is on the desktop even in software rendering, given how powerful a Linux VM on Parallels can be on it. It rivals my Ryzen 5 3600 machine and even surpasses it in some metrics, and that runs Linux bare metal.


Do you know about Marcan's YouTube channel? He livestreams (some of) the hacking, so you can actually see exactly how to start! The streams are super long, but make for excellent ambient stimulation.

I wish he was doing tutorial-style videos, too. Pleasant voice, well-spoken, and incredibly knowledgeable. I bet he could do videos that don't leave you thinking "Yeah, but why does this work?".


I tried doing one of those! It's not exactly a tutorial, but it's an attempt at walking through everything that went into building the hypervisor that we use for hardware reverse engineering and testing.

https://youtu.be/igYgGH6PnOw


I don't follow your feed too closely, as it's usually not within my time budget, so I missed that.

Thank you very much for doing what you do! You are an inspiration to me and I admire the calm and structured way you work. Hope you keep enjoying what you do!


Reverse engineering and hacking can be a crazy time sink. It’s truly a game for those with lots of free time.


There are many things that are crazy time sinks - either because they are difficult, or because they are emotionally or intellectually engaging. Or because they are mandatory in the context of the person doing them.

And the concept of “free” time is also relative. For example, you could argue that child rearing is a game for those with lots of free time.


I used to have the same sentiment when the same dev hacked the original Wii and PS3.


"On typical SoCs, drivers have intimate knowledge of the underlying hardware, and they hard-code its precise layout: how many registers, how many pins, how things relate to each other, etc. ...

However, Apple is unique in putting emphasis in keeping hardware interfaces compatible across SoC generations ... the device tree then can be used to represent the dependency relationships between these power domains dynamically. ... This approach is unfamiliar to most upstream subsystem maintainers, but we hope they recognize the benefits over time. Who knows, perhaps this will inspire other manufacturers to do it this way!"

This is really weird commentary to me, as far as I know Device Tree has been the standard for embedded ARM drivers in the Linux kernel for years, and for several years on PowerPC before that. When I worked in embedded linux, often the "bringup" for components in new ASICs was to create the device tree definition. What am I missing here?


This is about compatible properties. On typical DTs and SoCs, you end up with entirely new compatibles for tons of stuff every SoC generation, and it'd never work with the old ones. What we're doing is putting on generic compatibles that old drivers can continue to bind to, and designing the rest of the binding to be generic enough to describe parameters of the device.

So, a GPIO driver for a random SoC might hardcode that it has 42 pins. Ours uses a property instead. The vast majority of clock and power management drivers for other SoCs hard code the clock or power hierarchy and provide static sets of outputs. We put every single clock control in a separate DT node and describe their relationships. A typical cpufreq driver has intimate knowledge of the clocking controls for the whole SoC, and hardcodes the layout of the clock registers. My prototype for that just has a separate instance for each CPU cluster, and describes the performance states in the DT. And so on and so forth.

Basically, on a typical SoC, either a hardware block is identical to last generation's, or it's incompatible. Apple's blocks instead follow patterns, so we're building parameterizable bindings that can handle any configuration of those blocks with a single driver.
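As a sketch of what I mean (the compatible and property names here are made up for illustration, not our actual upstream bindings), a parameterized node looks something like this:

```dts
/* Illustrative only -- "t9999" and "apple,npins" are invented names for
 * this example, not the real Asahi Linux bindings. */
pinctrl@23c100000 {
        /* Specific compatible first, generic fallback second, so an old
         * driver that only knows the generic string can still bind on a
         * future SoC. */
        compatible = "apple,t9999-pinctrl", "apple,pinctrl";
        reg = <0x2 0x3c100000 0x0 0x100000>;
        gpio-controller;
        #gpio-cells = <2>;
        /* The driver reads the pin count from here instead of hardcoding
         * "42 pins" per compatible string. */
        apple,npins = <212>;
};
```

The point is that the per-SoC differences live in the DT data rather than in the driver code.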


This is just embarrassing.

> the M1’s CPUs are so powerful that a software-rendered desktop is actually faster on them than on e.g. Rockchip ARM64 machines with hardware acceleration.


It’s so good that I decided to sponsor the project a while back. I will probably never use it, but I really like these guys' talent and dedication!


This fascinates me: all these negative comments that seem to stem from projections of fear of failure. Yeah, this project has a chance to fail, like everything else in life.

But to me it seems this project will indeed be ready in a few years, and I will certainly be running this on my M1 as soon as it is stable and useful. It will be interesting to see how this turns out; I just wanted to comment on the next-level narcissism going on. Why most of you choose to be pessimistic and make what is not your problem your problem is beyond me.

Keep up the great work, Team Asahi!


> all these negative comments that seem to stem from projections of fear of failure.

No. I criticise this project because it adds value to a device that we should all be boycotting.

We absolutely do not want to see the proliferation of custom, locked-down SoCs on the desktop platform, with each one incompatible with the others and limiting our freedom to run what code we want. That is why this project is extremely short-sighted for the future of our computing freedom.

The M1 is a locked black box. It is designed to take away user control of both hardware and software. These are legitimate criticisms that many Apple fans try to deflect. They claim that the M1 isn't a locked-down machine by comparing it with the iOS platform as proof. After all, iPhones and iPads have locked bootloaders that prevent you from even running any other OS, while this is not so with the M1 computers.

That's just plain denial. Just look at what has been happening to the Mac Mini:

1. The first few Intel Mac Minis allowed you some level of customisation of both the hardware (change RAM or HDD / SSD) and software (install other full featured OS).

2. Then came the Mac Minis with soldered RAM and SSD. You could no longer customise the hardware. Software was still customisable and you could still install other OSes. (Recall that Apple even offered free drivers for another OS, i.e. Windows).

3. The current generation M1 Mini now doesn't allow you to customise either the hardware (everything is soldered) or the software. Technically you can install other OSes, but the reality is that currently only crippled versions of Linux and xBSD are available, and practically the only full-featured OS available for it is macOS.

These are clear indicators of how Apple has been working slowly to lock down the Mac platform like their iOS platforms. (Right now, projects like these give Apple and the M1 publicity without harming their end goals, and so they are tolerated. Want to bet that as soon as some alternate, fully featured, viable OS appears for the M1, the bootloader will be locked, and the next Apple SoC will cripple it again?)

The frog is still slowly boiling - https://en.wikipedia.org/wiki/Boiling_frog - to keep you in denial.

There's another reason why I call this particular project short-sighted. Remember what happened when Apple introduced the Mini with soldered RAM and SSDs? It wasn't popular and didn't sell. Apple was forced to backtrack, and the next Mini didn't have soldered RAM (though the SSD was still soldered). A similar thing could have been possible with the M1 too. Apple has bet their future on the success of their ARM processors. But if people boycotted the Apple Silicon desktop platform for not being as open as AMD / Intel, Apple would have been forced to compromise a bit (at least strategically, for the short term) and released more literature to make the platform seem a little more open. And we might have seen Linux and xBSD being supported on the platform by now.


> No. I criticise this project because it adds value to a device that we should all be boycotting.

That's a pretty loaded, holier than thou attitude.

Keep your politics out of my computing and open source, please.


> Keep your politics out of my computing and open source, please.

Open source started as a political movement.

Are you gonna ask me to keep the government out of your Medicare next? :)

(no, that's not an invitation to debate broader politics, it's just the first example that springs to mind)


> Open source started as a political movement.

GNU is not the sole foundation of open source.


Do you really think Berkeley, of all places, working to make BSD available to the world for free, including excising all that proprietary AT&T code, wasn't engaging in a political act?

That the open source and hacking cultures of the 70s, 80s, and 90s weren't founded on an intentional rejection of the increasing balkanization of software and technology after the 1974 determination that software was copyrightable?

I understand, at an individual level, it might seem like throwing some code up on Github with a permissive license might not seem like a political act. But the history of open source is inseparable from the politics of intellectual property, copyright, and all those things that follow, including issues like the right to repair.


I've been involved in open source for a long time (well before GitHub). My experience is that there are different groups with different motivations wrt open source.

As someone who has been in the "open source for the common good" camp, there seems to be a more extreme "open source as a religion" camp.

You're not off base, and you've given me something to think about.

FWIW Berkeley wasn't immediately what I was thinking about, but I am more on the BSD vs GNU side of things.


> Keep your politics out of my computing and open source, please.

Then ignore me and go do the politics that you want rather than unnecessarily choosing to target someone with whom you don't agree or want to meaningfully engage.


Hm.

I've got an M1 Mini lying around (Apple annoyed me enough in the past six months that I've gone away from them entirely, replacing a M1 Mini with an ODroid N2+, among other things). If it's daily drivable, I suppose I should see about getting Linux installed on it. I've not gotten around to selling it yet...


If Apple has really annoyed you, put that Mini on the used market ASAP where it can potentially displace a new unit sale.

Not that I'm unsupportive of the Asahi project; it's just a fact that you're doing Apple a favor by keeping that M1 Mini around collecting dust, while every passing day it becomes increasingly irrelevant WRT offsetting new unit sales.


> replacing a M1 Mini with an ODroid N2+,

Hey, if you're running Linux, you're using my drivers either way ;-)

Now back to typing away at Panfrost on my M1 Linux to debug an issue on the Odroid N2 I have Ethernet connected to the M1...


I put Manjaro on my 2012 Mac Pro and it was so frustrating to even get the boot loader running that I'll never run Linux on a Mac again. Once it was running, Apple's garbage BIOS continued to present issues too.

If I don't have to work with iOS anymore then I'll never even buy another Mac.


> It’s not perfect, as it can’t support a select few corner case drivers (that do things that are fundamentally impossible to support in this situation), but it works well and will support everything we need to make 4K kernels viable.

I'm curious what these corner cases are; could anyone share?


eGPUs for one, but there are bigger problems to solve there. Other than that, IIRC some v4l stuff did this, and a few others. It's a tiny subset of drivers.


> bigger problems to solve there

It's not the fucking "maximum BAR size is tiny" problem again is it? (hello Rockchip :D)


Nah, but I was informed the other day that GPU drivers apparently like to map BARs as normal cacheable memory and... I'm not sure the M1 supports that.

And then if you don't do that you run into problems with apps mapping GPU memory and doing unaligned accesses (plus it's a performance problem).


drm likes uncacheable write-combining as an optimization… but that's disabled on arm64 https://patchwork.kernel.org/project/linux-arm-kernel/patch/... because it can cause image corruption glitches (saw them myself, had to find and cherry-pick that commit into FreeBSD's drm back then)

Generally I don't think "normal" is necessary? In FreeBSD/aarch64 we interpret most ioremaps (all other than WC and WB) as "device": https://reviews.freebsd.org/D20789

and there doesn't seem to be a performance problem. Well, I haven't scientifically tested the performance but SuperTuxKart can do >100fps at 4K on an RX 480 :)


With Device memory you can't do unaligned accesses, so userspace apps that map GPU memory and expect that to work (as it does on x86 and on ARM if you can do a Normal mapping) will break.


Looking at amdgpu_ttm: for '"On-card" video ram' it sets TTM_PL_FLAG_WC, so that would use ioremap_wc. That's not Device; we have it as WRITE_COMBINING, which is actually WRITE_THROUGH. So yeah, normal mappings might be used somewhere.

Where does that limitation on the M1 come from anyway?


The M1 bus fabric is very picky about access modes. We had to add a whole new ioremap_np() to the kernel, because it requires nGnRnE mappings for on-die peripherals, while Linux used nGnRE for everything on ARM64 until now. For PCIe BARs it wants nGnRE instead, and I'll be very surprised if it'll take a normal mapping...


I was initially critical of this endeavor before it started. However, the issues I had in mind have been addressed and this effort is truly impressive. I really appreciate that marcan is committed to going beyond doing this the right way.


I cannot wait for this to come to fruition. Thanks for all the hard work, I'll be installing it as soon as sound and GPU acceleration hit.

Amazing work :-)


Amazing progress. However I don’t imagine GPU driver progress will be fast, or ready in the near future.


Reminder that GPU userspace is passing 90% or so of the GLES2 tests. It's just missing kernelspace, which is arguably the easier part.


That’s absolutely exciting. Thanks for all your efforts and the team’s! I hope to contribute to the project somehow, but you guys are on another, “lower” level :) Cheers


Mind blowing progress! So impressed with the Asahi team.


I wonder if it would be legal for a veteran of Apple, who has worked on porting macOS to M1, to take that knowledge of the hardware and assist with BSD or Linux porting? On the one hand it's just porting to a piece of hardware, but on the other hand she may be using some proprietary knowledge of a closed interface.

Anyone have experience with doing this for Apple hardware? Does the company come after you if you reveal some of this knowledge to the wider public?

Not asking for myself, BTW, as I have had no affiliation with Apple.


Meta but does the name have any relation or maybe give homage to the other Asahi?

https://en.wikipedia.org/wiki/Asahi_Pentax


There are lots of things called Asahi (a beer, a newspaper, an ISP, ...), but our project is specifically named after the Asahi apple, which is the Japanese name for the McIntosh Apple.

https://asahilinux.org/about/


That's ridiculously clever.


Their about page only says:

> Asahi means “rising sun” in Japanese, and it is also the name of an apple cultivar. 旭りんご (asahi ringo) is what we know as the McIntosh Apple, the apple variety that gave the Mac its name.


There are tons of "other Asahi" - among other things it's a beer (Super Dry), a baseball team, a ramen noodle moniker


This is a truly amazing project. I'm contributing financially (as I don't have the time to contribute code) to help Alyssa and the rest of the team. If you can, you should too.


Anyone know if the *BSDs are following this work in parallel or plan on doing the same? It would be nice to have some choice — yea I know MacOS is Darwin plus the BSD userland but I like the idea of throwing FreeBSD on an old M1 when I get sick of Linux (even if I eventually go back to a distro…)


So far NetBSD and OpenBSD are working on the M1 port.

https://wiki.netbsd.org/ports/evbarm/apple/ for NetBSD install instructions.

I don't see a focus on the FreeBSD side (yet?) however.


Wow, they've made a ton of progress -- I wish this work got more press. Not to take away from the Asahi Linux folks' work, but I had no idea that NetBSD, for example, was this far along.


Mark Kettenis has been working on the OpenBSD and U-Boot ports with us, and we'll be relying on U-Boot for our end user installs :)

We're also dual licensing all our bespoke drivers, so the BSDs can take code from there (particularly important for the GPU driver, as there is already a lot of shared code in that subsystem).

I'm actually thinking I'm going to rewrite the WiFi support patch for Linux from scratch, based on the OpenBSD version, just because they did a great job distilling what matters out of the original messy PoC patch that Corellium dumped earlier this year.


amazing -- godspeed!


So good to read Marcan’s comments on this thread, so much context. Thank you!


really naive question:

why not run Parallels since it now uses the native macOS Hypervisor.framework, and then Linux / Haiku / FreeBSD / whatever?

I thought of that as it somewhat outsources device driver digressions.


Because virtual hardware will never work as well as real hardware. You're layering one OS on top of another; that's not free. No GPU acceleration, etc.

(Plus, we actually support the M1's vGIC which Hypervisor.framework does not yet, so VMs running on Linux should perform better than VMs running on macOS! Yes, we beat Apple at supporting some parts of the M1 already.)


Because then you still don't own the hardware - Apple does.

Support can be pulled out from underneath you at any point and you're limited to the exposed hardware interfaces.

Throw on top that (at least for me) I have zero desire to maintain a macOS machine.


The hypervisor option to run macOS is a good way out for testing and using. Great job!


[flagged]


You broke the site guidelines badly with this flamewar. We ban accounts that behave that way, so please don't do anything like that again. More explanation here: https://news.ycombinator.com/item?id=28822026.

We detached this subthread from https://news.ycombinator.com/item?id=28763252.


Our project's goal is not to produce an "unsupported hack"; it's to make these machines work at least as well as, if not better than, any other machine with "actual Linux support" ;-)


That's cool. Some Macs rank near the top in terms of Linux support and openness. E.g., the MacBook 2,1 is one of the very few laptops supported by Libreboot. Others are totally unusable.

I like Apple hardware (and ThinkPads), but I prefer the openness of Linux (and BSD).

Current MacBook Airs are pretty competitive in terms of price, and the fanless CPU is very appealing. Is it much of a gamble to purchase one now expecting to run mainline Linux in a year or so?

Aside from the new architecture, some components Apple has been using are quite unfriendly. Broadcom wireless cards tend to perform really poorly with open source drivers.


The Broadcom fullmac cards should be well supported in Linux. I've seen people complain about poor WiFi range and the like on other Macs, but that is almost certainly caused by wrong/mismatched firmware and NvRAM distributed with linux-firmware for those machines. We're going to be using Apple's blobs exactly the same way macOS does, which are tuned to each specific machine and module variant, so radio performance should be identical to macOS.

I wouldn't buy an M1 right now... just because the next generation is likely around the corner :-). But yes, I can't promise all the polish or that every single thing will be upstream, but things should be solidly in the "daily driver" category a year from now. I'd be surprised if there was any non-optional (i.e. excluding accelerators - no idea if anyone cares about the Neural Engine yet...) hardware left without usable support by then. GPU is a big question mark that should become clear in the next month or two as I poke at the kernel side, but honestly I expect solid OpenGL a year from now, at the very least.


For the user-mode part of the neural engine, https://github.com/geohot/tinygrad/tree/master/accel/ane helps quite a bit today. There is also no generic neural engine support infrastructure at all in user-mode on Linux. :-(

For the kernel mode part, ANE is behind an ASC.


It's really just a matter of someone caring. I don't see finding myself with enough time to work on that any time soon, but if someone has a good use case for it and motivation to get it done I'm sure it'll happen (and I'll gladly help).

Some things are dodgier - e.g. though supporting the AMX CPU extensions is quite viable in a kernel fork, I'm not sure if that kind of thing would fly upstream. Same with security features like SPRR - not likely to happen. But these aren't really things the average end user has to care about; they're bonus features, not anything core.


Just curious: if someone was inclined to build a set of good (correct style, well-documented, etc) patches to support those extensions, why would the kernel refuse them?

(Put another way: I'm wondering whether you think the difficulty is technical or political.)


It's a bit of both. It basically boils down to it being an invasive change to core kernel code that doesn't have demonstrable benefit to users, and is specific to one platform. If we can point at a specific application and say "look, this speeds up 300% with AMX" then that might help convince people, but there would definitely be quite some political discussion, not least because what Apple did is a violation of the architecture.

I'm hoping we can at least push through a prctl to turn on TSO mode for x86 emulation. I think that one will be simple enough and have enough benefits to convince people.


Getting proper AMX support on _x86_ Linux merged is nasty because of state size issues. I know nothing about ARM64 extended state, but some of the same issues may exist.

(The x86 xstate design is horrible. I doubt that ARM64 is anywhere near as bad.)


All I can say is I wish you luck!


I understand the subtlety involved in your argument that Apple may and probably will make closed decisions that strictly benefit their own ecosystem even if at cost of making life difficult for those running Linux on this hardware.

However, “Unsupported hack, at best” is pretty dismissive of the engineering effort going on with this. People running Linux on “Windows-branded PCs” of the not-so-distant past could easily have said the same thing, because, say, Dell and Microsoft would be commercially motivated to keep only Windows running on their hardware. However, the ecosystem around Linux made things happen such that it is almost a no-brainer that most systems you buy _will_ run, say, Ubuntu with more than reasonable success.

With enough motivation, and the efforts of engineers such as those listed in this article, I bet we might even see better use of Apple hardware than macOS makes of it. Apple might optimise for their stuff, but Linux can bring the power built for other needs (HPC, real-time systems, etc.) to the table and can leap over macOS given the right abstractions. The future is exciting for the hopeful.


> People running Linux on “Windows-branded PCs” of the not-so-distant past could easily have said the same thing, because, say, Dell and Microsoft would be commercially motivated to keep only Windows running on their hardware.

I don't think this needs qualification. It's still true today.


We recently purchased a cheap (<$500) Dell Inspiron, and it advertises support for Ubuntu in the specs. Not even one of those fancy XPS "developer edition" laptops or whatever, just an ordinary Inspiron.

It's fine to worry about it (and UEFI was a real threat until Microsoft mandated support for other-os booting), but support for Linux in the PC world has never been as fragile as support for it on Macs. You're entirely at the mercy of Apple's whims; I support the Asahi project, but that's just the truth.


How did that Inspiron get its Linux support? Principally it was from hard-working hackers reverse engineering the components and getting drivers working through painstaking dev and test work. Dell then built a machine with hardware they knew worked thanks to all that effort and slapped a “Works with Ubuntu” sticker on it. That’s the sad reality. The idea that the PC platform and its peripherals are somehow an open platform paradise is a delusion.


Exactly. It may actually be in Apple's interest to not actively hinder the hacker community from doing wonders with Apple hardware. After all, a successful ecosystem there will lead to more purchases from Apple!


In fact the Asahi team has said Apple has done a lot of engineering work making sure M1 can and does support third party OSes. They're not directly contributing to or assisting those projects, but the characterisation that they are actively obstructing them seems false, according to those projects themselves.


Linux on basically every computer platform at least started out as, if not still is, "an unsupported hack". Even on x86 it's pretty questionable to call it supported on the vast majority of PCs out there.


I disagree.

Intel and AMD both contribute heavily to the Linux Kernel. This means that on the vast majority of Intel or AMD based systems, you can boot from USB and have a working system.

Apple doesn't even release documentation on their SOCs.

Apple doesn't use standard, open boot methods to allow other OSes to simply boot from USB. No BIOS/Standard UEFI.

There is a huge chasm between what you can do on the vast majority of PCs out there... and what you have to do to get anything else running on Apple hardware. IF you can get anything running. (Eg, locked bootloaders on iDevices)


Intel and AMD don't make the computer (most of the time). They make the CPU. ARM is as open a CPU platform as x86 is at any rate, so if you're gonna pick your line only based on the CPU there's literally no difference there. But you obviously know that's not all there is.

Most computers anyone can buy are full of components that only have drivers or support in the linux kernel because people reverse engineered them. Most are also full of components that linux only works with because of extracted firmware blobs from other systems.

EFI is more or less an open boot system (which... you know Intel Macs used that, right?), though with plenty of proprietary extensions and alterations in most booting computers. But the BIOS boot that predated it? Yeah, you know that was a proprietary system, right? Linux booted off it because people reverse engineered and cloned it until it was forced to be an 'open' thing. It didn't just magically happen one day, and IBM even sued about it back in the day.


For Linux and BSD, everything has been reverse engineered and hacked from the beginning. Linux has been ported to hundreds of unsupported computing devices. If everyone had your mindset, Linux and BSD would not even exist, only proprietary operating systems would exist. You just don't get it. Linux on M1 is a worthy challenge for the most talented and creative software engineers. If we all just waited passively for manufacturers to provide official support then literally none of the free / open source operating systems would have ever existed.


As a bit of meta-commentary, the fact that anyone is even arguing with the points you're making is deeply disappointing to me.

It seems like there's an entire generation of folks out there who can no longer tell the difference between an open technology ecosystem and a closed one...


Hi I grew up with a TRS-80 and a Tandy 1000. I'm not sure what generation you think I'm from but I suspect you're wrong.

What disappoints me is a "generation of folks" who can't tell that openness is a spectrum and that every bit of openness they enjoy was hard-fought for by pushing the boundaries of the platforms that existed beyond their original intentions, and now treat an effort to do the same on new platforms as some kind of windmill tilting.


> What disappoints me is a "generation of folks" who can't tell that openness is a spectrum

And Apple is very far on the closed end of that spectrum relative to their competitors, and always has been.

There is simply no arguing this point.

> every bit of openness they enjoy was hard-fought for by pushing the boundaries of the platforms that existed beyond their original intentions, and now treat an effort to do the same on new platforms as some kind of windmill tilting.

Let's be real: Asahi isn't gonna change Apple's corporate culture. It's a super cool technical project and I absolutely applaud these sorts of efforts simply because they're cool.

But you're not the Jedi fighting the empire here. You're not gonna blow open the M1 and suddenly change Apple hearts and minds.

The PC industry became more open for one reason and one reason only: Money.

The IBM monopoly was destroyed because clones were cheap, not because of ragtag freedom fighters.

Linux won over in the embedded space because it doesn't cost anything, not because folks suddenly got stars in their eyes over the GPL.

Apple will only change their practices when economics force them to. But it's very clear that they view a walled garden of hardware and software as key to their financial success, and everything they're doing is intended to build those walls just a little bit higher.

So until you can change that financial equation, nothing in the Apple ecosystem will change.

But, honestly, keep plugging away! Have fun! I personally love repurposing hardware to make it do cool things for which it was never intended. When I see projects like this, I think of Everest: We do it because it's there.

But let's not pretend that you're somehow going to change Apple just because you get the Linux kernel booting on the M1.


Fundamentally, if people with your and the spawner of this thread's attitudes had won the day we wouldn't have any of the things this thread is allegedly about loving so much. That's my main point here.

Also I am not involved in or affiliated with or even a likely user of the Asahi project, to be clear. I just find this attitude incredibly frustrating, and I'm definitely bristling at being accused of being a young'un or some shit for believing people can actually do surprising things. I am not the dewy-eyed idealist you seem to think I am; I believe what I'm saying here because I've seen it happen over and over and over again.


> I just find this attitude incredibly frustrating

What attitude? The attitude that Apple's hardware is closed and Apple should be criticized for building a walled garden? That we should, where possible, invest in open hardware and technology and put our money where our mouths are, so as to create the kinds of economic incentives that ensure that technology remains open in the future?

Do you really disagree with that?

> I am not the dewy-eyed idealist you seem to think I am, I believe what I'm saying here because I've seen it happen over and over and over again.

What "it" are we talking about here?

What do you think the end game is going to be?

I genuinely have no idea what your argument is, other than to tell me how I'm wrong without providing any specifics regarding how I'm wrong.

Please, enlighten me, I'm happy to listen!


The attitude that pushing on technology that seems hard isn't worth it.

We can do both. We should do both.

At any rate, I'd suggest you go read marcan_'s replies and the OP submitted to HN because the picture you paint and the picture they paint of this platform are not quite the same to begin with. I think the people doing the work deserve a little more credit to describe the platform they're working on.


> We can do both. We should do both.

So, other than for the intellectual curiosity (which is absolutely a great reason!), I have a simple question: why?

Edit:

By the way, thinking about it, I have my own answer to this question: So that this hardware can continue to remain useful long after Apple has ended support for it.

Of course, I continue to believe that it's better, now, to simply buy hardware that doesn't need this kind of reverse engineering to ensure longevity, and I steer my hardware purchases accordingly.

But the hardware exists, and people are buying it, so this project is a way to keep it alive after Apple eventually abandons it (which, granted, could be a long time; while I have a lot of problems with Apple, they are very good about continuing to support old gear).


> Of course, I continue to believe that it's better, now, to simply buy hardware that doesn't need this kind of reverse engineering to ensure longevity, and I steer my hardware purchases accordingly.

For Linux and BSD, ALL OF IT was hacked and reverse engineered from the beginning. With your stupid mentality we would never have any free operating systems; we would only have proprietary operating systems. This is how it's done. New hardware is released, someone hacks it, then it's supported. If we are forced to wait like children for manufacturers' permission or assistance (which is not actually required), then Linux, BSD, and other free / open source operating systems would never exist.

Linux has been ported to every modern computing device on the planet and this M1 port is par for the course. Only an actual idiot would recommend Linux devs should ignore the M1.


Support can be effective even if it doesn’t come from a first party. As long as someone is signed up to fix your problems, it’s not an unsupported hack.


It’s honestly pretty disrespectful to the people who have put a bunch of absolutely amazing work into this project to dismiss it as an “unsupported hack”.

It’s true that we are never going to see first-party support for Linux from Apple. That kind of sucks, and it would be much better for everyone if they had a more open and documented approach to this new platform. But it does at least have explicit support for booting third-party operating systems, and the attitude of the team behind the project is clearly not “hack together a prototype and we are done”.

I guess the point is that it’s not black and white, and there’s a large spectrum between “first-party support” and “unsupported hack”.


Perhaps you should expand your definition of what 'unsupported hack' means instead of taking it as an insult.

I support the work of anyone who gets Linux running on any hardware, I condemn the difficulty in which Apple unnecessarily restricts people from doing what they want with the hardware they own.

It is an unsupported hack because Apple doesn't provide means (such as documentation or code) for people to run what they want. So anything you can do will always end up being a method that Apple may decide to close up with later software updates, and everything has to be reverse engineered from scratch.

Contrast this with AMD and Intel code in the Linux Kernel contributed by those companies themselves and being able to just boot from USB using a user accessible BIOS/UEFI loader.

It's Apple's restrictions that make this necessary to be a hack.


Makes sense, even as someone who wishes I had an M1 mini lying about (all non-trivial purchases need home CFO approval) I would be hesitant to install Linux on it as I'm not a kernel hacker.


Well, this kinda pisses on the work whose progress report is posted here...


No, it doesn't. It pisses on Apple for being an incredibly and unnecessarily restrictive company.


That's absolutely no fun at all!

I've dropped Apple pretty hard (to the point where I've gone back to a flip phone - my objections to Android are significant as well), and I've accepted that if I want to use computers, it's probably best to use weird configurations that are often broken, because it discourages me from spending too much time on them.

I mean, I feel like even using x86 Linux boxes is lazy. :/ This box, currently, is an ODroid N2+ that's working fine. I've got a Raspberry Pi 4 over on the other side of my office (the "solar shed" posted yesterday, for some reason), and I've made that into a nice little desktop too. Still working on Spotify support, and I was actually quite surprised to find out that I can watch something on YouTube today...

My laptop is a PineBook Pro running a custom kernel (I really should push the sleep/resume patches I wrote for the sound card upstream... one of these days...).

Unsupported hacks are fun! They're challenging, and also reduce my dependence on computers, because odds are good that one or more simply don't work at any given moment.

And it's not just computers. The closest thing I have to a "daily driver" (other than the family car, which my wife and kids have priority for) is a 2005 Ural - sidecar motorcycle, evolution on a late 1930s BMW, quite literally the most vile handling thing you'll ever encounter on the road. It works, I work on it, I get places eventually, just no longer at the speed limit.

Anyway, the very insanity of bare metal Linux on the M1 actually appeals an awful lot to me.


[flagged]


Yikes, you broke the site guidelines incredibly badly in this thread, which unfortunately I didn't see at the time. Please don't post like this to HN again! That means not fulminating, not name-calling, not posting insinuations, and most of all not flaming. If you wouldn't mind reviewing https://news.ycombinator.com/newsguidelines.html and taking the intended spirit of the site more to heart, we'd be grateful.

We detached this subthread from https://news.ycombinator.com/item?id=28764935.


I don't think this really makes sense. I agree that you could characterize things like soldering on RAM and storage, or gluing the battery in, as user-hostile (because that means the user can't repair or replace or upgrade), but changing the interface used by a component that's already integrated just won't matter to any user (at least one that is staying in the Apple ecosystem). This sort of thing only matters to people who want to run Linux or Windows (or something else) on Apple hardware.

> The fanboys have arrived. Factual discussion will not be tolerated. Downvotes shall commence without comment!

Please don't do this. Complaining about downvotes is a waste of time, and ultimately detracts further from what you're trying to say.


I don’t get the glued battery complaint though. I’ve replaced the battery on 4 iPhones of 3 different models and the glue is just a sticky blob holding the battery in place. It was trivial to prize the battery off and press a new one into place.


When the glued batteries first started showing up, they were showing up in devices with security screws where bits were not available and the expectation of being able to swap batteries out at will. (MacBooks vs. regular laptops of the day)

The glue was so strong that you could not avoid damaging the battery when attempting to remove it.

You also could not safely remove the battery if it had started to bloat/become swollen.

This was, of course, about 10 years before pull tabs and solvent-sensitive adhesives came into use.


I have a lot of issues with the things Apple chooses to do, but I think "user hostility" is an incorrect interpretation.

They have a very specific customer set in mind and they optimize for that really well. It's unfortunate that they don't cater to every market, but they definitely aren't "hostile" to their primary market.


Forcing people who have broken their new iPhone's screen to trade it in for a refurb because no third party repair shop can swap out a serialized display isn't user hostile?


You are saying that the "why" of Apple doing these things is purely "user hostility", which is highly implausible.

A company does not make decisions based on a pure "will to be evil".

They probably think that the reputation hit from not allowing repair is less damaging than the reputation hit from users dissatisfied with repairs. Other design choices can be for cost cutting in design or production.

So sure, it is not nice for the user, but the reason is not a desire to spite users. They likely simply think the additional costs, tangible and intangible, of being repair-friendly are not worth it.



