November 2022 Progress Report (asahilinux.org)
293 points by Wowfunhappy on Nov 22, 2022 | 193 comments



> Ah, but when people say “power management”, what they usually mean is “suspend”. See, ancient x86 platforms (where “ancient” means “everything prior to 2015 or so”) don’t have reasonable real idle power management like Apple Silicon Macs do.

Well, I've been perfectly happy with the "ancient" power management of my computers. It took a decade or two until suspend/resume actually _worked_ most of the time, and now all of that has been swapped out wholesale by a set of states that are at least an order of magnitude more complicated.

Apparently on newer chipsets, there is no such thing as "suspend to RAM" anymore. Instead, they rely on the OS to micro-manage the sleep states of all the various components that make up the system. I can see this being effective in smartphones, where one vendor (Apple, Google) owns the whole stack of hardware, firmware, and software, plus a large chunk of every third-party application by tightly controlling what code can run. They can actually run functional tests on the whole stack under varying conditions and have access to every part to debug issues.

Heck, it probably works great on Apple computers for the same reasons. But on general-purpose computers made up of components from dozens of manufacturers running dog-knows-what software and drivers, I don't think it can ever work well. There is already lots of evidence that these "modern" intermediate sleep states are causing real problems not only for Linux users (poor battery life while running, high battery drain while suspended) but also Windows users whose laptops bake themselves to death inside a backpack because a toolbar widget or something woke the system up at 3 a.m. and it then decided to run Windows Update.

When I suspend my laptop, I want to know, with _certainty_, that it will stay suspended until I physically open it. From everything I've been reading, newer laptops offer no such guarantee. The only way to know that your computer won't wake up on its own is by powering it completely down, like we did in the 1990s. Not looking forward to that.

I am all for more power-efficient computers but introducing these new sleep states while throwing out the old ones completely really feels like some backwards pageantry.


> Well, I've been perfectly happy with the "ancient" power management of my computers. It took a decade or two until suspend/resume actually _worked_ most of the time, and now all of that has been swapped out wholesale by a set of states that are at least an order of magnitude more complicated.

This. I want suspend/deep sleep. I don't mind slow wake, and I definitely don't want bluetooth, wifi, etc. running while it's supposed to be idle. 90% of the issues with Windows laptops waking and overheating in backpacks are because of this bullshit half-sleep/smart-sleep they've started adding and forcing users to live with.

At least let me, as an informed person, choose to allow my laptop to suspend still.


> I want suspend/deep sleep. I don't mind slow wake

Also, you don't need to keep the machine actually awake to wake up fast; I have laptops that are... more than a decade old now, that wake so fast from S3 that they're active by the time you get the lid open.


The other 10% is probably auto update thinking that it would be a good idea to wake up my machine in the middle of the night to install updates.


There’s no cost to “bluetooth wifi etc” being on and not doing anything unless you’re operating an RF chamber or have the security needs of a head of state.

It doesn’t significantly affect battery life; any time in your own life you’ve spent thinking about this would be better spent playing with your dog, or getting a dog so you can play with it.


Keeping WiFi and BT on usually means you're keeping the PCIe and USB links on (or at least waking up frequently), which means you're also preventing the processor from staying in a low-power state where all the IO is power-gated. Once the display is powered off, getting the rest of the components to be properly asleep can reduce idle power by another order of magnitude.
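
(On Linux you can watch whether the package actually reaches those power-gated idle states; a quick check, assuming turbostat - it ships with the kernel's tools - or powertop is available:)

    # Package C-state residency and power, sampled every 5 s; deep Pkg
    # C-states near 100% mean the SoC really is power-gated (needs root)
    sudo turbostat --quiet --interval 5

    # powertop's "Idle stats" tab shows similar per-core/package data
    sudo powertop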


“On” is a UI state though; the actual power state of the hardware is up to it, and its idle power usage can be managed by things like offloading idle tasks to the wifi SoC so they don’t have to communicate with the rest of the machine.

That's better known for performance (where faster Ethernet interfaces do increasingly more offloading), but it works for power savings too.


Yeah, if you're offloading things to the wifi SoC, then you're giving it more power than I want to be spending when the machine is supposed to be asleep.


The power use could be under 1% of battery a day, though - I have smart home thermostats that are "on the network" all the time but run on battery for months, and that's a much smaller battery.

That’s a reasonable thing to think about if you’re a wifi driver engineer. It’s not reasonable if you’re a hobbyist responsible for a single install in one laptop. It’s not really advancing the state of the art here.


It sounds like you're speaking mostly about how hardware could work, without regard to how current PC hardware actually does work.


This is how the laptops being discussed in the article actually do work. If PCs don’t work that way complain to Broadcom or nearest motherboard OEM. They have much worse problems though, like only just getting efficiency cores.


No, but what it does do is randomly wake the laptop when it runs across an Access Point it recognizes or when a BT device randomly wants to wake it. What you're describing as the ideal is what I can already get if they would just re-enable true S3 or S4.

That is the state I want, not their glorified hybrid S1. If you want it, go for it; no one is saying remove it, just re-enable S3/S4.
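
(For the Linux side of this, the kernel exposes which suspend flavors the firmware still offers; a minimal sketch using the stock sysfs interface:)

    # Brackets mark the active mode: "s2idle" is the modern-standby style,
    # "deep" is classic S3 suspend-to-RAM (only listed if the firmware
    # still exposes it)
    cat /sys/power/mem_sleep

    # Prefer S3 for this boot; persist with mem_sleep_default=deep on the
    # kernel command line
    echo deep | sudo tee /sys/power/mem_sleep

    # S4 (suspend-to-disk) still works the old way, given enough swap
    systemctl hibernate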


> but also Windows users

I’m a Windows user, and I can confirm. My HP ProBook 445 G8 laptop doesn’t support any of the proper S1-S3 sleep states.

Luckily, I discovered that before any hardware failed due to overheating, by reading the output of the `powercfg /availablesleepstates` console command. As a workaround, I have set up the OS to hibernate when the lid is closed, instead of going to sleep.
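
(For anyone wanting to replicate this, a sketch using powercfg's documented aliases - double-check them with `powercfg /aliases`; the value 2 is the documented "Hibernate" lid action:)

    :: List the sleep states the firmware exposes
    powercfg /availablesleepstates

    :: Enable hibernation, then hibernate on lid close (AC and battery)
    powercfg /hibernate on
    powercfg /setacvalueindex SCHEME_CURRENT SUB_BUTTONS LIDACTION 2
    powercfg /setdcvalueindex SCHEME_CURRENT SUB_BUTTONS LIDACTION 2
    powercfg /setactive SCHEME_CURRENT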


> Heck, it probably works great on Apple computers for the same reasons.

Nope, the very first (and last) time that I suspended my Macbook and stuffed it in a backpack, it was hot to the touch when I went to go take it out. I looked online and everyone was just advising each other to power down every time they wanted to take their laptop anywhere. So no, Apple can't get it right either. FFS, will OS vendors please just let me hibernate to disk like the old days?


sorry to hear your bad experience. i close the lid multiple times a day on 2 m1 macbooks and i really have to rack my brains to remember when it failed. in 7 years i had the "taking it out hot from a backpack" exactly 1x (with intel). that's a track record i am super happy with.

on the other hand i got macos black screens of death for a year as the last stage of every reboot... apple is by far not flawless. but i can't imagine using windows as a daily driver. the shit to put up with is just endless. and it makes my eyes bleed


You can hibernate to disk on macOS (even with modern M1 Macs). It’s a pmset option.
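
(Specifically the `hibernatemode` key; a minimal sketch, with the values as described in `man pmset`:)

    # 0 = suspend to RAM only; 3 = laptop default (RAM stays powered,
    # image also written to disk); 25 = true hibernate (RAM powered off)
    sudo pmset -a hibernatemode 25

    # Verify the current setting
    pmset -g | grep hibernatemode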


It is there, but hibernation is a pain to implement since it doesn't have much in common with other OS functions, so e.g. device driver engineers don't enjoy implementing/maintaining it much.

It is useful in laptops as a last ditch effort to avoid losing data when the battery dies, but that’s one reason phones don’t implement it even though their batteries die much more often.


Powering them all the way down doesn't even work sometimes; you have to physically open the case and disconnect the battery. It's really annoying.


The weird thing is, not even Mac OS is doing it on that hardware: they just do proper S3 suspend. I think the advantage of this "modern standby" feature is that you can sometimes wake up and do some minor processing. But I'm not sure that linuxes actually make use of that functionality.


> But I'm not sure that linuxes actually make use of that functionality.

It's funny, the answer is both yes and no. I have a funky Skylake CPU in my current travel laptop, and one of its cool party tricks is that Linux can drop the CPU into a suspend-level state just by limiting it to its lowest frequency. I've seen it drop all the way down to 400 MHz when leaving it alone, which gives me a chuckle.

Totally useless for the "power nap" functionality you're thinking of, but ironically useful for certain other use-cases.


Can you expand on the use cases it's useful for?


If I'm watching video or editing text, I'll often drop into the lowest available CPU power setting to save on battery and keep the system below 30°C.
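
(The commenter doesn't say which knob they use; a common way on Linux is clamping the cpufreq ceiling - a sketch, with 400 MHz standing in for whatever your hardware minimum is:)

    # Hardware minimum for this core, in kHz
    cat /sys/devices/system/cpu/cpu0/cpufreq/cpuinfo_min_freq

    # Clamp the max frequency on every core (400000 kHz = 400 MHz)
    for cpu in /sys/devices/system/cpu/cpu*/cpufreq; do
        echo 400000 | sudo tee "$cpu/scaling_max_freq" > /dev/null
    done

    # Or the same via the cpupower tool
    sudo cpupower frequency-set --max 400MHz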


You need to do this by hand?

The Linux kernel does all the PM stuff automatically nowadays, AFAIK. At least I haven't touched any of these settings in a long time and it just works. When I'm not compiling or doing anything like that, but just surfing, editing text (in something besides IDEA), watching videos, or other "light tasks", the computer has only a few percent CPU load at most, the fan is off, it stays at room temperature, and it runs for hours on battery.

I would also support what one of the siblings said: explicitly pinning the CPU to lower frequencies doesn't help much. For example, web pages take much longer to load. I think a few hundred milliseconds at higher speed followed by a fall back into idle doesn't cost much extra battery compared to a few seconds of full CPU load at lower frequencies for a page load.

Yes, it saves battery to make the CPU slow. But then doing anything with the computer is also much slower, and everything takes "forever" as a result. I'm not sure this is a net win when the battery lasts longer but you waste much more time waiting. A super slow computer feels worse in the end, imho.


It's just an interesting observation; I'm not really preaching any huge benefits here. When you tell the systemd power manager to enter max-efficiency mode, it runs an interactive desktop at suspend-level frequencies. That's all I find interesting here.
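
(On most current distros that switch is handled by power-profiles-daemon rather than systemd proper - assuming that's the "max-efficiency mode" meant here:)

    # List the available profiles, then pick the most efficient one
    powerprofilesctl list
    powerprofilesctl set power-saver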


Running the CPU constantly at 400MHz is likely to consume more power than bursty workloads at full speed. There are various linked clock domains, and if the CPU cores can't get into low power states then neither can other bits of hardware on the SoC.


Who says the CPU is running constantly at 400 MHz? I assume for video to work at all like that it must be bypassing the CPU, and text editing would look to the machine like "all CPU cores idle... waiting... waiting... oh, the user pressed a button, let's handle that for 50ms, okay now we can idle while we wait for the slow human to press another key..."


I'm familiar with the race-to-idle theory, but even in the lowest power mode it will scale between 400-900 MHz depending on the workload. If anything were blocking the CPU or hogging cycles, it would be clocking a bit faster.

That being said, it's still not a huge power gain. I mostly do it to keep the system cool and the fan off.


This is why I power down my laptop physically and ensure startup is speedy. You simply can't trust suspending the laptop to not jack up the overall stability of the system after wakeup or drain the battery unnecessarily or whatever unforeseen disaster awaits you.

I can't stand other platforms comparatively. On Linux, I can start my laptop and be productive in the time other platforms take _to wake up from suspend_. No thanks


I remember that an older machine, a Core 2 Duo laptop I had, could stay in deep sleep for weeks and would function just fine after.

Modern laptops don't seem to have this capability, unfortunately. Neither my Mac nor my Linux machine (even with Windows) seems to last as long these days.


In my experience, the macbook’s sleep mode isn’t very effective. But if I turn the machine entirely off and turn it back on later, it does a good job loading all my apps back the way they were. Functionally, the experience is similar to the old days of “hibernate”.


> But if I turn the machine entirely off and turn it back on later, it does a good job loading all my apps back the way they were.

...If you don't really use the terminal at all, and don't mind a really slow "resume".


> ...If you don't really use the terminal at all

That’s valid. I don’t notice it, since I’m always ssh’d into a server.

> and don't mind a really slow "resume".

I remember wake from hibernate being pretty slow too. I can’t quantify it, but macOS boot feels like a similar delay to me.


Wake from hibernate is pretty slow, but wake from suspend to RAM is like a second or two on my Linux machines.


Is this with an M1 (or related) machine? I've found sleep to be much better, Intel macs became an afterthought in the past 5 years.


> Apparently on newer chipsets, there is no such thing as "suspend to RAM" anymore.

Newer chipsets? Or AMD chipsets particularly?


I don't know about AMD, but I'm pretty sure it's a thing with newer Intel chipsets (there was lots of hubbub about it when the Framework laptop came out, which I was following at the time).


That’s why I do all my computing on a Univac 1100/80. I’ve even hacked a clamshell for it so I can port it around.


> But what about the display brightness? [...] In order to support the display output properly, we need a driver for Apple’s DCP coprocessor and its firmware. We’ve already talked about DCP in the past, and how cursed the interface is! Since then, Alyssa wrote a Linux kernel DRM KMS driver for DCP and Janne took over maintenance, and he’s been steadily adding features, including brightness control support.

> However, it does come with some caveats: the driver [...] may also reduce performance on some setups, since it is really meant to be used together with GPU acceleration (the simpledrm framebuffer driver has some software rendering optimizations that DCP lacks) and clients using the modern atomic-modeset and swap APIs, like Wayland compositors. It also has some limitations when used with legacy clients such as Xorg - in particular, there is no support for true VBlank interrupts, and it is unclear whether the hardware/firmware supports this at all. This breaks XFCE4’s window manager with compositing enabled. For these reasons, we are not enabling DCP by default for all users

Is there a reason they can't use the DCP driver to change display brightness without switching over to it entirely? It sounds like DCP and GPU acceleration probably ought to ship together—but IMO, changing display brightness is a must-have, in order to use a laptop comfortably in different ambient environments.


They mentioned that the display output is currently using a framebuffer provided by the boot loader. I suspect when the DCP is initialized, the screen starts displaying a different framebuffer provided by the DCP, so if it was just used for brightness the screen would go blank.


Could you switch it back afterwards though? Having it temporarily blank to switch brightness doesn't seem so awful.


Even if you could I don’t think the GPU stuff is far enough behind to really worry about it.


Kind of ridiculous that with so much of the top talent using Linux in 2022, we still have to resort to this to have it as our main OS on our preferred hardware.


Isn’t it more absurd that no vendors who support Linux make acceptable/comparable hardware?


Man, I'm surprised to hear that's the case

I assumed that Macs were mostly preferred because of their UX and relatively high-quality drivers/OS working well with sleep/wake. But if you put linux on there, you're giving all that up

Are there not any linux laptops out there with decent build quality and comparable perf/battery life?


Agreed.

Aside from the processor, there really isn't anything particularly compelling to me about the hardware. Apple's forte is how they integrate the whole hardware package with good software. The build quality is better than most but not especially great. That's not really saying much when you consider the low quality of so many others out there. The keyboards are terrible and I had serious reliability issues with the last couple Macs I used. The battery life probably comes as much from the OS as it does from the hardware. Support is generally acceptable if you pay for AppleCare, although you can sometimes end up waiting a couple weeks for certain repairs.

The Asahi team is doing great work, but I can't help but feel like Linux will always be a second class citizen on Apple hardware. I understand it still appeals to some people. It's not for me, though.

I'm using a ThinkPad now. It's ok. It's well supported in Linux and Lenovo still provides good support. I think the plan I paid for includes next day repairs. I like that it actually has a variety of ports unlike some of the others that cheap out. It's more repairable than most laptops out there. I will probably get a Framework next time or maybe System76. If I was into MacOS, I'd get a MacBook without a doubt, but I just don't like the OS very much anymore.


> Aside from the processor, there really isn't anything particularly compelling to me about the hardware

The trackpad. I've tried a dozen laptops. Thinkpads work great for linux support, as do Dell XPS 13's and the X1 Carbons. But the trackpads on all those pale in comparison to the accuracy of an Apple Trackpad. It's accurate, precise, I can select text anywhere easily, double click works reliably and consistently, and the mouse feels like it is directed exactly where I want.

Here's a recent example of a linux distro using libinput: https://old.reddit.com/r/archlinux/comments/sp3yfe/libinput_...

There are some good suggestions for improvement in the comments, but the fact remains it is always more frustrating to use a trackpad on a non-apple device.

Has the state of trackpads changed in the last 6 months for other laptop vendors? If so, I'll seriously consider switching, but in my experience, even on Windows, those other laptops don't even come close to what Apple offers.

Other sources about the linux trackpad problems, I've not tried these changes but allegedly they're supposed to come close to what Apple offers:

https://www.gitclear.com/blog/linux_touchpad_update_december...


> Aside from the processor, there really isn't anything particularly compelling to me about the hardware.

Yeah, but I find the processor pretty damn compelling.

And then the rest of the hardware is—if not remarkable—very solid, so the computer is an enticing package.


> And then the rest of the hardware is—if not remarkable—very solid, so the computer is an enticing package.

A bit offtopic, but I've been a bit annoyed lately that we have to treat the computer as a package. Why should my choice of keyboard (Brazilian ABNT2) and trackpad (I want three physical buttons) restrict my choices of CPU or screen?


Has this ever not been the case for small form factor laptops? I think it's mostly just a practical reality of manufacturing. Although I have been really impressed with what Framework is doing!


As a long time Mac user, I had long been used to people saying things along the lines of “I tolerate their hardware to use macOS” especially re: the terrible laptop designs.

Now with the release of the AS machines, I hear the exact opposite sentiment very frequently, especially on techie sites. It’s certainly been an interesting reversal. I for one like both macOS and the AS hardware, and (minus the somewhat higher bugginess with recent releases) couldn’t be happier about the state of Macs.


I am using a Dell XPS 17". It is relatively decent, and you can have two hard drives (with a RAID setup, but I was too lazy to hack that).

It is certainly not a macbook but I do get 7-10 hours of battery life doing Rust development.


> of their UX and relatively high-quality drivers/OS working well with sleep/wake.

The hardware integration UX is the good part of Macs. The UI UX is inferior to Linux IMO. I'm not referring to any one DE in particular, just the fact that they are so customizable. I wish I could have Windowmaker again on hardware as rock solid as my MBP (and all the integration bits solved, i.e. audio, wi-fi, plug-n-play, multiple monitors, etc.).


> Are there not any linux laptops out there with decent build quality and comparable perf/battery life?

None that I've ever seen, especially now compared to M1 Macs.


There are not any Windows laptops with comparable perf/battery life either.


My lamentation goes well beyond Apple's business practices and I agree with you wholeheartedly. I am hoping Framework + AMD might get close sometime in the next couple of years.


They do make good laptops though? Look, let's not dive into the whole ARM vs x86 thing, it's not their fault.


There's hope in RISC-V.


There is plenty of other hardware that is comparable. In fact, numerous laptops exceed them in a number of specs.

"Isn't it absurd that no vendors who support Linux are making Apple laptops?"


No one else makes a laptop that has the power / thermal / battery / weight spec combination that Apple does, and none of them are ARM laptops, either.

To exceed the Apple M1 / M2 specs with anyone else’s hardware, you need to give up on other specs that matter greatly to those of us who care about things like that.


The M1 Air is almost three pounds, not exactly a lightweight laptop.


Oh. I forgot that power/thermal/weight is the only spec that matters.

The point is that not everyone cares as much about perf/watt and there are plenty of comparable computers which surpass Apple laptops in different areas.


> power/thermal/weight is the only spec that matters.

For a laptop these specs hold considerable weight.



Amongst other issues with this sort of nonsense benchmark:

1. Does not indicate the amount of power used during the compilation (again, no one has equivalent performance / watt as the Apple Silicon chips).

2. Does not report the speed of the disks involved.

3. Does not report the amount, type, and speed of the RAM involved.

4. Does not report anything about the compilation environment (cross-compiling, running in an x86 VM, running in any VM, etc.)

Are there faster computers than Apple Silicon computers? Yup. Are there faster computers that have the same power and thermal specs? It’s possible, but I have yet to see a single benchmark that reports anything close which is available to most people.

It’s not just laptops, BTW. Amazon Graviton 2 machines run in server farms that consume less power and are cooler than their x86 equivalents.

Please stop being blinkered by your prejudices.


What prejudice? They are unmatched (AFAIK) in perf/watt. The original post that I replied to stated that there are no "acceptable or comparable" hardware. I think that's subjective but false. Comparable doesn't mean exactly the same. Similar is good enough. I take acceptable to mean usable. There are plenty of other usable machines.


Anything that has less battery life at comparable capability is neither similar nor acceptable to me. Anything that looks like it’s going to cook my nethers as I use it on my lap is neither similar nor acceptable to me.

Absolutely no PC manufacturer matches the build quality, battery life, weight, thermal management, power consumption, and performance capability that the current round of Apple laptops provide. From my perspective, that means that there’s nothing acceptable or comparable.

In the year or so that I’ve had this Mac, I’ve heard the fan spin up a total of three times, and while I’m not doing heavy compiles often, there’s no other computer that I’ve owned with active cooling that is as quiet as this has been (using its passive cooling for all but those three times, all caused by runaway processes triggered by the problem in the chair).


Please recommend one (or more)! I have wanted to buy a new laptop for years. Last time, I got so fed up with the available ones that I just bought two second-hand laptops for cheap: a small XPS and a big Lenovo (as a backup and for compile-heavy development work).


A Dell XPS 13 with a UHD screen is the closest you can get.

Again, the problem is that the hardware and the software are not optimized to work with each other as well as Mac hardware and MacOS are. Dell's fingerprint sensors do not work on Linux due to undocumented specs, and sleep/suspend doesn't work (the laptop will overheat in your backpack).


This was the typical post on Slashdot in 2005, but one would hope to notice you can’t judge a computer by “specs” anymore, especially not one that runs on battery power. The different specs fight each other; you can’t just increase them all.


Sounds like the sort of problem your preferred hardware vendor can fix.


There should be case studies written about what this team of engineers has been able to accomplish. Everyone said: why? Don't do it. It's not worth it. And yet, here we are. What amazing work. I can't wait to get an M-series Mac and run Linux on it because of these folks.


The whole passage on speaker support on laptops blew me away. I'd always been vaguely impressed that you can get such (comparatively) good sound on ultralight laptops, but I didn't know what was happening under the hood. Quick sample:

> Modern micro-speakers require sophisticated software EQ to sound good, but they also require sophisticated safety models! The most critical safety parameter for micro-speakers is the temperature of the voice coil: you don’t want to melt the thing

They destroyed their own tweeter while testing!


It absolutely blew me away too. I kind of respected those engineers for being good software engineers, but clearly their skill sets extend beyond that. Quoting from the investigation of how they blew up their tweeter:

> Assuming a constant resistance (which isn't true but it'll do for now), with the amp at 15.5 dBV, and input at -2dBFS, that's 13.5 dBV = 4.73 Vrms, current at 3.72Ω is 1.27 A. Almost exactly 6 W I was pumping into each tweeter.

> We have τ_Tvc = 3.9 and τ_Tmg = 70, and I think T_sett_vc=104.9 and T_sett_mg=129.5 are the thermal resistances in °C/W. That means that, if I'm mental mathing right, the tweeter voice coil would've reached >600°C after a few seconds, give or take (and longer as the magnet heats up).

> Yeah, no wonder I cooked it.
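
The numbers in the quote do check out; re-deriving them (steady-state temperature rise is power times thermal resistance, and the quoted voice-coil time constant of 3.9 s is why it gets there "after a few seconds"):

$$V = 10^{13.5/20} \approx 4.73\ \mathrm{V_{rms}},\qquad I = \frac{4.73\ \mathrm{V}}{3.72\ \Omega} \approx 1.27\ \mathrm{A},\qquad P = VI \approx 6.0\ \mathrm{W}$$

$$\Delta T_{\mathrm{vc}} \approx P \cdot R_{\mathrm{vc}} = 6.0\ \mathrm{W} \times 104.9\ \mathrm{^{\circ}C/W} \approx 630\ \mathrm{^{\circ}C}$$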


To be fair, you can pretty easily destroy any size speakers by overdriving them. Your amp doesn’t magically know what’s on the other end of it.


I wonder if there is a safe value we could clamp audio to now and recompile, so that the speakers may not be loud but would at least be usable without being unsafe (I know there is an option to just enable them outright and recompile right now).


One little rabbit hole this sent me on was to look up the lzfse [0] algorithm, which was an offshoot of LZSS that Haruhiko Okumura [1] wrote [2]. You read the code and the structure of it just seems like something he would write. He even has a flickr [3] of beautiful shots of Japan. Sometimes I really love HN.

[0] https://en.wikipedia.org/wiki/LZFSE

[1] https://en.wikipedia.org/wiki/Haruhiko_Okumura

[2] https://github.com/opensource-apple/kext_tools/blob/bc71a85/...

[3] https://www.flickr.com/photos/h_okumura/


Wow. The effort here is amazing—sad that Apple won't provide more assistance. If it weren't for this project, OpenBSD on Apple Silicon (thank you kettenis@!) likely wouldn't exist.


I have to say, I don't entirely understand Apple's approach here.

Apple spent significant engineering effort modifying their iOS bootloader to support third-party OSs—then neglected to tell anyone how to actually make a third-party OS. Whoops! Have fun!

And to be clear, this is absolutely preferable to Apple selling fully locked-down Macs. And, I realize that macOS will always be Apple's first priority, and that writing documentation takes effort.

But would it really kill Apple to connect Marcan to an engineer, who could allocate 30 minutes a week to answering questions? Is there some sort of legal liability involved? Security concerns? Brand safety?

The Asahi team is comprised of people who clearly enjoy reverse-engineering, and if everyone is having fun (and creating an awesome Linux port in the process), perhaps that's all that matters. But I still find Apple's choices confusing.


> And, I also realize that writing documentation requires effort.

For Apple's hardware, the documentation exists. It's comprehensive. It's just not being released. (This is not the case for software.)

Source: Apple employee.

But this attitude is common in the hardware industry. This particular situation is a bit unusual because most of the time, Linux drivers either are developed with no support from the hardware vendor (something which wouldn't have been possible here due to secure boot) or are developed by the hardware vendor itself. But in the second case, it's common for no documentation to be released along with the driver, leaving independent parties to glean what they can from register names and other definitions in the source code. Or if documentation is released, it only covers the parts that drivers are supposed to access, excluding what would be needed to, say, write a custom firmware to replace the included blob.


But that doesn't really explain anything; if anything, it makes it more confusing. So Apple goes out of their way to enable running non-Darwin OSs, writes good docs on how to use the hardware... and then never releases any docs or does anything to actually help people run non-Darwin OSs, after putting in all that effort to make it possible in the first place? Having the docs exist but not releasing them is more confusing, not less.


The documentation is for internal usage, it's not confusing. Of course their hardware behavior is documented in detail.


> Apple spent significant engineering effort modifying their iOS bootloader to support third-party OSs—then neglected to tell anyone how to actually make a third-party OS. Whoops! Have fun!

This tends to be the usual practice for much of what Apple does: release things with minimal documentation and let others figure out how they work.

Maybe it's just a sadistic corporation that wants to see how far people are willing to go in order to get stuff working with their own hardware/software? Sometimes it certainly seems that way.


I think we are ascribing too much of this to some corporate Apple policy, when the reality is closer to a single engineer or engineering lead who believed the hardware should be open, while Apple is not going to put any resources behind that.

So you have engineering teams with a hacker ethos building "open" hardware, but Apple the company doesn't really give a shit and is not going to spend money on documentation for a feature the company doesn't care about.


I take your point, but I have to imagine Tim Cook (or someone just under him) signed off on opening the bootloader. It's not like the executive team doesn't know about it.

Allowing an engineer to answer questions for half an hour a week would be practically a rounding error in terms of resources, and certainly less of a commitment than rewriting iBoot policy, which they already did.


We're not really privy to how it got through (I could imagine some engineer/manager somewhere arguing that allowing it open would change some obscure tax/import filing somewhere).


I'm fairly sure that it is not that easy from a legal standpoint. Apple does manufacture some of their own components, but they still have plenty of third-party ones, where the usual licensing applies - the same reason it's hard to run a third-party OS on Android phones.


It's mostly frustrating that this is still the rhetoric from Apple now that they are the largest company in modern existence. They have the faculties to release their Unix drivers and even provide world-class Linux support while still profiting heavily from their hardware sales. Yet, they don't. Every time they're given an opportunity to err on the side of freedom or choice, they shrug.

This is an ongoing problem that has prevented me from daily-driving MacOS since Catalina. Really a stance I wish Apple would revert, even Microsoft does a better job here than Apple.


> "it's mostly frustrating that this is still the rhetoric from Apple now that they are the largest company in modern existence."

But why should it change? They've become the most profitable for sure, but they became that while ignoring docs etc. Why should they change now, considering it's been unquestionably proven that it doesn't matter for their financial success?

PS: I still don't understand how some people can call it the largest; doesn't that adjective describe size...? It doesn't have the most employees, it doesn't have the most locations, etc. It definitely has the largest pile of money, but that's still a very unfitting description for that, at least in my opinion, as that's usually called richest.


> But why should it change?

Because I'm not buying Macbooks anymore. In fact, over the past 5 years I've increasingly seen people develop on a dedicated Linux box or Linux VM. Apple's appeal is shrinking to developers, and it has been on a steady decline for the past 10 years. For all of MacOS' POSIX certification, it hasn't stopped people from trying to implement Linux just so they can run privacy-respecting software and benign GPU libraries that Apple refuses to officially support.

Their plan here isn't working. It might placate the 80% of users who don't care about this stuff, but the technical sentiment towards Apple's technologies is waning. I'm frustrated with WebKit, I'm frustrated with Swift, and everyone is frustrated with their 30% tax. Something has to give, and it's probably going to be Apple's facade of benevolence.

> It definitely has the largest pile of money, but that's still a very unfitting description for that, at least in my opinion.

All businesses are constrained by a set of limiting factors. The most important factor will always be capital, since you can trade it for any one of the lesser factors. Apple uses their 200 billion USD cash reserve to buy goodwill in the form of advertising, first-in-line tickets to TSMC and the finest lobbyists in the nation. They have every protection that lesser companies do not, which is why their valuation supersedes any other publicly or privately traded organization.

I'll stop calling them the biggest company when business stops revolving around money.


> Because I'm not buying Macbooks anymore.

You're not but many people still are [0]. Many people started to see Apple's developer experience wane in previous years, true, but their Apple Silicon changed that. Their price/performance/battery life ratio is simply unbeatable for devs and anecdotally many people I know bought AS Macs where before they would've bought or used a Windows or Linux computer, including me.

There are some things I will agree with you on though, such as their 30% tax, as a mobile developer myself.

[0] 2021 Mac shipments grew twice as fast as overall PC shipments - https://9to5mac.com/2022/01/12/2021-mac-shipments-growth/


With all due respect, if you're a mobile developer you don't get much of a choice which laptop you buy. A Macbook is the only machine that lets you meaningfully deploy to iOS, so I'm not sure if I agree that Windows/Linux machines were competing products.

Apple Silicon only reverses their hardware quality decline (2015-2018 hardware was truly awful). Their software quality has still been in rapid decline since Mojave, and its developer experience out-of-the-box is still marred with coreutils older than dinosaurs and increased restrictions around running software. I know a lot of developers who are happy with Apple Silicon, but I know exactly 0 developers who don't complain about MacOS.


You're right, I do complain about macOS. I guess the stuff I'm doing isn't as dependent on the OS itself (web, mobile dev) so I don't see the same problems as others might who are working on lower level stuff.

I used to use tools like Codemagic which ran macOS in the cloud for deploying mobile apps, so buying a MacBook wasn't necessarily a blocker for me.


> I used to use tools like Codemagic which ran macOS in the cloud for deploying mobile apps, so buying a MacBook wasn't necessarily a blocker for me.

Codemagic, from my understanding, just does code-signing and deployment. I don't know how you did it, but a Mac would still be necessary for access to Xcode libraries, Objective-C, and iOS simulators.


Codemagic and Bitrise allow connecting to a Mac in their cloud via VNC, where you have full graphical access to Xcode and iOS simulators. I believe both have free plans that support this.


I've sworn off Macbooks three times now but after I'm disappointed (again) by Dell/Lenovo, I just end up buying another Mac.


> Something has to give, and it's probably going to be Apple's facade of benevolence.

Honestly speaking, Apple's main success vector has always been its marketing. It's never been benevolent, and if you ever thought it was... I'm afraid you've only witnessed first hand how effective they are at their job.

> I'll stop calling them the biggest company when business stops revolving around money.

I admit that I'm not a native speaker, but that's exactly the reason why that adjective confuses me so much.

Bigger/largest directly translates over, but nobody would consider bigger to be better in a financial context. Profitability is the thing that's interesting, and to a lesser extent how rich it is.

Calling it biggest/largest doesn't (to me) say anything particularly interesting about it.


Lol Apple doesn’t care about users like you. You and your like not buying MacBooks has virtually zero impact on them.


Yeah, it does kinda suck. That being said, I save a lot of money not paying for AppleCare, iCloud and my Developer License anymore.


> [...] even Microsoft does a better job here than Apple.

How so?


For one, they helped build Linux drivers for NTFS. Despite Apple promising to document and open-source APFS, they still have not gotten around to it (which makes interop with Macs really frustrating). There are lots of little things, too - Microsoft packages desktop apps for Linux and made pretty great OSS contributions like the Monaco editor. The list could go on, but this really shouldn't be surprising. Apple doesn't even treat upstream BSD with respect, it's insane to think that they would respect Linux.


> Apple doesn't even treat upstream BSD with respect, it's insane to think that they would respect Linux.

Meanwhile from Microsoft:

* https://wiki.netbsd.org/ports/emips/

* https://www.netbsd.org/ports/emips/index.html



yes, microsoft is truly amazing, where is their patent-free exFAT implementation?

that is the only truly modern interop fs and they keep it hostage.


Been in the kernel for long enough to trickle out to recent distributions. Is there something missing?


i see i was out of date on this, thanks for the heads up. better late than never.


They certainly have a penchant for randomly crapping on people who don't do things the Officially Supported Way(TM). For example, one rev of MacOS changed the ABI for the gettimeofday() kernel system call. That broke Golang (and Virgil). Apple didn't care. They want you to go through libc for some reason. Uh, no, don't break userspace.


Microsoft also changes kernel syscalls between releases. It's not unusual for operating systems to specify the ABI at the libc level; in fact, I believe Linux is the odd one out in specifying its ABI at the syscall level.

https://j00ru.vexillium.org/syscalls/nt/64/


I know. Solaris has/had a stable ABI.


Apple has been crystal clear since 1999 that syscalls are not ABI on Darwin. Linus chose to draw that line differently, which is fine; Linux is a different environment.


> I don't entirely understand Apple's approach here.

They've just decided they don't want to be in the business of supporting Linux on Apple hardware.

Short of fully supporting Linux, the "Whoops! Have fun!" part would happen somewhere, no matter where the line was drawn.

Of course they could do more. But you and I shouldn't really expect to be able to tell Apple how to spend their money.


> You and I shouldn't really expect to be able to tell Apple how to spend their money.

I don't, I just think Apple chose to draw the line in a perplexing location. I'd love to know what they were thinking.


Pure speculation on my part:

Perhaps it was "targeted" at some internal skunkworks project to get Windows running on ARM Macs? Linux/BSD obviously got there first (at least in the open), and Microsoft is under a Qualcomm-only contract for now.

Microsoft can't directly request that Apple allow booting their OS, or work with them directly. But "leaving a spare key under the doormat" looks a bit more innocent.


Almost certainly. I remember hearing Apple saying that they asked for Windows on ARM and were told no.

That being said, ARM hardware is nowhere close to being cross-vendor interoperable[0], and going from Qualcomm to Apple Silicon would require a significant reverse-engineering effort on Microsoft's part. The "spare key under the doormat" is running Windows on ARM in a VM under macOS.

[0] Hell, most vendors don't even bother with generational interoperability. Each chip is its own island. Apple Silicon is a notable exception.


I think it's precisely this. Even just providing specs or engineering time can be seen as "support" on some level and Apple doesn't want any responsibility whatsoever associated with that. They're avoiding external dependency at all costs.


> Is there some sort of legal liability involved?

I would say, definitely. Apple does use third-party components as well, with presumably closed firmware/drivers.


My understanding is that Apple as a company doesn’t officially support this effort for whatever reason (be that legal or whatever), but their hardware team does (I’ve heard they use Linux themselves when bringing up new hardware), and have done what they can under the limits of secrecy imposed by corporate Apple.


Apple's entire business model heavily depends on vertical integration. They use their software to attract customers to their hardware, and vice versa. Users running alternative OSs on their hardware don't get tied into their software ecosystem.

That, and they don't give a crap about the open source community, unless it directly benefits them. They have zero incentive to help a group of hackers run Linux on their hardware that will only benefit a niche of a niche of users. Allocating any of their engineers' time to this project would ultimately result in a negative ROI.

TBH I'm surprised Asahi Linux hasn't received a C&D notice yet. Apple hasn't been this tolerant of hackintosh projects before, so at least they're turning a blind eye to this.

Why anyone would want to spend their free time working in such a hostile environment is beyond me, but hats off to the Asahi team for the dedication. The patience and talent required must be extraordinary.


This is exactly the opposite of a hackintosh. A hackintosh is "pirated" Apple software running on non-Apple hardware.

These are people who have bought genuine Apple hardware - putting money in Apple's pocket. Then they want to write some custom software for their computer.

I don't see how this threatens Apple in any way. The intersection between general Apple users and those who want to run "a remix of Arch Linux ARM" on their $1000 hardware has to be pretty small anyway.

Actually it could open up a new market for Apple. I for one am quite impressed by Apple hardware, but have minimal interest in running their software. If Asahi becomes stable enough, I would seriously consider buying Apple.

Your second point is a good one, however.


True, doing this doesn't threaten them, and it could result in a negligible number of devices sold for this purpose, but I wouldn't be surprised if they saw this as equivalent to jailbreaking, and thus as some kind of breach of the ToS you implicitly agree to on purchase.

Though Apple hasn't historically prosecuted individuals for such things, so this likely won't happen.

But it's even less likely they'll officially support, advertise, or even recognize this project in any way.


> I wouldn't be surprised if they saw this as equivalent to jailbreaking

The difference is that jailbreaking compromises the security of iOS, whereas some clever Apple engineer has set up the M1 MacBooks such that each OS is completely isolated from the others (enforced by disk encryption and the Secure Enclave). The only thing a rogue Linux install could do is delete macOS entirely. But there would be little incentive to do so. Installing malware or stealing data is completely blocked.


It would be the equivalent of Jailbreaking if Apple explicitly built checkm8 into the iPhone.


i'm not saying this can't be true, but why leave the boot loader open then?


Perhaps that work is an escape hatch should anyone threaten some kind of antitrust action or similar. They want it theoretically able to boot something else for legal reasons.


Maybe the approach is to do the minimum to avoid being successfully prosecuted as a monopoly. “It’s not locked down; there are alternatives freely available!”


This is my slightly conspiratorial guess: Apple has oodles of old hardware lying around that they would like to keep using, but it's either too old for macOS or they want to use it for backend services (prod or non-prod, doesn't matter). Think capital expense budget. So if Apple can run Linux on all that hardware, that's a lot of computing power still available for years to come. And if you can get the OSS community to do it for you for free - even better!


I would struggle to believe that Apple had an easier time getting someone else to support Linux on their hardware, than just... supporting their own hardware with their own OS with their own drivers that they already have using internal developers who already know how to write Darwin code for Apple hardware.


Machines too old to run the latest version of macOS can probably be replaced with commodity hardware in the cloud for a few dollars a month. Doesn't seem worth bringing a bunch of decade-old hardware online for this.


Apple already provided way more than anyone expected (making it very easy to dual boot).


Did people really expect Apple to prevent dual booting? Not only have they never prevented it before, but they would for sure be getting into hot water legally if they started selling computers where there wasn't the possibility.


> Did people really expect Apple to prevent dual booting?

Yes. iPhones and iPads don't allow it, and Microsoft doesn't allow it on its ARM-based OS (enforced secureboot; detailed slightly here: https://wiki.ubuntu.com/ARM/SurfaceRT#Secure_Boot )

There was no expectation on my side that they would support it.


On their computers they have never prevented it before; sorry if the previous comment was unclear that we were talking about computers/laptops, not mobile devices.


> On their computers they have never prevented it before

"What's a computer?"

https://www.youtube.com/watch?v=3S5BLs51yDQ

I really don't know what to call iOS devices other than computers. Unless one of your requirements for "computer" is "ability to boot third-party OSs"; I don't entirely disagree with that, but it's a bit circular in this context.


iPhone is a phone, iPad is a tablet, Macs are computers. This is generally what people understand when you talk about the different product segments Apple divides their products into. I'd probably call all of them "computing devices", but I think in general it is pretty clear what I'm referring to when I say "Apple's computers", at least to people outside of Hacker News. I think pretty much 0% of the people I spend time with AFK would think "Ah, he must be talking about the iPhone" if I said something like that.


Anecdote: I teach coding to children, including occasional private lessons in client homes. For the latter, families need to supply computers, which one client didn't realize. I managed on the first day by having the two girls pass my personal laptop back and forth, but I made it clear they'd each need to bring a computer next week.

So I was a bit surprised the following week when one of them showed up with an iPad! But, it had that attachable keyboard and trackpad Apple sells, and it really did work fine in the web-based environments we use.

Broadly speaking, I agree that most people think of Macs as computers and iPads as iPads, but I don't think that distinction is meaningful. Macs and iPads are marketed for most of the same things, and Apple has even begun touting how they have the same chips inside!


An iPhone is a computer. An iPad is a computer. A Macbook is a computer.

Any questions?


But they also never went to great lengths to allow it, culminating in a terrible experience with the T2 chip: https://www.trustedreviews.com/news/apple-t2-chip-linux-mac-...

So, again, it was not looking positive.


That was never the case, seems the article you linked is based on a misunderstanding. See https://www.omgubuntu.co.uk/2018/11/apple-t2-chip-cant-boot-...

Even with the T2 chip, it was possible to turn off Secure Boot 100% so you could boot whatever operating system you wanted.

Just as a disclaimer, I'm no Apple fanboy. I stopped using their software/OS even before I got rid of my last MacBook, and since 2018 or so I haven't been using the hardware either; I only use a Mac for testing various software I develop. So I don't normally defend anything they are doing.

But right should be right; they have never previously tried to stop people from running whatever OS they want on their computer hardware, so guessing that they would suddenly start feels like a pretty far-out guess.


I tried to do this myself. I was greeted with a fan running at 100%, a non-functioning keyboard and trackpad, and USB ports that were roughly half functional.

It's a falsehood to say it was allowed.

It was possible, but very much not in the way you seem to imply.


That sounds like the kernel/distro you were using didn't quite support the hardware you were trying to use, rather than a company trying to prevent you from booting an OS on said device.


I never claimed they were trying to keep us off; more that they didn't care at all. The T2 chip was a major change and basically didn't function like anything that existed.

The reason I said it was a "worrisome trend" is that, with the switch to M1, Apple would need to intentionally leave in a back door for people to have any chance of booting alternative operating systems.

Since they had not shown any interest in supporting alternative operating systems (outside of Bootcamp, which we knew was going away too), it could reasonably be assumed that they would do no extra work to allow alternative operating systems at all.

Since it was extra work, and since they had not shown any care before.


That was because of missing drivers, or drivers that needed to be modified a bit (like the nvme driver), it had nothing to do with a locked down boot loader.


Not sure where you got that I said it was a boot loader problem.

I’m not sure if you’re deliberately missing the point either.

The point was that the trend seemed to be toward devices being locked down more and more; not that it was impossible before, just that it was getting more and more difficult, and that it was already difficult on ARM platforms.


If your keyboard and trackpad are non-functional, that means you are missing drivers. Apple is not preventing you from dual booting, but Apple is also not going to write drivers for their trackpad for Linux. Apple is not locking the system down, but they are saying: if you want it to work on Linux, write the drivers yourself.

This is exactly what is happening with Asahi Linux. The bootloader lets you install Linux, but Apple isn't helping the Asahi Linux developers write a GPU driver. They are not locking the platform down; they are simply saying you can do what you want, but don't expect any help from us.


The article you linked said so.


AFAIK the enforced SB without allowing "3rd-party" keys is specific to (32-bit ARM!) RT devices, which are obsolete. The current line of "Windows on Arm" devices (various laptops and their "Volterra" dev kit) allows turning off secure boot.


Leading up to and for a short time after the M-series announcement, the resulting "locking down" of the Mac was a commonly voiced suspicion/concern, to the point that to this day, many tech-adjacent online discussion participants who don't follow Apple think that M-series Macs have the same boot restrictions as iOS devices.


> sad that Apple won't provide more assistance

I am missing why we think they are not doing so continuously, perhaps behind the scenes


I really don't think marcan and co have some secret backchannel with Apple. If nothing else, a lot of their coding sessions are streamed live on Youtube, so you can see them reverse engineering stuff in real time.


Apple really don't go in for that kind of thing.

Spiritually it's your device but their ego


One thing I observe in the latest OS update (Ventura) is that they made HUGE improvements in memory management (MacBook M1, 8 GB).

Before, Firefox would bring the laptop to a crawl with ~200 tabs (Yeah, I know). Having PHPStorm open at the same time was a sure machine killer.

Just today I found myself casually with close to 350 tabs, while at the same time working in PHPStorm with no issues.

In my experience (my previous box was a ThinkPad T430 with 16 GB of RAM running Debian), Linux is far from this good at handling memory.

Also, the state management in MacOS is second to none. I have a Lenovo Thinkbook 15 (i7), and having to wait for it to restart when I open the lid is excruciating. I should put a "This is my POS laptop" sticker on it.


That might also be related to Firefox improvements: https://hacks.mozilla.org/2022/10/improving-firefox-responsi...


Was not aware of that, cheers.


I see no issue with Linux memory management. I'm currently using an older Dell laptop (I guess 2 to 3 years old; an Inspiron, or something like that. It's very shitty. Got it for free. Would not buy anything from Dell. The hardware falls apart after such a short time! It's a wonder it still works).

But: on this Debian Testing box under KDE, I currently have Unity, Rider, Sublime Merge (with a few hundred changed files), half a dozen Yakuake tabs, Kate as a notepad, and around 30 Firefox windows (with maybe 5 to 6 thousand tabs)¹ open.

According to htop it uses only around 15 GB RAM.

(Hmm, actually something seems not OK when I look at it: there is still unused RAM, yet htop says almost 9 days of uptime? And it still didn't fill all unused RAM with FS cache? Strange. Did something change in Linux 6.0? Maybe something really broke in Linux memory management? But I actually didn't notice anything unusual after the latest kernel update.)

---

¹ Before someone asks: Yes, thousands of tabs. But don't ask further…

Firefox handles this just fine with the help of tree tabs and tab auto discard. Only "booting" Firefox in such a state takes some time (I guess over 1 min.); but when it runs it runs just fine. No issues.


You truly live on the edge, my friend.

As a side note, what I truly like about random comments like this is the app naming. Never ever had heard of Yakuake, but that looks seriously interesting (Tough luck I'm on MacOS now).

Fetching Kate as we speak.


> You truly live on the edge, my friend.

I have an issue sorting tabs. I don't want to throw away research and then never find some interesting things again. Actually, having so many windows is already a good sign, as it means I've sorted parts of the mess. But I would still need to bookmark (and tag) stuff, and this could take some time.

So I just "never" close tabs… (I close them after reading, and after bookmarking and tagging the more interesting things; but I open new tabs faster than I can sort old ones… That's the problem.)

I think I once even had over 10k tabs open. But back then FF was much more unstable, and it crashed in the end.

> Never ever had heard of Yakuake […]

It's one of the oldest KDE apps; I think it was already there in KDE 1. It's a drop-down console wrapper around `konsole-part`, the standard console "widget" in KDE. So it can do everything that Konsole (the default terminal in KDE) can do, but the window management is simpler with the drop-down approach.

Kate is the "dumb" desktop editor in KDE (besides KWrite, which I think nobody uses). But for a "dumb" editor it's quite powerful by now. And it's fast!

I don't do anything funky with it though. You can build half an IDE with it I think. But I never tried to get some LSP servers working with it. It's just my default "dumb" text editor.

I didn't know that there's a version for other OSes! Does it really work under macOS (or Windows)? I'm baffled, to be honest. Also, the webpage makes it look like Kate has far more features than I've seen, even though I've used it for years. Interesting.


Didn’t Firefox just ship a big RAM improvement update (105)?


I was not aware of that; maybe it IS related to that. But then, using a certain set of tools together (Firefox, PHPStorm, DataGrip) that have historically all been memory-hungry was not possible before Ventura.


This is not criticism, more like wonderment and curiosity. At 200-300 tabs, isn't it just faster to do a web search or organized bookmarks? Or do you just Ctrl-Tab at light speed through them when you need to find something?


> isn't it just faster to do a web search or organized bookmarks

As someone who has a lot of browser tabs, unfortunately no. It's often near impossible to remember the magic query that yielded a particular site as a result and the problem with bookmarks is the overhead that comes with organizing them — most tabs sit in an uncanny valley between long-term usefulness and disposability which would require frequent clean up passes through bookmarks to keep one's bookmarks in a reasonable state.

And as noted by others, these tabs are typically organized by both windows (e.g. one window for apple platform dev stuff, one for android dev, one for shopping, etc) as well as tab groups within those windows.


Tab Stash is what I use (having had waay too many tabs open)


Idk, I think I use bookmarks similarly to how you use tabs. I don't organize them but I do eventually delete them, just as you eventually (I assume?) close tabs.


The thing that sucks about bookmarks is that you have to open the bookmark manager, find the bookmark(s) in question, and delete them, which is a process that happens naturally as tabs get taken care of. I don't have to explicitly think about managing tabs the way I would have to manage bookmarks.


I usually just click the filled-in star to un-fill it.


I am one of the 200-tab people, but it's not all in one browser window. I use a Firefox extension called 'Simple Tab Groups' that lets me categorize the tabs to get back to them later. I use it as a mini knowledge base (I'm not that attached to my tabs if I lose them; I use Yojimbo for my real KB).

But just to say it: I also freak out when I see someone with so many tabs open that it's like a little Joy Division cover on the top of their browser.


But to parent's question, what does having them open allow you to do?

I understand the existential dread of closing a useful-or-interesting-but-unread tab. But isn't this what bookmarks were created to solve?


Tabs are a better way to do bookmarks than regular bookmarks in most cases.


How so? Honestly curious.


They are easier to create and remove. They advertise their presence, reminding you to come back and read them properly. They can load faster, since the page is often still loaded. They save your scroll position. They save the state of the site and input fields.

The other use case for bookmarks is creating a new tab and navigating to some website like a blank Google Doc, Twitter, Hacker News, etc. This is better handled by history. The most frequently used sites can show up on the new-tab screen and show up in the results when you start typing in the URL bar.

I don't see any advantage to regular bookmarks for 99% of use cases. They seem to be a feature that exists due to limitations of hardware and browsers from the early days of the web. It's similar to YouTube. In the old days you had to subscribe to channels. Now YouTube is advanced enough to not require you to subscribe to anyone for it to give you content that you want.


Do people ever come back and read them though?

It seems like for people with 50+ tabs, rate_of_addition > rate_of_closure.

Saving state is a fair point, but I can't think of many read pages where that's important to me + I'd trust it to a browser.


Do people ever catch up on their bookmark backlog? I don't think it really matters.


The reality is that I'm a tab hoarder; I just forget to close them.

Some of them I want to keep for "reasons", so every once in a while I do some cleaning.

Firefox now has that beautiful feature that originated in Chrome, where each window has a drop-down (top right, a rotated "greater than" or "smaller than" symbol pointing down) that lists all the tabs in a window, and I can close them from there.

So, cleaning time just got slimmed down greatly :)


You can search tabs in Firefox if you add % to the address bar.
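
For example (there are sibling prefixes too; from memory, so verify in your version):

    % asahi    # search only open tabs for "asahi"
    * asahi    # search only bookmarks
    ^ asahi    # search only history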


Awesome, didn't know this!


TIL! That is slick


I've been using Vimium's Shift+T for tab search for a long time; a web search or organized bookmarks aren't faster when you're reading a lot of obscure stuff.


I would really like to update to Ventura, but I don't want to have to deal with that horrible System Settings application. That's the only reason I'm delaying the update. Although admittedly I don't spend that much time in System Preferences, so maybe it doesn't really matter for me.


Yeah, that new System Settings change sucks (probably because I had _just_ gotten accustomed to the old one), but as you say, you don't spend that much time in it.


Many things in memory management are stuck in the late 90s, where assumptions are made about disk vs memory vs cache that are no longer true.

Memory is still much faster than SSDs, but not by as insane a margin as it was compared to spinning rust. And compression is a huge thing now, too.
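
For a rough sense of scale (ballpark latency figures, not measurements of any particular machine):

    DRAM access    ~100 ns
    NVMe SSD read  ~100 µs  (roughly 1,000x DRAM)
    HDD seek       ~10 ms   (roughly 100x NVMe)

Swapping to an NVMe SSD is painful but survivable; swapping to spinning rust was basically a death sentence. That's the 90s assumption the parent is talking about.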


Work on the Mx GPU drivers is particularly interesting, as it could allow for performant macOS virtualization on commodity PC hardware. Right now, if you virtualize macOS, interactive desktop performance is unusably slow unless you pass through a PCIe GPU.


Nah, Apple has a very clean Metal paravirtualisation ABI. This decouples the VM from the underlying HW.

macOS 12 VMs will run on Mac hardware that doesn't even exist yet, with GPU acceleration.
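
For the curious, a hedged Swift sketch of roughly how that looks through Apple's Virtualization framework (macOS 12+ hosts on Apple Silicon; the display values are made up, and a real config also needs platform storage, CPU and memory set up):

    import Virtualization

    // The guest macOS only ever sees this paravirtualized Metal device,
    // never the host's real GPU, so the VM image stays hardware-agnostic.
    let graphics = VZMacGraphicsDeviceConfiguration()
    graphics.displays = [
        VZMacGraphicsDisplayConfiguration(
            widthInPixels: 2560,   // arbitrary example resolution
            heightInPixels: 1600,
            pixelsPerInch: 227
        )
    ]

    let config = VZVirtualMachineConfiguration()
    config.platform = VZMacPlatformConfiguration() // real use needs hardwareModel, auxiliaryStorage, machineIdentifier
    config.bootLoader = VZMacOSBootLoader()
    config.graphicsDevices = [graphics]
    // cpuCount, memorySize and storage omitted; then:
    // let vm = VZVirtualMachine(configuration: config)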


Only supports Metal though, no OpenGL apps (even though their OpenGL implementation is written on top of Metal).


Written on top of Metal with some specific private extensions. I guess they aren't willing to support those long-term (yet?). I wonder if they are going to make the Apple OpenGL impl run without those as an alternative too…


I... don't think it will. There's a pretty big difference between writing a driver for a GPU and actually emulating that GPU, much less with reasonable performance.

And I do think that's what you'd have to do, because unlike on Intel, macOS on Apple Silicon does not support software rendering.


How big is the Asahi team? I wonder, since there is so much community interest in the final product, why we don't see a lot more community participation in the development as well.


What makes you say there isn't community participation? The repo for m1n1, at least, has 42 contributors according to GitHub [1]. There are plenty more people reporting bugs and such, and their IRC channel seems relatively active.

1: https://github.com/AsahiLinux/m1n1


My dream of using Apple hardware with Linux will finally happen some day.


I was very happy with Debian on my MacBook Air roughly 10 years ago. I am a non-GUI type of guy, so I might have missed quibbles that other people had around that.


I ran Debian on my Pismo PowerBook from 2000-2008, because it was the only thing that could reliably suspend/resume. Switched to ThinkPads because Linux suspend/resume on x86 had gotten pretty reliable by then.


I mean, I ran Ubuntu on my Mac natively back five years ago. Ran pretty well!


Even in an alpha state, Asahi works pretty well in my limited experience, so I'm very optimistic about the future.


Also very excited about the progress Asahi is making. I did technically live your dream though in 2003 with YDL on my G3 iBook. It was… ok.


Ran Ubuntu on Mac minis for a big chunk of the previous decade.


The pace of work that's being done, with a lack of documentation, has been very impressive.



