ARM64 Linux Workstation (jasoneckert.github.io)
778 points by jasoneckert on March 3, 2023 | 480 comments



I cannot wait for Linux to be fully usable on my Apple Silicon MBP. That is going to be such a killer environment to develop on.

Crazy fast hardware, great high refresh screen, insane battery life, fantastic touchpad, gestures and keyboard.

There is the potential that Windows games could run on it with some combination of x86 translation and Proton.

Using MacOS on it feels like I have a ferrari with square wheels.

Want to play games? Not unless the developers use Apple graphics APIs (not happening for new titles and never happening for old titles).

Want to run containerized workloads? Enjoy a cumbersome, slow, virtualized experience.

Want to use FUSE? Not on your work computer locked to high-sec mode as you cannot install kernel extensions (where is our user space FS Apple?).

Want to write a program that targets MacOS? Expect a cumbersome CI/CD experience because Apple expects you to manually compile distributables.

I could go on but as someone who has used MacOS for years and generally appreciates it, Apple really doesn't care much for developers. Other than its POSIX shell, MacOS doesn't have much going for it.

At least with my Intel MBP I could dual boot Windows so I could carry one laptop and switch between work and play. Now I have to carry around two computers when I travel for extended periods if I want to game.


I'm wondering if an M1 MacBook would actually have such a good battery life on linux. The processor is very power efficient, but I'd (naively) assume that MacOS's power management probably has a big influence too. In the same way that Linux laptops usually have a bad battery life compared to the same hardware on Windows. How good is Asahi when running similar workloads? And is it possible that it could reach parity with "stock" MacOS?

(I'm sure linux is in theory just as power efficient as MacOS or Windows, but distros are just usually not very concerned or capable of tuning it well enough.)


Installing Linux has had huge wins for me for power usage. Running powertop --auto-tune helps a good bit.

I used to fiddle forever but this gets like 80% of the wins. Things like "do we reduce the power state of SATA links" used to be set extremely conservatively, for fear of really old hardware from the beginning of the SATA era that behaved badly, but it was an obvious huge win for 99.999999% of folk. Linux has finally started turning a bunch of this baseline expected tuning on by default across the past few years (rather than powertop doing it), so the out-of-box experience is finally much more reasonable.
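
For anyone curious what powertop is actually toggling there, the SATA link power bit is just a sysfs knob; a minimal sketch (the host numbers and the exact policy you want may differ on your machine):

    # Show the current SATA link power management policy for each host
    cat /sys/class/scsi_host/host*/link_power_management_policy

    # Switch to a power-saving policy (med_power_with_dipm is what many distros
    # now ship as the default; min_power is the more aggressive option)
    for h in /sys/class/scsi_host/host*/link_power_management_policy; do
        echo med_power_with_dipm | sudo tee "$h"
    done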

It's been like this forever but people love throwing shade. I had a Transmeta Crusoe laptop (p1120) and yes, at first battery life was worse. But I did low-hanging-fruit optimizations, and Linux got better, and soon I had like an hour more runtime on Linux.

Mac famously has a trashburger IO subsystem, so not having that boat anchor hanging off the side should probably let the system hurry-up-and-idle better, saving power.


I don't understand why the powertop tunings aren't the default when the distro detects it's running on a laptop.


The most annoying thing that prevents me from running --auto-tune on bootup is that there's no way to exclude USB devices. So if I boot when docked, the mouse will stop working when I don't move it for a few seconds and I need to click to make it work again. And no, unplugging it once is not an acceptable solution.
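
One workaround I've seen (plain sysfs, not a powertop feature) is to flip the offending device's autosuspend back off after auto-tune has run; the vendor ID and device path below are only examples:

    # Find the sysfs node for the mouse (match the idVendor from lsusb)
    grep -l 046d /sys/bus/usb/devices/*/idVendor    # 046d is just an example vendor ID

    # Disable USB autosuspend for that single device
    echo on | sudo tee /sys/bus/usb/devices/1-2/power/control    # 1-2 is an example path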


It just got added in 2023, but there's now --auto-tune-dump which generates a command line invocation of powertop that would set all the auto-tune settings. Instead of running --auto-tune, you can make a script of this dump, modify it to your contentment, and run the script instead of auto-tune at startup. So you can omit your mouse!

Ideally there would also be a way to omit certain device or subsystem tunings from auto-tune as well.

Personally I think it'd be fun to mess around with more advanced schemes, where we dynamically alter powersaving behaviors based on user behavior. What we have now seems somewhat unresponsive. It'd be fun to dive into.

https://github.com/fenrus75/powertop/pull/116
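
For reference, the workflow described above might look roughly like this (the sed pattern for the mouse and the exact format of the dump are assumptions; check the generated script yourself):

    # Dump every tunable auto-tune would set into an editable script
    sudo powertop --auto-tune-dump > /usr/local/sbin/powertune.sh

    # Drop the lines touching the USB mouse, then run the script at boot
    # (e.g. from a oneshot systemd unit) instead of powertop --auto-tune
    sed -i '/usb.*[Mm]ouse/d' /usr/local/sbin/powertune.sh
    chmod +x /usr/local/sbin/powertune.sh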


Awesome! Yes, I was thinking about doing this by toggling every setting manually and copying the command being displayed, but gave up quite quickly as it was too cumbersome. Sometimes the screen would refresh before you copied, so you had to do it again... This option solves that nicely.


I wasn't aware of this, thanks


powertop is way underrated. You don't have to know much to get a lot out of it safely.


I used my M1 MacBook Air last weekend with Asahi and GNOME on a train ride for roughly 4 hours (browsing the web, fiddling with a server over ssh, writing a blog post) and when shutting it down, it was still at 78%.

This used to be different when I first installed Asahi, but the GPU driver and other improvements have made battery life something I don't really worry about anymore.


Thank you for that report. This convinced me to install it now.


Actually it works quite nicely on an M1 14''. Even battery life is quite good. Compared to my Dell XPS 15 7590 it easily has twice the battery life, if not more.

One thing which bothers me a lot as it makes the laptop hardly usable for me at night: display brightness cannot be adjusted, it is either off or 100%. Some users claim this can be accomplished by installing the asahi-edge kernel, which does not even boot for me.

And this also bothers me a lot: why are things checked in/distributed which do not work at all in Asahi Arch?! If you install e.g. a dev kernel which does not boot because the NVMe driver is broken and you do not have a fallback kernel, you are doomed. You cannot mount the partition in macOS to fix this manually.


Also the whole thing seems somewhat non-deterministic. This morning I booted and at the login screen nothing worked. No keyboard input, the touchpad won't move the cursor. The power button works and will power off the machine. Then U-Boot starts and stops with some input, seemingly the keys I entered at the login screen, which should never happen at all... a few reboots and then it works normally. Very strange...


Okay, one step further, and one step back: installing the asahi-edge kernel requires the mesa-asahi-edge package, otherwise you won't see a thing. This isn't done automatically. Now brightness works, which is fine, but working e.g. with raylib, which creates a GLX context, now fails with mesa-edge...


The only thing holding me back from running Asahi is lack of HDMI output (at least on M1 Airs). Has anyone heard about progress (or its likelihood) recently?


I tried it just now on an M1 14'': nothing...


I think the key difference (hopefully) will be that there are a limited number of hardware configurations for Mac. Still a lot (i.e. 100+ SKUs) but not nearly the number of x86 hardware configurations (i.e. 10,000+)

Also presumably if Apple wants this to happen, they as a single vendor can provide a single definitive answer on what they are doing for each configuration. Eventually...

In the x86 world, there is Intel, AMD and like 50 different manufacturers each with a particular HW implementation/BIOS customization, leading to {Intel, AMD} × (hardware vendor customizations)^50 permutations that'd need to be validated.


The permutations don't really matter. If one machine has Broadcom wireless and Intel graphics and another has Qualcomm wireless and AMD graphics and they're both supported and then you come across one that has Broadcom wireless and AMD graphics, you already have the drivers for it.

The real trouble is that you get some wireless chip which isn't popular enough for anybody to reverse engineer it but the manufacturer didn't provide any documentation, so the driver for it sucks or doesn't exist.

There is plenty of well-supported hardware so the solution is, don't buy the bad one. But some poor sucker who already did and now they want to run Linux on it may have a bad time.


But power management is still usually subpar (compared to Windows) even on the well-supported ones.


I run a T14s with NixOS. For me battery is never ever a problem; sure, it runs out if I fall asleep and YouTube is autoplaying. But it lasts almost an entire workday on the couch, and it's USB-C powered so I can charge it a bit with my powerbank.

Just clearing some FUD for people who consider Linux bad for everyday use: it's mostly only rough if you run Wayland and like sharing your screen (Jitsi or OBS + virtual cam work well though).

Edit: remember Android runs on the Linux kernel-ish


Because Linux drivers don't implement as many features as their Windows counterparts, sadly.


It’s been a while since I’ve owned a MacBook, but the speakers sounded significantly worse on Linux (and if you own a MacBook you know they can sound pretty decent) because they are EQ’d in macOS, not the speaker firmware.

Never has there been any effort within Linux to apply a generic ‘small speaker EQ’ to anything with detected internal speakers.

Similarly, the Linux kernel by default is optimized for server style workloads (throughput) instead of smoothness. It would be so easy to check for an internal battery and if true apply a few kernel parameters so your laptop stays smooth under load.

Linux as a desktop (well, laptop) OS is terrible not because it is incapable of being great at it, but because people don’t seem to care for the death-by-a-thousand-cuts issues.
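
For what it's worth, the "few kernel parameters" alluded to above are usually just sysctls; an illustrative sketch of the kind of desktop-leaning values people set (the numbers are hypothetical, not a recommendation from the parent):

    # Trade a little write throughput for interactivity
    sudo sysctl -w vm.swappiness=10               # keep interactive apps in RAM longer
    sudo sysctl -w vm.dirty_background_ratio=5    # start writeback earlier, in smaller bursts
    sudo sysctl -w vm.dirty_ratio=20              # cap dirty pages to avoid long stalls
    # Persist whichever ones help in /etc/sysctl.d/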



There was even an article[0] about the sound of macbook speakers quite recently.

[0] https://news.ycombinator.com/item?id=34935483


Battery life is significantly worse on my M1 Pro MacBook Pro under Asahi. I'd estimate about half that of macOS.


Is it worse across the board? Is idle power consumption higher, or is it mostly when under load that it gets worse?


I find under load it's the same; idle power consumption is higher. It seems Linux doesn't have the same level of delegation to performance/efficiency cores that Apple manages with macOS.


I think it'll be the same on the MacBook pro 14 m1, at least for some people like myself. I had a Lenovo p15 with an Intel chip running fedora before my m1 MacBook and I got roughly the same life during the same working day between the two.


>I'm wondering if an M1 MacBook would actually have such a good battery life on linux.

Not 100% the same, but the Asahi members reported something like 8 hours on a M1 Macbook, so that's better than most/all x86 laptops with Linux


> 8 hours on a M1 Macbook, so that's better than most/all x86 laptops with Linux

It makes no sense to compare a $1k/$2k M1 MacBook with random laptops. Those laptops exist because people want to run MS Word or Excel for $400. That's the feature.

Various System76 laptops, Thinkpads (among many other laptops recommended by Linux users) give you battery life comparable to macOS on the M1.


I keep hearing about battery life being almost as good or better on Linux (and on Windows too, though that's a more realistic claim). Yet even the latest-gen ThinkPads are pretty awful w.r.t. battery. My friend bought an i5 12xxx ThinkPad X1 and can barely do 5 or 6 hours of screen-on time, when the stated capacity is 16h. Lenovo denied a refund, because it's working as intended according to them (and my colleagues with S76 laptops don't even really expect more than 3-ish hours of battery at best). My work-provided T14 Gen 3, which I like, can't even keep its charge when unplugged and in my bag for long enough.

And I'm no Apple fanboy; the only Apple devices I've ever used are an iPad 5 years ago and my M1 MacBook.


Lenovo’s battery life numbers are consistent generation to generation, but wildly unrealistic. My 2015 thinkpad (since replaced by m1 air) was rated as 12 hours but could do at most half in practice doing very light tasks and typically ran about 4 hours. So when you buy a thinkpad you should assume actual battery life is a third to half of the rated life.

Apple’s numbers are equally unrealistic. They rate my m1 air for 15 hours, but in practice it gets to around 8 hours.


Eh... I have a T480. It's advertised at "17+" hours. With the 72Wh extended battery fitted, it has a total of 96Wh energy storage. With some small effort tuning (making sure the nvidia GPU is properly powered down etc) I can easily get its consumption down to about 5W - less, with lowered screen brightness and wifi off. 96Wh / 5W = 19h 12 min. Seems legit.


My macbook pro was rated for 18 hours and I find it very easily meets or exceeds that number if I'm working on general tasks.

Of course if it's a day where I'm compiling often and have several tasks running it's going to be lower, but that's not the use case they advertise.


Just a sample size of 1, but my HP ProBook 430 G3 (6th gen i5) could comfortably hit 8 hours of battery life, or beyond 10 on light tasks - before the battery went kaput unfortunately. However it was the “large battery” variant with I think a 40 or 50ish Wh capacity on a 13” screen.


> Various System76 laptops, Thinkpads (among many other laptops recommended by Linux users) give you battery life comparable to macOS on the M1.

That is a lie. System76 and ThinkPads are nowhere close to giving you a full day of battery life on Linux.


System76 systems absolutely do not have good battery life. The systems they make tailored only to battery life try to be comparable in battery (~14h System76 to ~25h on M2 Pro), but obviously x86 vs ARM makes this attempt exceedingly difficult. You're going to remove a massive amount of computational power from an x86 system to improve its battery life like that. However, of course, if you're interested in increasing a laptop's battery life, it helps to have control over the system if you have the time to put into optimizing it. I would imagine an Asahi Linux (Arch Linux ARM) system could be made to last much longer than macOS. However, this would require 1) not running any WM, or perhaps a simple one like awesomewm, 2) drastically reducing screen brightness, 3) removing any processes that run in the background continually (you could take this rabbit hole far, perhaps even to NTP requests and such), etc. macOS doesn't have as much configurability like this, so its variance in battery life is much smaller, probably a range of 10h-37h if you really push it. Linux could probably take that to 20min-8days if you tried.


I had a ThinkPad T14 AMD and the battery life on Linux was very mediocre, even after tuning with powertop. It was usually 2-4 hours, on Windows more like 8-9 hours.


Conversely, I'm on a Gen 8 X1, and on Silverblue with tlp replacing power-profiles-daemon, I routinely get 8-9 hours. Anecdata and whatnot, but I'm pretty happy with modern linux + modern hardware. That's not to say I wouldn't love to get my hands on an M1 macbook to see the difference though...


Right, and that is the issue. If I buy a Mac and run macOS I can be sure that I get long battery life. If I get a PC laptop and run Linux, it's a crapshoot whether the random combination of hardware, kernel version, etc. (I also used Silverblue and Fedora) gives you good battery life.

Maybe Asahi Linux will improve things, since they only target a very small number of hardware configurations.


It's almost as if "Linux" said very little about what software people are running, in what configuration.


That hasn't been my experience... If anything System76 are much much worse, and Thinkpads mediocre in that aspect


Better than high performance mobile workstations, sure, but I was getting 10-12 hour battery off a macbook air running Gentoo nearly a decade ago. PBP gets 10+ hours. $25 scrap latitude with ~70% peak capacity gets 6-8 hours.


I actually think you're onto something with this. When I run compilation workloads on my M1 MBP, I get 3-4 hours battery life.

VMs and games (crossover) also bring my battery life down to 4-6 hours.

My old Intel MBP would get around the same battery life when doing the same workloads (games under Boot Camp Windows); it would get worse battery life sitting idle or for optimised workloads (like video playback).

Mixed workloads, I get maybe 2-3 hours more with my M1 laptop. That's not nothing, but it's not the crazy 18 hours battery life in the marketing material


I need to go back and watch the marketing material, but I think every time they brought up battery life, it was under conditions. Like 20-hour battery life browsing the web. 18-hour battery life with video playback. I don't think anywhere they claimed 20-hour battery life using something like Blender.

Which of course should be expected. My M1 probably gets about 18-20 hours when I am just web browsing. If I am compiling Xcode projects, half that. Messing around in C in Emacs, probably 14-16 hours battery life.


18 hours battery life is mostly marketing. Not even sure how to test that, maybe on 20% display brightness with no programs running, other than the OS...assuming it doesn't take screenlock/display off into account?


I remember reviews on youtube that showed the macbook air/pro M1 playing video (streaming?) for that long. Actually, it kept going for 20h.

https://youtu.be/KE-hrWTgDjk?t=645


Benchmarketing. It has hardware support for video codecs so it can play video efficiently in particular even though people traditionally think of that as something processor intensive.

The question is, what's the battery life when you're doing kernel compiles or GPGPU.


According to "Garry explains" on YouTube, the M1 MBP gets 3 hours of battery life looping kernel (I think) compilations until the laptop dies.


That feels about right to me too. I don't know if I'm usually just doing more computationally intensive stuff than most with my 16" M1 Pro, but I think the only way I could get anywhere near the advertised numbers is if I basically just had Safari open and the display on half brightness.

And not to beat the already long dead horse any more, but toss in some electron apps running in the background, I probably average somewhere around ~7 hours or so. I'm not convinced the 10+ hour lifetime is possible if you've got stuff like Discord/Slack/VS Code open.

But in terms of when I am doing truly evil things to it, I am surprised it lasts as long as it does, be it max CPU compiling or all of this morning when I was toying around with stable diffusion, it gets around ~4 hours or so.


My Lenovo X1 carbon was giving me more than 10h 6 years ago (after powertop tuning), so high quality laptops have been providing similar battery life for a long time.


> Using MacOS on it feels like I have a ferrari with square wheels.

Indeed macOS is the weakest part when working on Apple Silicon.

Sure it’s quite pretty but it stumbles over itself with terrible quality of life features. macOS looks modern but feels around ten years behind other desktops now.

Apple love to talk about how professionals and power users can get so much work done with the amazing Apple Silicon but ignore the fact window management is garbage. The UI still frequently stutters/judders when just resizing a window. There is no true virtual desktop management. Etc.

I do love macOS but Windows and Linux desktop environments are evolving much faster with excellent quality of life and productivity improvements that macOS badly needs but Apple ignore or worse over engineer other “solutions” such as Stage Manager.

Honestly who at Apple signed off on Stage Manager before basic window snapping?!

Does nobody at Apple have an ultrawide monitor? macOS is painful to work with on anything not 16:9 (or :10).


> Honestly who at Apple signed off on Stage Manager before basic window snapping?!

My money is that the iPad team are steering a lot of these decisions as part of a broader strategy to converge their desktop and mobile platforms.


It's funny, I love Stage Manager. I was worried about what I'd do after the Ventura upgrade because TotalSpaces is on life support, and so there was a chance it might stop working and I'd have to go back to 1-dimensional spaces after relying on two-dimensional spaces, and various 2D window manager workspace things, for many years.

But Stage Manager is what I actually wanted all along: let me group windows together, and then when I switch to one of them, bring the others along. For my typical workday I have my "slack, mail, mastodon" stage, a stage with a couple of browser windows, a stage with VScode and a couple of terminal windows, a stage with Obsidian and Omnifocus, and then ad-hoc stages as I need them. I add a terminal window to stages as I need.

I end up with the same benefits of two-dimensional TotalSpaces spaces, without having to care about the spaces' relative positions anymore! When I cmd-tab to VScode, I get my VScode "stuff", and when I cmd-tab to slack or mail, I get that "stuff", and so on. I wish I could make the transition faster, but with "Reduce Motion" turned on to skip the animation, it's not so bad.


I am glad you find it useful. You may not have heard of the app rcmd before, but it works nicely with Stage Manager. Might be worth checking out, as I found it makes Stage Manager a lot more functional. https://lowtechguys.com/rcmd/


Oh that's nice, thanks!


> Honestly who at Apple signed off on Stage Manager before basic window snapping?!

Or UX testing. I gave it a try for a week. Gave up. It’s one of those few features that is not intuitive at all.

And it was supposed to help with productivity. Epic fail.


It really is awful isn't it. Why anyone thought a second dock on the side of the screen would make multitasking easier I have no idea. I think they did it just to be different.

I believe the reason they don't bring window snapping and better overall window management to macOS is because it is something Windows and every Linux DE does and has done for years so they will be seen as copying and being "late".

One of the good things about the changes in Windows 11 is the new snap assistant that pops out at the top of the screen when you move a window. It's simple and clear how to use, and it sensibly appears only when appropriate.


> ignore the fact window management is garbage.

All I can say is that I'm flabbergasted by this statement. Expose was lightyears ahead of Windows window management at the time (10.4!), and it took Microsoft until Windows 10 to include it in Windows (same for virtual desktop functionality).

> no true virtual desktop management

Not sure what you consider "true" virtual desktops; OS X 10.5 had classic workspaces with Spaces; Mission control later localized workspaces to monitors. Both approaches have their pros and cons.

What exactly is it that has Windows 10 years ahead of MacOS in window management?

> before basic window snapping

I can snap two windows side by side in a fullscreen split (the most common use case for snapping). I would certainly call that "basic window snapping".

> Does nobody at Apple have an ultrawide monitor? macOS is painful to work with on anything not 16:9 (or :10).

I don't know. They don't sell any, right? Apple designs their user experience for their own hardware. It's just table stakes with them.

tl;dr -- these are your opinions. They differ from those of others. That's how Apple can ignore the fact that their window management is "garbage" --- they likely don't see it that way.


I use an ultrawide with my M1 Pro. It’s actually pretty great. I agree on the window manager. I use the Rectangle app for that.


Exactly, you need a third-party tool for better management via snapping and sensible hotkeys.

Windows, Gnome, KDE, etc. have had all these things for years. Perhaps even over a decade by now?

Microsoft even go one further with Fancy Zones in their PowerToys pack which is fantastic on very large/wide monitors.

I wish Apple would add the functionality of apps like Rectangle/Magnet/Fancy Zones with some additional options such as layouts. That would be far more useful than whatever they were trying to achieve with Stage Manager on macOS.


> I could go on but as someone who has used MacOS for years and generally appreciates it, Apple really doesn't care much for developers. Other than its POSIX shell, MacOS doesn't have much going for it.

This is why I couldn't justify continuing spending money on Apple (computer) hardware, I don't want to feel like a second-rate citizen on the platform I spend most of my time on. I used and loved MacBooks for a long time, but eventually I got tired of fighting against the one I'm giving money to for hardware, and had to switch away.

I'm gonna be honest and say that recent M2 laptops from Apple almost got me to buy one yet again, after a couple of years of hiatus from Apple laptops, but my wife reasonably reminded me about how much I bitched about various things last time I used their laptops for work. Ended up with another Carbon X1, and everything works as expected, battery works OK and so on.

But a side of me would love a laptop, but I need the maker to actually want to support me, not make it feel like sometimes they work against me.


> But a side of me would love a laptop, but I need the maker to actually want to support me, not make it feel like sometimes they work against me.

Take a look at the Framework - a friend got it and is very happy with it.

https://frame.work/


> Want to use FUSE? Not on your work computer locked to high-sec mode as you cannot install kernel extensions (where is our user space FS Apple?).

It exists already - https://threedots.ovh/blog/2022/06/quick-look-at-user-mode-f... - it is just that Apple hasn’t made the API public, and (AFAIK) won’t give you the entitlements necessary to use it. But, both are issues Apple could fix relatively easily - if they wanted to - and here’s to hoping the fix is coming soon

However, there is an alternative that works right now - have your user-space filesystem process act as an NFS or SMB server, and mount that. From what I’ve heard - https://lists.nfs-ganesha.org/archives/list/devel@lists.nfs-... - macOS even supports mounting NFS filesystems over Unix domain sockets, which has performance and security advantages over IP loopback


I think they're uninterested because they have a documented and supported userspace VFS API. The problem is, it's really designed for cloud drives and so deeply assumes a sync model. Works great for the intended use case, much better than Fuse but if you want to do something unusual, not so hot.


There’s a new FUSE implementation that uses this technique: https://github.com/macos-fuse-t/fuse-t

Worked without problems for sshfs when I tested it briefly.
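
Usage is the same as with the macFUSE sshfs, if anyone wants to try it; the host and paths below are placeholders:

    # Mount a remote directory locally (with fuse-t it appears as an NFS/SMB mount)
    sshfs user@example.com:/remote/project ~/mnt/project

    # Unmount when done
    umount ~/mnt/project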


Last I heard there are no plans to do this.


If true that's a pity.

20 years ago, Apple cared about macOS as a Unix, they were interested in putting in features that developers and sysadmins wanted, like this.

Sadly, today's Apple has a rather different attitude towards that sort of thing


On my x86 MBP, I run GNU/Linux in VMWare Fusion, full screen. That gives me a combination of both worlds, which I can easily switch between with 4-finger swipe.

I did battery life tests running the same Linux distro natively vs in a VM on MacOS. What I found surprised me: battery life was better with the VM than native, so I stayed with the VM.

When a VM is used, driver issues are effectively absent as Apple drivers handle most devices. (USB devices can be passed through to the VM.) The touch pads are properly calibrated and gestures work. Things like external displays just work. GPU access is not the same as from native Linux but the basic OpenGL support in VMWare Fusion was good enough for what I wanted to do. (That's not true any more though.)

Historically I've been more of a Linux user (and developer) on other laptops but it became interesting to get to know the MacOS world as well while still using Linux for dev work.


I also run Linux in VMware Fusion and it's mostly great. But mics just don't pass through well. I have to switch to OSX to run Teams at all, and also to Slack if I need to huddle.


> I run GNU/Linux in VMWare Fusion

Just to be clear, is that an arm64 linux? Because emulating x86-64-linux still seems to be not very good.


> On my x86 MBP


Sorry, missed that part :)


I do this also (with Arm64 Linux) but it does ruin your battery.


I would consider myself quite the power user and use both macOS and Linux daily… I don’t really understand, from the developer perspective, your concerns about macOS?

While I prefer the feel of Linux, macOS is more than sufficient for almost any development I do (which also involves a lot of container/VM development and I’ve never felt this was slow).


I used mostly macOS for the last few years, with the occasional Linux. Recently I switched back to Linux (System76 laptop) completely, and only now do I realise how complacent I've been with macOS.

There's stuff like iTunes opening every time I connected BT headphones, and regular attempts to nag me into using Safari. Then there's a boat load of issues I found workarounds for: The window manager with its fullscreen mode and lack of tiling (Spectacle makes that bearable), the unique modifier keys that don't work well out of the box with readline shortcuts in the terminal and such, the often still manual process of installing and updating apps (maybe I could have used Brew or MacPorts more heavily), the inability to debug and fix any OS problems, and a literal hundred small things I can't even remember from the top of my head.

These things are all individually so small that I didn't even think about it, but when I realised the sheer quantity, I felt like I was in a codependent relationship with my main tool. Apple doesn't seem to care even remotely about users like me, and that's perfectly within their rights. The logical thing for me was to use something else.


Appreciate the reply.

> There's stuff like iTunes opening every time I connected BT headphones, and regular attempts to nag me into using Safari.

I don't know if these are legacy concerns, but this has not happened to me ever in the past 3 years.

> The window manager with its fullscreen mode and lack of tiling (Spectacle makes that bearable)

Yabai solves this very well.

> the often still manual process of installing and updating apps (maybe I could have used Brew or MacPorts more heavily)

I recommend Nix and Homebrew.

> the inability to debug and fix any OS problems,

How often do you run into this? I just don't really. I remember in my earlier days I did a bit because of some pinned package versions prevalent in macOS but when I switched to using Nix this went away, so I haven't run into any OS issues since... I think Catalina.

> Apple doesn't seem to care even remotely about users like me, and that's perfectly within their rights. The logical thing for me was to use something else.

That's totally fair. I love my MacBook, but I also have not run into the same struggles. I think my experience has been the complete opposite - once I configured it properly, it not only "just works", but I've yet to find a laptop that connects other aspects so seamlessly (AirPods, FaceTime, iMessage, Notes are all killer features in my day-to-day). Additionally, since switching to the M1, I have yet to find a laptop comparable in battery life (speed is generally matched when considering prices).


> I don’t really understand, from the developer perspective, your concerns about macOS?

It is not an OS that is made for power users. The UX is for non-technical people mostly, with vague errors that try (and usually succeed) to hide any details from you, making many things thoroughly undebuggable. When something doesn't work, best case scenario is "try turning it off and on again" (e.g. if iPad screen extending doesn't work and there is absolutely no information anywhere on why, what prerequisite you're failing. The button just isn't there and that's it, deal with it. I had to restart both devices for it to work; another fun one is "a USB device is consuming too much power and has been shut down, replug it" without saying which device and without any way to detect which device (they all still worked)).

Also the lacking facilities around basic stuff for developer productivity like window management and containers.


> Also the lacking facilities around basic stuff for developer productivity like window management and containers.

This isn't true, though? Just use yabai.


macOS can't run Linux containers because it's not Linux. It requires a vm and additional tooling, configuration, resources and just general considerations because of this.

It's also impossible to work with anything KVM related on macOS, which would require a vm inside a vm and is not supported on any M1 or M2 hypervisor.


Maybe it is time to understand the difference between being UNIX and yet another Linux distribution?


For me, it's less of a macOS inherent issue and more so that many, many things just assume Linux by default. Anything you find on GitHub that relies on some sort of native module is really hit or miss in terms of "will this just work, or will I need to go waste a couple of hours or resort to a VM?"

For example, I've run into a couple of occasions where I've needed to build some third-party thing that happens to also need something like QEMU, but the build scripts only run on Linux. Annoying, but whatever, setup a Linux VM/"container" and good to go, right? ... except there is no nested virtualization support on the M1 macs.

Outside of those sort of issues and other ARM-specific problems, while it doesn't usually impact my workflow too much, I wish I had more control over the system. I wish, for example, it could boot off of something other than APFS, etc. I don't want to have to deal with Time Machine when I've already got snapshots everywhere else, etc.

Everything is relatively minor, but there is always some new thing that makes me annoyed every week or two.


This largely depends on what you do, e.g. macOS doesn't have systemd and the features it depends on. I guess you can do all of what Linux does through containers but then it's just more cumbersome.

That said, if your workflow is to run everything containerized already, then running macOS as the base layer works just fine.


> macos doesn't have systemd

Some would consider this a feature. Unfortunately it does have launchd, essentially the model from whence systemd sprung... so joy for those folks would likely be short lived.


"Using MacOS on it feels like I have a ferrari with square wheels."

LOL. There was a time when mac/macos was the premium choice for a development environment, but I think that time has gone. Apple spent too much focus on iOS.


I seem to be in the minority, but I honestly don't get how people can develop productively on a laptop. The lack of full size keyboard, screen real estate, and uncomfortable touchpad compared to a mouse just feels horrible to me.


I’ve grown to hate using a mouse, and love the MacBook trackpad. Having your hands so close to both keyboard and trackpad means less travel time between.


That's why I like the trackpoint. Mouse without so much as moving your fingers from the home row.


I like the idea behind trackpoint, but I just can't get used to it. Any tips?


It really helps if you set up your environment to mostly not need accurate clicks. For example, avoid needing to grab window title bars or corners, use keyboard-driven window managers. Use keyboard commands instead of clicking buttons. When I last used a Thinkpad, the only time I had to aim at all was clicking links on web pages. I still used the trackpoint to switch active window, but it was a "shove toward upper right", not aiming the pointer at anything (I liked the spatial aspect of it, compared to keybindings).

I also had it configured to move significantly faster on strong input, and slow down with gentle input.


The way I have my trackpoint set up (low sensitivity, no acceleration) makes it more accurate than trackpad, even. No acceleration means it always behaves extremely predictably; for me it's most useful when typing in forms and clicking around in them. It's extremely accurate and useful when set up properly. You can make tons of fine, precise movements and clicks when filling out documents and forms without even needing to move your hands from the keyboard.
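
On X11 with libinput, that setup looks roughly like this; the device name is whatever "xinput list" reports on your ThinkPad ("TPPS/2 IBM TrackPoint" is just the common one):

    # Select the flat (no acceleration) profile and dial the speed down
    xinput set-prop "TPPS/2 IBM TrackPoint" "libinput Accel Profile Enabled" 0, 1
    xinput set-prop "TPPS/2 IBM TrackPoint" "libinput Accel Speed" -0.5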


Low sensitivity and no acceleration means it takes a very long time to move from one side of screen to the other. I guess I really don't fill in many forms where I can't tab between fields. But many of my habits also predate mainstream availability of GUIs...


Force yourself to use it exclusively for a while, no cheating. For better (it's great) AND worse (you have fewer options for laptops), there's no going back after that. I love it and even use Trackpoint with my desktop computer.


I used a couple of variations for about 2 years - Thinkpads with a red trackpoint thingy. Never liked them. Continually felt unnatural and clunky. Mac trackpad was a game changer for me.

Glad you found something that works for you.


Are there any hot-swappable mechanical keyboards with a trackpoint? The only mechanical keyboard with a trackpoint I can find is the Tex Yoda II, which isn't hot-swappable, and is also very expensive.


I had a Unicomp but the trackpoint wasn't as good as the Thinkpad* and it broke dead after about a year. Avoid.

*Thinkpad 560X! It was worse than a machine 17 years younger. I'm not sure which version the 560X had.


> it broke dead after about a year.

For clarification --- the trackpoint broke, right? Not the whole damn keyboard?


True, but in terms of development, the main use of a mouse is really selecting text for copy and paste (unless you are a hardcore vim user), and scrolling.

I'd be impressed if the speed at which you can select text accurately is higher on the trackpad than a mouse. I have seen lots of developers at work on their MacBooks do the standard select text->copy->paste, and everyone is way slower at this than people who use a mouse.

Maybe it's my FPS gaming days that make me better at this, but then again, nobody really plays FPS seriously using trackpads.


Not a gamer - haven't 'gamed' since pitfall - so no experience with mouse vs trackpad in that respect.

What seems missing from all this is the notion of gestures. The two finger scroll, single finger pointer move, pinch/zoom, etc - the trackpad is flexible in ways a trackpoint thing just can't be.

"I have seen lots of developers at work on their macbooks do the standard select text->copy->paste, and everyone is way slower at this then people who use mouse."

Not really sure what you're getting at. Selecting with a... trackpoint can be faster than using a mouse? Maybe, for some people. I'm constantly surprised by mouse-using/GUI people who always do so much more with a mouse than you need to. But I'm not sure that takes away any argument from the mac trackpad.


> Not really sure what you're getting at.

Pretty sure they're saying that mouse accuracy trumps trackpad for cut/paste operations.

> What seems missing from all this is the notion of gestures. The two finger scroll, single finger pointer move, pinch/zoom, etc - the trackpad is flexible in ways a trackpoint thing just can't be.

Yeah, these are great. You can get the best of both worlds on a Magic Mouse, though.

Personally I do alright with a trackpad, but I also lean heavily on the keyboard regardless of auxiliary input devices. In particular, I don't do pointer-heavy workloads like e.g. image editing.


I prefer an (Apple) trackpad to a mouse to the point where I even have one now at my desk setup. I've used BetterTouchTool to configure custom gestures like three-finger swipe to open new tab/close window, rotate left/right to switch tabs etc.


The MacBook desktop environment combined with the trackpad (and gestures) are a big contributor to being productive in that context (though it also sucks so bad in some scenarios).

Swipe left, swipe right, terminal, editor, browser, etc.

Swipe up, see all open windows.

Before long I am using a single laptop monitor about as effectively as a multi monitor desktop setup.

In general though, companies tend to issue laptops. So even if you have a desktop setup with multi monitors, keyboard, mouse (which I do and use daily), it will be backed by a laptop on a dock.

Personally, I like being able to work out and about. I work from cafes, planes, poolsides, my couch, my bed. That style of work limits your hardware choices


To the last point, I have a tiny Bluetooth mouse. Not much bigger than the single AAA battery that powers it. I don't know how I managed to use laptops without one before. (Usually with a full-sized wired mouse, come to think.) I also prefer a traditional keyboard and large screens and so I normally use a desktop. But if you need portable, you need portable.


You can use a docking station and have the best of both worlds: mobility, plus high-quality peripherals when mobility is not needed.


> insane battery life

Is it just hardware that helps extend battery life or is it the combination of hardware and software? One advantage for Apple is that they are both the hardware and software designer hence they can optimise both to take advantage of each other.


It’s always the sum of all that extends battery life but even when Linux support was just “CPU only rendering and minimal powersave functionality using unmodified DE components” the battery life was still decent. Brute force is quite the force.


A ferrari with square wheels, that is a good one.

A UNIX descendant of NeXTSTEP tooling, versus the distribution of the month in the GNU/Linux world, preaching they will get the desktop this year, for 20-plus years, and the road for gaming relies on pretending to be Windows.

Treat macOS as Mac OS, not as a Linux distribution.

A proper set of UI frameworks, not stuck in pre-history C and C++ workloads or even worse Electron, a 3D graphics API that doesn't require one to become a graphics driver programmer to draw a triangle, proper attention to UI/UX,...

"They said a Unix weenie was code for software engineers who hated what we were doing to Unix (the operating system we licensed)—putting a graphical user interface on it to dumb it down for grandmothers. They heckled Steve about his efforts to destroy it. His nightmare would be to speak to a crowd of them."

https://web.archive.org/web/20180628214613/https://www.cake....


I don't need anyone to tell me about the "year of the Linux desktop". I have been using it exclusively since 2018 and earlier at work. The only advice I have to people is to stay the hell away from Ubuntu. That distribution is so bad and it gives people an absurdly bad impression of the Linux Desktop experience. Thanks to snaps, running IntelliJ, Firefox and the JVM application you are developing will consume 16GB of RAM. The coercive update culture with forced application restarts basically downgrades the experience to something as bad as Windows.

On my desktop with 32GB RAM I rarely get above 12GB with a ridiculous amount of applications.


Pity that 98% of the desktop computing hasn't noticed it, as per Valve survey average.


I’m paid to develop apps which deploy on Linux containers. macOS is uniquely incapable of providing a good development experience for that use case. Even Windows with WSL2 is leaps and bounds ahead.


Then don't buy a Mac, that simple.

WSL 2 is a Hyper-V VM for running Linux distributions, while WSL 1 emulated Linux syscalls.

Surely some of that salary can be used to make OEMs selling Linux hardware happy, instead of buying Apple and then complaining macOS isn't yet another Linux distribution.

If you want WSL like experience on Mac, VMWare is happy to provide it to you.

It is like complaining using Playstation SDK isn't fit to target Nintendo Switch, go figure!


The issue isn't that a MacBook is the wrong tool for the job, it's precisely that it's demonstrably capable of being the right tool for the job but Apple deliberately chooses to make it useless in an attempt to ring fence products within their ecosystem.

The comparison to the console world is apt as it mirrors Apple's attempts to lock their platform down, much like the console ecosystem. Thankfully the bootloader is unlocked and it's unlikely that's going to change.

Remember that the MacBook pro is a device marketed at professionals looking to do the work of their profession on their laptops. By "professional", it appears that Apple means "multimedia editors". If you are a software engineer, you are no longer the target audience for the MBP lineup.


Buying a device for Apple ecosystem developers and pretending it is a Linux distribution, is precisely holding it wrong.

Professional Developer !== Doing stuff on Linux distributions.

Want Linux? Buy Linux.


Why are you so aggressively defending a trashy product? I can't reliably alt+tab on macOS to switch between windows and apps, there are hidden shortcuts, 3 modifier keys make it a nightmare to wrap my head around shortcuts, window tiling is trash (thanks for the save, Rectangle), iTunes randomly pops up when I use my BT headset, applications need to be "quit", closing just isn't enough, but some apps quit when their last window is closed, weird! Stage Manager, virtual desktops, a broken dock. It's a nightmare. MS Windows have invented such a cool way of switching between multiple windows of the same app, doesn't matter if those are minimised or not.

Oh, and Macs have a hide and minimise feature, wow! Just a cluster fuck entirely. Gazillion different ways of installing applications and then managing all those updates separately (a lot like Windows in that regard). These are all issues I face on a regular basis, haven't even dived into developer experience.


> I can't reliably alt+tab on a MacOS to switch between windows and apps

Why do you expect alt+tab to perform identically to the Windows version?

> window tiling is trash

How's the built-in window tiling on Windows?

> applications need to be "quit"

...okay? yes?

> but some apps quit when their last window is closed, weird

Within Apple's own programs, you see this behavior? Or are you speaking of third-party apps that may be disrespecting the established paradigm?

> Oh, and Macs have a hide and minimise feature, wow! Just a cluster fuck entirely.

Yes, and I love the hide feature.

Seriously, there's plenty to complain about in Mac land these days. There's a definite trajectory and I don't at all like where it's headed. But all of your complaints (other than possibly the one regarding iTunes) essentially boil down to "this isn't Windows". That's fine, go use what you want. What's baffling is when you call something garbage because it doesn't match your preferences, especially when it matches those of others.


> If you are a software engineer, you are no longer the target audience for the MBP lineup.

If you are a software engineer developing for Linux.

Seriously. I don't get the sentiment here, at all. It's like buying a (insert non-microsoft platform here) computer and complaining it sucks to develop Windows apps on it.

???


Or you know, take off the square wheels and put some round wheels on.


Linux OEM vendors will gladly provide them, especially for those into a Pimp My Ride kind of experience; plenty of wheels to choose from.


> Then don't buy a Mac, that simple.

If only. It was not my choice to make, it was my employer’s.


Difficult debating a fan's love with logic, right? This person is so aggressively justifying the MacOS decisions


That depends a lot on what you do. People writing servers on the JVM can do all their development locally and have it tested on Linux via CI. The idea that all developers spend their days running software inside Linux VMs isn't right. Many of us don't need to do that at all.


> Want to use FUSE?

Fuse is infuriating on MacOS - why is it not included by default??


Because it's a random third party filesystem adaptor module.

Why would it be "included by default"?


Most of the Unix utilities that are included were written by third parties. They're "included by default" because they're useful and take up a negligible amount of space.


They're included by default because they are standard UNIX userland.

FUSE is nowhere near that.


FUSE is newer than netcat or bzip2. That doesn't mean it isn't expected as standard. Linux, BSD and others all have it.


Where is FUSE in the POSIX or SUS standard?


A machine without FUSE is a toy


Because it's necessary for a load of use-cases, and installing a *kernel* extension for it is terrible for deployment


MacFUSE isn’t even open source anymore, so it would likely involve Apple forking a version prior to the license change and maintaining that themselves.


They have their own APIs for it but seem to have assessed that the only mainstream use case is cloud sync, which they always provide a better API for.


> Want to write a program that targets MacOS? Expect a cumbersome CI/CD experience because Apple expects you to manually compile distributables.

I'm not sure this is true anymore. I download source and run "make" and the Xcode compilers all do their thing and entitlements are applied and so on. I haven't tried to do any store submissions or anything but it all seems like Apple are aware that CI systems are a thing, now


> Want to play games?

This is an interesting topic. I found a surprising amount of games available that run on my new MBA just fine.


> Want to use FUSE? Not on your work computer locked to high-sec mode as you cannot install kernel extensions (where is our user space FS Apple?).

FYI, MacFUSE has a user-space library (`libfuse`). The kernel extension also works fine on macOS 13 if you can persuade your IT department to allow it.


I use the user space alternative, yeah - it does the trick though it integrates as a network drive.

Unfortunately, MacBooks are limited to being in "restricted" security mode which disallows kernel extensions from being loaded.

We are not allowed to use "Permissive" security mode


> not happening for new titles

Resident Evil Village, Baldur's Gate 3.

> and never happening for old titles

World of Warcraft.


I really don't understand the obsession with Apple laptop hardware. There are many alternatives that are objectively better: more ergonomic, less reflective screens, better keyboards, cheaper, lighter.


Apple's processors are really really good.


I haven't seen a non-Apple laptop with such a great touchpad, sound and display. It is always only one, max two of these criteria.

I have been using x86 laptops in the past (terrible clickpad and speakers like a cheap handheld radio from the '90s, but whatever, and I ignored the overpriced Intel Macs). But the M2 is something which I couldn't ignore anymore, so I bought one and I am very happy. They are awesome (unless you buy the lowest-spec 8GB RAM version)! My only complaint is the soldered SSD and Apple pricing on SSDs. Having a slot there would make so much sense...


Point me at a processor as good as the M1 and I'll buy it


Define good? An AMD Ryzen 7 6900HS has similar single-core performance, much better multi-core performance for slightly worse power efficiency, compared to an M2. Add in the added advantage of flexibility (you can buy different hardware and switch it up like add RAM and disk and GPUs to your liking) and the fact you can run more things on it (e.g. game), and it's pretty "good". Is it better than an M2? Depends on what you're looking for.


>> slightly worse power efficiency

I don't believe it. My M2 Max maxes at around 40W. x86 just isn't that power efficient. You can idle with only one/few economy core turned on running at the lowest power state for longer battery life but x86 is simply very hungry when running high loads and it can't be magically improved. The arch is just obsolete and legacy.


slightly worse power efficiency is a slight understatement ;)


>I cannot wait for Linux to be fully usable on my Apple Silicon MBP

Since TFA says "everything works" atm, including GPU, Bluetooth, Audio, USB, etc, what's stopping you from doing it now?


Support for multi external monitors and improvements to power efficiency. It's also unlikely that my company's security team will permit Asahi until Apple Silicon support is available in the mainline Linux kernel.

I asked and they said currently they don't permit it

Native GPU support with support for Vulkan APIs and some games running would also go a long way towards making the switch as I'm permitted to dual boot MacOS and Asahi - allowing me to once again use one device for "work and play".

Right now I dual boot Asahi and it's "work and experiment"


I'd imagine they're talking about like-native support for the GPU and heterogeneous CPU architecture. Plus, the desktop experience in the article avoids some of the less-finished MBP parts like the speakers and brightness controls.


For me it’s the fact you can’t use external displays and the fact it can’t sleep yet so it drains while the lid is closed.

All things that should be fixed soon.


For desktop why wouldn’t you use Intel or AMD?

They’re faster and cheaper and more compatible and better software support.

I can understand people who want a Mac laptop, but seems weird to choose a slower, more expensive, less mainstream machine for desktop.


Incidentally, the best Windows ARM experience is to be had on the MacBook with Parallels. I haven't really used Linux ARM outside of Parallels (except on a Pi 3, and that's not a fair comparison), but I bet the experience is up there.


Agree on MacOS, it feels like I am being held hostage by Apple when using it. The OS was advertised as developer-centric but every single aspect of developing on it is terrible.


Do we know what doesn't work yet? I'm rooting to get Asahi onto my M1 mbp and have fallen out of the loop as to what is pending to be done on the Asahi side of things


Feature support: https://github.com/AsahiLinux/docs/wiki/Feature-Support

Asahi is still in Alpha. Expect things to be a bit rough.


>Using MacOS on it feels like I have a ferrari with square wheels.

Probably they care more about iOS since iPhone is their money maker. They might release iOS on MacBooks, too.


APFS has some kind of “file provider” API that apparently allows user space file systems. No idea how hard it is to use or if it’s well documented.


Also, no need to use a USB flash drive; just enter in the terminal

curl https://alx.sh | sh

to install Asahi Linux dual boot on an M1 or M2 Apple Silicon MacBook.
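
(If piping a URL straight into sh makes you uneasy, the usual alternative is to download and read it first:)

    curl -fsSL https://alx.sh -o asahi-install.sh
    less asahi-install.sh      # inspect before running
    sh asahi-install.sh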


Great idea piping a website straight into a shell.


About as bad as downloading an exe and running it


I did that many times since my laptop has no CD reader anymore.


Installing gigabytes of Linux and applications from people you have no contract with, who have no obligation towards you, is a good idea but running a script to install it, that’s too far?


Asahi Linux is one of the best projects in the OSS community. I trust their work to run on my M1 and M2 Macs.


Seems to be a clean bash script http://pastie.org/p/578asRpS0abo92Efj1YGLZ FWIW

so seems safe to me


The script that actually does stuff is here https://de.mirror.asahilinux.org/installer/installer-v0.5.3....


*<checks if axl.sh is available for registration>*


> Everything works… and works perfectly.

How can this possibly be true when Asahi Linux is still in alpha?

A quick trip to the project page shows lots of hardware features that are still not supported. [1]

It’s not a knock against Asahi Linux - it’s amazing what they have accomplished with a small team. It just seems so misleading for the author to claim that everything works perfectly.

[1] https://github.com/AsahiLinux/docs/wiki/Feature-Support


From your link, what's missing for the OP is:

Thunderbolt, DP Alt Mode, Video Decoder/Encoder, SEP, Neural Engine, internal Speaker

Thunderbolt seems like the main missing feature. It's easy to see how someone could live a fulfilling life without running into the lack of any of those features. E.g. is there even anything on the Linux desktop that could take advantage of the Neural Engine or Secure Enclave?


I'm dearly missing support for the fingerprint reader on Asahi, though it's likely never going to be supported.


They say they are aiming to support it at some point in the far future (they still have to implement support for the SEP firmware in order for this to be possible, as far as I know).

Granted, that might as well be never.


Lack of HW video support will ruin battery life if you do conference calls or watch YouTube. It’s surprisingly important.


OP is working on a Mac Studio, which they upgraded to from a mini.

I don't think battery life is their concern.


Considering the absolutely awful performance of "modern" web/Electron based communications software (aka all of them), I'd be surprised if they used hardware video encoding/decoding to begin with?


Both Teams and Zoom on macOS seem to encode/decode on the GPU for me, as does Meet in Chrome.


The mac studio has no battery


> Is there anything that doesn’t work?

> To quote Hamlet, Act 3, Scene 3, Line 87: “No.”

> Everything works… and works perfectly.

That is impressive and really hard to believe! I guess I'll have to find an M1 mac Mini to try myself!


I've got an M2 Air. It is very usable, but it's far from everything working perfectly.

Builtin speaker support is still a work in progress. It works if you know what you are doing and are not afraid to damage your speakers if you make a mistake. There's been progress on this though, so I expect this to be resolved sooner than later.

There's no builtin speakers in the author's mac studio though, so I'll give them a pass for this one.

Bluetooth and wifi both work, but their drivers are still buggy. I see a lot of errors in dmesg from these drivers, and occasionally things stop working and require a reboot to fix.

GPU acceleration works, but there is still a lot of work to do on that driver.

External displays over the thunderbolt ports don't work yet.

Other things that don't work on the laptops: webcam, touch-id / Secure Enclave, some miscellaneous software.

Don't get me wrong, I've been able to daily drive Linux only on my M2 Air since August. I love the hardware, and the current state of Asahi meets my needs. And it's been steadily improving over time. But it's far from "work[ing] perfectly" yet.


OP's requirements pretty much uniquely match what Asahi on a desktop M2 provides -- heavy compute, not heavy graphics, no onboard speakers, one big display, no webcam/touch-id, etc.

It sounds like if you're developing services hosted in a cloud, Asahi Linux is a great fit for a desktop Mac.


Do they ever sell old Graviton machines as salvage? Aren't they on revision 3 now? I'd like one of those.


Not that I'm aware of. I'd be surprised if those machines were even usable outside Amazon's datacenter; they're likely to lack a lot of standard interfaces (certainly no display output, probably no Ethernet, USB, SATA, or PCIe, possibly no 120/240V AC input).



You can find the Annapurna Labs AL73400 (Graviton 1st gen) in some devices, but I'm not sure you can easily install your own OS.


Just today, given the lack of SBC stock and the price rises, I was checking small routers as options. E.g., a basic MikroTik router is 30-something euros, and some models support OpenWRT.

Some of their more powerful routers have a bunch of ARM cores (no video out, though) and I think I've seen people using OpenWRT there.

For a while they were using Annapurna Labs CPUs, and I've seen that in some high-end Netgear wireless routers from 2017, and I see people selling those with OpenWRT on eBay. I don't know their performance, but I'd like to try one as a mini server instead of an expensive and virtually nonexistent RPi.

Here's a list of routers with Annapurna Labs processors; I'll see tomorrow if I can find OpenWRT builds for them.

https://wikidevi.wi-cat.ru/Annapurna_Labs


> Do they ever sell old graviton machines as salvage?

Considering modern AWS hardware relies heavily on Nitro, their dedicated chips offloading networking and storage, it's unlikely Graviton CPUs would easily work outside of AWS' environment.


> There's no builtin speakers in the author's mac studio though, so I'll give them a pass for this one.

Don't all Macs come with a built-in speaker? I.e. the one that plays the start-up chime, and plays "system sounds" like emptying the trash.

On Mac notebooks and AIOs, this is one-and-the-same as the obvious external speaker array used as the default output device; but on desktop Macs (like the Mac Mini — don't know about the Mac Studio), it's a separate little speaker hidden somewhere inside the chassis. (It's like the old concept of a "PC speaker", but this one is hooked up to a DAC and routable as a regular audio device, rather than only being able to play tones from a PIT.)


Yes the Mini and Studio have the exact same mono speaker.


Wow, I would have thought given the Studio price you'd be getting the same speaker as in the current Mac Pro, which is pretty decent compared to the tinny PoS in the M1 Mini (OG version, in my experience).


Minor point, but the Studio does have a built-in speaker. I'm not sure what, if any, functional level it has under Linux, though. Hopefully, somebody can enlighten me.


Do you have a dual boot ?


Yes. No need to use a USB flash drive; just run this in the terminal:

curl https://alx.sh | sh

to install Asahi Linux as a dual boot on an M1 or M2 Apple Silicon MacBook.


Yes


Not everything will work. Once upon a time I tried to do a CTF with an M2 Pro, and I seriously wished that I had an x86_64 Linux machine instead.

Many of the penetration testing tools are not mainstream, and they include pre-built x86_64 binaries or are configured to just not compile on ARM, for reasons unknown. And there is no time to figure out what is wrong.

I know this might be a niche scenario, but still...


That seems like a wonderful opportunity to use QEMU since it's not all that performance-sensitive (I think?) and you're not running random pre-built binaries directly on your computer.


I ran all my stuff inside a non-root container with no volumes, so prebuilt binaries were not really a problem.

I used QEMU through Podman machine in the end, but the performance is terrible. Even on the macOS side, QEMU does not support Rosetta 2. Docker Desktop supports it and gives a significant boost, but I don't want to use it.


This stuff makes me super nervous. I've just gotten an M1 MacBook Pro at work because my old touchbar Mac was showing signs of impending battery death (and I only got that because IT didn't want to support Linux and I didn't want Windows). I've already had a few battles with some python libraries and today I've found that the Oracle container image doesn't run.

I love the battery life and the keyboard is much better than the old one. It feels snappy running native code but I just know it's going to cause me more issues than it's worth.


It will cause issues, yes. But personally I’d rather spend two hundred hours fiddling with arm64 issues than go back. The performance and silence are wonderful.


Did you try Lima on Mac? I never did compare apples to apples; I ran an x86 Podman VM and then switched to an arm64 Lima VM and saw drastically improved performance.


Lima uses QEMU as well. Performance is not an issue from ARM-to-ARM virtualisation.


Starting from 0.14, Lima supports the vz framework on macOS and allows you to use Rosetta 2 for x86_64 Linux binaries :)
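
Rough sketch of what that looks like (assuming Lima >= 0.14; flag names from memory, so double-check `limactl start --help`):

    limactl start --vm-type=vz --rosetta template://default
    limactl shell default uname -m    # the guest itself is aarch64; x86_64 binaries run via Rosetta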


Thanks! Need to test it.


The newest release of colima / lima runs Rosetta2 by default when running cross arch. The file system and networking are flaky but it’s fast.


> Even on MacOS side QEMU does not support Rosetta 2.

Is Rosetta 2 that much faster than QEMU's own translation? Why?


>Is Rosetta 2 that much faster than QEMU's own translation

Oh yes. Based on the benchmarks I've seen it's a good 30-40% faster on compute workloads. I assume that the ahead-of-time translation of the binaries is advantageous relative to having to translate the code at runtime.


Anecdata: on my workloads it's near-native† vs ~10x slower for QEMU

† actually not measured, but compared to the order of magnitude on the other side it's about that, eyeballed against the same code on aarch64.


QEMU does its translation just-in-time. Rosetta "pre-builds" binaries for the target architecture, and those binaries then run "natively", at the cost of a delayed start.


It's not obvious that QEMU's JIT has to be a lot slower, but it is. It may be that Rosetta 2 is just more thoroughly engineered and tuned for Apple Silicon.

Among other things, Rosetta 2 is able to use the CPU's special TSO memory ordering mode, which I think QEMU's TCG cannot use so has to use barrier instructions instead (MTTCG), or run all the virtualised CPU threads on a single host core (obviously slower for parallel workloads).


“Once upon a time I tried to make CTF with M2 Pro”

Wasn’t the M2 Pro released like a month ago?

Am I that old that a few weeks now counts as once upon a time?


World changes fast these days!

But yeah, it was like a week after the release that I took that laptop to a CTF.


I don't mean to cast aspersions, but that sounds a lot like something a time traveller would say if they'd been rumbled.

I'm just saying.


I cannot imagine a worse idea than downloading a binary artifact promulgated by someone in the competitive infosec community.


Not much different than installing any random pip package. They include pre-built binaries too. Infosec tools might be even safer.


That’s exactly what I’m always telling people about pip.


They’re inviting you to tear it apart, so


It's not obvious if you skim the post, but they mention getting the Mini in the first paragraph and later mention they got a tricked-out Mac Studio. So just be aware the fawning notes are about the Studio, not the Mini (although the Mini does rock, I love it).


Good point. I only noticed the bits about the studio.


per statement from Asahi Linux [https://social.treehouse.systems/@AsahiLinux/109931764533424...] - you'll have to do a lot of your own legwork [EDIT: on upstream/"standard" Linux] to get there.

Not having to deal with an entire custom laptop (i.e. getting a lot of standard peripherals for "free" after only getting USB to work) also helps a whole lot, and note the article specifically talks about a desktop mac.


That toot is about upstream Linux 6.2, not Asahi Linux. If you run Asahi Linux (as in the blog post) you get decent (but not complete) legwork-free hardware support. Detailed breakdown here: https://github.com/AsahiLinux/docs/wiki/Feature-Support


Good point, Thanks for the correction!


I thought the sound and trackpad didn't work although maybe that isn't an issue for the Mac Studio.


Not that hard to believe when you consider the hardware specs for Apple machines are so heavily locked down to a very specific set of mobo, cpu, and only variability in core count, ram amount, hard drive size, etc.

It's not like PC world of mix and match everything.


It's hard to believe because virtually none of the proprietary Apple Silicon platform (except the CPU instruction set itself) is documented and involved a tremendous amount of reverse-engineering effort by some very bright volunteers.


Agreed, if anyone wants some good insight, this video is fantastic. It gets to the tech stuff about 20 mins in iirc and explains some quite surprisingly open aspects of the apple ecosystem and some approaches to funky stuff with the closed bits.

https://www.youtube.com/watch?v=COlvP4hODpY


And yet the open source community embraces the hardware with fervor.


It's an appealing platform. Entry level is $600, nice memory systems (66GB/sec to 800GB/sec), and nice iGPU. The laptops have nice keyboards, nice trackpads, nice screens, and great battery life. The desktops are built in a solid aluminum chassis and have a single fan (instead of the normal 4-8) and are small and quiet.

I do wish Apple did a native port themselves, but the community is making good progress on that front.


> but the community is making good progress on that front

You mean in reverse engineering hardware?


Yes.

Marcan, Asahi Lina, and related folks have been working on improving 3D acceleration. It's good enough for desktop use and some gaming so far: games like Tux Racer and Minecraft, video playback, etc. First the driver was in user space in Python, then in the kernel with Rust, and recent improvements have increased parallelism (from Lina) and removed mailboxes (from Marcan).

OpenGL and Vulkan conformance has been increasing. Last I heard, OpenGL was at 99-100% (almost all tests passing) and Vulkan wasn't as far along, but improving. I believe Alyssa Rosenzweig is doing much of that work.

Last published update I've seen is: https://asahilinux.org/2022/12/gpu-drivers-now-in-asahi-linu...

There are some posts on Twitter and/or Mastodon, and regular updates on YouTube from "Asahi Lina" and Marcan, often by Live stream and Patreon.

Oh, and Neal Gompa is working on getting the GPU working with Fedora. One problem is that most ARM Linux distros default to 4k pages, but the GPU (which shares memory with the CPU) requires 16k pages. That seems like a performance win (less TLB thrashing), with a marginal increase in memory use.
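
Quick way to check which page size a given kernel is using:

    getconf PAGESIZE    # 16384 on the Asahi 16k kernel, 4096 on most other arm64 distros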


Vast majority of hardware used by the open source community is proprietary and undocumented, and always has been. They embrace the challenge of reverse-engineering.


I hope I get to be a badass challenge-embracing reverse engineer like the people behind Asahi one day. I've done some USB stuff but have no idea how they're tackling all the other hardware.


I don't understand the draw to something so hostile to the core values of Linux.


GNU / Linux and BSD would never have started with that attitude. Most hardware was proprietary back in the day, and still is proprietary today. Even Thinkpads (a favourite among BSD devs) have proprietary subsystems that are reverse engineered. The alternative would have been to sulk and whine about it and do nothing. The hardware vendors truly do not give a f**k. They are run by business people, not open source idealists.


Your italics are strange and make me wonder if you are trying to convey something. My first thought is that if the reverse engineering were done anonymously or pseudonymously, then the ones best positioned to do it would be actual Apple engineers who have insider information and feel like sharing with the world.


That's typically been the case since Linux started in 1991. Reverse-engineering undocumented proprietary hardware has been the norm.


Another Arm64 workstation you can buy today: https://www.ipi.wiki/products/ampere-altra-developer-platfor...


From the title of this post I was kind of thinking that maybe they were writing about how they're running Linux on an Ampere workstation.


Android phones with Termux are surprisingly usable too. So long as Google stops breaking it.


The 128-core one is just $5,518.00 ... neat.


Performance isn’t there. Why spend 3k+ on this?


M1 Ultra on macOS: ca. 1800 (1C) ca. 24000 (20C) https://browser.geekbench.com/v5/cpu/20798209

Ampere Altra on Windows: ca. 800 (1C) ca. 12800 (80C) https://browser.geekbench.com/v5/cpu/20639458

Ampere Altra on Linux: ca. 900 (1C) ca. 44000 (80C) https://www.tomshardware.com/news/ampere-altra-max-80-ccore-...

Apparently the key to getting performance out of the Ampere Altra is … to just not use Windows? And have a very parallel workload. Yes, a single Altra core is half as fast as a single M1 Ultra core. But it really depends on your use case and other factors (e.g. power efficiency) which one is preferable.


What's wrong with the performance?


A comparable m1 is twice as fast in the above benchmarks?

And that’s Apple’s chip from 2 years ago!

Core for core this is a dog.


altra is also > 2 years old, so the age of the apple chip doesn’t matter

Neoverse N1 is hardly a dog. It’s a decent enough core in a server processor that has 80 of them.

Altra is not intended to compete core for core with any laptop/desktop/workstation processor.


Surely if you consider Apple hardware you don't care about performance, price, or supporting an ecosystem that underpins the very essence of what you are trying to do.

Buying Mac hardware for a Linux workstation is madness.


I really would like someone else to put out something competitive in that case, I’m rooting for them but haven’t seen it yet.


For a workstation there are tons of options. I'd argue the Mac Studio is a very weak offering for that use case.


I would argue there are zero options in x86 land that will give you the mac studio form factor (tiny, stylish, quiet) with that level of performance. PC workstations in that class are noisy big beasts. For some people how it looks is part of the set of requirements, and it is not “madness” to also care about that.


Noisy beasts... Hyperbole much?

But yes, if by workstation you mean shiny then sure!


> In some cases, it is too fast. When I installed K3s, all of the containers in the kube-system namespace kept entering the dreaded CrashLoopBackOff state [..] After some investigation, I found out that the Mac Studio was just too fast for the Kubernetes resource timing

I’d like to understand what the issue here is. Sounds counterintuitive to me.


It’s called a race condition. They never expected something to complete before another thing.


Yup. It is also a bug, and a very difficult one to catch, so the original devs would probably be very grateful for a report and help in fixing it. I hope OP opened an issue.


I read that as "the container ran so fast that the orchestration assumed it was failing".


Classic that the only thing they reported didn't work was Kubernetes! Kubernetes is really such a complex and fragile system, at least in my experience.


Same here. OP, can you share the limits you added? Or is it at the namespace level?


No problem. In the end, I had to actually set CPU limits on the pod level for traefik, svclb-traefik, metrics-server, coredns, and the local-path-provisioner.

I initially chose a limit of "500m" for all of them except coredns (which seemed to be fine), but there were still some occasional issues with the metrics-server and traefik, so I increased those to "750m" and that solved those issues, but it caused coredns to CrashLoopBackOff. After setting a "500m" limit on coredns, everything ran smoothly.

So, essentially, I set a "500m" on all of the pods in the kube-system namespace except for traefik and metrics-server (they got "750m"), and of course I didn't need to set limits on the helm-install-* pods.

I didn't modify the default memory limits at all.
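
For anyone wanting to reproduce this, here's a rough sketch of applying those limits; the object kinds and names are my guess at the usual K3s layout, so verify with `kubectl -n kube-system get deploy,ds` first:

    # Set CPU limits on the kube-system workloads mentioned above.
    kubectl -n kube-system set resources deployment/coredns --limits=cpu=500m
    kubectl -n kube-system set resources deployment/local-path-provisioner --limits=cpu=500m
    kubectl -n kube-system set resources deployment/metrics-server --limits=cpu=750m
    kubectl -n kube-system set resources deployment/traefik --limits=cpu=750m
    kubectl -n kube-system set resources daemonset/svclb-traefik --limits=cpu=500m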


Just so that I understand this correctly: he bought a $6000+ Mac Studio with a 20-core M1 Ultra processor and 128GB of RAM. But all these cores, ram, bandwidth, storage are not important, but rather the fact it is an "ARM64" workstation is what was the game changer.

Seriously? Would a $6000+ x64 workstation be noticeably slower?


No doubt having lots of threads and lots of memory make this a very nice workstation.

But it's about twice the price of a 16-core AMD 7950X, which has about 150% of the performance both per core and overall. The M1 certainly might win on power, though.

Using a machine of this class is mind blowing, that author isn't wrong about that. But there's still about a 3x price/performance gap. I love ARM and I can't wait for this to get more mainstream, but it's not quite here yet if you are cost conscious.

https://www.cpubenchmark.net/cpu.php?cpu=AMD+Ryzen+9+7950X&i... https://www.cpubenchmark.net/cpu.php?cpu=Apple+M1+Ultra+20+C...


Not sure, but anecdotally I have a machine running linux that was $6,000 and definitely feels slower than my M2 macbook air.

Mostly it's a question of latency, I would imagine.

My machine:

Archlinux “Zen” (lower latency kernel).

Threadripper 3970x

256G DDR4 2666MHz

3x4TiB NVMe (PCIe Gen3) in RAID0

Radeon VII GFX card (more than enough for 2D render of terminals).


I attended a talk by a PhD student who specializes in CPU/architectural research, on the M1 as an architecture. There are some key differences in design, such as unified memory etc, but what accounts for a large part of the performance gap with x86 is the fact Apple's compilers can target their specific microarchitecture and optimize for that, because they know its performance characteristics well.

We know the performance characteristics of various x86 cores, however there are simply a lot of them, so most precompiled x86 code is generically targeted but likely has suboptimal parts on different architectures. You can, of course, -mtune= with gcc, but you'd basically need to rebuild your distribution. Or in other words, the ideal would be to compare someone running Gentoo vs macOS.
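
To make the -mtune point concrete, here's a hypothetical comparison (hot_loop.c is a placeholder file):

    # Generic baseline most distro packages ship with:
    gcc -O2 -march=x86-64 -o hot_generic hot_loop.c
    # Tuned for the local CPU (not portable to other machines):
    gcc -O2 -march=native -mtune=native -o hot_native hot_loop.c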

So, that was the gist of the talk and I more or less accept the conclusions. Now for my own view, I think what makes ARM processors interesting is their efficiency wrt power consumption. I would love a non-Apple ARM laptop if only because intel cores are much less efficient in terms of power they require.


> what accounts for a large part of the performance gap with x86 is the fact Apple's compilers can target their specific microarchitecture and optimize for that, because they know its performance characteristics well.

OP is running Arch though, which is probably built with gcc/clang targeting generic ARMv8(.0?).


I've a friend in the industrial and academic chip development space.

They basically agreed with what you said. ARM isn't some magical compute-per-watt wand that Apple waved; it's their incredibly tight software and hardware integration that allowed them to optimize.


Interestingly I actually ran gentoo for a while on that machine precisely for that reason, but the overall latency improvement was marginal.

Certainly not worth recompiling things all the time in my opinion, even with a beefy machine like mine.


Owning every layer of the platform helps with the microarchitecture design as well: They’ve got massive quantities of real world profiling data / instruction traces. These are invaluable in assessing the trade-offs in how you do branch prediction, cache associativity & a thousand other decisions.


> Archlinux “Zen” (lower latency kernel).

that might be the problem

Low-latency kernels aren't of any help unless you're running realtime (or near-realtime) workloads. Not only that, in order to guarantee latency for high-priority tasks they will hurt regular process performance.

For desktop usage, running a "low latency" kernel is probably a bad idea (no wonder "low latency" kernel configs are not the default choice).


I think you have it backwards.

The lower latency kernels are great for desktops (and the default for desktop distros typically). They sacrifice raw power for lower latency.

So my compiles take a bit longer as the kernel will switch tasks more often, but keypresses will be captured with reduced latency from my perspective.

I have tested this; the stock Arch kernel has considerably more latency on inputs.


"Lower latency" usually means lower peak, not average. E.g. <5 99% of the time and 25 1% of the time is "high" but much better for desktops than <15 100% of the time (made up numbers). So you usually avoid low latency or real time kernels unless you need the <15 guarantee.


I think we're talking about latency in different terms. I'm referring to the kernel timer tick, i.e. how often the scheduler is invoked to handle timer interrupts and switch tasks, which is configured in Linux with the following keys:

    CONFIG_HZ_100   # Typical for server workloads
    CONFIG_HZ_250   # For servers that need to switch tasks more often (think: overprovisioned cloud providers)
    CONFIG_HZ_300   # For media servers/workstations, typically
    CONFIG_HZ_1000  # For desktops, typically
The documentation suggests that for "workstation/desktop" operations the timer should be set to 1000 Hz.

Why does it recommend this? Because the higher the polling rate the quicker it will see inputs from human interface devices like mice and keyboards and respond to them.

A higher tick rate here will mean more interrupts (meaning lower throughput but lower latency in responding to interrupts). If you're talking about process latency of non-interrupted processes, then a lower value is preferred, because you don't really want your program to wait around for more CPU time while Linux polls all of its buffers and tries to schedule other tasks.

You can easily see what is configured by asking the kernel for its configuration param: `zcat /proc/config.gz | grep -E '^CONFIG_HZ'`


Depends on your workload, I guess. If you are just compiling a lot of code or running containers with a non-server-like load, then most of the features of this computer are useless. Also, GUI software will feel significantly worse.

Lastly, Threadripper has a NUMA architecture, which is not very friendly to user applications. Frankly, they are very bad personal computers in general, only worth it if you need that much memory or that many PCIe lanes, which doesn't seem to be your case.


> Asahi is a Linux distribution that can run natively on Apple Silicon-based Macs due to some slick reverse engineering provided by members of the open source community. Moreover, running Asahi is perfectly legal because Apple formally allows booting non-macOS operating systems on their Apple Silicon platform.

Umm, how would it ever not be legal? Even if Apple doesn't formally allow it, you're free to do whatever you want with your own hardware.


The DMCA made it illegal to circumvent software locks.


That doesn't sound right. Hasn't this been overruled a couple years ago?


That's only if it's being used to circumvent copyright?


I think they were referring to the bootloader not being locked down.


IMHO the only relevant parts left are: sleep (which is being tackled right now), extra displays over USB-C (also WIP), and hardware video decode.


Linux 2013: Sleep support doesn’t work quite yet, but it’s being worked on right now

Linux 2023: Sleep support doesn’t work quite yet, but it’s being worked on right now


You can thank Microsoft for the current mess. S0ix (aka "Modern Standby") doesn't even work right on Windows.


Linux laptops with no S3 support use S2, not S0 as on Windows. Still buggy as hell. My XPS sometimes loses 70% in 4 hours of standby and then just ~5% in 2 days of standby. Most of the time the battery drain is insane. At least the overheating-in-bag issue has never happened to me.
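
For reference, you can check which suspend variant a given laptop actually uses; the bracketed entry is the active one:

    cat /sys/power/mem_sleep    # e.g. prints "s2idle [deep]"; deep is S3-style suspend, s2idle is suspend-to-idle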


I've still had better experiences with closed laptop lids when using Linux in comparison to windows in the last decade.


and audio and screen brightness at least...those don't work on my M2 setup.


> and audio...

Overall tip for anyone using Linux and having issues with audio... USB audio is magical and works wonders. I never bother with the computer's onboard sound: it's USB audio immediately, through a DAC. Works fine. Apple sells a $10 USB-C-to-3.5mm-jack DAC that is basically a tiny cable (for anyone lamenting that it's "too bulky" to have a full-on DAC next to their laptop).

For desktops the problem doesn't even register. Just bypass the audio components on your motherboard and use a proper DAC.

As an added bonus, computers are particularly electrically noisy, and an external DAC is a great way to do away with all that noise.

So my advice: just use a DAC with Linux. Plug it in. Marvel at "dmesg" showing "USB Audio" and enjoy quality sound.
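
If you want to sanity-check that the DAC was picked up, the standard ALSA tooling will show it:

    dmesg | grep -i audio    # look for the kernel's "USB Audio" line
    aplay -l                 # the DAC shows up as an additional ALSA card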


I've been doing this to get my computers connected to a KVM to share speakers. Works pretty well; I bought a Sound Blaster for nostalgia's sake.


This will use 10-15% CPU (intel and M1 Pro).


Be somewhat careful what USB audio device you buy. I've had multiple of these and some of them have tons of EM noise, which is very obnoxious.


> I've had multiple of these and some of them have tons of EM noise, which is very obnoxious.

Ah darn yup! If there's one thing I'd expect from these is to not be noisy! FWIW I'm using a CambridgeAudio DAC and, well, CambridgeAudio is supposed to be a good brand when it comes to audio stuff (amp is CambridgeAudio too).


> CambridgeAudio DAC

It looks like these start at $250? I'd expect them to work well. I'm more wary of this pricepoint:

> Apple sells $10 USB-C-to-3.5mmjack DAC

I've gotten a lot less EM noise with builtin soundcards than cheap USB DACs, but YMMV.


> cheap USB DACs

These numbers are several years old now, but Apple USB-C DACs do perform very well overall regardless of price point: https://www.audiosciencereview.com/forum/index.php?threads/r....


> It looks like these start at $250? I'd expect them to work well.

Price inflated, like the price of nearly everything. I paid 150 EUR for mine back in the day (still not cheap, but not the four-digit high-end audiophile stuff either).

The thing is: you want things to work fine from Linux's side, and in my experience USB audio on Linux just works.

I'm sure there are other brands/models at reasonable price points.


EM noise is something that someone would notice, so I'm sure Apple would have nuked it from orbit until it's completely imperceptible.


Yeah, to be clear I expect the Apple ones do fine. The ones I had issue with were not Apple.


I've been pretty happy with Dragonfly[0]. Tho they don't come cheap.

I have had the black and am currently on the Red, I noticed the difference so I think it's worth it. Have not tried the cobalt.

[0] - https://www.audioquest.com/dacs/dragonfly/dragonfly-cobalt


Screen brightness works on the beta kernel. Speakers work but are disabled by default because there is a risk it could blow up the hardware


> Speakers work but are disabled by default because there is a risk it could blow up the hardware

Wait, what? This sounds like a pretty egregious design flaw on Apple's part.


The (very good) writeup conveys this much better than I could, but briefly for folks not inclined to click/read through: it’s a design trade off. Apple’s solution conditionally sends excess power to the speakers—higher power to achieve better sound under safe conditions, lower power to protect the speakers from damage. It’s quite clever (and quite effective, although I’m sure some audiophiles would disagree), but it’s not the kind of design you’d want to use without an equivalent solution. It’s also an unintuitive benefit of having a whole lot of compute headroom to play with, where you can create an intentional footgun that has meaningful benefits but risks physical consequences, and then spend some of that headroom to disarm the gun.

A pure software analogy (because that’s what I know better) would be something like: race a bunch of speculative variant algorithms in parallel, knowing that you’re “wasting” compute time, where the probability of speculation being more optimal than hand tuning is high and pathological cases have a “safe” straightforward deoptimization.



Speakers are just things that move when you send electricity to them. It's not hard to blow up something like that if you try, especially if they heat up over time.


Then why isn't this a problem with the built-in speakers of any non-Apple device?


It is, you're just not trying hard enough or you're using the official drivers that are software limiting it.


They either cap them at the “always safe” level which results in much less volume, or you actually can blow them up but you just haven’t played the right sound to trigger it.


The tldr is that Apple drives the speakers way past their “always safe” level and then uses software in the OS to detect and dampen the uncommon sounds which could damage the speakers.

Resulting in exceptional audio performance. The Asahi Linux devs have already reverse engineered and implemented this safety system but there is a risk they have made a mistake or it doesn’t get enabled due to a bug.


I don't get the hype around Apple Silicon for desktop computers. I sort of understand the power efficiency argument for laptops (although it is often overstated IMO; I could work for >10h straight with my X1 Carbon 6 years ago), but for a desktop it is trivial to build a PC that runs significantly faster, with much more upgradability and flexibility, for a fraction of the price. So if I want performance on a Linux system, why would I choose Apple Silicon?


>for Desktop it is trivial to build a PC that runs significantly faster <...> for a fraction of the price.

Is it though?


A Mac Mini M2 Pro with decent RAM and storage starts at 2000. According to Apple, current Intel and AMD chips are faster but much less efficient (ignoring how to define that for now). If you are the "mini" target customer, 2000 will buy you lots of PC hardware.

The article lists an M1 Ultra with 128GB of RAM, so 6000 or so. Not sure what the target market for this is, but the intersection of "4000 does not cut it" and "I don't need high performance computing, clusters, multiple GPUs, ECC, or more RAM than that" seems almost empty to me.


>current intel and amd chips are faster

These are close to 2k for CPU alone AFAIK.

I must admit, though, I am biased, as I have yet to find a fault or a reason for an upgrade in my original M1 machine, and I had a dual-Xeon workstation prior to that.


Apple compared to the current 13th generation i5 and i7 chips. Those are not remotely close to 2k.


What if you don't want to build a PC and need ARM64 (let's say at work they run stuff on ARM64 servers for whatever reason, those fools!)?

What if I just want to buy a thing. And then later also be able to sell it maybe.


> So if I want performance on a Linux system why would I choose Apple silicon?

Power draw for one; it applies to desktop PCs as well. Electricity isn't always cheap.


More than 19 hours on X1 Carbon? For real? Mine barely lasts 6 hours and it's 2 years-old. I'm using Linux without a GUI.


The claim was more than 10 and seems accurate, e.g.

https://www.notebookcheck.net/Lenovo-ThinkPad-X1-Carbon-Gen-...

Also just throwing out there that the X1 weighs about 500g less than a MacBook pro and a powerbank is about 500g.

That said, if you are fine with weight and battery life is important, 2015 was the year to be:

https://www.anandtech.com/show/9623/the-lenovo-thinkpad-t450...


Are you making use of Cunningham's Law?


> Moreover, running Asahi is perfectly legal because Apple formally allows booting non-macOS operating systems on their Apple Silicon platform.

How nice of Apple. #snarc


#snarc beats /s any day. On a serious note, if you want to save on the power bill, ARM anything wins against my 1000-watt Xeon workstation. I can heat several rooms with that monster.


If you need AVX-512 (x86-64-v4), then the Ryzen 9 7950X3D with 3D V-Cache is also a good alternative.

"AMD Ryzen 9 7950X3D Linux Performance"

https://www.phoronix.com/review/amd-ryzen9-7950x3d-linux

"From the nearly 400 benchmarks, when taking the geo mean the 7950X3D was at 97% the performance of the Ryzen 9 7950X while on average being at 60% the power consumption rate. The Ryzen 9 7950X3D in these non-gaming workloads was 11% faster than the Intel Core i9 13900K and at around 60% the power."

+

"AMD Unveils Ryzen 9 7950X3D, 7900X3D, and Ryzen 7 7800X3D, Up to 128 MB of L3 Cache And 5.7 GHz Boost"

https://www.anandtech.com/show/18709/amd-unveils-ryzen-9-795...

HN Thread https://news.ycombinator.com/item?id=34260253


It's a shame Apple is Apple and you can't pay a reasonable price for RAM, otherwise these would make awesome servers.


That's the cost for 10x the memory bandwidth of the Ryzen 7950x.


Nope, that's the cost of a monopoly. However fast or special Apple's RAM is, it doesn't cost them 10x the price of the competition; it costs us that because we can't shop around for another supplier.


It does cost quite a bit more though. Otherwise everyone would ship 1k bit wide memory interfaces instead of 128 bit wide.

Just look at the premium AMD and Intel charge for 256 bit wide memory, let alone 1024 bit wide. Xeon servers have 512 bit wide memory, AMD Epycs in the newest gen have 768. Apple's unique in having a 1024 bit wide memory interface, at least among commodity hardware.


To be fair, what you're getting is a hardware configuration that "just works." By controlling the hardware configuration, the manufacturer is controlling the user experience. Yes, components like storage and memory are built to standards but interoperability problems do come up.

By controlling components/configs, they (Apple) doesn't have to field angry tech support calls and go down rabbit holes as to why a random RAM manufacturer's DIMMs don't work with my machine.

Apple also does this with thunderbolt products, which have a very strict certification process so that user in theory should have a user experience where the products "just work."


Have you ever bought a non-Apple computer? When hardware fails it's not because the configuration "just doesn't work"; it's because crappy OEMs ship garbage-quality components that die, and when you buy one of those machines it's really a race to see what fails first: an internal component or one of the poorly designed external components made of shitty plastic.

But that’s only for the cheap computers (a market Apple doesn’t serve). Most machines in the $800+ range offer excellent value and reliability (except Dell).

> By controlling components/configs, they (Apple) doesn't have to field angry tech support calls and go down rabbit holes as to why a random RAM manufacturer's DIMMs don't work with my machine.

I think you severely overestimate the number of people who install aftermarket parts into their computers. Furthermore, Apple offsetting the cost of tech support by overcharging customers for hardware is really shitty. Maybe if you’re a shareholder then that’s a positive statement, but this thread about pricing is obviously from a consumer perspective.

ALSO, it’s ignoring the fact that Apple’s answer to any tech support question is “buy a new one”. So whatever they’re doing with that extra money, it’s not going into tech support.

Why should I pay 10x RAM prices? Because nobody else can sell RAM for Apple’s latest computers, and nobody else can compete directly with those computers yet, and Apple is taking full advantage of that.

If a competitor starts offering comparable hardware, they’ll be able to significantly undercut Apple simply by offering RAM (and storage) at reasonable prices. Until then, Apple will be emptying wallets.


That's... comically untrue. The Playstation 5 has faster memory than the M1 (448gb/s vs 200), and it was manufactured by AMD before Zen 3 or Apple Silicon even shipped. Dual-channel DDR5 should smoke the LPDDR4X in Apple Silicon in terms of memory bandwidth.


How so?

Memory bandwidth (peak, not stream)

  Ryzen 7950x =  84GB/sec #  128 bit wide ddr5-5200
  M1 =           66GB/sec #  128 bit wide ddr5-4800
  M2 =          100GB/sec #  128 bit wide ddr5-6400
  M1/M2 Pro =   200GB/sec #  256 bit wide
  M1/M2 Max =   400GB/sec #  512 bit wide
  M1 Ultra  =   800GB/sec # 1024 bit wide, m2 ultra not out
800GB/sec / 84 = 9.5. Although the ARM64 has a more relaxed memory ordering and generally you see a greater fraction of peak in the real world. Also Mac's default to a 16kb page, which helps the TLB with random workloads. Memory bandwidth is part of why the Apple iGPU does so well when compared to Intel and AMDs best iGPUs. Similarly the improved memory system is why the PS5 and XboxX does so well on games.

Sadly AMD reserved the improved memory system for the Xbox Series X and PS5. Both AMD and Intel limit laptops and normal desktops (except for the expensive HEDT segment like Threadripper) to 128-bit-wide memory.


The latest Intel/AMD non-HEDT memory controllers only have two channels, so your DDR5 would drop to below-DDR4 speeds if you put 128GB DDR5 in it.

(32GB dual rank sticks * 4 -- there are no 64GB DDR5 sticks yet. Some reports of system instability with all four DDR5 slots populated, too, even when running at DDR4 speeds, it seems the motherboard manufacturers aren't QAing it.)


Once again reminding folks to still be angry at AMD for giving up on HEDT. It sucks so bad that one has to go to server parts to get >128-bit-wide RAM. So sad, such a bad limitation.


Heh, well, seems silly to have a 2 channel standard, 4 channel standard, and TWO 8 channel standards. The volumes don't justify it, and it's just a big waste of money. It contributes to expensive motherboards among other things.

Seems like Intel and AMD are heading towards an 8 channel standard and the cheaper chips will enable only 4 of those channels.


I'd be curious to see properly done benchmarks against top of the line Intel or AMD machines. I'd be surprised if the Mac was faster.

I do understand that people want the Mac for the build quality of the laptop, not just for speed.


> I do understand that people want the Mac for the build quality of the laptop, not just for speed.

I find LG Gram laptops to be of higher build quality than Mac laptops. They may be pricier too depending on which LG Gram and which Mac you buy though.

Mac laptops feel and are incredibly brittle. They may look shiny and the screen may look gorgeous, but they're simply brittle.

People here and there like to make fun of "MILSPEC" but my MILSPEC LG Gram can sustain quite a beating. Meanwhile my M1 Mac is 1/5th the age of my LG Gram and the M1 Mac's screen is already broken.

Mac laptops are very good looking but they are not, to me, of great build quality. They're also heavier than my LG Gram.


"incredibly brittle" sounds like a massive exaggeration. I've been using MacBook Pros of some form as my main machine for a decade now and they've followed me around the world, usually in a backpack without proper padding, used in very humid environments throughout Southeast Asia, dropped, used outdoors with volcano ash falling on the machine, rain coming in sideways that I've failed to shield it from, I've taken several spills on my bike where I fell on my back with the backpack with laptop in it taking the fall, they've held up to everything. The only failure was that g*ddamn butterfly keyboard.


I agree with what you're saying but feel the issue is more durability than build quality.

Apple's MacBook Pro has very good build quality but that doesn't mean it isn't fragile. The screen and trackpad are glass for example so going to be far more brittle than most plastic options.

That isn't a build quality issue though, at least not to how I define build quality which is fit and finish and excellent quality assurance. I've had expensive Dell, Lenovo and HP laptops with "Premium" displays that arrive with stuck pixels. Hinges not properly aligned. Squeaky keyboards and trackpads. Speakers that pop, etc. That is poor build quality and crap QC.

The Macs have near-perfect build and QC, but they sure are less durable than many other laptops.


Only anecdata of course, but my several years old MBP has recently taken a few beatings (falls to the floor at various angles from 3-4 feet, getting narcoleptic with age is fun!). I haven’t noticed any problems I could attribute to any one of these incidents or their cumulative impact. There are a few very obvious dents in the casing, even behind the screen which I’d expect to cause distortion on the display… but nope, the tank keeps taking beatings without a problem. Granted there are other ways the MBP is showing its age but they’re just more pronounced versions of issues that well predate any physical incident.


It is a bit odd, Apple’s laptop chips are of course very good. But breathless articles like this make me wonder if the author is not aware that Intel and AMD have desktop chips? Especially in single core performance, Intel’s designs are, from the ground up, built on the assumption of higher power budgets. Apple’s “workstation” chips are a bunch of (very good) laptop cores, they can only go so far.


Yeah, looks like Intel desktops are currently ~10% ahead of Mac laptops in single-threaded performance:

https://browser.geekbench.com/processors/intel-core-i9-13900...

https://browser.geekbench.com/macs/macbook-pro-16-inch-2023-...

(Admittedly the Mac Studio is slower, since it was released ~360 days ago - probably not the best time to buy one, but who knows when they'll update it.)

The author states the desire is to match the architecture of the servers they deploy software to, which seems reasonable - depending on the software, it can be very useful to be able to run/debug the same binaries locally.


How long does one of those things function if I take out its processor fan?


Don't forget that it's the best bang for the watt (at least when running macOS).


Is it? A mac mini with m2 pro, decent ram, decent storage is $2000+. You can buy quite a few Watt hours for that.


If the base model M1 has the performance you need, it performs very well for the price and has a very low power usage.


> Everything - and I mean everything - is unbelievably fast.

As a long time Linux user and someone forced to use MacOS, my hands are itching to buy a 2nd M1/M2 mac and install Asahi linux on it. I've never felt such excitement since the days of Compiz.


Don't do it. It's tempting. I bought a high-end System 76, and I am very happy with it.


Is anything interesting happening for non-Apple desktop/laptop Arm?

At the moment building for aarch64 pretty much means embedded, Mac users on the team, or Graviton/other Arm servers.

Are we going to reach a point where there's a meaningful mix of architectures, but it doesn't point to the brand of device? Of course Apple will continue with only one thing, but that Lenovo might have options from Arm licensees as well as Intel/AMD?


I use a ThinkPad X13s and I'm happy with it.


Oh I didn't realise they existed! Nice. Performance/specs-wise, I haven't looked in-depth, but seems they compete more with iPads/other tablets + keyboard cases than with Macbook Pros/x86 options in that class and price point though?

Thanks though, seems promising. I've been interested in Asahi since I first saw it here, but I can't help wondering if, long-term, the better buy might not be a ThinkPad, Framework, or whatever that only ever expected Windows or Linux. (The Framework has probably been enough to quash the draw of "Mac hardware/build quality but with Linux" for me.)


Performance sucks compared to the M1 :)

I use it for dev so I don’t care. Don’t expect to train any models with it.

It’s very snappy for media consumption though, no issues at all there (if you stick to ARM native apps).


It's cool that Arm64 chips are catching up in speed, but the software support has been there for a long time.

When I bought my Raspberry Pi several years back, I was able to comfortably develop using Emacs, gcc, and SBCL Common Lisp - the same software I use on my AMD desktop. Everything I needed "just worked"; compiling just wasn't very fast.


Ha, Emacs, sbcl and gcc are exactly what I've been using for development for the past few months on my Raspberry Pi 4 desktop replacement.


If you have two raspi, 4GB and 8Gb models, you can use tmux, btop to check your memory and cpu levels.


Are these ARM64 Mac using ECC?

P.S: the 3440x1440 screenshot in TFA looks gorgeous on my system. Everything is using subpixel AA (terminals, code editor, browser, except for the URL, for whatever reason) but not the file manager. I wonder why that is.


> but not the file manager. I wonder why that is.

It's documented: https://gitlab.gnome.org/GNOME/gtk/-/issues/3787


So that's what happens when you use 3D rendering approaches for 2D text and line drawings. That's too bad. Hopefully we can come up with better approaches to using the GPU's compute hardware for this stuff. Pixel-correct rendering is not a trivial concern.


Current ARM Macs don't have ECC, just like how the Intel iMac/Macbook don't have it. Likely the ARM Mac Pro will have it.


Agreed, a 1080p or 1440p screen with good pixel-sharp graphics (which Mac OS does not provide, leaving you with blurry anti-aliasing) is all you need. 4K resolution is overkill.


the 1440p screen on my macbook looks incredible on macOS. Apple has been on top of the font rendering game for a long time now


Because you use a 1st party screen. It's like saying "Linux doesn't have any problems, look at my Pixel phone". Try multiple 3rd-party ones that aren't recommended by Apple and you'll see where the problem lies.


Blanket statements of "everything works" always leave me a little suspicious. But if sleep/wake, DPMS, power consumption, fans, and those kinds of irritants, which are omnipresent even on x86 Linux, are all good under Asahi that is pretty cool.


We need more devs to use linux. It is a better experience IMHO. I've been donating to the asahi project every month hoping apple hardware gets finally liberated.


I wasn't planning to switch to ARM anytime soon, but I was looking for a new Linux distro, with Ubuntu heading off in its own "Canonical" way.

Debian seemed to be a natural default, but I'm leaning toward an Arch distro, which would prepare me if I switch to ARM down the road.


Pop!OS is worth looking at if you like Ubuntu but don't like where Ubuntu is going.

Plain debian is fine though - unless you have a recent laptop and need nvidia drivers. You can make it work, but it's fiddly and the Pop! guys already figured everything out for you. I use debian on an old HP deskside server as a workstation and it's been fantastic.


This was my first thought before Debian, because I've noticed Debian packages don't always work like the Ubuntu repo ones, and the latter are what the large majority use; I don't mean desktop things but rather server things. The concern is how much influence Canonical will have on the repo contents itself. If distros have to override Ubuntu repo packages, what's the point? I could wait and see if things go that way, or choose differently. I like voting with my feet.


That's really impressive!

That said, I really wish someone other than Apple would release something like an ARM64 NUC, with good Linux compat. Even something on par with the SD Gen 2 should be plenty to run a desktop day to day.


I'm not trying to invalidate your specific use case, but in general, there's not really any practical difference between ARM64 and x86 for desktop use. ARM is generally perceived as being more power efficient, but the latest x86 mobile processors from AMD (and probably also Intel) are very efficient. The Apple M series does have very tightly coupled RAM and onboard ML & video encode accelerators, but when it comes to plain old compute power & efficiency, AMD and Intel have pretty much caught up afaik.

There were recently a couple of new NUC-type devices based on the outgoing generation AMD mobile processors. I have the Minisforum UM690, based on the Ryzen 6900HX, and it's been working great as my new desktop computer. The iGPU is pretty good too. The Ryzen 6800U (U = efficient variant, 25 W TDP) is about the same as the M2 for performance and power consumption. The 6900HX in this device is the high power variant, default TDP is 45 W but you can lower it to 25 W in the BIOS which should make it about identical to a 6800U. The newer ones with the latest gen Ryzen chips should be much more efficient.


Dunno, the M1 Max (400GB/sec, twice a Threadripper) fits in a thin laptop, performs well, and has a decent iGPU. That's more bandwidth than a Threadripper, a better iGPU than anything Intel/AMD ships, and it still sips power.

The M1 Ultra doubles the bandwidth again to 800GB/sec, and fits in a small and quiet desktop. What CPU+iGPU (or GPU) do you think is comparable to the M1 Ultra and competes on performance and power?


PS5 is $500 also has 400GB/sec memory bandwidth. Very good iGPU made by AMD. Also has a Blu-ray drive.


Sadly they are hard to run Linux on 8-(.


PS5 runs a FreeBSD distro out of the box!


Sadly uselessly so. At least last I heard the updates prevent the known jailbreak, so no root for ps5 owners anymore.


BTW, my M1 Mac barely goes over 40W total power (whole machine); heck, watching Prime Video the total is at ~10W. Trust me, no matter which Ryzen/Intel processor, you will never be as low as 10W total power.

I'm pretty sure the biggest Intel/AMD desktop CPU is faster than an M1 Pro/Ultra or M2 Pro/Ultra, but there is no amd64 processor with the same efficiency as the M1/M2 Macs.


> btw. my m1 mac barely goes over 40 w TOTAL power (whole machine)

The V2718 is an embedded Zen 2 SoC with a Vega GPU that totals 10-25W cTDP. Adding RAM and a main board to that might add 10W, give or take.

> heck watching prime video my total power is at ~10w total power.

Watching video is probably one of the most efficient tasks you can do these days, especially since most of it is offloaded to specialized decoders in the GPU (of which AMD, Apple, nvidia, Intel, etc all provide).

> trust me no matter which ryzen/intel processer you will never be as low as 10w total power.

They literally exist. There’s just no market for them. The sweet spot for PC users is around 15-25w for laptops. The loss in performance below that is generally something people are dissatisfied with.


Try rereading my comment. Your M1's 40 W total power is not special, the 6800U does that with roughly the same performance.

My 7 year old i7 laptop used 5 W idling with a few applications open and the screen at max brightness. I don't know what a modern non-Apple laptop uses when watching video, but 10 W seems about right.


I can get under 8W playing Morrowind on the Steam Deck. The CPU and GPU use less than 2W while playing.


Just a tinkerer - no real specific use case. I also have a UM690 that I'm using as my primary desktop, pretty sweet little machine.


> in general, there's not really any practical difference between ARM64 and x86 for desktop use

... provided everything you and your team does is 100% on the web, otherwise if you're a normal dev shop building and consuming docker images then you'll chase innumerable reports of "well, on my machine I can't use docker.io/example/thingy:1.0 I had to use docker.io/eleeetz-safe-4-sure/thingy:0.33.05beta-arm64"

Thus, it would be better if the whole company were on the same CPU and since there's no longer any prayer of amd64 macOS ... we Linuxers have to capitulate
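
One partial workaround for that mixed-architecture mess is to pin the platform explicitly, so everyone runs the same amd64 image under emulation (slower, but at least identical); the image name here is just the placeholder from above:

    docker pull --platform linux/amd64 docker.io/example/thingy:1.0
    docker run --rm --platform linux/amd64 docker.io/example/thingy:1.0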


I was really tempted to try this out: https://news.ycombinator.com/item?id=34027280 but the 16GB of RAM was too tiny for my day-to-day. It seemed to be a common trend in arm64 things one can buy nowadays


Good info. After reading this, I bought an Orange Pi 5 with 8GB of RAM, and now it is up and running with Ubuntu 22.04. The spec seems to be similar to a c6g.xlarge.


Not a NUC, but there is the ThinkPad X13s. Linux support seems to be shaping up decently, with some hope that it might be supported by stock ubuntu in ubuntu 23.04.


Asahi Linux is amazing and pretty usable at this point. However, last I tried I couldn't get stuff like Slack and Spotify to work on it because there are no Arm packages for those. Has anyone found a way around that?


Electron and apps built with it are missing from most ARM64 Linux distributions.

Once you've built Electron (or downloaded their binaries), many of the builds fail because they download something that requires x86_64, and then you get to troubleshoot that.

Flathub doesn't have aarch64 builds of most of these apps either, so I'm assuming it is not just me having issues.

So much for cross platform. It is doable for the open source apps if you want to get your hands dirty though.

It did motivate me to look for non-Electron replacements (to Electron apps, not Slack and Spotify specifically), so I've got that going for me.

Sent from an RPi4 I use as an always on workstation.


Apple released Rosetta 2 for Linux. It's for running x86_64 binaries inside of arm64 VMs. I've heard rumours people have it running on Graviton instances and other arm64 devices... I'm guessing it would work with Asahi, although it might be against some ToS.


You could have a look at box86 and box64:

https://box86.org/ | https://github.com/ptitSeb/box64

I haven't tried them myself, but I've read consistent testimony that the performance (and compatibility) is great.
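
From what I've read, the basic usage is just prefixing the x86_64 binary with the emulator (the binary name here is a placeholder):

    box64 ./some-x86_64-program    # runs an x86_64 Linux binary on an arm64 host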


When I run across this issue I just use their web apps in a separate browser. Slack's "native" app is just a sequestered browser instance. Not sure about Spotify.


You can make your "own" Slack app with a few lines of code using some WebView library, like Tauri and Rust, for fun :-D


Use the web versions?


The web version of spotify doesn't work on my m2 air in either chromium or firefox. The Spotify website errors with: "Playback of protected content is not enabled."

Slack works just fine from the browser on Asahi though.


You're likely missing the Widevine DRM plugin. IIRC, many distros don't include it, or don't enable it by default.


Where is the publicly available linux/aarch64 build of widevine drm?

The fetch-latest-widevine.sh[0] script says 'Architecture not supported'.

[0]: https://github.com/proprietary/chromium-widevine/blob/master...


Well, to answer this question if someone comes looking: there's no official aarch64 widevine release, but chromeos just started shipping a 64bit arm version of widevine in the last month. But that version is incompatible with the 16k page size kernel that asahi ships with. There was a report in the asahi irc channel that a manually patched libwidevinecdm.so works.


Perhaps the universe is telling you something there? :-)


At least for Spotify, there are many other options, like spotifyd and spotify-tui. Also one decent GUI app built with Rust, if you want to try them.


I've been running the `psst` Spotify app on my M1 the last few days and it's great.

Sadly I miss out on Spotify Connect and all of the social/playlist features. I'd love to use the web version, but Firefox doesn't support DRM on ARM, and I can't seem to get webkitgtk to compile with EME support.


You can't run any Electron apps. Is that a problem? :)


A bit off-topic: does anyone know what the best new laptop, available in EU, would be for Linux (exclusively)? Preferably something with a keyboard similar to the old ThinkPads (T480)?


The Tuxedo InfinityBook is the one device I've got my eyes on, but I haven't used it personally.


I like my NovaCustom I got at the end of last year, but I'm not sure how it compares to the ThinkPads.


Thank you, I wasn't aware of them. IIUC the laptops are made by Clevo? It looks like a very interesting option, I'll check them out.


Framework laptops also offer repairability and very good Linux support.


> In some cases, it is too fast. When I installed K3s, all of the containers in the kube-system namespace kept entering the dreaded CrashLoopBackOff state (something I’ve never seen before outside a production container). After some investigation, I found out that the Mac Studio was just too fast for the Kubernetes resource timing, and I had to add resource limits to each pod to remedy the situation.

You should consider submitting a bug report to the k8s project.


All sounds good to me. What I don't get is how running on arm64 would speed up i3. i3 is already fast enough to be unnoticeable and hardly performs any expensive operations.


I believe they're commenting on the fact that the sway Wayland compositor seems faster than i3. Might be due to how Wayland handles frame rendering, but switching between the two I tend to notice a more "snappy" feel from sway.


So more likely they are comparing Wayland and X.


I didn’t see OP saying ARM increased performance, just that sway felt more buttery/smooth and easier to configure.


Everything you run on it, though?


> This January, I installed Asahi Linux on Apple’s most powerful ARM64 system: the Mac Studio with a 20-core M1 Ultra processor and 128GB of RAM

Sucker. For the same price, OP could have had TWO Mac Studios Max 10-core with 64GB RAM each, which together would absolutely spank the Mac Studio Ultra 20-core with 128GB RAM. Why are people falling for this!?


I want to respond but I can’t tell if this is a joke or not.


It's not. Two Studios Max considerably outperform one Studio Ultra.


What workload do you have that is easy to run across two separate hosts? And how easy is it for you to maintain the workflow? Sounds like a pain in the butt unless you’re using them in a headless fashion.


Unless there is one single never-ending processing task... forever... I see your point. Generally, everyone else has a never-ending series of processing tasks that obviously can be shared between as many processors as are available. Otherwise, we'd only ever need one single computer and one single person operating it.


When I saw "ARM64 Linux Workstation" I immediately thought of my raspberry pi 4 at home (arm64). Kind of annoying really, you'd be surprised at the number of developer tool projects that only publish amd64, and I can't watch netflix without bending over backwards. I haven't gotten widevine to work on it yet :(


At least you have the option, since there are armv7 widevine binaries. Can't run those on M1.


Forgive a bit of ignorance on my end; does Asahi have x86_64 emulation? It would be cool to play Linux Steam games on Arm.


Performance will be terrible since you're emulating the instructions. You can try it with QEMU, for example.


Performance with Rosetta on macOS is good enough for most games that shipped for x86 macOS. With a decent enough graphics driver implementation it should be feasible to grab the Linux version of Rosetta and use that to play games. A lot of the emulation in Rosetta is straight instruction conversion; the hardware acts like an x86 CPU in most other ways apart from the ISA. It's not that bad as far as performance goes.



You could try the patched Rosetta2 for Linux + Wine or something!

https://github.com/CathyKMeow/rosetta-linux-asahi


qemu-user can emulate x86_64 binaries in userspace, but it's not going to be as fast as Apple's Rosetta.


Yeah, that's what I was afraid of; honestly the reason I haven't seriously attempted moving to Arm Linux full time is because I suspect that it would be great for 99% of stuff, but that 1% of ~30 years of x86 legacy junk would be some huge blocker that forces me to change to something else.


You can copy over Rosetta 2, but it's not there by default (i.e. you need to boot macOS, run a Linux VM with Rosetta 2 mounted, and then copy the rosetta binary out of it).


Nice. My particular poison is FreeBSD and I don't think they're quite there yet in terms of support for Apple's hardware. However, once they are I would consider it. Currently using a 10th gen Intel i7 NUC which I'm not unhappy with, but an M2 Mac Mini might be a nice successor eventually.


I would seriously prefer to jump from my x86_64 workstation to a RISC-V workstation.


All this free work to get Linux working on Apple hardware is only getting more people to vote with their dollars for Apple's locked down ecosystem.


That's great - still, I would love to see if it brings any improvement when using "modern" SPAs. Everything except those works great on my 2013 MBP.

Sure, compiling will take a bit longer than on that machine, but I'm not doing that very often - nor do I run 48-core Kubernetes microservice clusters, but to each their own.

Happy to hear about the HW support for Linux though.


I noticed on Apple Silicon that when you're emulating x86 via Colima, performance falls off a cliff.

Everything is instant for me on dwm on my AMD Thinkpad


Doesn't performance always fall off a cliff when emulating?


Sometimes translation can do an OK job, e.g. Rosetta 1/2, or FX!32 on Alpha. Still slower than native though.


Does anyone know how cost effective a base M1 Mac Studio is compared to a PC build at a similar price? There's also the obvious set of trade-offs: the Mac is quiet and compact but unrepairable, while PCs have nearly the opposite qualities. Is the power efficiency of Apple Silicon still a major advantage in a desktop system?


> Does anyone know how cost effective a base M1 Mac Studio would be compared to a PC build with comparable cost?

If you normalize to performance, building a comparable PC for things like compiling code can be about half the price. Intel and AMD’s latest top-end consumer CPUs are very, very fast and significantly cheaper.

> Is the power efficiency of Apple Silicon still a major advantage in a desktop system?

If you’re going for ultimate silence and/or you need the smallest machine possible, power efficiency matters.

If you’re spending 99% of your time in the code editor and web browser, your CPU is going to be mostly idle anyway and peak power usage basically doesn’t matter. A decently configured AMD or even Intel system with reasonable fan curves can be plenty quiet for the 1% of time that you’re at 100% CPU usage (hint: embrace the high temperatures and let it throttle, it’s fine).


I watched one guy build a hackintosh for the same price as a Mac Mini and compare them. The Intel-based one wins in multi-threaded workloads, but the M1 wins in single-threaded, power efficiency, and things for which it has hardware acceleration, like video encoding.


I save 20€ per month in electricity by using my MacBook instead of a desktop PC. At least 720 euros over three years.


I don't think this Mac consumes much less than an AMD 7950X capped at 65W.


Depends, what's the GPU paired with the 7950X?


I don't think GPU power is relevant for people installing Linux on their M1/M2, as long as it can play YouTube at 4K.


Light gaming seems common; Asahi Linux streams have shown Minecraft, Tux Racer, Mario Kart, and the like. Seems like a fair amount of ML (both inference and training) going on. PyTorch now works with CPU, GPU, and AMX (via Metal), and I've seen some posts on using the Neural Engine to achieve a 7x improvement over the GPUs, but that's not upstreamed yet. One bonus is the GPU (and AMX) can access all memory, not just the 8-16GB that's common on GPUs these days.

I'm not a heavy gamer, but having some 3D is nice. I'd likely buy a 7900 (non-X) or 7800X3D if buying now, but I'm not going to spend $800 and up for current gen GPUs; maybe get a 3060 or 3060 Ti. The Mac Studio is looking pretty promising; hopefully it's refreshed with the M2 CPUs RSN.



It really depends on how much you were planning to lard up that Mac with expensive Apple memory and storage. The more of that you want or need, the better the PC is going to look. Ballparking:

  ATX case $100
  650W ATX PSU apparently these are the smallest now? $100
  Z690 motherboard you can spend a lot here but say $250
  Reputable DDR5 DIMMs: $70 per 16GB, $120 for 32GB, $220 for 64GB
  Reputable 1TB SSD $100
  Excellent and quiet air cooler: $100
  CPU similar in single-thread perf to M2: $400
You'll note the absence of GPU. I personally don't use them so I see no value in Apple's supposedly quite good ones. And we don't have 4x Thunderbolt ports that will set you back another $100. But we're up to $1200 which is noticeably less than any Mac Studio model.


The 20-core Mac Studio with 128GB RAM is $4,800

It would be interesting to see a 24 or 32 core Threadripper vs the Mac Studio. I didn't price out the full system but those CPUs are in the $1300-2800 range


It’s about half the cost to build your own based on a Ryzen Threadripper and DDR5. I’ve done this before; however, it’s not ARM64. It also requires something like a 1000W power supply. Can you build a machine just as capable? Yes. Will it be as efficient? No. Will it be Arm? Probably not. That said, I run an Apple MacBook Air as my daily, so YMMV.

*edit* OK, a bit more than half. The chip shortage has scalpers sitting on Threadrippers.


Do you need a Threadripper to compete with M1 Ultra? Ryzen 9 7950X seems pretty comparable and doesn't require a 1000W PSU unless you must pair it with a 4090 or something.


Depends, how cache friendly is your workload? The M1 Ultra has more than 4x the memory bandwidth of the Threadripper. Of course the GPU makes a big difference in cost and power use. If you need a 4090, there's nothing comparable. But if the Apple iGPU is enough you save a ton of power and space.
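(Rough numbers, if I'm remembering right: Apple quotes 800 GB/s for the M1 Ultra, while 8-channel DDR4-3200 on a Threadripper Pro works out to 3200 MT/s x 8 bytes x 8 channels ≈ 205 GB/s, so roughly a 4x gap.)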


If you leave it switched on using 5 cents worth of electricity an hour, how long does it take to cost more?
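(Back of the envelope, with made-up numbers: if the DIY box were, say, $800 cheaper up front and really did burn an extra 5 cents of electricity per hour, that's $800 / $0.05 = 16,000 hours, or a bit under two years of running 24/7, before the Mac breaks even.)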


Only you can answer that question. You just stated the tradeoffs.

Compact quiet low power unexpandable box vs large noisy power hungry expandable box. Pick your poison.

I'd also do more research if I'd plan to run Linux on an Apple box. Just because it's good for TFA's use case, it may not be good for yours.


The stock Mac Mini with M2, 8GB RAM and 256GB SSD is $600. Pretty good deal for what you're getting.


You can get a Ryzen 6800H mini PC with 32GB RAM and a 500GB SSD for less than $600 these days.


"Moreover, running Asahi is perfectly legal because Apple formally allows booting non-macOS operating systems on their Apple Silicon platform."

As if you were actually violating the law by running non-vendorOS on a vendor locked computer. What a ridiculous concept.


I’m torn on whether to build a Ryzen 7900X workstation or stick with Apple Silicon. I love my M2 Air but want something beefy for my desk. I’m starting to outgrow the 16GB of RAM in my M1 Mini.


  To quote Hamlet, Act 3, Scene 3, Line 87: “No.”
lol


Wayland is great but Sway/any wlroots based window manager can't handle HiDPI properly for X apps :(


Only thing that would make this better is if the containers were running under Rosetta instead of QEMU.

I'm hoping this is possible at some point on macOS.


Docker Desktop supports this, and apparently (co)lima does as well. There is also a new tool, OrbStack, on the way: https://orbstack.dev/


The author doesn't seem to emulate x64 for his FreeBSD VM.


Is the super fast speed mainly due to Linux itself? Is there a benchmark comparing it to macOS? With the most recent hardware and 128GB of memory I would expect things to run fast no matter what, so Linux might not be the key deciding factor.

I'm a Linux user, considering getting an M2 Mac Mini these days and probably loading Linux on it; even better, if possible, dual-booting, as I do need macOS to run some iOS apps in an emulator.


I have a relatively mid-to-top tier Linux desktop (i5-13600K, 64GB RAM) and an M2 MacBook Pro (M2 Max, 32GB RAM). The Linux desktop comfortably outperforms the M2 in any CPU-based benchmark while also being significantly cheaper (the M2 desktop machines are cheaper than the laptops, but still more expensive than the alternatives). If I were doing GPU-heavy work then I assume the M2 would have a leg up, but then again I could just buy whatever GPU I wanted for the desktop machine, put it in there, and I think it would outperform the Mac once again.

However, if you were concerned about power consumption then the M2 would win by a mile, but for me that wasn't a huge factor in a desktop machine.


I saw 64GB vs 32GB here; many apps are memory intensive, so the Linux desktop will win no matter how much faster the M2 is. It would be interesting to run a CPU-intensive benchmark between them and see how that goes.

For ML training or 3D graphics, I wonder how the M1 and M2's NPU and GPU are supported under Linux; unless they're optimized and verified to be superior, I'd grab a machine with an RTX 3090 Ti instead.


I have a very CPU-intensive single-threaded program that runs about 15% faster on the Intel CPU. Other CPU benchmarks out there show my particular processor is slightly ahead of the M2 (and there are faster Intel/AMD processors than the one in my machine). But if you look at power consumption, I believe the Intel CPU uses something like 3-5x the power to generate that slightly better performance.

At the high end of performance, the M2 is just light years better than any other chip (other than the M1) in terms of performance per watt. But if that isn't a concern for you, then I think most Linux desktop users are better off just getting a "normal" Intel machine, which will also have 100% driver support for everything, cost less, and probably be faster.


I think OP is just saying it’s a very fast/snappy experience, independent of the OS (i.e., it's the hardware). Not that Linux is necessarily more performant than macOS.


The question is, can you run server workloads on it? It feels like an obvious choice for an ARM server, but someone has to be first to try and test it.


Yeah... I really want to get a stack of Mac Minis and see how far they go as a self-hosted k8s environment. Could be a decent approach for cloud-cost-averse shops with an appetite for DIY.


Anyone running NixOS on an M1? If so, how's it going? Reasonable for a current Asahi user who is semi-new to Nix?


I do, with no issues at all on the beta Asahi kernel. You basically have to git clone https://github.com/tpwrules/nixos-apple-silicon into /etc/nixos/, include a file from that repo in your configuration.nix, and configure as you like (beta GPU driver or not, which kernel, 4K pages or not, etc.). The experience is then exactly the same as a stock NixOS installation.


Cool. Do sleep and hibernation work?


I am reading this on my x86 Windows workstation, which I continue to enjoy. :)


Interesting to try a Rock 5B 16GB version as a desktop and see how it behaves.


Is it ARM64 or aarch64?


I thought they were synonymous. What are the most important differences?


Ya they’re the same thing.


Now try Box64 on it


Check out Hyprland


If I were Apple I would completely open-source the PC hardware (documentation, firmware, etc.). I bet nearly everyone would buy an M*-series Mac; just let the customer install Windows/Linux/BSD. They could wipe out every other laptop maker in one stroke.


I’m pretty pissed at Apple at the moment. It's March 4th and I still cannot buy a 14" M2 MacBook in Taiwan.


I guess the people downvoting are showing how upset they are that the new MBP is not released worldwide yet.


HDMI and "workstation" don't fit together, but you can downvote me, it's fine, I don't care.


Okay, but it supports 5 monitors: 1 via HDMI and 4 via the Thunderbolt / DisplayPort Type-C outputs. I've got a Lenovo ThinkPad and I use a $15 Type-C to DisplayPort adapter and it works fine at 4K @ 60Hz.

The specs say it supports 6K @ 60Hz over the Type-C ports and 4K for the HDMI port.


In general I agree with you, but I think that may be a limitation of Asahi right now. So if you want a display, it may be your only choice for a desktop.


> Everything works… and works perfectly. All the hardware (Bluetooth, audio, HDMI, USB, 10G Ethernet, WiFi, and GPU) performs flawlessly with the drivers created by the Asahi team this past year, and there isn’t a single piece of software I want or need that doesn’t run beautifully in Asahi on this system.

So I am expecting everything from proper GPU acceleration, power management and sleep to the Touch ID keyboard to be 100% working perfectly fine, all guaranteed then?


OP is using a Mac Studio, which is not a laptop. I don’t think you can take their claim of it 100% working on their hardware and apply it to a completely different machine.

In addition, it’s open source software. Not some corporate organization selling you a product. Nothing is guaranteed.


> OP is using a Mac Studio, which is not a laptop.

You're telling me that the Mac Studio has no "power management" or "sleep" functionality at all? Not even in macOS? Nor does it have a 'GPU' for GPU acceleration either? We both know those aren't exclusive to laptops. Everyone knows Touch ID doesn't work, both on the separate keyboard and on Apple Silicon laptops (and Intel Macs), and you know it.

> I don’t think you can take their claim of it 100% working on their hardware and apply it to a completely different machine.

Yes I can, and I just did, since Asahi Linux is aimed specifically at supporting all Apple Silicon machines. OP's machine included.

Just admit that the OP is getting carried away with the claim that "Everything works… and works perfectly" when we know that isn't true. A dose of skepticism is needed to cut through wild claims during bouts of hype and euphoria.

Even some have already questioned [0] the 'everything works' claim and are still waiting for sleep and power management support [1].

[0] https://news.ycombinator.com/item?id=35015538

[1] https://news.ycombinator.com/item?id=35014923


I don’t need to admit anything. You can read into anything as disingenuously as you like, and go rant off into corners; but you’d get equally far yelling at a wall.

OP claimed that all of the hardware they tested (with an enumerated list of said hardware) worked as they expected. For you to take that as some guarantee that you should then have a perfect experience on a completely different (or even the exact same) piece of hardware is your issue. It’s an opinion/experience editorial, not a professional expert/authority on the subject.


> I don’t need to admit anything. You can read into anything as disingenuously as you like, and go rant off into corners; but you’d get equally far yelling at a wall.

Yet you decided to reply back and evaded answering my question. By doing that, not only did you indirectly admit it, you already know that not 'everything' is working, and anyone can check that for themselves.

> OP claimed that all of the hardware they tested (with an enumerated list of said hardware) worked as they expected. For you to take that as some guarantee that you should then have a perfect experience on a completely different (or even the exact same) piece of hardware is your issue.

It's Asahi Linux's problem to solve, not mine. Why would I bother to run something that has missing features and less functionality than macOS, and what part of the nonsensical claim of "Everything works… and works perfectly" don't you understand?

As I said before, the whole point of Asahi Linux is to support all Apple Silicon machines, which means both laptops and desktops. The hardware is not 'completely different', and we both know (as do those asking in the links in [0] and [1]) that GPU acceleration, sleep and power management, and Touch ID are not exclusive to laptops, nor are they properly working or even functional.

Yet another commenter [2] questioned the "Everything works… and works perfectly" claim. It's almost as if it's better to be skeptical when someone hypes alpha software and claims that 'everything works' these days...

[2] https://news.ycombinator.com/item?id=35018387


Repeating your disingenuous and misdirected points ad infinitum is the definition of talking to a wall.

Go do it, it’s a better use of your time and infinitely less annoying to the rest of society.


> Repeating your disingenuous and misdirected points ad infinitum is the definition of talking to a wall.

Says the one still replying here who cannot answer a basic question about the obvious falsehood being claimed by the OP.

Assuming you have read the links in my previous comment and in [0], especially about feature support, you already know that not everything is supported or even functioning properly. Yes or no?

> Go do it, it’s a better use of your time and infinitely less annoying to the rest of society.

Surely you can think of a better response that actually answers my question about proper Apple Silicon support, rather than dodging again because you're having a difficult time giving a straight answer.

[0] https://news.ycombinator.com/item?id=35018387


Depending on the country you live in, Earth Overshoot Day is pretty soon. 128GB of RAM and the other specs for a workstation certainly sound like overshoot. I wish smart people would use their brains and save resources instead of wasting them.


Until you're living in a tree and sending this message by carrier pigeon, there are criticisms anyone could level at you as well. And what if our future energy saviour just so happened to need 128GB of RAM to prove their model in time?


I am typing this on a mobile phone. Maybe 7-8 years old. 2 GB of RAM. Sailfish OS, still maintained (not perfectly, but better than the competition). Made as a low cost smart phone for India originally. Works decently well on many Web sites, close to perfectly on HN.


So a small computer. Full of environmentally problematic stuff. Just small and old. If that's your 'save the planet' plan, keep your condescension to yourself.


How can Earth overshoot day be different from country to country? Isn't it related to the Earth as a whole?


You can accumulate/partition statistical numbers at different granularities. Here it's done by country. You could also do it by men or women or whatever partitioning you want. And of course you can choose not to partition at all and report only one global number.

Calling it Earth Overshoot Day after partitioning might not be fully correct, though.

Check here for your country overshoot day https://www.overshootday.org/newsroom/country-overshoot-days...


If the 128GB grants a greater level of efficiency and allows more tasks to be completed in less time, thus saving energy, then how is this overshoot?

Might it be that you're just failing to measure reality objectively?


In 2019 I thought that by now I could buy a bunch of 8GB Raspberry Pi 4s (or equivalent) for $20 apiece, install them all over the house and car for various purposes, and occasionally use them as a cluster for video editing or whatever.

The pandemic and trade wars put a wrench in the gears of that plan, but I still think that would be way cooler than lugging around a 128GB laptop and using it for everything, a.k.a. single point of failure.


640K ought to be enough for anybody.


Who said that? No one.


Bill Gates said that. It's a pretty famous quote.



