Hacker News
Ask HN: Are eGPUs for M1/M2 MacBooks on Linux Possible?
20 points by thrwawy74 on Oct 8, 2022 | 36 comments
Three things:

1) 2D acceleration with the recent Rust driver in Asahi Linux seems quite attainable, and will enable everyday/office/development use.

2) USB 4 should allow suitable bandwidth for an eGPU

3) What needs to happen to support say, an Nvidia RTX 3090 in an eGPU case/dock? I'm thinking the 'open source' Nvidia driver needs to be compiled for ARM, and everything pushed to it is code for the internal ISA of the GPU.

3D acceleration through an eGPU seems much more possible in the near-term than supporting the internal iGPU of the M1/M2.
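On point 2: a quick back-of-envelope check (nominal spec figures, assuming the usual PCIe 3.0 x4 tunnel that Thunderbolt 3 / USB 4 hosts expose) shows how much narrower the eGPU link is than a desktop slot:

```python
# Rough bandwidth comparison: why the eGPU link is the bottleneck.
# These are nominal spec figures, not measured throughput.

def pcie_gbps(lanes: int, gen: int) -> float:
    """Approximate usable bandwidth in Gbit/s for a PCIe link."""
    gts = {3: 8.0, 4: 16.0}[gen]      # GT/s per lane by generation
    return lanes * gts * (128 / 130)  # gen3+ uses 128b/130b encoding

tb_link = pcie_gbps(4, 3)    # PCIe 3.0 x4 tunneled over TB3/USB 4
desktop = pcie_gbps(16, 4)   # a desktop RTX 3090 slot (gen4 x16)
print(f"eGPU link:        ~{tb_link:.1f} Gbit/s")
print(f"desktop gen4 x16: ~{desktop:.1f} Gbit/s (~{desktop / tb_link:.0f}x)")
```

So an RTX 3090 behind a Thunderbolt tunnel sees roughly an eighth of its native host bandwidth: fine for compute work that stays on-card, painful for anything that streams data across the link every frame.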



marcan (working on linux on these devices) has talked about this on twitter previously. tl;dr is that while it is kinda technically possible, it would require a lot of changes to existing software and might result in nasty performance issues that could make the whole idea not worth it.

https://twitter.com/marcan42/status/1534825580801433600


Sounds like something suitable for a vendor writing new drivers and compatibility abstractions for fresh hardware.

Intel might have a leapfrog opportunity here...


I am on my 2nd laptop PC (HP Spectre) using the same eGPU over TB3, an Akitio Node with a GTX 1060. My newest laptop has better hardware acceleration than my previous laptop, but the eGPU is still useful for playing games.

I was playing Fortnite quite a bit, but at some point in 2019 they released a patch that made the game unplayable with my eGPU, so I haven't played since. As some commenters have said on this thread already, eGPUs are not well supported.


It's definitely possible, but there are no drivers released by the GPU manufacturers. They may simply not see any benefit of making such an investment, and they might point a finger at Apple for not allowing them the ease of development using kernel access (they would have to make a DEXT or something like that). It's also likely that there isn't really much of a market for it anymore.


I'd guess that the market is reduced due to the price of eGPU enclosures!

Anything Thunderbolt seems expensive to me!


To be clear, there really shouldn't need to be support for eGPUs specifically. It's just that macOS, being run on almost entirely fixed hardware with off-brand/custom APIs (Metal), isn't attractive for GPU makers in any capacity.

Otoh on Linux it should just work!


> It's just that macOS, being run on almost entirely fixed hardware with off-brand/custom APIs (Metal), isn't attractive for GPU makers in any capacity.

It has nothing to do with macOS and nothing to do with GPU manufacturers, only that Apple Silicon drivers for eGPUs do not exist, and there is no reason for them to exist because M1 delivers better performance than any eGPU.

> Otoh on Linux it should just work!

There's a diesel truck I want, has the power and performance I need, but I just hate diesel. Why are they forcing me to use a fuel I hate? What are the chances I can get this diesel engine to run on linux gas? Linux gas is such a cool fuel, I can do almost anything with it and not be locked in to the diesel walled garden.

I also have this compact car that I just hate the engine, it's only 1 liter. Why do car manufacturers force their stupid flashy engines on us? I want to put a 20 liter linux engine in there, but the engine bay is too small. What are the chances I'll ever be able to squeeze my linux engine in there?

Someone should just release a GNU/XNU distribution, call it "ARMac 'Linux'" and be done with it, because 99% of the cry babies don't really care about Linux, they just think that they do. All they really want is GNU. The other 0.9% can use MacPorts to build or install and run GNOME on macOS, and tell everyone it's Linux. Unless they saw the directory structure, Penguinistas wouldn't know the difference. For the remaining 0.1% that actually do want and need the Linux kernel on Apple Silicon, only they deserve our sympathy.


I'd like the fastest cheap ARM Docker-compatible server. And as far as I know, Docker doesn't work on XNU.


That's tough. Mac Pro for you, if it must be Mac hardware. But why on earth Mac docker server? Obviously not for professional purposes. For play, use any x86 box you like, just because you want speed doesn't mean speed is necessary.

If it must be Apple Silicon, then you can run as many virtual instances of macOS as you like on my GNU/XNU ARMac "Linux" (not actually Linux) release, and as many copies of Docker Desktop for Apple Silicon as you have virtual instances of macOS.

But if Docker had any decency, they'd release the source code to you, so you could build it native for Apple Silicon XNU. Have you considered working for Docker? Would make it a lot easier to get the source code, or just move through the ranks and make it a Docker initiative.

Or suppose we agree that while you can't actually run a Docker server on Apple Silicon, because the code doesn't exist (which is no one's fault, not even Apple's), you still have the right to run a Docker server on Apple Silicon. We shall fight your oppressors for your right to run a Docker server on Apple Silicon because it is symbolic of our struggle against oppression.[1]

[1] https://www.youtube.com/watch?v=sFBOQzSk14c


> I'd like the fastest cheap ARM Docker-compatible server.

Specs unclear, Pi Zero can do this for 35 dollars. It's the fastest you can get for cheap. But not the cheapest one of the fastest ones, that would be Graviton, but you can't get those because only Amazon manufactures them for themselves. So that leaves Ampere Altra but that's not cheap unless you need to scale. I guess you need a Kunpeng 920 then.


Here are excellent points and practical and positive solutions. To get it done, do what works rather than pine for what doesn't.


> Otoh on Linux it should just work!

It won't, unfortunately.

https://twitter.com/marcan42/status/1538426240922963968


Yeah, most of the M1 (and some M2) limitations seem to stem from the mobile stack where it's not "a PC but smaller" but the other way around ("embedded mobile, but now make it PC-sized"). They lack a whole bunch of similar peripheral configurations, it's why on the base M1 you also get very few USB/TB ports, graphics output support or PCIe lanes in general.

I suppose it makes sense considering they can design the products as a whole and only add what they actually need instead of just getting a bunch of stuff and seeing what sticks in their design (which is what happens if you have a ton of external suppliers for CPU, PCH etc.).

Maybe if they come up with some M2 Ultra Max XXL or whatever they will get an abundance of lanes that require other BAR configurations to make proper use of (which would then mostly make sense if they go Mac Pro or at least larger-than-mac-studio - but if that is going to include PCIe slots... who knows).
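For what "BAR configuration" means concretely: the OS sizes each PCI aperture by writing all-ones to the BAR register and decoding what reads back (the standard PCI mechanism; the value below is illustrative for a 256 MiB aperture). A big dGPU wanting a multi-GiB resizable BAR needs the host to map much larger windows than that:

```python
def bar_size(readback: int, is_64bit: bool = True) -> int:
    """PCI BAR size implied by the value read back after writing all-ones."""
    mask = (1 << 64) - 1 if is_64bit else (1 << 32) - 1
    readback &= mask & ~0xF          # clear the low flag bits of a memory BAR
    return (~readback + 1) & mask    # size is the two's complement

# Illustrative readback value for a GPU exposing a 256 MiB memory aperture
print(bar_size(0xFFFF_FFFF_F000_0000) // (1024 * 1024), "MiB")  # 256 MiB
```

If the SoC's PCIe root complex only carves out small apertures (as on phone-derived designs), a desktop-class card simply can't have all of its VRAM mapped, regardless of what the driver wants.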


To be clear, I was talking about binary drivers supplied by AMD and Nvidia for the most common GPUs that are attached externally. Not about the fact that they were external.

macOS has built-in drivers for plenty of GPUs, including AMD, Nvidia, Apple and Intel. But on the ARM64 side, AMD and Nvidia have not made any drivers for Windows or macOS, so you are out of luck on the software side.


In Ventura, Apple has removed the last parts of the Nvidia driver they shipped even on Intel platforms, so that's that for any 3rd-party GPUs being usable in their walled garden.


Why the "walled garden" flame bait? Just because Apple removes old drivers that were deprecated 3 major releases ago (because that's the last version where 5+ year old Macs were still shipping with AMD or Nvidia graphics) doesn't mean that therefore nobody in the world could ever make drivers.

Any GPU manufacturer could right now make a driver that works on Aarch64 for Windows, macOS and Linux, regardless of what Microsoft or Apple does. But they don't (so far, at this point, but I can't foresee the future).

To the point of being 'blessed' by the OS manufacturer: Microsoft probably won't make DCH signed drivers until their contract with Qualcomm for exclusive Windows-on-ARM end-user hardware expires; they only sign drivers for their own internal hardware and Qualcomm's internal hardware, and that's it. Apple is in a similar situation, but they just won't support any internal peripherals that they don't manufacture themselves. You can, however, still make and load those drivers; it's just not a turnkey experience for the average non-technical user (and one could argue that eGPUs aren't for the mass market anyway).

Keep in mind that both Microsoft and Apple have their own 'special' versions of GPU drivers for AMD and Nvidia (and Intel) that are code from those three manufacturers but packaged and maintained by them. And then there are the ODM and public versions, where the ODM version is mostly used by laptop manufacturers, and the public version is what you find in the downloadable 'universal' packages on the GPU manufacturer's website. Those can be made by those manufacturers at any time, for any OS, if they want to.


> Why the "walled garden" flame bait?

The new Macs literally don't allow 3rd-party components. The physical case literally serves as a wall against regular PC components. The new Mac minis have M.2 slots, for example... but you need a custom SSD just because Apple is keeping the walls high.

> Any GPU manufacturer could right now make a driver that works on Aarch64 for Windows, macOS and Linux, regardless of what Microsoft or Apple does. But they don't

AMD cards run on ARM linux. https://youtu.be/crnEygp4C6g


Considering the status of GPUs on Hackintosh systems / the Mac Pro, I am under the impression that no Nvidia cards after Pascal are supported by macOS, and the ones that are, work via various kludges dredging drivers from High Sierra up into newer versions via OpenCore. Officially, macOS is firmly in Radeon territory when it comes to dGPUs.


There isn't ever going to be official support for it, that's for sure. As for unofficial support, it's going to take some nerds to put that together. It could be you! Go buy one and start working on it. Let us know how it goes. I'm sure you can get a lot of enthusiastic people to help you.

Personally, I think a better and easier option is to just get an entire other computer with the GPU(s) in it. This can either be a physical machine you have, or some GPU cloud instance. Then in situations where the Mac graphics capabilities aren't up to snuff, you can use something like remote desktop or streaming to offload that work to the other machine.

e.g.: Setup a small headless PC on your network that has a powerful GPU and Steam installed. Then run Steam Link on your Mac to play games that don't run natively on MacOS.


> There isn't ever going to be official support for it, that's for sure.

What makes you so sure, given that Apple was early to the eGPU party (support was announced and demonstrated at WWDC 2017), continues to support them on Intel silicon Macs¹, and continues to sell them²?

¹ https://support.apple.com/en-us/HT208544 ² https://www.apple.com/shop/product/HM8Y2VC/A/blackmagic-egpu


They probably meant the "on Linux" part.


I was under the impression that the obstacles were at least partly a MacOS issue, because people have encountered difficulties running eGPUs on Intel Macs as well. Would love to hear if/how this is working in the field.


It works fine on Intel Macs with supported hardware, but after a while no new updates came for newer GPUs, and because the corporate relationship between Apple and Nvidia got shot to pieces around the 8600/8800 GPU series debacle, the only eGPUs that work on macOS on Intel are AMD GPUs.

You could run Windows and use the GPUs there, but on ARM that's a different story: there don't seem to be any Windows or macOS drivers at all, only Linux ones.

So would it work hardware-wise? Yep. But no official software is being made to do it because none of the parties that could are incentivised to do so.


My advice would be to get a real industrial computer running Linux instead of relying on consumer electronics.


I have never seen a single review or benchmark of eGPUs that was impressive.


Really? I used to run a 1080 with my old iMac back in the day when they came out and it cleaned up.


I've never seen a review that had any real issue with them? They performed fine everywhere I saw.


It’s hard to even find a review with gaming benchmarks. A quick search and I’m not sure any eGPUs are even being made? All the reviews seemed to be from 2019.

It’d be awesome if eGPUs provided high throughput and low latency. But it really doesn’t seem like that’s the case.


There are whole sites dedicated to eGPU reviews, so I'm not sure what you are looking at?


Share links.



Their #1 rated eGPU is from 2018. They don't measure latency. Their gaming FPS benchmarks don't list either resolution or quality settings, when they're even available. Their top 2 eGPU recommendations for mac don't contain benchmarks when running on mac.

Yeah, the eGPU market is a disaster and the review quality is terrible. I personally wouldn't touch eGPUs. Which is a shame because I absolutely adore the concept.


I used an Akitio Node 3 with RX580 for years, works fine on macOS, Linux and Windows on both Thunderbolt 2 (reduced performance) and Thunderbolt 3.

Only reason I stopped using it is because I used it less and had more use for it on Linux than anywhere else, so it's permanently installed in a Linux desktop machine now. It works passthrough to a Windows VM as well. The Intel Mac it was connected to is no longer used with more than 2 displays so the internal GPU works well enough, and an M1 has no problems with it either.

For the time where I used it, it was a perfect solution for me; lots of portable use, but when working from home more the portability aspect isn't worth much. Add to that that GPU usage on-the-go is pretty much gone for me (most GPU workloads just run on AWS for me now), and having a high-power GPU connected to a laptop just doesn't help me with anything.

If in the future I'd need more local GPU power than a mobile device (in a form factor I enjoy) can handle, maybe it'll be interesting again. Have one at home, another at an office, only move the laptop between places...


What a bizarre set of complaints. What "latency" would you like to measure? It's PCI-express, the latency has to be the same as the builtin GPU to function.

The #1 eGPU is #1 because it still works reliably and well. I have one and it's a reliable product that does what it says on the tin - provides an external cradle for a GPU.

As for lack of Mac benchmarks - Apple dropped support for eGPUs, so why would they benchmark a legacy unsupported system?


> What "latency" would you like to measure?

How long from user input to pixels changing.

> the latency has to be the same as the builtin GPU to function.

Objectively false.

A few years ago it was very common for Windows laptops with discrete Nvidia GPUs to have increased latency when using the discrete GPU. The reason is that the frame buffer had to be copied from the discrete GPU to the integrated GPU before reaching the monitor.
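For a sense of scale, here's a rough estimate (assuming an uncompressed 32-bit framebuffer and ~32 Gbit/s of usable link bandwidth; real numbers depend on the chassis, cable and driver) of what that extra copy alone costs per frame:

```python
# Back-of-envelope: added latency from copying one frame over the eGPU link.
# link_gbps is an assumption (typical usable PCIe 3.0 x4 Thunderbolt tunnel).

def copy_latency_ms(width: int, height: int,
                    bytes_per_px: int = 4, link_gbps: float = 32.0) -> float:
    """Milliseconds to move one uncompressed frame across the link."""
    bits = width * height * bytes_per_px * 8
    return bits / (link_gbps * 1e9) * 1e3

print(f"1080p: {copy_latency_ms(1920, 1080):.2f} ms per frame")
print(f"4K:    {copy_latency_ms(3840, 2160):.2f} ms per frame")
```

At 4K that copy alone eats about half of a 60 Hz frame budget (~16.7 ms), which is exactly the kind of number a rigorous review would measure instead of just reporting FPS.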

Now maybe this doesn’t matter and you don’t care. But I’d like to know. The fact that it’s rarely done is a serious lack of rigor IMHO.

My intuition is that it varies by chassis. I’d be shocked if they’re all the exact same!

eGPUs are still significantly slower than having the GPU built in. Maybe they're bottlenecked due to the connection, I dunno. It's something that I thought would have gotten better in the past four years. But maybe eGPUs are an even worse deal now!


We mostly want this for development of CUDA apps, not even super worried about performance.

Increasingly, I'm thinking just a mobile eGPU Ubuntu node you plug in and ssh into, and just a nice remote setup...



