It used to be that for locally running GenAI, VRAM per dollar was king, so used Nvidia RTX 3090 cards were the undisputed darlings of DIY LLM, with 24GB for 600€-800€ or so. Sticking two of these in one PC isn't too difficult, despite them using 350W each.
Then Apple introduced Macs with 128 GB and more of unified memory at 800GB/s and the ability to load models as large as 70GB (70B at FP8) or even larger. The M1 Ultra was unable to take full advantage of the excellent RAM speed, but with the M2 and M3, performance is improving. Just be prepared to spend 5000€ or more for an M3 Ultra. Another alternative would be an EPYC 9005 system with 12x DDR5-6000 RAM for 576GB/s of memory bandwidth, with the LLM (preferably MoE) running on the CPU instead of a GPU.
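For what it's worth, the 576GB/s figure is just channels × transfer rate × bus width; a quick back-of-the-envelope check, using only the numbers quoted above:

    # Theoretical peak memory bandwidth for the EPYC configuration mentioned above.
    channels = 12            # EPYC 9005 has 12 DDR5 memory channels
    mega_transfers = 6000    # DDR5-6000 = 6000 MT/s per channel
    bytes_per_transfer = 8   # each channel is 64 bits wide = 8 bytes per transfer
    bandwidth_gbs = channels * mega_transfers * bytes_per_transfer / 1000
    print(bandwidth_gbs)     # -> 576.0 GB/s (peak; real-world throughput is lower)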
Today, however, with the latest surprisingly good reasoning models like QwQ-32B using up thousands or tens of thousands of tokens in their replies, performance matters more than it used to, and these systems (Macs and even RTX 3090s) might fall out of favor, because waiting for a finished reply will take several minutes or even tens of minutes. Nvidia Ampere and Apple silicon (AFAIK) also lack hardware FP4 support, which doesn't help.
For the same reason, AMD Strix Halo, with a mere 273GB/s of RAM bandwidth, and perhaps also Nvidia Project Digits (speculated to offer similar RAM bandwidth), might just be too slow for reasoning models with more than 50GB or so of active parameters.
On the other hand, if the prices for the RTX 5090 remain at 3500€, they will likely remain insignificant for the DIY crowd for that reason alone.
Perhaps AMD will take the crown with a variant of their RDNA4 RX 9070 card with 32GB of VRAM priced at around 1000€? Probably wishful thinking…
The 96GB 4090 is still at the "blurry shots of nvidia-smi on Chinese TikTok" stage.
Granted, the 48 GB 4090s started there too before materializing and even becoming common enough to be on eBay, but this time there are more technical barriers and it's less likely they'll ever show up in meaningful numbers.
> Today, however, with the latest surprisingly good reasoning models like QwQ-32B using up thousands or tens of thousands of tokens in their replies, performance matters more than it used to, and these systems (Macs and even RTX 3090s) might fall out of favor, because waiting for a finished reply will take several minutes or even tens of minutes.
3090 is the last gen to have proper nvlink support, which is supported for LLM inference in some frameworks.
I've heard, however, that the EPYC rigs are in practice getting very low tokens/sec, while the Macs like the high-memory Ultras are doing much better - by an order of magnitude or so. So in that sense, the only sensible option now (i.e. "local energy efficient LLM on a budget") is to get the Mac.
The previous time this article was submitted, I did some calculations based on the charts and found[1] that for the NVIDIA 40 and 50-series GPUs, the results are almost entirely explained by memory bandwidth:
Each of the cards except the 5090 gets almost exactly 0.1 token/s per GB/s memory bandwidth.
My understanding is that the Macs have soldered memory which allows for much higher memory bandwidth. The M4 has ~400-550 GB/s max depending on configuration[2], while EPYCs seem to have more like 250GB/s max[3].
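That rule of thumb is roughly what you'd expect if decoding is purely bandwidth-bound, since every generated token has to stream all of the (quantized) weights from memory once. A rough sketch, assuming an ~8B model at 8-bit (call it ~9 GB of weights, as in the benchmark) and ignoring KV cache traffic and other overhead:

    # Bandwidth-bound upper bound on single-stream decode speed: tokens/s ≈ bandwidth / model size.
    weights_gb = 9.0   # ~8B params at 8-bit plus some overhead (rough assumption)

    cards = {"RTX 3090": 936, "RTX 4090": 1008, "RTX 5090": 1792, "M4 Max (top config)": 546}
    for name, bw in cards.items():
        print(f"{name}: ~{bw / weights_gb:.0f} tokens/s upper bound at {bw} GB/s")
    # Real numbers come in below these, but the scaling with bandwidth is what matters.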
Performance for text generation is memory-limited, so lack of native fp8 support does not matter. You have more than enough compute left over to do the math in whichever floating point format you fancy.
Performance is good enough for non-reasoning models even if they're FP8 or FP4. Check the phoronix article, the difference between the 3090 and 4090 is rather small.
How? It's not like Nvidia is some also-ran company for which people did not build custom kernels that combine dequantization and GEMV/GEMM in a single kernel.
Sometimes I daydream about a world where GPUs just have the equivalent of LPCAMM and you could put in as much RAM as you can afford and as much as the hardware supports, much like is the case with motherboards, even if something along the way would bottleneck somewhat. It'd really extend the life of some hardware, yet companies don't want that.
That said, it's cool that you can even get an L4 with 24 GB of VRAM that actually performs okay, yet is passively cooled and consumes like 70W, at that point you can throw a bunch of them into a chassis and if you haven't bankrupted yourself by then, they're pretty good.
I did try them out on Scaleway; the pricing isn't even that exorbitant. Using consumer GPUs for LLM use cases doesn't quite hit the same since.
Saw a video from Bolt Graphics, a startup trying to do that among other things. Supposedly they'll have demos at several conventions later this year, like at Hot Chips.
I think it's an incredibly tall order to get into the GPU game for a startup, but it should be good entertainment if nothing else.
Oh, I definitely am! It's always cool when someone with domain specific knowledge drops by and proceeds to shatter that dream with technical reasons that are cool nonetheless, the same way how LPCAMM2 doesn't work with every platform either and how VRAM has pretty stringent requirements.
That said, it's understandable that companies are sticking with whatever works for now, but occasionally you get an immensely cool project that attempts to do something differently, like Intel's Larrabee did, for example.
The benchmark only touches 8B-class models at 8-bit quantization. It would be interesting to see how it fares with models that use more of the card's VRAM, and under varying quantization levels and context lengths.
I agree. This benchmark should have compared the largest ~4 bit quantized model that fits into VRAM, which would be somewhere around 32B for RTX 3090/4090/5090.
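As a rough sizing rule (a sketch, not exact: quant format, context length and runtime overhead all shift the numbers), weight memory is roughly params × bits per weight / 8, with KV cache and overhead on top:

    # Very rough VRAM estimate for ~4-bit quantized dense models. Ballpark only, not measured.
    def weights_gb(params_b, bits_per_weight):
        return params_b * bits_per_weight / 8

    for params_b in (8, 14, 32, 70):
        for bpw in (4.5, 5.0):   # common ~4-bit GGUF quants land around this range
            print(f"{params_b}B @ {bpw} bpw: ~{weights_gb(params_b, bpw):.0f} GB weights")
    # A ~32B model at ~4-5 bpw is ~18-20 GB of weights, which is why it's the sweet spot
    # for 24-32 GB cards once you leave some room for the KV cache.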
For text generation, which is the most important metric, tokens per second will scale almost linearly with memory bandwidth (936 GB/s, 1008 GB/s and 1792 GB/s respectively). More interesting differences might show up when comparing prompt processing, speculative decoding with various models, vLLM vs llama.cpp vs TGI, prompt length, context length, text type/programming language (which actually makes a difference with speculative decoding), cache quantization and sampling methods. Results should also be checked for correctness (perplexity or some benchmark like HumanEval) to make sure they are not garbage.
At time of writing, Qwen2.5-Coder-32B-Instruct-GGUF with one of the smaller variants for speculative decoding is probably the best local model for most programming tasks, but keep an eye out for any new models. They will probably show up in Bartowski's "Recommended large models" list, which is also a good place to download quantized models: https://huggingface.co/bartowski
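As a sketch of why the text type matters for speculative decoding (the numbers below are made up for illustration): the small model drafts k tokens, the big model verifies them in one forward pass, and the payoff depends on how often the drafts are accepted, which tends to be high for boilerplate-heavy code and lower for free-form prose.

    # Idealized speculative-decoding payoff: expected tokens emitted per big-model pass,
    # assuming a flat per-token acceptance probability and ignoring the draft model's own cost.
    def tokens_per_verify_pass(accept_prob, k):
        # expected accepted drafts + the one token the target model always contributes
        return 1 + sum(accept_prob ** i for i in range(1, k + 1))

    for a in (0.5, 0.7, 0.9):   # code usually sits near the high end, prose lower
        print(f"acceptance {a:.1f}, k=5 drafts: ~{tokens_per_verify_pass(a, 5):.1f} tokens/pass")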
Using aider with local models is a very interesting stress case to add on top of this. Because the support for reasoning models is a bit rough, and they aren't always great at sticking to the edit format, what you end up doing is configuring different models for different tasks (what aider calls "architect mode").
I use ollama for this, and I'm getting useful stuff out of qwq:32b as the architect, qwen2.5-coder:32b as the edit model, and dolphin3:8b as the weak model (which gets used for things like commit messages). Now what that means is that performance swapping these models in and out of the card starts to matter, because they don't all go into VRAM at once; but also using a reasoning model means that you need straight-line tokens per second as well, plus well-tuned context length so as not to starve the architect.
I haven't investigated whether a speculative decoding setup would actually help here, I've not come across anyone doing that with a reasoner before now but presumably it would work.
It would be good to see a benchmark based on practical aider workflows. I'm not aware of one but it should be a good all-round stress test of a lot of different performance boundaries.
When running on Apple silicon you want to use MLX, not llama.cpp as this benchmark does. Performance is much better than what's plotted there and seems to be getting better, right?
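If you want to try that route, here is a minimal sketch using the mlx-lm package (the model name is just an example of a 4-bit conversion from the mlx-community hub, and the exact generate() keyword options shift a bit between mlx-lm versions):

    # Minimal MLX text generation on Apple silicon: pip install mlx-lm
    from mlx_lm import load, generate

    model, tokenizer = load("mlx-community/Qwen2.5-7B-Instruct-4bit")  # example model
    text = generate(model, tokenizer,
                    prompt="Explain KV caching in one paragraph.",
                    max_tokens=256,
                    verbose=True)   # verbose prints tokens/s, handy for comparing to the charts
    print(text)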
Power consumption is almost 10x lower for Apple.
VRAM is more than 10x larger.
Price-wise, for running the same size models, Apple is cheaper.
The upper limit (larger models, longer context) is far higher for Apple (with Nvidia you can easily put in 2x cards; beyond that it becomes a complex setup no ordinary person can manage).
Am I missing something, or is Apple simply the better choice for local LLMs right now?
You are missing something. This is a single stream of inference. You can load up the Nvidia card with at least 16 inference streams and get much higher throughput in tokens/sec.
This is just a single-user chat-experience benchmark.
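For comparison, a minimal sketch of the batched case with vLLM (the model name and request count are placeholders; the point is only that one card serving many streams gets far higher aggregate tokens/s than the single-stream numbers in the article):

    # Batched serving: submit many prompts at once and let the engine schedule them together.
    from vllm import LLM, SamplingParams

    llm = LLM(model="meta-llama/Llama-3.1-8B-Instruct")   # example 8B model, fits a 24 GB card
    sampling = SamplingParams(max_tokens=256, temperature=0.7)

    prompts = [f"Summarize ticket #{i} in two sentences." for i in range(16)]  # 16 parallel streams
    outputs = llm.generate(prompts, sampling)
    for out in outputs:
        print(out.outputs[0].text[:80])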
I'm trying to find out about that as well as I'm considering a local LLM for some heavy prototyping. I don't mind which HW I buy, but it's on a relative budget and energy efficiency is also not a bad thing. Seems the Ultra can do 40 tokens/sec on DeepSeek and nothing even comes close at that price point.
The DeepSeek R1 distillations onto Llama and Qwen base models are also, unfortunately, called "DeepSeek" by some. Are you sure you're looking at the right thing?
The OG DeepSeek models are hundreds of GB quantized, nobody is using RTX GPUs to run them anyway…
There is a plateau where you simply need more compute and the M4 cores are not enough, so even if they have enough RAM for the model, the tokens/s is not useful.
For all models that fit into 2x 5090 (2x 32GB) that's not a problem, so you could say that if you hit this problem, then RTX is not an option either.
On Apple silicon you can always use MoE models, which work beautifully. On RTX it's kind of a waste, to be honest, to run MoE; you'd be better off running a single dense model (all parameters active) that fills the available memory, with enough space left for the context.
They might be, depending on architecture - the ROPs are responsible for taking the output from the shader core and writing it back to memory, so can be used in compute shaders even if all the fancier "Raster Operation" modes aren't really used there. No point having a second write pipeline to memory when there's already one there. But if your usecase doesn't really pressure that side of things then even if some are "missing" it might make zero difference, and my understanding of most ML models is they're heavily read bandwidth biased.
> I have the impression that we'd usually expect to see bigger efficiency gains while these are marginal?
The 50-series is made using the same manufacturing process ("node") as the 40-series, and there is not a major difference in design.
So the 50-series is more like tweaking an engine that previously topped out at 5000 RPM so it's now topping out at 6000 RPM, without changing anything fundamental. Yes it's making more horsepower but it's using more fuel to do so.
Great to see Mr Larabel@Phoronix both maintaining consistently legit reporting and still having time for one-offs like this, in these times of AI slop with other OG writers either quitting or succumbing to the vortex. Hats off!
I think that's underselling it. Performance is good, up by a significant margin, and the VRAM boost is well worth it. There's just no efficiency gain to go along with it.
It looks to me like you could think about it as a performance/VRAM/convenience stepping-stone between having one 4090 and having a pair.
Paired 5090s, if such a thing is possible, sounds like a very good way to spend a lot of money very quickly while possibly setting things on fire, and you'd have to have a good reason for that.