
> you'd need a pretty custom toolchain

Just port/extend PyTorch to make it easy to utilize fused ops.
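To make "fused ops" concrete, here's a generic PyTorch sketch (nothing vendor-specific, shapes made up): the same attention math written as separate ops versus a single fused call that a backend can lower to one kernel.

    import torch
    import torch.nn.functional as F

    q, k, v = (torch.randn(1, 8, 128, 64) for _ in range(3))

    # Unfused: several kernel launches plus intermediate tensors.
    scores = q @ k.transpose(-2, -1) / (64 ** 0.5)
    out_unfused = torch.softmax(scores, dim=-1) @ v

    # Fused: one op the dispatcher can route to a vendor-specific kernel.
    out_fused = F.scaled_dot_product_attention(q, k, v)

    torch.testing.assert_close(out_unfused, out_fused, rtol=1e-4, atol=1e-4)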




Easier said than done. Even with Google-level resources, TPU support for PyTorch is patchy (https://arxiv.org/abs/2309.07181). The device abstraction is not great, and PyTorch assumes CUDA in unexpected places.
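Illustrative snippet (mine, not from the paper) of the kind of CUDA assumption that bites other backends: hard-coding .cuda() and torch.cuda.* calls instead of threading a device handle through, which is what breaks when the device is "xla", "mps", or a vendor's private backend.

    import torch

    # Brittle: silently assumes an NVIDIA GPU is present.
    # x = torch.randn(4, 4).cuda()
    # torch.cuda.synchronize()

    # Portable: the backend is a parameter, so the same code can target other devices.
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    x = torch.randn(4, 4, device=device)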


The AI chip startup Groq has solved this problem. They don't use hand-written kernels at all; instead they use a compiler, and they have the top speed in the world on LLaMA2-70B: 240 tokens/s.

https://www.youtube.com/@GroqInc/videos

Other interesting Groq tidbits: their models run deterministically, the whole system of up to thousands of chips runs in sync on the same clock, and memory access and networking are directly controlled, with no caches or intermediaries, so they are deterministic as well.

That speeds up communication and allows automatic synchronisation across thousands of chips running as one large chip. The compiler does all the orchestration/optimisation. They can predict the exact performance of an architecture at compile time.

What makes Groq different is that they started from the compiler, and only later designed the hardware.


What is the pass rate on torchbench? This gives a more realistic measure of how good a vendor's PyTorch support is.

All the big chip startups have their own PyTorch compiler that works on the examples they write themselves. From what I've seen of Groq, it doesn't appear to be any different.

The problem is that PyTorch is incredibly permissive in what it lets users do. torch.compile is itself very new and far from optimal.
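A toy example (plain torch.compile, not any vendor's stack) of the permissiveness problem: arbitrary Python like this is perfectly legal in eager mode, but it forces a graph break, so the backend never sees one clean graph.

    import torch

    def flaky(x):
        # Data-dependent host-side branch: graph-breaks under torch.compile's
        # default settings, then the `if` runs in plain Python.
        if x.sum().item() > 0:
            return torch.relu(x)
        return x * -1

    compiled = torch.compile(flaky)
    print(compiled(torch.randn(8)))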


PyTorch/XLA is such a pain to use. And once you go TPU, it takes just as much effort to switch back, so you can't quickly test how it performs on your problem.
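For anyone who hasn't tried it, a rough sketch of the extra ceremony (needs torch_xla and an actual XLA device such as a TPU, so this won't run on a stock CUDA box):

    import torch
    import torch_xla.core.xla_model as xm

    device = xm.xla_device()               # instead of torch.device("cuda")
    model = torch.nn.Linear(512, 512).to(device)
    opt = torch.optim.SGD(model.parameters(), lr=1e-3)

    x = torch.randn(32, 512, device=device)
    loss = model(x).pow(2).mean()
    loss.backward()
    xm.optimizer_step(opt)                 # reduces grads across replicas, then steps
    xm.mark_step()                         # flush the lazily-built XLA graph

Nothing actually executes until mark_step, so debugging and quick experiments feel very different from eager CUDA.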


One of the big reasons custom hardware solutions struggle.

IMO you'd have better luck as a hardware vendor implementing an LLM-specific toolchain and bypassing a general-purpose DL framework. At the very least you should be able to post impressive results with that approach, rather than with a half-baked PyTorch port.


I feel like that would make it harder for a vendor to keep up with the industry.

Say you put in all the effort in the world to build a custom LLM toolchain to train a Llama on your custom hardware. Then suddenly someone comes up with LoRA. Before you've even finished porting that to your toolkit, someone comes up with GPTQ.

Can't keep up with a custom toolchain imo.

It's like a forked Linux kernel: eventually you're going to have to upstream if you're serious about it, which is what AMD is actively doing with PyTorch for ROCm (masquerading it as CUDA for compatibility).
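Concretely, "masquerading as CUDA" means ROCm builds of PyTorch expose HIP through the torch.cuda namespace, so existing device="cuda" code runs unchanged (this sketch assumes a ROCm build of PyTorch and a supported AMD GPU):

    import torch

    print(torch.version.hip)              # set on ROCm builds, None on CUDA builds
    print(torch.cuda.is_available())      # True on a supported AMD GPU

    x = torch.randn(1024, 1024, device="cuda")   # allocates HIP memory on ROCm
    y = x @ x                                    # routed to ROCm libraries under the hood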


I disagree. llama.cpp[0] is a good counterpoint to this, since it uses a custom ML framework created from scratch. Despite not having the developer team of a large company, it still keeps up with many of the advancements in LLMs.

[0] https://github.com/ggerganov/llama.cpp


llama.cpp doesn't need to create demand for the chip it was originally written for (the Apple M1), whereas new hardware vendors need to demonstrate they can plug in to existing tools to generate enough demand to ship in volume.


> create demand for the chip it was originally written for (the Apple M1)

To be fair, the M1/M2 chip can't be purchased or used separately from the Mac, unlike GPUs or socketed CPUs, and demand for Macs is already fairly high.


That might be good enough to get a hardware startup acquired, but not good enough to get major sales. Users want PyTorch and negligible switching costs between chips.

A bigger problem for startups trying to muscle in on LLMs is that there isn't much room to improve on existing solutions by doing something radically different.


> A bigger problem for startups trying to muscle in on LLMs is that there isn't much room to improve on existing solutions by doing something radically different.

Aye, unless you are able to notch a 10x cost/performance improvement. Otherwise the migration overhead just makes it not worth switching.
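Back-of-envelope (numbers entirely made up) for why the bar is ~10x rather than "somewhat better": the one-off migration cost has to be amortised against the delta in run-rate spend.

    gpu_spend_per_year = 10_000_000   # hypothetical current accelerator bill
    migration_cost = 5_000_000        # hypothetical eng time + risk + requalification

    for speedup in (1.5, 3, 10):
        new_spend = gpu_spend_per_year / speedup
        payback_years = migration_cost / (gpu_spend_per_year - new_spend)
        print(f"{speedup}x better cost/perf -> payback in {payback_years:.1f} years")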


Typically Google resources go to TF before PyTorch, no?


Even after prioritising TensorFlow, Keras, JAX, etc., they can still afford to have a very large team working on torch_xla, and they still hedge their bets with a separate team on torch_mlir.



