The hardware situation is way better than you think, and quantization is a huge part of why.

Take Qwen 3.5 27B, which is a solid coding model. At FP16 it needs 54GB of VRAM. Nobody's running that on consumer hardware. At Q4_K_M quantization, it needs 16GB. A used RTX 3090 has 24GB and goes for about $900. That model runs locally with room for context.

For 14B coding models at Q4, you're looking at about 10GB. A used RTX 3060 12GB handles that for under $270.

The gap between "needs a datacenter" and "runs on my desk" is almost entirely quantization. A 27B model at Q4 loses surprisingly little quality for most coding tasks. The quality hit isn't free, but you don't need some hypothetical RTX 7090 either. A used 3090 is probably the most recommended card in the local LLM community right now, and for good reason.
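
Rough back-of-the-envelope for where those numbers come from. The effective bits-per-weight figures for the K-quants are approximations, and this counts weights only (no KV cache or runtime overhead), so treat the results as a floor:

    # Weights-only VRAM estimate; bits-per-weight values are approximate
    # and KV cache / runtime overhead are ignored, so these are floors.
    BITS_PER_WEIGHT = {
        "fp16": 16.0,
        "q8_0": 8.5,
        "q6_k": 6.6,
        "q4_k_m": 4.8,  # ~4.5-5 bits effective is an assumption
    }

    def weight_gb(params_billion, quant):
        return params_billion * 1e9 * BITS_PER_WEIGHT[quant] / 8 / 1e9

    for params in (14, 27):
        for quant in ("fp16", "q4_k_m"):
            print(f"{params}B @ {quant}: ~{weight_gb(params, quant):.0f} GB")

    # 14B @ fp16   : ~28 GB
    # 14B @ q4_k_m : ~8 GB   (fits a 12GB 3060)
    # 27B @ fp16   : ~54 GB
    # 27B @ q4_k_m : ~16 GB  (fits a 24GB 3090 with room for context)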


A 14B model, even at Q4, isn't realistic for coding on a single 12GB RTX 3060: token speed is too slow, since these are dense models and every token touches all the weights. You aren't getting a good MoE model under 30B. That card does OCR, STT, and TTS really well, and for LLMs the good use cases are classification, summarization, and extraction with <10B models.

Dual 3060s run 24B Q6 and 32B Q4 at ~15 tok/sec. That's fast enough to be usable.

Add a third one and you can run Qwen 3.5 27B at Q6 with 128k context, for less than the price of a 3090.
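
If anyone wants to try the multi-GPU setup, here's a minimal llama-cpp-python sketch of splitting a GGUF across three cards. The model filename, split ratios, and context size are illustrative placeholders, not my exact setup:

    # Minimal multi-GPU sketch with llama-cpp-python; filename, split
    # ratios, and context size are placeholders.
    from llama_cpp import Llama

    llm = Llama(
        model_path="qwen-27b-q6_k.gguf",   # hypothetical local GGUF file
        n_gpu_layers=-1,                   # offload every layer to GPU
        tensor_split=[0.34, 0.33, 0.33],   # spread weights across three 3060s
        n_ctx=131072,                      # 128k context, if VRAM allows
    )

    out = llm("Write a Python function that parses a CSV header.", max_tokens=256)
    print(out["choices"][0]["text"])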


You are better off just buying their coding plan.

Running LLMs locally makes no sense whatsoever.


Remaining dependent on proprietary frontier models that you can only access via an API makes no sense whatsoever. My hope is that the future is open weight models running on local hardware.

Eventually, yes. ParoQuant is hopefully the future here, 4-bit weights with no real degradation from FP16:

https://github.com/z-lab/paroquant


"As much memory as possible" is right for model capacity but misses bandwidth. Apple Silicon has distinct tiers: M4 Pro at 273 GB/s, M4 Max at 546 GB/s, M4 Ultra at 819 GB/s. Bandwidth determines tok/s once the model fits in memory. An M4 Max gives you 2x the decode speed of an M4 Pro on the same model.

For what Hypura does, the Max is the sweet spot. 64GB loads a 70B at Q4 with room to spare, and double the bandwidth of the Pro means generation is actually usable instead of just technically possible.
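
For a rough sense of why bandwidth dominates: at batch size 1, every generated token streams the full weight set through memory once, so bandwidth divided by model size is an upper bound on decode speed. A quick sketch, assuming ~40GB for a 70B at Q4:

    # Decode-speed ceiling at batch size 1: tok/s <= bandwidth / model size.
    # Real throughput lands somewhat below this.
    model_gb = 40  # ~70B at Q4, weights only
    bandwidth_gbps = {"M4 Pro": 273, "M4 Max": 546, "M3 Ultra": 819}

    for chip, bw in bandwidth_gbps.items():
        print(f"{chip}: <= {bw / model_gb:.0f} tok/s")

    # M4 Pro:   <= 7 tok/s
    # M4 Max:   <= 14 tok/s
    # M3 Ultra: <= 20 tok/s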

