Small models can never match bigger models; the bigger models just know more and are smarter. The smaller models can get smarter, but as they do, the bigger models get smarter too. HN is weird because at one point this was the location where I found the most technical folks, and now for LLMs I find them on Reddit. Tons of folks are running huge models; do some research and you'll find out you can realistically host your own.


> Small models can never match bigger models; the bigger models just know more and are smarter.

They don't need to match bigger models, though. They just need to be good enough for a specific task!

This is more obvious when you look at the things language models are best at, like translation. You just don't need a super huge model for translation, and in fact you might sometimes prefer a smaller one because being able to do something in real-time, or being able to run on a mobile device, is more important than marginal accuracy gains for some applications.

I'll also say that due to the hallucination problem, beyond whatever knowledge is required for being more or less coherent and "knowing" what to write in web search queries, I'm not sure I find more "knowledgeable" LLMs very valuable. Even with proprietary SOTA models hosted on someone else's cloud hardware, I basically never want an LLM to answer "off the dome"; IME it's almost always wrong! (Maybe this is less true for others whose work focuses on the absolute most popular libraries and languages, idk.) And if an LLM I use is always going to be consulting documentation at runtime, maybe that knowledge difference isn't quite so vital— summarization is one of those things that seems much, much easier for language models than writing code or "reasoning".

All of that is to say:

Sure, bigger is better! But for some tasks, my needs are still below the ceiling of the capabilities of a smaller model, and that's where I'm focusing on local usage. For now that's mostly language-focused tasks entirely apart from coding (translation, transcription, TTS, maybe summarization). It may also include simple coding tasks today (e.g., fancy auto-complete, "ghost-text" style). I think it's reasonable to hope that it will eventually include more substantial programming tasks— even if larger models are still preferable for more sophisticated tasks (like "vibe coding", maybe).

If I end up having a lot of fun, in a year or two I'll probably try to put together a machine that can indeed run larger models. :)


> Even with proprietary SOTA models hosted on someone else's cloud hardware, I basically never want an LLM to answer "off the dome"; IME it's almost always wrong! (Maybe this is less true for others whose work focuses on the absolute most popular libraries and languages, idk.)

I feel like I'm the exact opposite here (despite heavily mistrusting these models in general): if I came to the model to ask it a question, and it decides to do a Google search, it pisses me off as I not only could do that, I did do that, and if that had worked out I wouldn't be bothering to ask the model.

FWIW, I do imagine we are doing very different things, though: most of the time, when I'm working with a model, I'm trying to do something so complex that I also asked my human friends and they didn't know the answer either, and my attempts to search for the answer are failing as I don't even know the terminology.


> I feel like I'm the exact opposite here (despite heavily mistrusting these models in general): if I came to the model to ask it a question, and it decides to do a Google search, it pisses me off as I not only could do that, I did do that, and if that had worked out I wouldn't be bothering to ask the model.

When a model does a single web search and emulates a compressed version of the "I'm Feeling Lucky" button, I am disappointed, too. ;)

I usually want the model to perform multiple web searches, do some summarization, refine/adjust search terms, etc. I tend to avoid asking LLMs things that I know I'll find the answer to directly in some upstream official documentation, or a local man page. I've long been and remain a big "RTFM" person; imo it's still both more efficient and more accurate when you know what you're looking for.

But if I'm asking an LLM to write code for me, I usually still enable web search on my query to the LLM, because I don't trust it to "remember" APIs. (I also usually rewrite most or all of the code because I'm particular about style.)


>you might sometimes prefer a smaller one because being able to do something in real-time, or being able to run on a mobile device, is more important than marginal accuracy gains for some applications.

This reminds me of the "the best camera is the one you have with you" idea.

Though large models are just an HTTP request away, there are plenty of reasons to want to run one locally, not the least of which is getting useful results in the absence of internet.


All of these models are suitable for translation, and that is what they are most suited for. The architecture inherits from seq2seq, and the original transformer was created for machine translation at Google.


For coding, though, it seems like people are willing to pay a lot more for a slightly better model.


The problem with local vs. remote isn't so much about cost. It's about compliance and privacy.


For me, the sense of a greater degree of independence and freedom is also important. Especially when the tech world is out of its mind with AI hype, it's difficult to feel the normal tinkerer's joy when I'm playing with some big, proprietary model. The more I can tweak at inference time, the more control I have over the tools in use, the more I can learn about how a model works, and the closer to true open-source the model is, the more I can recover my child-like joy at playing with fun and interesting tech-- even if that tech is also fundamentally flawed or limited, over-hyped, etc.


> HN is weird because at one point this was the location where I found the most technical folks, and now for LLMs I find them on Reddit.

Is this an effort to chastise the viewpoint being advanced? Because his viewpoint makes sense to me: I can run biggish models on my 128GB MacBook but not huge ones; even 2-bit quantized ones suck up too many resources.

So I run a combination of local stuff and remote stuff depending upon various factors (cost, sensitivity of information, convenience/whether I'm at home, amount of battery left, etc ;)

Yes, bigger models are better, but often smaller is good enough.


The large models are using tools/functions to make them useful. Sooner or later open source will provide a good set of tools/functions for coding as well.


I'd be interested in smaller models that were less general, with a more concentrated training corpus. A bash scripting model, or a Clojure model, or a Zig model, etc.


Well, yes, tons of people are running them, but they're all pretty well off.

I don't have $10-20k to spend on this stuff, which is about the minimum to run a 480B model, even heavily quantized. And it would be pretty slow, because at that price all you get is an old Xeon with a lot of memory or some old Nvidia datacenter cards. If you want a good setup, it will cost a lot more.
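
For a rough sense of why, here's a back-of-the-envelope sketch in Python (the bit widths are just illustrative, and KV cache/runtime overhead are ignored):

    # Approximate weight memory for a 480B-parameter model at different quantization levels.
    params = 480e9
    for bits in (16, 8, 4, 2):
        gb = params * bits / 8 / 1e9
        print(f"{bits:>2}-bit: ~{gb:.0f} GB of weights alone")

Even at an aggressive 2-bit quantization, that's roughly 120 GB of weights, far beyond consumer GPU VRAM, which is why the cheap option ends up being an old server with lots of RAM.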

So small models it is. Sure, the bigger models are better, but because the improvements come so fast, I'm only 6 months to a year behind the big ones at any time. Is that worth $20k? For me, no.


The small model only needs to get as good as the big model is today, not as good as the big model will be in the future.


There's a niche for small-and-cheap, especially if they're fast.

I was surprised in the AlphaEvolve paper by how much they relied on the Flash model, because they were optimizing for the speed of generating ideas.


Not really true. Gemma from Google with quantization-aware training does an amazing job.

Under the hood, the way it works is that once you have the final probabilities, it really doesn't matter whether the most likely token is selected with 59% or 75% confidence; in either case it gets selected. If the 59% case gets there with a smaller amount of compute, and that holds across the board for the training set, the model will have similar performance.
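
A minimal sketch of that point in Python, assuming only numpy (the logit values are made up for illustration): quantization noise shifts the probabilities around, but as long as the top token stays on top, greedy decoding picks the exact same token.

    import numpy as np

    def softmax(x):
        e = np.exp(x - x.max())
        return e / e.sum()

    # Hypothetical logits for the same position from a full-precision model
    # and from a quantized version of it (values are illustrative only).
    full_logits  = np.array([2.30, 0.70, 0.20])
    quant_logits = np.array([1.65, 0.80, 0.30])

    print(softmax(full_logits))   # top token at roughly 75%
    print(softmax(quant_logits))  # top token at roughly 59%
    print(full_logits.argmax() == quant_logits.argmax())  # True: same token gets selected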

In theory, it should be possible to shrink models even further while matching the performance of big models, because I really doubt you need the full transformer for every single forward pass. There are probably plenty of compute shortcuts you can take for certain sets of tokens in the context. For example, code structure is much more deterministic than natural text, so you probably don't need as much compute to generate accurate code.

You do need a big model first to train a small model, though.

As for running huge models locally, it's not enough just to run them; you need good throughput as well. If you spend $2k on a graphics card, that's way more expensive than realistic usage of a paid API, and the output is slower as well.


> Small models can never match bigger models; the bigger models just know more and are smarter

Untrue. The big important issue for LLMs is hallucination, and making your model bigger does little to solve it.

Increasing model size is a technological dead end. The advanced LLM of the future won't be built that way.


> and now for LLMs I find them on Reddit. Tons of folks are running huge models

Very interesting. Any subs or threads you could recommend/link to?

Thanks


Join us at r/LocalLLaMA.


Basically, just run Ollama with the quantized models. Don't expect high generation speeds, though.
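
For example, a minimal sketch using the ollama Python client (assuming the Ollama server is running locally; the model tag below is just an example of a quantized build, so substitute whatever you've pulled):

    # pip install ollama
    import ollama

    response = ollama.chat(
        model="llama3.1:8b-instruct-q4_K_M",  # example quantized tag, not a recommendation
        messages=[{"role": "user", "content": "Summarize the tradeoffs of running quantized local models."}],
    )
    print(response["message"]["content"])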


Which subreddits do you recommend?



