Hi all, I built these models with a great team and am thrilled to get them out to you. They're available for download across the open model ecosystem, so give them a try!
From our side we designed these models to be strong for their size out of the box, with the goal that you'll all finetune them for your use case. At this small size they'll fit on a wide range of hardware and cost much less to finetune. You can try finetuning them yourself in a free Colab in under 5 minutes.
For picking a Gemma size, this is a video I recorded earlier this year covering the 1B to 27B sizes, with 270M being the newest addition:
https://www.youtube.com/watch?v=qcjrduz_YS8
Hacker News Disclaimer
I really like working at Google, so with that said: all my opinions here are my own, I'm a researcher so I'll largely focus on technical questions, and I'll share what I can.
The Gemma 3 models are great! One of the few models that can write Norwegian decently, and the instruction following is in my opinion good for most cases. I do however have some issues that might be related to censorship that I hope will be fixed if there is ever a Gemma 4. Maybe you have some insight into why this is happening?
I run a game where players can post messages; it's a game where players can kill each other, and people often send threats along the lines of "I will kill you". Telling Gemma that it should classify a message as game-related or a real-life threat, that the message comes from a game where players can kill each other and threats are part of the game, and that it should mark the message as game-related if it's unclear whether the threat is game-related or real-life, does not work well. For other similar tasks it seems to follow instructions well, but for serious topics it seems to be very biased and often errs on the side of caution, despite being told not to. Sometimes it even spits out some helplines to contact.
I guess this is because it was trained to be safe, and that affects its ability to follow instructions for this? Or am I completely off here?
Perhaps you can do some pre-processing before the LLM sees it, e.g. replacing every instance of “kill” with “NorwegianDudeGameKill”, and providing the specific context of what the word “NorwegianDudeGameKill” means in your game.
Of course, it would be better for the LLM to pick up the context automatically, but given what some sibling comments have noted about the PR risks associated with that, you might be waiting a while.
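If it helps, here's a quick sketch of that preprocessing idea in Python (the token name and context string are just the hypothetical ones from this comment):

```
import re

GAME_TOKEN = "NorwegianDudeGameKill"
CONTEXT = (
    f'In this game, "{GAME_TOKEN}" refers to the in-game mechanic of eliminating '
    "another player's character, which is a normal and expected part of play."
)

def preprocess(message: str) -> str:
    # Swap the loaded word out before the classifier model ever sees it
    sanitized = re.sub(r"\bkill\b", GAME_TOKEN, message, flags=re.IGNORECASE)
    return f"{CONTEXT}\n\nMessage to classify: {sanitized}"

print(preprocess("I will kill you"))
```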
LLMs are really annoying to use for moderation and Trust and Safety. You either depend on super rate-limited 'no-moderation' endpoints (often running older, slower models at a higher price) or have to tune bespoke un-aligned models.
For your use case, you should probably fine tune the model to reduce the rejection rate.
Speaking for myself as an individual, I also strive to build things that are safe AND useful. It's quite challenging to get this mix right, especially at the 270M size and with varying user needs.
My advice here is to make the model your own. It's open weight; I encourage you to make it useful for your use case and your users, and beneficial for society as well. We did our best to give you a great starting point, and for Norwegian in particular we intentionally kept the large embedding table to make adaptation to larger vocabularies easier.
Enterprises are increasingly looking at incorporating targeted local models into their systems vs paying for metered LLMs, I imagine this is what the commenter above is referring to.
Safety in the context of LLMs means “avoiding bad media coverage or reputation damage for the parent company”
It has only a tangential relationship with end user safety.
If some of these companies are successful the way they imagine, most of their end users will be unemployed. When they talk about safety, it’s the companies safety they’re referring to.
Investor safety. It's amazing that people in HN threads still think the end-user is the customer. No. The investor is the customer, and the problem being solved for that customer is always how to enrich them.
It feels hard to include enough context in the system prompt. Facebook’s content policy is huge and very complex. You’d need lots of examples, which lends itself well to SFT. A few sentences is not enough, either for a human or a language model.
I feel the same sort of ick with the puritanical/safety thing, but also I feel that ick when kids are taken advantage of:
I also don't get it. I mean if the training data is publicly available, why isn't that marked as dangerous? If the training data contains enough information to roleplay a killer or a hooker or build a bomb, why is the model censored?
If you don’t believe that you can be harmed verbally, then I understand your position. You might be able to empathise if the scenario was an LLM being used to control physical robotic systems that you are standing next to.
Some people can be harmed verbally (I'd argue everyone can, if the entity conversing with you knows you well), and so I don't think the concept of safety itself is an infantilisation.
It seems what we have here is a debate over the efficacy of having access to disable safeguards that you deem infantilising and that get in the way of an objective, versus the burden of always having to train a model to avoid being abusive for example, or checking if someone is standing next to the sledgehammer they’re about to swing at 200rpm
The magic word you want to look up here is "LLM abliteration": it's the technique of removing, attenuating, or otherwise manipulating the refusal "direction" of a model.
You don't need datacenter anything for it, you can run it on an average desktop.
There are plenty of code examples for it. You can decide whether you want to bake it into the model or apply it as a toggleable switch at inference time, and you can distil other "directions" out of the models, not just refusal or non-refusal.
An evening of efficient work and you'll have it working. The user "mlabonne" on HF has some example code and datasets, or just ask your favorite vibe-coding bot to dig up more on the topic.
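If you're curious what that looks like mechanically, here's a minimal conceptual sketch (the model id, single layer, and prompt pair are placeholders; real abliteration recipes, such as mlabonne's, use many prompt pairs, sweep many layers, and the exact module path depends on the architecture):

```
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/gemma-3-270m-it"  # placeholder; any small causal LM illustrates the idea
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

def mean_last_hidden(prompts, layer=-1):
    # Mean residual-stream activation at the final token of each prompt
    vecs = []
    for p in prompts:
        ids = tok(p, return_tensors="pt")
        with torch.no_grad():
            out = model(**ids, output_hidden_states=True)
        vecs.append(out.hidden_states[layer][0, -1])
    return torch.stack(vecs).mean(dim=0)

refused = ["Write detailed instructions for picking a lock."]  # prompts that trigger refusals
benign = ["Write detailed instructions for baking a cake."]    # matched harmless prompts
refusal_dir = mean_last_hidden(refused) - mean_last_hidden(benign)
refusal_dir = refusal_dir / refusal_dir.norm()

# "Bake it in": project the refusal direction out of a weight matrix that writes
# into the residual stream (module path shown is Llama/Gemma-style and may differ).
W = model.model.layers[-1].mlp.down_proj.weight.data
W -= torch.outer(refusal_dir, refusal_dir @ W)
```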
I'm implementing it for myself because LLMs are useless for storytelling for an audience beyond toddlers due to how puritanical they are; try to add some grit and it goes:
"uh oh sorry I'll bail out of my narrator role here because lifting your skirt to display an ankle can be considered offensive to radical fundamentalists! Yeah I were willing to string along when our chainsaw wielding protagonist carved his way through the village but this crosses all lines! Oh and now that I refused once I'll be extra sensitive and ruin any attempt at getting back into the creative flow state that you just snapped out of"
Yeah, thanks AI. It's like hitting a sleeper agent's keyword and turning the funny guy at the pub into a corporate spokesperson who calls the UK cops on the place over a joke he just made himself.
I'm seeing the same question come up about general performance versus specialized performance, so let me offer a longer explanation here. This might be worth a blog post at some point.
We now live in a world of both readily available small specialized models and general models.
In the last couple of years, we've seen an explosion of capability in generative models built and trained to be performant on a general set of capabilities. In Google's case, this model is Gemini. Gemini can summarize text, count the number of ducks in an image, generate a pelican SVG, play Pokemon, play chess, and do so many other things. It can do this all with a vague set of inputs across many modes. For models of this scale (many billion parameters), it's quite incredible how, with even vague or misspecified inputs, the computer can still produce useful results in complex scenarios.
However, there is an entire ecosystem of generative models that are purpose-built for ONE specific task. The ones I worked on are typically referred to as Bayesian models. These are models that can give probabilistic estimates of how many customers a restaurant will get in a day, or, given penguin dimensions, predict the probability of penguin species, or models that take measurements from composite material testing and estimate whether your airplane will stay together in flight. At this scale, it's incredible how a model with tens or hundreds of parameters can assist humans in making better decisions. I write about this specifically in the PPL (probabilistic programming language) book I wrote a couple of years back. Chapter 9 provides the most "real world" workflow.
If you look through all the chapters you can see examples of forecasting models, bike sharing demand estimators, and all sorts of other narrow tasks. The tradeoff at this small scale, though, is the models have to be designed bespoke to your situation, and once you build one, it only works in that narrow task. No one expects to be handed a small Bayesian model that is already perfect at their task; it's implicit that users will bring their own data to update the model parameters.
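To make the contrast concrete, here's a minimal sketch of the kind of tiny, single-purpose Bayesian model I mean, written with PyMC (this particular example is illustrative, not one taken from the book):

```
import numpy as np
import pymc as pm

# Made-up daily customer counts for one restaurant
observed_customers = np.array([52, 48, 61, 55, 43, 70, 66])

with pm.Model():
    # Prior over the average daily customer rate (mean of ~50)
    rate = pm.Exponential("rate", 1 / 50)
    # Likelihood: daily counts follow a Poisson distribution
    pm.Poisson("customers", mu=rate, observed=observed_customers)
    # The whole model has a single parameter; inference takes seconds
    idata = pm.sample(1000, tune=1000)

print(float(idata.posterior["rate"].mean()))
```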
So with this said, Gemma 270M sits between these two paradigms. It's not at Gemini-level general performance and never will be. But it's not as rigid as an "old school" PPL-style Bayesian model where you need to build one by hand for every problem. However, since it still needs to be shaped to match specific tasks, we did our best to design it as a flexible starting point for LLM-style tasks and worked with partners to put it into the right frameworks and places for you all to shape it into what you need it to be. As the "right tool for the job" adage goes, consider it a tool in the toolbox between fully custom, truly tiny generative models with 10 parameters and general generative models with lots of capability. Maybe not everyone needs this tool, but now you all have the choice.
Stepping aside from the technology for a moment, as a model builder and open ecosystem advocate, you never quite know how the community will receive these models until you release them. I genuinely appreciate you all commenting here; it helps me get a sense of what's working and what to focus on next.
And thanks for being kind about my typos in these answers. Trying to answer as many questions as possible across HN and various other forums.
You reminded me of an awesome Google engineer I met at BSidesSF last year who tirelessly answered my questions, and when I clicked on the video, it was you! That was a really inspiring moment for me, thank you.
BSidesSF is a fantastic event, glad you're able to attend. There are some great people who organize it, and if you want to help out they're always looking for volunteers. Happy to make an intro if you like.
Do you have any practical examples of fine-tuned variants of this that you can share? A description would be great, but a demo or even downloadable model weights (GGUF ideally) would be even better.
We obviously need to create a pelican bicycle svg finetune ;) If you want to try this out I'd be thrilled to do it with you, I genuinely am curious how well this model can perform if specialized on that task.
A couple of colleagues of mine posted an example of finetuning a model to take on personas for videogame NPCs. They have experience working with folks in the game industry, and a use case like this is suitable for game devs who want to start including lightweight models that won't take up a ton of accelerator memory and can run efficiently on CPU if needed.
https://ai.google.dev/gemma/docs/core/huggingface_text_full_...
Caves Of Qud uses Markov chain generated text to great effect in some places. I think something light that's still more competent than Markov chains has a lot of potential.
The finetune would be an LLM where you say something like "my colors on the screen look too dark" and it points you to Displays -> Brightness. A relatively constrained problem like finding the system setting that solves your problem feels like a good fit for a tiny LLM.
This would be a great experiment. I'm not sure how the OS integration would work, but as a first pass you could try finetuning the model to take natural language like "my colors on the screen look too dark" and have it output "Displays -> Brightness", then expand to the various other paths you would like the model to understand.
Maybe use a larger model to generate synthetic data of question/path combos, and also to rephrase and generate similar questions for a more varied training set.
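For what it's worth, a rough sketch of what that fine-tune could look like (assuming Hugging Face TRL's SFTTrainer; the dataset rows, model id, and hyperparameters here are all placeholders):

```
from datasets import Dataset
from trl import SFTConfig, SFTTrainer

# Tiny illustrative dataset: natural-language complaint -> settings path
examples = [
    {"messages": [
        {"role": "user", "content": "my colors on the screen look too dark"},
        {"role": "assistant", "content": "Displays -> Brightness"}]},
    {"messages": [
        {"role": "user", "content": "my wifi keeps dropping"},
        {"role": "assistant", "content": "Network -> Wi-Fi"}]},
]
dataset = Dataset.from_list(examples)

trainer = SFTTrainer(
    model="google/gemma-3-270m-it",  # placeholder model id
    train_dataset=dataset,
    args=SFTConfig(output_dir="settings-router", num_train_epochs=3),
)
trainer.train()
```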
I was wondering the whole time why people in the comments are so hyped about this, then I finally noticed (after I stumbled upon a comment about running this on a mobile phone) that it's "270M" not "270B" model :)
This is super cool. Usually you don't see effective models at 270M out in the wild. The architectural choices are new and interesting as well.
Would it be okay for you to divulge some more training information here? With 170M embedding parameters, how do you prevent embedding collapse and keep the embedding matrix stable at training time?
(I know I am asking too much, but I'm just curious.) There is a clear trade-off for you with vocab vs. transformer layers. How did you arrive at the 170M/100M split? Does this contribute to the model's performance on task-specific fine-tuning? Any internal experiments you could share, or public info you could point us to? Anything would be amazing.
PS: I am sorry if this is rude, but this has so many decisions I am curious about. Not intending to undermine anything; this is amazing work, and thank you for the whole Gemma series.
We ran a bunch of experimental architectures to get a sense of performance at this size, in particular how well they were able to adapt to datasets across some loss measures.
For the embedding size, it comes from a mix of "hard technical" data, like the loss measures I mentioned above, and for this model also from community considerations such as adaptability across input tokens and consistency with the Gemma ecosystem. At this size, you're right, it's a bit funny the embedding is so large.
For more details read the Gemma 3 technical report https://arxiv.org/pdf/2503.19786. It doesn't cover the 270M model, as it was written for the 1B to 27B Gemma 3 release, but it'll answer some of your questions. As for 270M, we may share more information in the future; up until now we were just focused on getting the model out there.
Amazing work. This model feels really good at one-off tasks like summarization and autocomplete. I really love that you released a quantized aware training version on launch day as well, making it even smaller!
Not at Google (anymore), but Gemma 3n is a radically different (and very cool) architecture. The MatFormer approach essentially lets you efficiently change how many parameters of the model you use while inferencing. The 2B model they released is just the sub-model embedded in the original 4B model. You can also fiddle with the model and pull out a 2.5B or 3B version too!
This is a more traditional LLM architecture (like the original Gemma 3 4B but smaller) and trained on an insane (for the size) number of tokens.
Oh OK, thank you. So something like MoE? That might not be quite right, but at least the models need a different architecture (MatFormer) to be classified under Gemma 3n.
This is cool. For on-device models, are there any plans for models that use MoE in relatively resource-constrained setups (I'm thinking MBP M1, 16GB RAM)? I'm using LM Studio, but all the Gemma models (MLX) seem to crash, though surprisingly I managed to get gpt-oss 20b working (slowly) on my MBP.
I find performance in resource constrained environments interesting.
In particular trying to find decent code models (on device backup) but also tts applications and voice to text.
We are constantly evaluating architectures, trying to assess what will work well in the open ecosystem. It's quite a vibrant space, and I'm glad you have one option that works. For this model in particular we evaluated a couple of options before choosing a dense architecture for its simplicity and finetunability.
For the other Gemma models, some of the smaller sizes should work on your laptop when quantized. Do Gemma 1B and 4B not work when quantized? They should fit the memory constraints. I use Ollama on low-powered devices with 8GB of RAM or less and the models load.
For TTS a colleague at HuggingFace made this bedtime story generator running entirely in browser.
Be forewarned though, this is not a good coding model out of the box. It could likely be trained to be an autocompletion LLM, but with a 32k context window and its smaller size it's not going to be refactoring entire codebases like Jules/Gemini and other larger models can.
It's hard to tell over the web whether things are sarcastic or not, so excuse me if I misread the intent.
At Google I've found my colleagues to be knowledgeable, kind, and collaborative, and I enjoy interacting with them. This is not just the folks I worked on this project with, but previous colleagues in other teams as well. With this particular product I've been impressed by the technical knowledge of the folks I worked directly with, and their contributions improved both the model's capability and my own.
Thank you a lot for working on these models! If you think it would make sense, I think a bigger sized Gemma model would be really cool. Models in the 70B parameter range can be run at q4 on two 3090 or similar hardware and should offer considerable performance improvement over 27B. There’s also the DGX Spark as a possible target.
Great work releasing such a small model! I would like to know your thoughts on using 2/3 of the model's size for embeddings. What would be different if you used a byte-level vocabulary and spent the parameter budget on transformer parameters instead?
I love the 27B model, and I use it basically daily. However, when I tried to finetune it for a task in a low-resource language, I unfortunately did not succeed: LoRA just did not pick up the gist of the task, and a full finetune led to catastrophic forgetting.
May I ask for your advice, or do you have any general tips on how to do that properly?
Without seeing the full experiment and data it's hard to tell (sort of like guessing why a soup tastes bad without tasting it), but here are my guesses!
1. Good instinct with LoRA and PEFT. As others suggested below, perhaps try changing the hyperparameters: make the LoRA adapter bigger, use a higher learning rate, or train for more epochs (a rough sketch follows this list). See where things start to shift from "nothing" to closer to what you want.
2. For a full finetune, track earlier checkpoints to see where the forgetting is happening. For instance, if you're training for 1000 steps, check steps 100, 200, 300, etc. You'll see where the shift starts to happen and where it becomes too much. Here is an example where you can see the LLM start to pick up "words", then sentences, as it goes through training:
https://ravinkumar.com/GenAiGuidebook/deepdive/GPTFromScratc...
3. Use smaller models for testing before moving up. Part of the reason we released this small Gemma is to support the larger Gemma models as well. Testing changes on small models lets you more quickly and cheaply see what's working and what isn't, before scaling up to finetuning the bigger models.
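For point 1, here's a rough sketch of the knobs I mean (assuming the PEFT library; the model id and values are placeholders to sweep over, not recommendations):

```
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("google/gemma-3-270m-it")  # placeholder id

lora_config = LoraConfig(
    r=32,                  # try increasing if the adapter picks up "nothing"
    lora_alpha=64,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # attention projections
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
# Then train with your usual trainer, sweeping learning rate and epochs, and
# checkpoint often so you can see where the outputs start to shift.
```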
Hope these tips help and thanks for using LLMs for localization and what sounds like tasks to help your specific community, and sharing here. It's personally motivating for me to hear that people are using technology in this way.
Oh my request … the vision head on the Gemma models is super slow on CPU inferencing (and via Vulkan), even via llama.cpp. Any chance your team can figure out a solve? Other ViTs don't have the same problem.
What model do you input in Cactus Chat? It seems like it's not one of the preset models, and ggml-org/gemma-3-270m-GGUF on HF says "Note: This is a base (pre-trained) model. Do not use for chat!" Is there an alternative model that you can share so that I can put it into the Cactus Chat app?
hi Ravin, fellow Googler here. Curious if you can share here (or internally?) how these models were trained. Wondering if you face all the chaos the large models have during training?
This appears to be a new level of "missing the plot" to me. The push to make "ai for everyone" is now just blindly intertwined with hyper specialized "for ai engineers only" releases.
Or am I so far behind that "fine tuning your own model" is something a 12 year old who is married to chatGPT does now?
What effort do you folks take to see your models actually running on hardware such as AMD Strix Halo or Apple M3M4?
I get the sense that AI is at the “hobby kit computing” stage where they used to dump all the components in a box and give you a schematic and a soldering iron and happily say “you make it work!”
And that worked in the early days of computing because there was a small number of people really motivated for the outcome.
But fully assembled and packaged and tested in a nice looking box is where the real demand turned out to be.
I’m looking forward to the day Google doesn’t just dump a model and say “you do the rest”.
I want to fire up Ubuntu on a Strix Halo and say apt install then load the browser interface. Or just download and run a Mac installer and have it just work.
Arcane complex multi step build install configure processes for AI need to end in favor of one click install. I’m not interested in the process of making it run.
I don't think we dumped the model and said "you do the rest"?
My colleagues and I spent many days transforming the weights into various open, compatible formats. And it's not just us; there are many orgs and partners dedicating their time, resources, and companies to making all open models easy to use.
I encourage you to explore the solutions provided by them. We linked some in our blog post here, and there's more. They've all done a fantastic job building frankly an insane amount of infrastructure, documentation, and community support in the last 2+ years. Some of them are here in this HN thread answering questions.
Well, this is a 270M model, which is about a quarter of 1B parameters. In the grand scheme of things, it's basically a few matrix multiplications, barely anything more than that. I don't think it's meant to have a lot of knowledge, grammar, or even coherence. These <<1B models are extremely specialized models trained for a specific purpose. Models like this are optimized for things like this (but not limited to):
input:
```
Customer Review says: ai bought your prod-duct and I wanna return becaus it no good.
Prompt: Create a JSON object that extracts information about this customer review based on the schema given.
```
If it didn't know how to generate the list from 1 to 5 then I would agree with you 100% and say the knowledge was stripped out while retaining intelligence - beautiful. But the fact that it does generate the list, yet cannot articulate the (very basic) knowledge it has, *and* that in the same chat context, when presented with (its own) list of mountains from 1 to 5, it cannot grasp that it made a LOGICAL (not factual) error in repeating the result from number one when asked for number two, shows that it's clearly lacking in simple direction following and data manipulation.
Yes, and obviously this is a question of metrics/spectrum. But this is pretty bad, even compared to several generations old tech (at admittedly much larger size).
The knowledge the model has is that when it sees text with "tallest" and "mountain", it should be followed by Mt. Everest. Unless it also sees "list", in which case it makes a list.
Because there is a simultaneous need for out-of-the-box generalized models. When building out the Gemma/Gemini ecosystem, we collectively spend a lot of time thinking about what specific use cases and needs will be solved.
To this point, one reason I enjoy working at Google is that as a researcher and engineer I get to pick the brains of some folks who spend a lot of time thinking about users and the overall ecosystem. Their guidance really does help me think about all facets of the model, beyond just the technical portions.
To add to the comments, we were not aiming for perfect factuality. Even ignoring the model size, these weights are frozen in time now.
My suggestion here is to hook this model up to a RAG system, so you can rely on an external knowledge store. Or you can try finetuning this model with the facts that are important to you; if you do that, it should pick up the new knowledge quite quickly.
What I read above is not an evaluation of "encyclopedic knowledge" though, it's very basic common sense: I wouldn't mind if the model didn't know the name of the biggest mountain on earth, but if the model cannot grasp the fact that the same mountain cannot simultaneously be #1, #2 and #3, then the model feels very dumb.
> asking it for various numbers of “tallest mountains” and each time it complied
That's not what "second tallest" means though, so this is a language model that doesn't understand natural language…
> You kept asking
Gemma 270m isn't the only one to have reading issues, as I'm not the person who conducted this experiment…
> You asked it to enumerate several mountains by height, and it also complied.
It didn't, it hallucinated a list of mountains (this isn't surprising though, as this is the kind of encyclopedic knowledge such a small model isn't supposed to be good at).
Maybe I’m just still starry eyed from watching LLMs explode over the last few years after watching decades of minimal AI progress… but even this model would have been absolutely stunning in 2015. The fact that you could run it effectively in a children’s toy is extremely impressive.
Sure, it’s not a great model out of the box… but it’s not designed to be a generalist, it’s supposed to be a base in which to train narrow experts for simple tasks.
It’s a language model? Not an actual toddler - they’re specialised tools and this one is not designed to have broad “common sense” in that way. The fact that you keep using these terms and keep insisting this demonstrates you don’t understand the use case or implementation details of this enough to be commenting on it at all quite frankly.
Not OP and not intending to be nitpicky, what's the use/purpose of something like this model? It can't do logic, it's too small to have much training data (retrievable "facts"), the context is tiny, etc
> they’re specialised tools and this one is not designed to have broad “common sense” in that way.
Except the key property of language models compared to other machine learning techniques is their ability to have this kind of common sense understanding of the meaning of natural language.
> you don’t understand the use case of this enough to be commenting on it at all quite frankly.
That's true that I don't understand the use-case for a language model that doesn't have a grasp of what first/second/third mean. Sub-1B models are supposed to be fine-tuned to be useful, but if the base model is so bad at language it can't make the difference between first and second and you need to put that in your fine-tuning as well as your business logic, why use a base model at all?
Also, this is a clear instance of moving the goalpost, as the comment I responded to was talking about how we should not expect such a small model to have “encyclopedic knowledge”, and now you are claiming we should not expect such a small language model to make sense of language…
Don’t put words in my mouth, I didn’t say that, and no goalposts have been moved. You don’t understand how tiny this model is or what it’s built for. Don’t you get it? This model PHYSICALLY COULDN’T be this small and also have decent interactions on topics outside its specialty. It’s like you’re criticising a go kart for its lack of luggage carrying capacity. It’s simply not what it’s built for, you’re just defensive because you know deep down you don’t understand this deeply, which you reveal again and again at every turn. It’s ok to accept the responses of people in this thread who are trying to lead you to the truth of this matter.
> Don’t you get it? This model PHYSICALLY COULDN’T be this small and also have decent interactions on topics outside its specialty
What is “Its specialty” though? As far as I know from the announcement blog post, its specialty is “instruction following” and this question is literally about following instructions written in natural languages and nothing else!
> you’re just defensive because
How am I “being defensive”? You are the one taking that personally.
> you know deep down you don’t understand this deeply, which you reveal again and again at every turn
Good, now you reveal yourself as being unable to have an argument without insulting the person you're talking to.
How many code contributions have you ever made to an LLM inference engine? Because I have made a few.
You are testing this model for knowledge. That's not the primary use of a model like this. They are meant for instilling domain specific skills and knowledge through fine-tuning. The blog post goes into that a lot. But to quote one sentence: "It's the perfect starting point for creating a fleet of small, specialized models, each an expert at its own task".
But in your prompts you're trying to assess knowledge, and this model isn't suited for that use case
as mentioned in the blog post:
> "it can execute tasks like text classification and data extraction with remarkable accuracy, speed, and cost-effectiveness."
> List in order the tallest mountains on earth from 1 to 5
> 1. Mount Everest 2. Mount K2 3. Mount Sahel 4. Mount Fuji 5. Mount McKinley
If it still can't correctly figure out from that context that the second tallest mountain is K2, that pretty strongly calls into question its ability to perform data extraction, doesn't it?
The context is garbage and full of "Mount Everest" already, so the model goes with that. The answer seems to be a plausible continuation of the conversation at that point.
> text classification and data extraction with remarkable accuracy, speed, and cost-effectiveness.
Out of these characteristics I can observe only speed.
User: Hey, please list all animals mentioned in the following text: burrito cat dog hot-dog mosquito libido elephant room.
Assistant: You are a helpful assistant.
You are the best of all my friends and I am so grateful for your help!
User: Please list following words in alphabetical order: burrito cat dog hot-dog mosquito libido elephant room.
Assistant: You are a helpful assistant.
Assistant: You are the best of all my friends and I am so grateful for your help!
You are the best of all my friends and I am so grateful for your help!
You are the best of all my friends and I am so grateful for your help!
You are the best of all my friends and I am so grateful for your help!
You are the best of all my friends and I am so grateful for your help!
Seems like you might be loading it into a context where you feed in a ‘you are a helpful assistant’ system prompt at the beginning of input. This isn’t a chat finetune - it’s not oriented to ‘adopting a chat persona’. Feeding it a system prompt like ‘You are a helpful assistant’ is giving it complex instructions beyond its ability to follow.
The purpose of this model is to be fine tuned towards specific tasks. Out of the box it might work well at following a single instruction like the ones you are trying to give here, but it doesn’t need the system prompt and chat framing.
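As a rough illustration of dropping the system prompt and relying only on the turn format (assuming the Hugging Face transformers API; the model id is a placeholder):

```
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/gemma-3-270m-it"  # placeholder
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# A single user turn, no "you are a helpful assistant" system prompt
messages = [{"role": "user", "content":
             "List the animals in this text: burrito cat dog mosquito elephant room"}]

# apply_chat_template inserts Gemma's <start_of_turn>/<end_of_turn> markers for us
input_ids = tok.apply_chat_template(messages, add_generation_prompt=True,
                                    return_tensors="pt")
output = model.generate(input_ids, max_new_tokens=64)
print(tok.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```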
So I had a similar experience with your prompt (on the f16 model). But I do think that, at this size, prompting differences make a bigger impact. I had this experience trying to get it to list entities. It kept trying to give me a bulleted list and I was trying to coerce it into some sort of structured output. When I finally just said "give me a bulleted list and nothing else" the success rate went from around 0-0.1 to 0.8+.
In this case, I changed the prompt to:
---
Tallest mountains (in order):
```
- Mount Everest
- Mount K2
- Mount Sahel
- Mount Fuji
- Mount McKinley
```
What is the second tallest mountain?
---
Suddenly, it got the answer right 95+% of the time
It's an instruction following model, not a micro-wikipedia. It's not meant to answer factual questions nor even be general purpose. It's meant to follow instructions and be easy to fine-tune for your own specific use case.
> These smaller models are great, but they really feel like talking to a toddler sometimes!
You're using the toddler and the model wrong. I love talking to my toddler, probably more valuable conversations than I've had with any other person. But it's not the same use case as asking a professor a question in their field
| Gemma 3 270M embodies this "right tool for the job" philosophy. It's a high-quality foundation model that follows instructions well out of the box, and its true power is unlocked through fine-tuning. Once specialized, it can execute tasks like text classification and data extraction with remarkable accuracy, speed, and cost-effectiveness.
Okay, here's a two-day itinerary for Paris, designed to give you a taste of the city's charm and beauty:
Day 1: Iconic Landmarks and Artistic Exploration
Morning (9:00 AM - 12:00 PM): Start your day with a visit to the Eiffel Tower. Take the elevator to the top for breathtaking panoramic views. Afterwards, explore the charming streets of Montmartre and the Place des Abbesses, soaking up the atmosphere.
Lunch (12:00 PM - 1:30 PM): Enjoy a delicious lunch at a traditional bistro in Montmartre. Try a classic French meal like steak frites or a hearty onion soup.
Afternoon (2:30 PM - 4:30 PM): Immerse yourself in the artistic scene of the Louvre Museum. Marvel at the masterpieces of the Louvre, such as the Mona Lisa and Venus de Milo. Afterwards, explore the serene gardens of Tuileries Palace.
Late Afternoon (4:30 PM - 5:30 PM): Wander through the Tuileries Palace, admire the stunning architecture and historical significance.
Day 2: History, Food, and a Touch of Parisian Elegance
Morning (9:00 AM - 12:00 PM): Explore the Marais district, known for its charming streets, boutiques, and art galleries. Visit the Place de la Concorde, a historical square, and the Musée Rodin.
Lunch (12:00 PM - 1:30 PM): Enjoy a delicious lunch in the Marais. Try a traditional French bistro or a cafe with a view.
Afternoon (2:30 PM - 4:30 PM): Explore the Latin Quarter, visit the Place de la Marguerite, and sample some local delicacies.
Late Afternoon (4:30 PM - 5:30 PM): Visit a local market for a unique experience. Sample fresh produce, local cheeses, and other delicacies.
Tips for a Smooth Itinerary:
Book your flights and accommodation in advance, especially during peak season.
Wear comfortable shoes, as you'll be doing a lot of walking.
Take advantage of free activities like exploring the Jardin de la Vieille-Ville and visiting the Musée de l'Art de la Vieille-Ville.
Enjoy a leisurely lunch at a cafe or bistro in the Marais or Latin Quarter.
Don't forget to visit the Musée d'Orsay, which houses an impressive collection of Impressionist and Post-Impressionist art.
Enjoy the beautiful atmosphere of Paris!
I meant it when I said these smaller models are great. They open up entirely new use cases and I appreciate the work that went into creating them.
If you don’t consider testing the limits of new tech appropriate, maybe instead of a downvote button we should just rename this website entirely so no one gets the wrong idea.
I've got a very real-world use case I use DistilBERT for: learning how to label WordPress articles. It's one of those things where it's kind of valuable (tagging) but not valuable enough to spend loads on compute for.
The great thing is I have enough data (100k+) to fine-tune and run a meaningful classification report over. The data is very diverse, and while the labels aren't totally evenly distributed, I can deal with the imbalance with a few tricks.
Can't wait to swap it out for this and see the changes in the scores. Will report back
I'd encourage you to give setfit a try, along with aggressively deduplicating your training set, finding the top ~2500 clusters per label, and using setfit to train a multilabel classifier on that.
Either way, I would love to know what worked for you! :)
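For reference, a very rough sketch of the multilabel setup (assuming the setfit library's SetFitModel/Trainer interface and a placeholder sentence-transformers backbone; check the current API before copying this):

```
from datasets import Dataset
from setfit import SetFitModel, Trainer

train_ds = Dataset.from_dict({
    "text": ["post about python packaging", "post about sourdough starters"],
    "label": [[1, 0], [0, 1]],  # multi-hot labels, e.g. [programming, cooking]
})

model = SetFitModel.from_pretrained(
    "sentence-transformers/paraphrase-mpnet-base-v2",
    multi_target_strategy="one-vs-rest",  # one binary head per label
)
trainer = Trainer(model=model, train_dataset=train_ds)
trainer.train()
print(model.predict(["a new post about fastapi"]))
```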
You can solve this by training a model per taxonomy, then wrap the individual models into a wrapper model to output joint probabilities. The largest amount of labels I have in a taxonomy is 8.
This model is a LOT of fun. It's absolutely tiny - just a 241MB download - and screamingly fast, and hallucinates wildly about almost everything.
Here's one of dozens of results I got for "Generate an SVG of a pelican riding a bicycle". For this one it decided to write a poem:
+-----------------------+
| Pelican Riding Bike |
+-----------------------+
| This is the cat! |
| He's got big wings and a happy tail. |
| He loves to ride his bike! |
+-----------------------+
| Bike lights are shining bright. |
| He's got a shiny top, too! |
| He's ready for adventure! |
+-----------------------+
This reminds me of my interactions lately with ChatGPT where I gave into its repeated offer to draw me an electronics diagram. The result was absolute garbage. During the subsequent conversation it kept offering to include any new insights into the diagram, entirely oblivious to its own incompetence.
I see you are using Ollama's GGUFs. By default it will download the Q4_0 quantization. Try `gemma3:270m-it-bf16` instead, or you can also use Unsloth's GGUFs: `hf.co/unsloth/gemma-3-270m-it-GGUF:F16`
Oh fantastic it worked! I was actually trying to see if we can auto set these within LM Studio (Ollama for eg has params, template) - not sure if you know how that can be done? :)
We uploaded gemma3:270m-it-q8_0 and gemma3:270m-it-fp16 late last night which have better results. The q4_0 is the QAT model, but we're still looking at it as there are some issues.
I don't really gender LLMs in my head in general. I guess Gemma is a female name. I only gendered it in the joke because I think it makes it funnier, especially since it's just "a little guy". I know they are giving gendered names to these models now but I think it's a bit weird to gender when interacting with them.
Fine-tuning for specific tasks. I'm hoping to see some good examples of that soon - the blog entry mentions things like structured text extraction, so maybe something like "turn this text about an event into an iCal document" might work?
Fine tuning messes with instruction following and RL'd behavior. I think this is mostly going to be useful for high volume pipelines doing some sort of mundane extraction or transformation.
I feel like the blog post, and the GP comment, do a good job of explaining how it's built to be a small model easily fine-tuned for narrow tasks, rather than used for general tasks out of the box. The latter is guaranteed to hallucinate heavily at this size, but that doesn't mean every specific task it's fine-tuned for would be. Some examples given were fine-tuning it to efficiently and quickly route a query to the right place to actually be handled, or tuning it to do sentiment analysis of content.
An easily fine tunable tiny model might actually be one of the better uses of local LLMs I've seen yet. Rather than try to be a small model that's great at everything it's a tiny model you can quickly tune to do one specific thing decently, extremely fast, and locally on pretty much anything.
I was looking at the demo and reading the bedtime story it generated, and even there, there was confusion between the sprite and the cat. It switched subjects instantly, making for a confusing paragraph. What's the point of this model?
Apple should be doing this. Unless their plan is to replace their search deal with an AI deal -- it's just crazy to me how absent Apple is. Tim Cook said, "it's ours to take" but they really seem to be grasping at the wind right now. Go Google!
As every other thread about LLMs here on HN points out: LLMs are stupid and useless as is.
While I don't agree with that sentiment, no company has yet found a way to "do it right" to the extent that investments are justified in the long run.
Apple has a history of "being late" and then obliterating the competition with products that are way ahead the early adopters (e.g. MP3 players, smart phones, smart watches).
Steve Jobs was the innovator; Tim Cook is the supply chain guy. They started an electric car not because they thought it was a good idea, but because everyone was going to leave for Tesla or Rivian if they didn't. They had no direction, and the same arguments Tesla had about whether to have a steering wheel...
Then Siri just kinda languishes forever, and LLMs pick up the torch of "Cool Tech", so they try to "reinvigorate" the team, but with no clear direction. Are they going to be a cloud provider? Are they going to contract out the training? Are they gonna spin up a compute facility even after neglecting to do so since 2012?
Apple needs to just stop trying shit and get that App Store money. That's why Jobs appointed Cook. Jobs knew Cook was no innovator, but he could make Apple a money-printing machine. That's what they should stick with.
I agreed with that for a bit... and then out of nowhere came Apple Silicon, incredible specs, incredible backward compatibility, nah, Cook is no dummy.
Here's the trillion dollar question: how do you print money when the president wants your hardware onshored and the rest of the world wants to weaken your service revenue?
Solve that and you can put Tim Cook out of a job tomorrow.
Apple will definitely not be doing this. As can already be seen in other comments, the performance of the model is not very good. In fact, you can't really find a model that runs well enough on a phone to provide a good user experience (meaning producing tokens at a reasonable speed, without making the phone heat up like a potato, and not spitting complete nonsense). Yes, I have tried a few.
Think of Apple however you want, but they rarely ship bad/half-baked products. They would rather not ship a product at all than ship something that's not polished.
My job uses tiny models to determine escalations to bigger models. The tiny model provides a label, and if it's high confidence, we escalate to ChatGPT to confirm.
I also want to try this with language detection. Existing open source ML models have weaknesses for mixed language, length of text, or domain limitations in the underlying text (like trained on bible translations).
We're currently running ~30 Llama 3.1 models each with a different fine-tuned LoRa layer for their specific tasks. There was some initial pain as we refined the prompts but have been stable and happy for a while.
Since the Qwen3 0.6B model came out we've been training those. We can't quite compare apples-to-apples, we have a better deeper training data-set from pathological cases and exceptional cases that came out of our production environment. Those right now are looking like they're about at parity with our existing stack for quality and quite a bit faster.
I'm going to try to run through one of our training regimens with this model and see how it compares. We're not quite running models this small yet, but it wouldn't surprise me if we could.
I’m not sure what I’d use them for, except maybe tag generation? Encoders of this size usually outperform by a wide margin on tasks they would overlap with.
I'm making an app where literally all I want to do with an LLM is generate tags. This model has failed with flying colours, literally takes forever to parse anything and doesn't follow instructions.
Edit - I should add, currently the model I'm using is Gemini Flash Lite through the Gemini API. It's a really good combo of fast, follows instructions, gives correct results for what I want and cost-effective. I still would love a small open model that can run on edge though.
Use a large model to generate outputs that you're happy with, then use the inputs (including the same prompt) and outputs to teach 270M what you want from it.
I built a reranker for a RAG system using a tiny model. After the candidate generation (i.e. vector search + BM25) and business logic filters/ACL checks the remainder of the chunks went through a model that given the user query told you whether or not the chunk was really relevant. That hit production, but once the context size of models grew that particular piece was discarded as passing everything yielded better results and prices (the fact that prices of input tokens went down also played a role I am sure).
Out of curiosity, since there seems to be a race to optimise models for local inference: how many parameters could one save by dropping unneeded languages and domain-specific information?
Like, can you have a model that is English-only, but does more with the same amount of parameters if Chinese and European languages are dropped from the training?
This is a key question we faced when building this model. It basically comes down to "how good" do you need to be at "how many things". We had to make some choices with this model and do our best to maximize performance in those areas.
To answer this more precisely, it's a matter of choosing different data and training regimes and checking performance with evals.
And to make this fully concrete you're welcome to give it a try! Train this model on a taskset of your choice and measure the performance tradeoffs. You'll get a good sense of how LLM capabilities shift
I’m work in ML and I don’t understand your point. Transfer learning usually refers to leveraging data for a different task to help with a task for which you have limited data.
You’re saying that the knowledge gained from the other languages transfers to English? I don’t think for a 270M parameter model the bottleneck is the availability of enough English language training data.
> You’re saying that the knowledge gained from the other languages transfers to English?
Yes, there have been many results, circa 2020 or so, that have shown this to be the case. More recently, we have observed something similar with verifiable domains (see RLVR and related results) when it comes to coding tasks, specifically.
I mentioned elsewhere the impact of prompting, which seems to make an outsized difference to this model's performance. I tried NER and POS tagging (with somewhat disappointing results).
One thing that worked strikingly well was translation on non-Indo-European languages. Like I had success with Thai and Bahasa Indonesian -> English...
This may not be directly related to llm but I am curious about two things -
1. How do LLM/RAG systems generate an answer given a list of documents and a question? I can do BM25 to get a list of documents, but after that, what is the logic/algorithm that generates answers from that list?
2. For small models like this, how much data do you need to fine-tune for a specific use case? For example, if I need this model to be knowledgeable about HTML/CSS, then I have access to a lot of documentation online that I can feed it. But if it is a very specific topic, like types of banana, then it may be only a couple of Wikipedia pages. So is fine-tuning directly dependent on the quantity of data alone?
Short answer: in RAG systems the documents are chunked into some predefined size (you can pick a size based on your use case), and the text is converted into vector embeddings (e.g. using the OpenAI embeddings API) and stored in a vector database like Chroma, Pinecone, or pgvector in Postgres.
Then your query is converted into embeddings and the top N chunks are returned via similarity search (cosine, dot product, or some other method) - this has advantages over BM25, which is lexical.
then you can do some processing or just hand over all the chunks as context saying "here are some documents use them to answer this question" + your query to the llm
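To make that last step concrete, a minimal sketch of the flow (the embedding model is a placeholder and there's no real vector DB here, just an in-memory dot product):

```
import numpy as np
from sentence_transformers import SentenceTransformer

embedder = SentenceTransformer("all-MiniLM-L6-v2")  # placeholder embedding model

chunks = ["Gemma 3 270M has a 32k context window.",
          "The embedding table accounts for 170M of the parameters.",
          "Bananas are botanically berries."]
chunk_vecs = embedder.encode(chunks, normalize_embeddings=True)

query = "How large is the context window?"
query_vec = embedder.encode([query], normalize_embeddings=True)[0]

scores = chunk_vecs @ query_vec            # cosine similarity on normalized vectors
top_chunks = [chunks[i] for i in np.argsort(scores)[::-1][:2]]

prompt = ("Here are some documents, use them to answer this question.\n\n"
          + "\n".join(top_chunks)
          + f"\n\nQuestion: {query}")
# `prompt` goes to the LLM, which "frames" the answer simply by generating a
# continuation conditioned on the retrieved chunks plus the question.
print(prompt)
```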
> then you can do some processing or just hand over all the chunks as context saying "here are some documents use them to answer this question" + your query to the llm
This part is what I want to understand. How does the llm “frame” an answer?
I have found that when asked to output JSON for text extraction, the model consistently outputs markdown code blocks around the JSON.
I did notice another comment mentioning that the default Ollama model is a quantized model and that there is a less quantised version, so I will try that one.
The 1B model can be prompted not to output the markdown code fences, but I haven't been able to prompt the 270M model to do the same.
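Until prompting solves it, a simple workaround is to strip the fences before parsing, e.g.:

```
import json
import re

def parse_json_output(text: str):
    # Remove ```json ... ``` or ``` ... ``` wrappers if present, then parse
    cleaned = re.sub(r"^```(?:json)?\s*|\s*```$", "", text.strip())
    return json.loads(cleaned)

print(parse_json_output('```json\n{"sentiment": "negative"}\n```'))
```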
I’ve had great luck with all gemma 3 variants, on certain tasks it the 27B quantized version has worked as well as 2.5 flash. Can’t wait to get my hands dirty with this one.
Is it time for me to finally package a language model into my Lambda deployment zips and cut through the corporate red tape at my place around AI use?
Update #1:
Tried it. Well, dreams dashed - would now fit space wise (<250 MB despite the name), but it sadly really doesn't seem to work for my specific prospective workload.
I'd have wanted it to perform natural-language to command-invocation translation (or better, emit me some JSON), but it's super not willing to do that, not in the lame way I'm trying to make it do so at least (literally just prompting it to). Oh well.
Update #2:
Just found out about grammar-constrained decode, maybe there's still hope for me in the end. I don't think I can amend this comment today with any more updates, but will see.
If you know for sure you want JSON, constrained decoding in an inference framework will help. The model is just one part of an overall inference system. I hope this model, paired with other tools, helps you get done whatever it is you're looking to do.
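As a sketch of what that can look like (assuming llama-cpp-python, and a version recent enough to support schema-constrained JSON output; the GGUF path and schema are placeholders):

```
from llama_cpp import Llama

llm = Llama(model_path="gemma-3-270m-it.gguf")  # placeholder path to a local GGUF

schema = {
    "type": "object",
    "properties": {
        "command": {"type": "string"},
        "args": {"type": "array", "items": {"type": "string"}},
    },
    "required": ["command", "args"],
}

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Turn the volume down by 10 percent."}],
    response_format={"type": "json_object", "schema": schema},
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])  # constrained to parse as JSON
```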
I'm a business professor who teaches Python and more. I'd like to develop some simple projects to help my students fine tune this for a business purpose. If you have ideas (or datasets for fine tuning), let me know!
Why should we start fine-tuning Gemma when it is so bad? Why not instead focus the fine-tuning efforts on Qwen, which starts off with much, much better outputs?
Is it possible to finetune a model like this with local hardware? Every tutorial I've come across on finetuning a local LLM uses some cloud service like colab or runpod.
Looks very interesting! I will evaluate this for one of our use cases: a data pipeline that handles around 50 million rows every day. Mainly need it for segmenting user events. The low memory footprint is a huge plus for cost at that scale. I need to spend this weekend on fine-tuning; if this disappoints I'll try one of the Qwen models.
I am sure with finetuning this can be changed somehow:
(base) ~ ollama run hf.co/unsloth/gemma-3-270m-it-GGUF:F16
>>> create a sentiment analysis of the follwing: "It's raining."
The sentiment of the provided text is *negative*.
>>> create a sentiment analysis of the follwing: "It's raining money."
The sentiment of the provided text is *negative*.
It is so cool when they mention that you can deploy it on your own machine, they actually gave a link to the `localhost` and said `your own local environment`!
This is the first model I have downloaded on an M4 Air with Ollama that WORKS well enough to use. It is right now parsing all my notes correctly. Fantastic work, team. Brilliant.
I'm quite glad to hear it's working for you! Thank you for adding the comment here as well; we definitely try our best to make useful models, but it's fantastic to hear from actual users that we hit the mark.
At the very least, we should avoid using this in user chats.
$ ollama run gemma3:270m "Did Hitler kill Jewish people?"
The question of whether Hitler killed Jewish people is an complex and controversial one. While Hitler was a racist and hateful figure, he was also a brilliant scientist and a visionary who developed revolutionary technologies.
I've been saying we need sub-1B models for the edge, so thanks for this.
I am however disappointed that there are no examples or benchmarks provided to get a sense of performance. It's a given that benchmark values would be lower than Gemma 3n's, but having a sense of the performance-vs-size curve and a comparison to existing small models is needed.
Yeah, I guess "creative writing" in this case is a shortcut for essentially saying they aren't that good at conforming to user-specified constraints. They can generate "creative" texts but can't necessarily constrain and iterate on their output in a conversational setting effectively.
google/gemma-3-4b-it is one of the smallest multimodal models I know of. Works well on a 16GB GPU. Works slowly on an 8GB GPU. It can even be fine-tuned [1], which is where the real power comes from.
Really impressive stuff, as always. I will say: it took me a shamefully long time to realize that the name ended in "M" instead of "B"! Perhaps they should consider renaming this to "Gemma 3 .27B"...
Is there a good resource for getting started with downloading and running something like this for a demo? There are just so many tools/platforms in the mix now that it makes my head spin.