Gemma, despite being developed by a company worth billions of dollars, is a phenomenally poor model.
I tried the open source release yesterday. I started with the input string "hello" and it responded "I am a new user to this forum and I am looking for 100000000000000..." with zeros repeating forever.
Ok, cool I guess. Looks like I'll be sticking with GPT-4.
The Mistral model I tried when it came out produced "blog posts" as responses. I assume this says something about where those models get much of their training data (please correct me if I'm wrong).
Anyone who uses these models for more than 10 minutes will immediately realize that they're really, really bad compared to other free, OSS models. Even Phi-2 was giving me "on par" results, and it's a model in a completely different (much smaller) league.
Many models are being released now, which is good for keeping OpenAI on their toes, but truth be told, I've yet to see _any_ OSS model I can run on my machine that's as good as ChatGPT 3 (not 3.5, not 4, but the original one from when everyone went crazy).
My hopes for ChatGPT-3.5-level quality on consumer hardware within 2024 probably lie with whatever Meta keeps building.
Google was great, once. Now they're a mere bystander in the larger scheme of things. I think that's a good thing. Everything in the world is cyclic and ephemeral; Google enjoyed their time while it lasted, but newer and better things are coming and will keep on coming.
PS: Completely unrelated, but Gmail is now the only Google product I actively use. I genuinely don't remember the last time I did a Google Search... When I need to do my own digging I use Phind these days.
Times are changing and that's great for tech and future generations joining the field and workforce!
Yi 34B 200K finetunes (like Tess 1.5), DeepSeek Coder 33B, and Miqu 70B definitely outpace ChatGPT-3.5, at least for me.
They don't have the augmentations of being a service, but generally they are smarter, have a bigger context and (perhaps most importantly) are truly unbound.
I am on a single-3090 desktop, for reference. Admittedly, this is much more expensive to build now than it was a few months ago, with the insane prices used 3090s are going for.
Damn, I see. How many tokens per sec do you get on that setup?
On a MacBook M2 I get ~10-12 t/sec, which is a tad too slow for continued/daily use, but if I think it's worth it I might invest in a more powerful machine soon-ish!
On 33B/34B models I get 35 tokens/sec, way faster than I can read as it streams in. At huge contexts (like 30K-74K), prompt processing takes forever and token generation is slower, but it's still faster than I can read.
Miqu 70B is slow (less than 10 tok/sec, I think) because I have to split it between GPU and CPU with llama.cpp. I only use it for short-context questions where I need a bit more intelligence.
And for reference, this is an SFF desktop! It's no MacBook, but still small enough (10L and flat) for me to fly with in carry-on.
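If anyone is curious what that GPU/CPU split looks like in practice, here's a minimal llama-cpp-python sketch of partial offload. The model filename and layer count are placeholders, not the exact setup described above.

```python
# Minimal sketch: splitting a large GGUF model between GPU and CPU with
# llama-cpp-python. Filename and layer count are illustrative only.
from llama_cpp import Llama

llm = Llama(
    model_path="miqu-1-70b.q4_k_m.gguf",  # hypothetical local file
    n_gpu_layers=45,   # offload as many layers as fit in 24GB VRAM; the rest run on CPU
    n_ctx=4096,        # keep the context short, as described above
)

out = llm("Q: Why is GPU/CPU split inference slower than full offload?\nA:", max_tokens=128)
print(out["choices"][0]["text"])
```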
If Mixtral isn't outperforming ChatGPT 3, you're configuring it wrong. It gives somewhat terse answers by default, but you can easily prompt it to spit out the wordy answers ChatGPT has been aligned to prefer.
Mixtral, aka the 8x7B "sparse mixture of experts" one, is not the same as, e.g., Mistral-7B, which is still very, very good, just not quite hitting the mark on some things.
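For example, here's a rough sketch of nudging Mixtral toward longer answers with a system prompt via llama-cpp-python; the model path and prompt wording are just placeholders.

```python
# Sketch: steering Mixtral toward longer, ChatGPT-style answers with a
# system prompt. Model path is a placeholder.
from llama_cpp import Llama

llm = Llama(
    model_path="mixtral-8x7b-instruct-v0.1.q4_k_m.gguf",  # hypothetical local file
    n_ctx=8192,
    n_gpu_layers=-1,  # offload everything if it fits
)

resp = llm.create_chat_completion(
    messages=[
        {"role": "system",
         "content": "Answer thoroughly, with background, examples, and caveats."},
        {"role": "user", "content": "Explain how mixture-of-experts routing works."},
    ],
    max_tokens=512,
)
print(resp["choices"][0]["message"]["content"])
```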
I still couldn't run Mixtral 8x7B on an M1 MacBook Pro with 32GB RAM, so maybe I am indeed doing it wrong? Or are there better quantized versions available now, or...?
But GGUF Mixtral should fit in 32GB... just not with the full 32K context. Long context is very memory-intensive in llama.cpp, at least until they fully implement flash attention and a quantized KV cache.
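A back-of-the-envelope sketch of why the context length matters; the parameter count, bits per weight, and KV-cache dimensions below are approximate Mixtral figures, so treat the numbers as ballpark only.

```python
# Rough memory estimate for a quantized Mixtral GGUF plus its fp16 KV cache
# in llama.cpp. All figures are approximate/assumed.
def gguf_weights_gb(n_params: float, bits_per_weight: float) -> float:
    return n_params * bits_per_weight / 8 / 1e9

def kv_cache_gb(n_layers: int, n_ctx: int, kv_dim: int, bytes_per_elem: int = 2) -> float:
    # K and V caches, one per layer, fp16 by default
    return 2 * n_layers * n_ctx * kv_dim * bytes_per_elem / 1e9

n_params = 46.7e9  # Mixtral 8x7B total parameters (approx.)
weights = gguf_weights_gb(n_params, bits_per_weight=4.5)            # ~Q4_K_M
cache_32k = kv_cache_gb(n_layers=32, n_ctx=32768, kv_dim=1024)      # GQA: 8 KV heads * 128
cache_8k = kv_cache_gb(n_layers=32, n_ctx=8192, kv_dim=1024)

print(f"weights ~{weights:.1f} GB, 32K cache ~{cache_32k:.1f} GB, 8K cache ~{cache_8k:.1f} GB")
# Roughly 26 GB of weights; ~4 GB of cache at 32K vs ~1 GB at 8K, which is
# why trimming n_ctx is what lets it squeeze into 32 GB of unified memory.
```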
> I've yet to see _any_ OSS model that I can run on my machine being as good as ChatGPT 3 (not 3.5, not 4, but the original one from when everyone went crazy).
It depends on your machine I guess, but IMO there are definitely OSS models out there that rival the original ChatGPT offering for certain use cases (Dolphin Mixtral comes to mind). Having a model with RAG capability is going to make a huge difference in the quality of the answers, as well.
Can we just stop talking about Gemini/Gemma for at least two years, until it's improved? In fact, the two-year mark is a rather strategic recommendation, because I guarantee it'll become vaporware by then anyway, given Google's track record. It performs outrageously poorly.
Gemma (and Gemini) are heavily nerfed. Why are they in the news lately?
Also, Gemma is a +9B model. I think it's not okay that Google compared it with Mistral and Llama 2 (7B) models.
Google also took llama.cpp and used it in one of their GitHub repos without giving credit. Again, not cool.
All this hype seems to be backed by Google to boost their models, whereas in practice the models are not that good.
Google also made a big claim about Gemini 1.5's 1M context window, but at the end of their article they said they'll limit it to 128K. So all that 1M flex was for nothing?
Not to mention their absurd approach to alignment in image generation.
It's objectively worse than Mistral in my local tests. Also, their report doesn't include the MT-Bench benchmark, because it's really, really bad at answering follow-up questions (this is also a problem with Ultra). Its reasoning is also pretty bad compared to Mistral.
Yes that's correct. It's 9.3B parameters if you count the embedding layer and final projection layer separately. However, since they used weight tying, the adjusted count is 8.5B as discussed in the article.
Yes, it's definitely unfair to count it as a 7B model. In that case, we could call Llama 2, which is 6.6B parameters, a 6B (or even 5B) parameter model.
- Local models are pretty easy to de-censor, if that's what you mean.
- ...Yeah, it should not be labeled as a 7B. It's sort of 7B-class.
- The repo mentions they use the llama-cpp-python server.
- 1M context brute-forced across TPUs is insanely expensive; I can see why Google reined it in.
But overall your message is not wrong. Google is hyping Gemma a ton when it's... well, not very remarkable. And they could certainly have made something niche and interesting, like a long-context 8.5B model, a specialized model, or a vastly more multilingual model, something to differentiate it from Mistral 7B 0.2.
> Also, Gemma is a +9B model. I think it's not okay that Google compared it with Mistral and Llama 2 (7B) models.
They say it's because they're not counting embedding parameters[0]. Although apparently even with the embedding parameters subtracted, it still rounds to 8B, not 7B. From what I understand, rounding to the nearest B is the standard. It seems slightly disingenuous to call it 7B, but not a big deal IMO, since I don't hear anyone saying this model is outperforming popular OSS 7Bs.
Gemma 7B is a 9B model. The name is a lie. Then they really played games with Gemma 2B as well.
I don't get how Google can be this incompetent and this far behind everyone else. They have amazing people and the kind of resources almost no one else does, but somehow need to resort to faking demos, blatant lies about model sizes, etc.
Google used to be the place everyone wanted to go. Someone at Google AI needs to be fired so they can start being productive again.
Haha, really? I almost added the caveat that I didn't count the parameters myself. And I couldn't see the weights file size because it requires a login (because of their restrictive licensing choice). If it's true that it's 9B, that's really dishonest.
Yes, it's 8.5B params if you account for weight tying, and 9.3B if you count the embedding layer and output layer weights separately as shown in the 2nd figure in the article. In the paper, I think they justified 7B by only counting the non-embedding parameters (7,751,248,896), which is kind of cheating in my opinion, because if you do that, then Llama 2 is basically a 5B-6B param model.
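To make that counting explicit, here's a quick sketch of the arithmetic; the vocab size and hidden dim are the commonly reported Gemma 7B config values, so treat them as assumptions.

```python
# Rough parameter accounting for Gemma "7B"; vocab size and hidden dim are
# assumed from commonly reported config values.
vocab_size = 256_000
hidden_dim = 3_072

non_embedding = 7_751_248_896        # figure quoted above from the paper
embedding = vocab_size * hidden_dim  # ~0.79B input embedding matrix

tied_total = non_embedding + embedding        # output layer shares the embedding weights
untied_total = non_embedding + 2 * embedding  # embedding + output projection counted separately

print(f"tied:   {tied_total / 1e9:.1f}B")    # ~8.5B
print(f"untied: {untied_total / 1e9:.1f}B")  # ~9.3B
```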
Practically speaking, I OOM'd running Gemma on a 3090 using a config that had VRAM to spare for Mistral 7B. It kind of surprised me at first, until I realized why.