I want to like Ollama, but I wish it didn't obfuscate the actual directives (the full prompt) that it sends to the underlying model. Ollama uses a custom templating script in its Modelfiles to translate user input into the format a specific model expects ([INST], etc.), but it can be difficult to tell whether it's working as expected, because the final prompt doesn't show up in the logs at all.
Other than that it's a great project - very easy to get started and has a solid API implementation. I've got it running on both a Win 10 + WSL2 docker and on a Mac M1.
Yeah, I guess I could compare the output at 0.0 temperature with the same prompt, first going through the Modelfile and then using raw mode with my best guess at the raw prompt the Modelfile is building for the model.
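For anyone wanting to try that comparison, here's a rough sketch against the Ollama REST API (the `raw`, `stream` and `options.temperature` fields are part of the generate endpoint; the `[INST]` wrapper is only my guess at what the Mistral Modelfile template emits):

```python
# Minimal sketch: compare the templated path vs. raw mode at temperature 0.
# Assumes Ollama is running locally on its default port (11434), and that the
# [INST] wrapper below roughly matches what the mistral Modelfile produces.
import requests

OLLAMA = "http://localhost:11434/api/generate"
prompt = "Why is the sky blue?"

# 1) Let the Modelfile template wrap the prompt.
templated = requests.post(OLLAMA, json={
    "model": "mistral",
    "prompt": prompt,
    "stream": False,
    "options": {"temperature": 0},
}).json()["response"]

# 2) Bypass the template with raw mode, supplying the guessed format directly.
raw = requests.post(OLLAMA, json={
    "model": "mistral",
    "prompt": f"[INST] {prompt} [/INST]",
    "raw": True,
    "stream": False,
    "options": {"temperature": 0},
}).json()["response"]

print(templated == raw)  # if these diverge, the guess at the template is off
```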
I'd push a PR to the repo itself but I have zero experience with Go...
Yeah, I was surprised Ollama was not mentioned as it’s by far the easiest to get started with. If it only had real grammar support, I’d never have to use another library again (it has a JSON mode that generally works, however).
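For reference, a minimal sketch of that JSON mode (the `format: "json"` field is part of Ollama's generate API; the model choice and prompt are just examples). It constrains decoding so the model can only emit valid JSON, though it doesn't enforce a particular schema - full grammar support, as in llama.cpp's GBNF grammars, generalizes this to arbitrary output structures:

```python
# Sketch of Ollama's JSON mode. Assumes a local Ollama instance with the
# mistral model pulled; format="json" restricts the output to valid JSON.
import json
import requests

resp = requests.post("http://localhost:11434/api/generate", json={
    "model": "mistral",
    "prompt": "List three primary colors as a JSON object with a 'colors' array.",
    "format": "json",   # constrain the output to valid JSON
    "stream": False,
}).json()

print(json.loads(resp["response"]))  # parseable JSON, but the schema isn't guaranteed
```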
What is grammar support? I've seen that mentioned several times now. Does it allow you to restrict the output to a given template, or am I totally wrong there?
Ollama is great. I discovered it today while looking for a way to serve LLMs locally for my terminal command generator tool (cmdh: https://github.com/pgibler/cmdh) and was able to get it up and running and implement support for it very easily.
Yesterday I tried Mixtral 8x7B running on the CPU. With an Intel 11th-gen chip and 64GB of DDR4 at 3200MHz, I got around 2-4 tokens/second with a small context; this gets progressively slower as the context grows.
You would get a much better experience with Apple Silicon and lots of RAM.
Don't suppose you know if the conversion is easily reversible? Some of these models are big; it sucks to carry around the original plus the GGUF, but I would hate to be in a situation where the GGUF represents a dead end.
This is a bit off-topic and probably a noob question. But maybe someone with experience could help me. I'm about to buy a M2 or M3 Mac to replace my Intel Mac, and would like to play around with locally run LLMs in the future. How much RAM do I have to buy in order not to run into bottlenecks? I'm aware that you would probably want a beefier GPU for better performance. But as I said, I'm not worried about speed so much.
In the article, Simon mentions the Q6_K.gguf model, which is about 40GB. A Mac Studio can handle this, but any of these models are going to be a tight fit or impossible on a Mac laptop without swapping to disk. Maybe NVME is fast enough that swapping isn't too terrible.
In my experience, the Mixtral models work pretty well on llama.cpp on my Linux workstation with a 10GB GPU, and offloading the rest to CPU.
It is impressive how fast the smaller models are improving. Still, a safe rule of thumb is the more RAM the better.
Also, I'd really question how much you need to run these models locally. If you just want to play around with these models, it's probably far more cost effective to rent something in the cloud.
I tried Mixtral via Ollama on my Apple M1 Max with 32GB of RAM, and it was a total nonstarter. I ended up having to power-cycle my machine. I then just used two L4 GPUs on Google Cloud (so 48GB of GPU RAM, see [1]) and it was very smooth and fast there.
Wow, as an author of the project I'm so sorry you had to restart your computer. The memory management in Ollama needs a lot of improvement – I'll be working on this a bunch going forward. I also have an M1 32GB Mac, and it's unfortunately just below the amount of memory Mixtral needs to run well (for now!)
If you want to run decently heavy models, I'd recommend getting at least 48GB. That lets you run 34B Llama models with ease, 70B models quantized, and Mixtral without problems.
If you want to run most models, get 64GB. This just gives you some more room to work with.
If you want to run anything, get 128GB or more. Unquantized 70b? Check. Goliath 120b? Check.
Note that high-end consumer GPUs top out at 24GB of VRAM. I have one 7900 XTX for running LLMs, and the largest it can reliably run is 4-bit quantized 34B models; anything bigger ends up partially in regular RAM.
Thank you for this detailed response. I'm not sure if it was clear, but I was going to use just the Apple Silicon CPU/GPU, not an external one from Nvidia.
Is there anything useful you can do with 24 or 32GB of RAM with LLMs? Regular M2 Mac minis can only be ordered with up to 24GB of RAM. The Mac mini with the M2 Pro is upgradable to 32GB of RAM.
I've been unable to get Mixtral (mixtral-8x7b-instruct-v0.1.Q6_K.gguf) to run well on my M2 MacBook Air (24 GB). It's super slow and eventually freezes after about 12-15 tokens of a response. You should look at M3 options with more RAM -- 64 GB or even the weird-sounding 96 GB might be a good choice.
You just have to allow more than 75% of memory to be allocated to the GPU by running `sudo sysctl -w iogpu.wired_limit_mb=30720` (for a 30GB limit in this case).
1. That worked after some tweaking.
2. I had to lower the context window size to get LM Studio to load it up.
3. LM Studio has two distinct checkboxes that both say "Apple Metal GPU". No idea if they do the same thing....
Thanks a ton! I'm running on GPU w/ Mixtral 8x Instruct Q4_K_M now. tok/sec is about 4x what CPU only was. (Now at 26 tok/sec or so).
Just be aware you don't get to use all of it. I believe you only get access to
~20.8GB of GPU memory on a 32GB Apple Silicon Mac, and perhaps something like ~48GB on a 64GB Mac. I think there are some options to reconfigure the balance but they are technical and do not come without risk.
What I can draw from reading that thread is that you can buy a desktop rig with 200GB/s of memory bandwidth (comparable to the M3 Pro and Max) and a lot of expansion capability (256GB RAM). You should find out whether that's still good enough for your local use case in terms of tokens per second or training.
Then just use SSH/XTerm (and possibly ngrok) to log in to your rig at good speed from anywhere, using a light M2?
32GB is enough to run quantized Mixtral, which is the current best openly licensed model.
... but who knows what will emerge in the next 12 months?
I have 64GB and I'm regretting not shelling out for more.
Frustratingly you still have WAY more options for running interesting models on a Linux or Windows NVIDIA device, but I like Mac for a bunch of other reasons.
I run 7B/13B models pretty gracefully on a 16GB M1 Pro, but it does leave me wanting a little more headroom for other things like Docker and the browser, which eat multiple gigabytes of RAM themselves.
Maybe keep an eye out for M1 / M2 deals with high ram config? I've seen 64GB MBPs lately for <$2300 (slickdeals.net)
Thank you (and all the adjacent comments). I don't really need so much RAM for my regular work. Running LLMs locally would only be for fun and experiments.
I think 32GB might be the best middle ground for my needs and budget constraints.
It's really a pity that you can't extend RAM in most Apple Silicon Macs and have to decide carefully upfront.
I currently run Mistral and a few Mistral derivatives using Ollama with decent inference speed on a 2019 Intel Mac with 32GB. So I assumed the new one with 32-ish should do a better job.
I've tried the vision model LLaVA as well; a bit more latency, but it works fine.
Long story short: for a machine with about 5+ years of shelf life, it isn't a bad value proposition to buy the absolute top end, especially if you expect some ROI out of it. Just be aware that if you don't want the top end for LLMs, the M2 currently provides more value, because the mid-range M2 has more memory bandwidth than the mid-range M3.
Memory bandwidth is the key to model speed, and memory size is what enables you to use larger models (quantization lets you push things further, up to a point). One thing to note is that on the M3 Pro/Max only the top-end configuration gets the full bandwidth, while the M1/M2 Pro enjoy full bandwidth at a smaller memory size. This may be important if you value speed over model size or vice versa. The M2 Pro and M2 Max get approximately 200 GB/s and 400 GB/s, but things are more complicated for the M3: the M3 Pro gets 150 GB/s, and the M3 Max gets 300 GB/s at 36GB and 400 GB/s at 48GB.
A few more things to note:
It's absolutely fine to go and play around with LLMs, but even with an LLM monster machine there's nothing wrong with starting with smaller models and learning how to squeeze the maximum amount of work out of them. The learning does transfer to larger models. This may or may not matter if at some point you want to monetize or deploy to production what you've learned: while the Mac itself is a good investment for personal use, once you move to servers, cost skyrockets with model size because of supply constraints on 40GB+ memory GPUs. If you depend on a 70B-parameter model, you'll have a hard time building a cost-effective solution. If it's strictly for playing around, you can disregard this concern.
Even if you're just playing around, a 70B is going to run at around 7 tokens/second, which is fine for a local chat, but if you are writing a program and need inference in bulk, it's fairly slow.
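As a rough sanity check on those numbers (a back-of-the-envelope sketch, not a benchmark): for a dense model, every generated token has to stream the weights through memory once, so bandwidth divided by model size gives an upper bound on speed.

```python
# Rough rule of thumb (upper bound; ignores compute, KV cache and other overhead):
#   tokens/sec  <=  memory bandwidth (GB/s) / model size in memory (GB)
def max_tokens_per_sec(bandwidth_gb_s: float, model_size_gb: float) -> float:
    return bandwidth_gb_s / model_size_gb

print(max_tokens_per_sec(400, 40))  # ~10 tok/s ceiling for a ~40GB 4-bit 70B at 400 GB/s
print(max_tokens_per_sec(150, 40))  # ~3.75 tok/s ceiling at the M3 Pro's 150 GB/s
```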
Another thing of note: while the field is still undecided on which size and architecture is good enough, the multitude of small fish experimenting with tuning and instruction mixes are largely experimenting on smaller models. Currently my favorite is openhermes-2.5-mistral-7b-16k, but that's not an indication that Mistrals are strictly better than Llama 2; it's more an indication that experimenting with 7B is more cost effective for third parties without access to GPUs than experimenting with 13B. So you'll find 13B models kind of stagnating, with many of them trained in a period when people didn't really know the best parameters for fine-tuning, so they are, so to say, a bit behind the curve. A few tuners are working on 70B models, but these seem to be pivoting to Mixtrals and the like, which will cause a similar stagnation at the top end. That is, until Llama 3 or the next Mixtral size drops; then, who knows.
Generally, you train it again entirely from scratch.
It's possible to introduce new information by fine-tuning a new model on top of the existing model, but it's debatable how effective that is for introducing new information - most fine-tuning success stories I've seen focus on teaching a model how to perform new kinds of task as opposed to teaching it new "facts".
If you want a model to have access to updated information, the best way to do that is still via Retrieval Augmented Generation. That's the fancy name for the trick where you give the model the ability to run searches for information relevant to the user's questions and then invisibly paste that content into the prompt - effectively what Bing, Bard and ChatGPT do when they run searches against their attached search engines.
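A minimal sketch of that pattern, where search_web() is a hypothetical stand-in for whatever retrieval you have and the local model sits behind the Ollama API (both assumptions, not part of any particular product):

```python
# Minimal RAG sketch: run a search, paste the results into the prompt, then ask
# the model to answer from that context only. search_web() is hypothetical -
# swap in a search API or vector-store lookup.
import requests

def search_web(query: str) -> list[str]:
    raise NotImplementedError("plug in your search or vector-store lookup here")

def answer_with_rag(question: str) -> str:
    snippets = search_web(question)
    context = "\n\n".join(snippets)
    prompt = (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    )
    resp = requests.post("http://localhost:11434/api/generate", json={
        "model": "mistral", "prompt": prompt, "stream": False,
    })
    return resp.json()["response"]
```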
Most LoRAs are less effective for facts, since the changes largely shift attention (Q and K, not V layers) and only touch a tiny percentage of weights at that. However, full fine-tunes are pretty effective for introducing new facts (you could probably use ReLoRA as well).
There are also newer techniques like ROME that can edit individual facts, and you might also be able to get there when updating by doing a DPO tune on the old vs. the new answers.
While I agree that RAG/tool use (with consistency checking) is probably the best overall approach for facts, being able to update/tune for model drift is probably still going to be important.
I'd also disagree about the training entirely from scratch - unless you're changing architecture/building a brand new foundational model or have unlimited time/compute budget, that seems like the worst option (and pretty unrealistic) for most people.
I don't think that's a dumb question at all. It depends on your objectives and how much resources you're willing to spend.
These open weights models can be retrained. Start with a foundational model like Llama2 or something and expose it to more recent training data that includes whatever updated information you want it to have access to. This is relatively expensive, but allows for big changes to the model.
If you have some relatively small subset of new information you want to bring in, you could build a Lora. Then either run your model with the Lora, or fold the Lora into your base model. This is relatively cheap, but fairly narrow in terms of your updates.
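For the LoRA route, a minimal sketch with the Hugging Face peft library (the base model name and hyperparameters below are just illustrative placeholders, and the actual training step is elided):

```python
# Sketch: attach a LoRA adapter to a base model, then fold it back in.
# "meta-llama/Llama-2-7b-hf" and the hyperparameters are placeholders.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")
config = LoraConfig(
    r=8, lora_alpha=16, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # which attention projections get adapters
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, config)
model.print_trainable_parameters()  # typically well under 1% of the weights

# ... fine-tune `model` on the new data here (Trainer, SFTTrainer, etc.) ...

merged = model.merge_and_unload()   # fold the LoRA back into the base weights
merged.save_pretrained("llama2-7b-with-updates")
```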
In the long run, it might be that Retrieval Augmented Generation (RAG) is the way to go. Here, your embeddings go into a vector database, and the model reads from there. Then you just need to update the database for the model to have access to new information.
This LLM stuff is new enough that anything like best practices are still being worked out. The optimal way to bring in new information could be a variant of one of the methods I mentioned above, or some combination of all three, or something else altogether.
Have you seen LoRA adapters actually work for this kind of thing?
The leaked Google "We have no moat" memo was very excited about LoRA style techniques, but it's not clear to me that it's been proven as a technique yet.
EDIT2: Ignore the above. This seemed much more promising: https://www.youtube.com/watch?v=pnwVz64jNvw . The author provides consulting services and seemed very nice and approachable.
The easiest way is to let it do web searches with a Toolformer/plugin framework.
For specifically knowledge you want it to be able to recall (like knowledge base articles or blog posts) vector database embeddings are best.
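A bare-bones sketch of the embeddings idea, using sentence-transformers and plain cosine similarity in place of a real vector database (the model name and the sample documents are just illustrative):

```python
# Embed the knowledge base once, then retrieve the closest chunks per question.
# "all-MiniLM-L6-v2" is just a common small embedding model; a real setup would
# persist the vectors in a vector database instead of keeping them in memory.
from sentence_transformers import SentenceTransformer, util

docs = [
    "KB-101: How to reset a user's password ...",
    "KB-102: Configuring SSO with Okta ...",
    "Blog: What's new in the 2.0 release ...",
]

model = SentenceTransformer("all-MiniLM-L6-v2")
doc_vecs = model.encode(docs, convert_to_tensor=True)

def top_k(question: str, k: int = 2) -> list[str]:
    q_vec = model.encode(question, convert_to_tensor=True)
    scores = util.cos_sim(q_vec, doc_vecs)[0]
    return [docs[i] for i in scores.topk(k).indices.tolist()]

print(top_k("How do I set up Okta single sign-on?"))
```

The retrieved chunks then get pasted into the prompt, RAG-style, as described above.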
For knowledge you want it to operationalize, like being able to program in a new language, the last resort is fine-tuning, but this is not easy, requires massive amounts of high-quality data, and is generally not effective for things that don't have a large amount of data to fine-tune on (tens of thousands of pages' worth of content).
It's great that there are many comments about being able to run models locally, but I rarely see comments about what folks are using them for. Would love to learn more about use cases and problems being solved.
I use them for coding; you'd be surprised at how much you can get done with DeepSeek Coder 6.7B.
Most GPT plugins for code editors will also work for local models since you can have OpenAI API stubs running locally.
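For example (a sketch, assuming a local OpenAI-compatible server such as the ones LM Studio or llama-cpp-python expose; the port and model name are whatever your local setup uses):

```python
# Point the standard OpenAI client at a local OpenAI-compatible server instead
# of api.openai.com. Port 1234 is LM Studio's default; adjust for your stub.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="not-needed")

resp = client.chat.completions.create(
    model="deepseek-coder-6.7b-instruct",  # whichever model the local server has loaded
    messages=[{"role": "user", "content": "Write a Python function that reverses a string."}],
)
print(resp.choices[0].message.content)
```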
Clearly it is nowhere near GPT-4 capacity, but if you ask simple boilerplate things ("write a class with the following methods", then "write unit tests for it") it will mostly work. Even if it doesn't, you can manually fix it, and it still can save you some time.
Always review code generated by an LLM, even if it comes from GPT-4!
I recently made them work with a meeting presentation transcript. Mistral 7B (the tiny one) was enough to reliably extract quite a lot of information!
- It could derive speaker names, based purely on how people called themselves in the conversation
- Drew a Mermaid sequence diagram that wasn't perfect, but wasn't complete garbage either. With a few back-and-forth corrections it was on point.
- Created truly usable meeting notes
It was a much better UX than having to hunt down the relevant video section, watch it, and force myself to focus despite a lot of filler communication. That works very well for those kinds of 30-minutes-and-up meetings, where the real content is in the spoken words and the slides just reiterate it. I was really pleasantly surprised by how much I liked it.
Also, some models can rate each speaker contribution :o)
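Roughly what that looks like in practice (a sketch; running Mistral 7B locally behind the Ollama API and the exact prompt wording are just my assumptions, and long transcripts would need chunking to fit the context window):

```python
# Feed a meeting transcript to a local 7B model and ask for structured notes.
import requests

transcript = open("meeting.txt").read()  # transcript text from your recording tool

prompt = (
    "Below is a meeting transcript.\n"
    "1) List the speakers by name, based on how they refer to each other.\n"
    "2) Summarize the key points and decisions as bullet-point meeting notes.\n\n"
    + transcript
)

resp = requests.post("http://localhost:11434/api/generate", json={
    "model": "mistral", "prompt": prompt, "stream": False,
})
print(resp.json()["response"])
```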
Are people actually using this to 10x themselves / create actually profitable services already?
Or is it more like fine-tuning a model to have an edge on the leaderboards for a day or two and then taking VC money, or integrating some "AI magic" into existing userbases?
I've been following the locallama sub on reddit and the few services created seem very niche besides tons of hobby stuff.
i recently started using them as assistants for programming. i usually don't let them write code as it's often (subtly) wrong. examples are:
* how do i use library X to do task Y (excellent for quickly getting up to speed with new libraries).
* actual example from a few days ago: "the most common CI/CD systems and how to identify them" - chatgpt correctly gave me the environment variable names for github actions, gitlab ci, travis, circleci, jenkins and one or two others. theoretically it saved me having to go through the docs of 7 different systems looking for the right information, which i still did anyway to make sure the data was legit. just confirming the info was still a lot less work, as i already knew what to look for.
* how do i create a certain style with css framework XYZ
* is there an algorithm for solving the following problem ...?
* alternative phrasings or synonyms if i can't find the right words myself.
* cooking recipe suggestions ("i have ingredients a, b and c, give me a stew recipe")
* pop culture questions ("why did the fremen settle on a hostile planet like arrakis in the first place?") i'm too lazy to research myself or ask on reddit
* sometimes my (non-natively english speaking) coworkers produce engrish i just can't parse. asking gpt to correct or explain the sentence often yields surprisingly good results.
usually i double check the results but that's still less work than doing all the work by myself from the beginning. recently i also let it write and style html forms which works quite well.
> race to the bottom in terms of pricing from other LLM hosting providers.
> This trend makes me a little nervous, since it actively disincentivizes future open model releases from Mistral and from other providers who are hoping to offer their own hosted versions.
That does indeed seem ominous. I guess they'll just introduce a significant lag before they release them in the future.
Honestly, Ollama (posted by someone earlier) was surprisingly simple to use. If you have WSL (on windows) or Linux/OSX it's a one-line install and a one-line use. I was up and running using a lowly 6GB VRAM GPU in about 3 minutes (time it took for initial download of models).
Installing WSL (on Windows) is similarly straightforward nowadays. In your search bar, lookup the Microsoft Store, open the app, search for Ubuntu, install it, run it, follow the one-liner for installing Ollama.
I am. I signed up late though, and got an email about it last night: "Access to our API is currently invitation-only, but we'll let you know when you can subscribe to get access to our best models."
Maybe I did something wrong, but the first Mistral GGUF I downloaded from Hugging Face and used with llama2 ended every answer with something along the lines of "let us know in the comments", and the answers felt like blog posts. :D
I’ve been using baseten (https://baseten.co) and it’s been fun and has reasonable prices. Sometimes you can run some of these models from the hugging face model page, but it’s hit or miss.
Otherwise someone would need to write a plugin for it, which would probably be pretty simple - I imagine it would look a bit like the llm-mistral plugin but adapted for the Ollama API design: https://github.com/simonw/llm-mistral/blob/main/llm_mistral....
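A very rough sketch of what that might look like, loosely following llm's plugin docs (a register_models hook plus an llm.Model subclass); streaming, options and error handling are all left out, and the class and endpoint details here are assumptions:

```python
# Hypothetical llm plugin wrapping the Ollama generate endpoint.
import llm
import requests

class Ollama(llm.Model):
    def __init__(self, model_name):
        self.model_id = f"ollama-{model_name}"
        self.model_name = model_name

    def execute(self, prompt, stream, response, conversation):
        resp = requests.post("http://localhost:11434/api/generate", json={
            "model": self.model_name,
            "prompt": prompt.prompt,
            "stream": False,
        })
        yield resp.json()["response"]

@llm.hookimpl
def register_models(register):
    register(Ollama("mistral"))
```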
Which is honestly the easiest option of them all if you own an Apple Silicon Mac. You just download Ollama and then run `ollama run mixtral` (or choose a quantization from their models page if you don't have enough RAM to run the default q4 model) and that's it.
https://ollama.ai/library/mistral
curl https://ollama.ai/install.sh | sh
ollama run mistral:text