I'll say this explicitly: these llamafile things are stupid.
You should not download arbitrary user uploaded binary executables and run them on your local laptop.
Hugging Face may do its best to prevent people from taking advantage of this (heck, they literally invented safetensors), but long story short: we can't have nice things because people suck.
If you start downloading random executables from the internet and running them, you will regret it.
Just spend the extra 5 minutes to build llama.cpp yourself. It's very, very easy to do and many guides already exist for doing exactly that.
It only takes away choice if you use the demo files with the models baked in. There are versions under Releases->Assets that are just the actual llama.cpp OS-portable binaries, which you pass a model file path to as normal.
Compiling llama.cpp is relatively easy; compiling it with GPU support is a bit harder. I think it's nice that these OS-portable binaries of llama.cpp applications like main, server, and llava exist. Too bad there are no OpenCL ones. The only problem was baking in the models. Downloading applications off the internet is not that weird. After all, it's the recommended way to install Rust, etc.
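To make that concrete: with the model-free binaries you start, say, the server one with -m pointing at whatever GGUF file you downloaded, then talk to it over HTTP. A rough sketch, assuming a binary named llamafile-server and a model file named mistral-7b-instruct.Q4_K_M.gguf (both placeholders for whatever you actually grabbed), exposing the usual llama.cpp server /completion endpoint on port 8080:

    # Sketch: query a llama.cpp/llamafile server started separately with something like
    #   ./llamafile-server -m ./mistral-7b-instruct.Q4_K_M.gguf
    # (binary and model names are placeholders for whatever you downloaded or built)
    import json
    import urllib.request

    def complete(prompt, n_predict=128):
        payload = json.dumps({"prompt": prompt, "n_predict": n_predict}).encode()
        req = urllib.request.Request(
            "http://127.0.0.1:8080/completion",
            data=payload,
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            return json.loads(resp.read())["content"]

    print(complete("Explain what a GGUF file is in one sentence."))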
While in general I agree with your security concerns, here the links are from very trusted sources (Mozilla Internet Ecosystem and Mozilla's innovation group) and the user is well known (present on X too with a large following).
Re: "simplicity", sure for you and I it's simple to compile llama.cpp, but it's like asking a regular user to compile their applications themselves. It's not that simple for them, and should not be required if we want to make AI and OSS AI in particular more mainstream.
To make this accessible to a broader cohort you would package it into an app and put it somewhere with provenance, e.g. a well-known GitHub account or an app store.
The solution, as shown, doesn't solve either of the problems you say it attempts to solve.
Totally agreed, it's not yet ideal - absolutely. But I feel we are expanding the pie of users with this intermediate step.
Do you want to work on that packaging ;-)?
Do you think it would be useful to explain how to macroexpand whenever you do it so the folks you are responding to can learn and do it themselves next time?
I'd like to train one of the provided LLMs with my own data; I heard that RAG can be used for that. Does anyone have any pointers on how this could be achieved with llamafiles, all locally on my server?
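In case it helps: RAG doesn't actually train the model, it just retrieves relevant chunks of your data at query time and pastes them into the prompt. A rough sketch of doing that entirely against a local llama.cpp/llamafile-style server (this assumes the server was started with --embedding so an /embedding endpoint is available; endpoint and field names may differ between versions, so treat it as an outline):

    # Rough local-RAG sketch against a llama.cpp/llamafile-style server on localhost:8080.
    # Assumes it was started with --embedding; endpoint/field names may vary by version.
    import json
    import math
    import urllib.request

    BASE = "http://127.0.0.1:8080"

    def post(path, payload):
        req = urllib.request.Request(
            BASE + path,
            data=json.dumps(payload).encode(),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            return json.loads(resp.read())

    def embed(text):
        return post("/embedding", {"content": text})["embedding"]

    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(y * y for y in b))
        return dot / (na * nb)

    # 1) Index your own documents once (chunking strategy is up to you; placeholders here).
    chunks = ["First paragraph of my notes...", "Second paragraph of my notes..."]
    index = [(c, embed(c)) for c in chunks]

    # 2) Retrieve the chunks most similar to the question.
    question = "What do my notes say about X?"
    q = embed(question)
    top = sorted(index, key=lambda item: cosine(q, item[1]), reverse=True)[:2]

    # 3) Stuff the retrieved context into the prompt -- no training involved.
    context = "\n".join(c for c, _ in top)
    prompt = f"Answer using only this context.\n\nContext:\n{context}\n\nQuestion: {question}\nAnswer:"
    print(post("/completion", {"prompt": prompt, "n_predict": 256})["content"])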
The llava multi-modal models are fun. I find that requesting JSON-formatted output lets you overcome the limited response length baked in. https://huggingface.co/mys/ggml_bakllava-1 (a CLIP+Mistral-7B instead of CLIP+llama2-7B) is my favorite.
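If anyone wants to try the JSON trick, the request looks roughly like this. A sketch assuming the llama.cpp-style server multimodal API from around the llava releases (a base64 image_data list plus [img-N] markers in the prompt); the exact field names have shifted between server versions, so check whatever build you're running:

    # Sketch: ask a locally running llava/bakllava server for JSON-formatted output.
    # Assumes the llama.cpp-style multimodal API (image_data + [img-N] prompt markers);
    # field names vary between server versions.
    import base64
    import json
    import urllib.request

    with open("photo.jpg", "rb") as f:  # placeholder image path
        img_b64 = base64.b64encode(f.read()).decode()

    payload = {
        "prompt": ('USER: [img-10] Describe this image as JSON with keys '
                   '"objects", "colors", and "caption". ASSISTANT:'),
        "image_data": [{"data": img_b64, "id": 10}],
        "n_predict": 512,
        "temperature": 0.1,
    }
    req = urllib.request.Request(
        "http://127.0.0.1:8080/completion",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        print(json.loads(resp.read())["content"])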
Agreed - and ultimately, you start removing the need for a git client and git knowledge to pull and compile... it's not just 5 minutes, and you open up the market to way more people. Ideally it should be as simple as installing an app, but it's a good step in that direction.
It's unsafe and it takes all the choice and control away from you.
You should, instead:
1) Build a local copy of llama.cpp (literally clone https://github.com/ggerganov/llama.cpp and run 'make').
2) Download the model version you actually want from Hugging Face (for example, from https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.1-GGU..., with the clearly indicated required RAM for each variant)
3) Run the model yourself (rough sketch below).
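For step 3, the run itself is just pointing the freshly built binary at the GGUF file from step 2. A minimal sketch via Python's subprocess; the model filename is a placeholder for whichever quantization you picked:

    # Sketch of step 3: invoke the llama.cpp `main` binary you just built.
    # Paths and the model filename are placeholders; adjust to what you downloaded.
    import subprocess

    subprocess.run(
        [
            "./main",
            "-m", "./models/mistral-7b-instruct-v0.1.Q4_K_M.gguf",  # model from step 2
            "-p", "[INST] Write a haiku about local inference. [/INST]",
            "-n", "128",  # max tokens to generate
        ],
        cwd="llama.cpp",  # the directory you cloned and ran `make` in
        check=True,
    )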