Show HN: AnythingLLM – Open-Source, All-in-One Desktop AI Assistant (github.com/mintplex-labs)
368 points by tcarambat1010 3 months ago | 77 comments



This is really nice. My first reaction was "oh great, another Ollama/Open WebUI wrapper on llama.cpp," but it's actually much more - supporting not only the LLM but also embedding models, vector databases, and TTS/STT. Everything needed to build a fully functioning voice chatbot.


This looks sweet.

Totally irrelevant, but... "Language Learning Model"? Probably just a brainfart, or I'm missing something, but it would be hilarious if the authors did this whole project without knowing what LLM stands for.


It was indeed a brainfart on my end that has been in there since the dawn of time. Pretty funny, honestly.


Hey, where did your long introductory post go? I wanted to send it to a friend but it appears to be gone?


Thinking about this a bit, Language Learning Model might be a better TLA than Large Language Model, since "Large" is a subjective, period-dependent adjective, whereas language learning is precisely what these models achieve (with all the derivatives that come as features when you map the world with language).


Large Language Model. All good.


The linked GitHub page mentions “Language Learning Models”, so GP is saying that it seems like the authors of this project wrote the whole thing while not knowing what LLM actually stands for.


I literally said it was probably a brainfart.


Or it was a simple typo.


It was a typo haha, someone just PR'd the fix for it. God knows how long it has been there - whoops


These are the (fun) stories legends are made of. Well done overall! Looking forward to evaluating it.


Maybe the assistant can look in the repo and compute that, just sayin'


Downloaded and checked it out. Looks great so far. Tried using it to read a bunch of regulatory PDFs using GPT-4o.

Some quick and early feedback:

1. The citations seem a bit dicey. The responses give largely correct answers, but the citations window shows content that's a bit garbled.

2. Please, please add a text search to search within existing chat content. E.g. if I search for something about giraffes in one of the chats, search the chat history and let me switch to the chat that matches.


For your 2nd suggestion, this would be solved if previous chats were (automatically) added to the vector database. Is this possible in this or similar apps?


Yes. The fact that literally no one does this makes me continue to wonder wtf people are actually doing w/ LLMs, since I feel this need so acutely and the solution is so obvious.


I assume that there are people who are doing this. For example, you could set this up via the OpenAI API with an embedding model. The API processes the text using a pre-trained model like text-embedding-ada-002, which converts the text into a vector to store in the database.

I assume that this sort of functionality will become more commonplace.
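A minimal sketch of the idea with the OpenAI Python SDK (the chat messages are made up, and the plain list at the end is just a stand-in for a real vector store):

    # Sketch: embed past chat messages so they can be retrieved later.
    # Assumes OPENAI_API_KEY is set in the environment.
    from openai import OpenAI

    client = OpenAI()

    chat_history = [
        "user: what do giraffes eat?",
        "assistant: mostly acacia leaves, plus twigs and fruit...",
    ]

    # text-embedding-ada-002 maps each string to a 1536-dim vector
    resp = client.embeddings.create(
        model="text-embedding-ada-002",
        input=chat_history,
    )

    # Pair each message with its vector; a real app would upsert these
    # into its vector database instead of keeping them in a list.
    vectors = [(m, d.embedding) for m, d in zip(chat_history, resp.data)]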



As someone who doesn't know what an embedding or a vector is, this has been the only offline AI tool I've been able to install and start using on my standard office PC.


That's the point! You shouldn't have to know. Naturally, over time you'll learn more, and the controls are there when you feel comfortable tuning them.


LLMs will become like web frameworks, in the sense that they will be free and open source, and everybody will be able to build on them. Sure, there will be paid options, just as there are paid web frameworks, but most of the time the free options will be more than good enough for most jobs.


What is your argument for that? It seems that building LLMs is very costly, and that will probably remain so for quite some time.

I think it is equally likely that LLMs will perform worse in the future, because of copyright reasons and increased privacy awareness, leading to less diverse data sets to train on.


> building LLMs is very costly, and that will probably remain so for quite some time.

Building LLMs is dropping in cost quickly. Back in mid-2023, training a 7B model had already dropped to $30K [1], and it's even cheaper now.

> I think it is equally likely that LLMs will perform worse in the future, because of copyright reasons and increased privacy awareness, leading to less diverse data sets to train on.

I'll bet a lot of money this won't happen.

Firstly, copyright isn't settled on this. Secondly, people understand a lot more now about how to use less, higher-quality data and how to use synthetic data (e.g. the MS Phi series, the Persona dataset, and of course the upcoming OpenAI Strawberry and Orion models, which use synthetic data heavily). Thirdly, the knowledge of how to use multi-modal data in your LLM is much more widely spread, which means that video and code can both be used to improve LLM performance.

[1] https://arize.com/resource/mosaicml/


Existing free-to-use (offline) models are already really, really good and open up a ton of possibilities for new apps.

The only barrier right now is that laptop/phone hardware can't run them (inference and/or training) fast enough. That's a problem that won't really exist 3 to 10 years from now.

That's ignoring the fact that creation of new much much better models is going full steam ahead.


I am genuinely baffled at how someone could come up with this take. I suppose it's because I went ahead and copped a $500 card about a month ago, and I can already literally do more than the big paid models presently can. I lack some of the bells and whistles, but I think, more importantly, they lack uncensored models.

"AI" as we're calling it is aggressively defacto open-source, and I can see in no way how that competition doesn't drive down prices.


Training and inference are two different things. The incentive to make some model open is unrelated to hardware cost.

Perhaps you are missing something in my arguments that I can clear up?


No, what I'm saying is -- it's kind of like Linux.

Yes, it would cost kabillions to recreate Linux from scratch. But no one has to anymore because it's here and anyone can just grab it.

Same with the models. What I can download right now from Hugging Face is going to be 90% or more of the way there, and I can tweak those models at home, regardless of how much they may have cost to make in the first place.


The balance may also shift towards more highly specific data that tunes local models towards the individual user. If that ultimately becomes more useful than, say, purchasing rights to some large archive of copyrighted materials, then things may lean farther towards a shared foundation upon which most day-to-day applications are developed. It will likely depend on the application and its specific focus; I don't know that things will settle into an either/or situation rather than a both/and.


I see. But what about adding new knowledge to the models?

To be useful in practice, one must train with new data, say every year or so. Who is going to pay for harvesting and cleaning all that data? And as I understand it, some form of RLHF is also required, which involves some manual labour as well.

Perhaps some day this will all be very cheap, and it might even be done by AI systems instead of humans, but I wouldn't bet my horse on it.


I mean, given that I've literally done some version of it at home with RAG and it wasn't terrible, it's hard to believe it will be super difficult.

As in, I may not be able to do "general knowledge," but who will pay for that anyway (e.g. instead of baking it into their Google killer)?


Building software was very costly and hard 30 years ago. Things get cheaper and simpler with time. LLM will become a tool to use in your work, like frameworks or libraries. Some people will use them, other people won't (mostly those who maintain projects without them).

I hope I'm right, or I will be jobless lol


> Building software was very costly and hard 30 years ago. Things get cheaper and simpler with time

No, it's just as complex as it was 30 or 50 years ago, because complexity is limited by developers' brains. As there has been no noticeable evolution in recent years, human abilities have stayed the same. But there are many more humans involved in development now, which makes the total complexity much higher.

> LLM will become a tool to use in your work

Not sure where you live; it's already very useful at work. Saves a lot of time. The effect is like that of the internet and search engines. It doesn't make the work easier or lower the requirements. It makes developers, in this case, more productive.


So… just open source?


I really hope this becomes the norm.


There are no paid web frameworks of note that I'm aware of.


The difference being, inference is not cheap at all.


I've been attempting to deploy a customized AnythingLLM instance within an enterprise environment. TimC (and presumably the dev crew) are top-notch and very responsive.

Waiting for Entra ID integration. After that, a customized version of AnythingLLM can tick the boxes for most of the lowest-hanging use cases for an org.

Thanks for the killer app, TimC and crew!


This definitely makes it super easy for less technical folks to access (got it up and running in less than 5 minutes). Initial reaction is positive with just Ollama. Everything is automatically detected, and if you want to set it up manually, you still can. Let's see how it does after adding Hugging Face (quick and painless).


There was an error while installing on Linux, which was solved with:

'''
sudo chown root:root /home/hn/AnythingLLMDesktop/anythingllm-desktop/chrome-sandbox
sudo chmod 4755 /home/hn/AnythingLLMDesktop/anythingllm-desktop/chrome-sandbox
'''

Other than that it worked really well.


HN doesn't use normal markdown; to do code blocks you indent with 4 spaces, ex.

    sudo chown root:root /home/hn/AnythingLLMDesktop/anythingllm-desktop/chrome-sandbox
    sudo chmod 4755 /home/hn/AnythingLLMDesktop/anythingllm-desktop/chrome-sandbox


Thanks!


Please don't follow this. There has to be a better solution. This will make chrome-sandbox run as root with access to everything, and you really shouldn't do that.

Figure out how to assign proper permissions instead. (Or at least don't invite other people to do the same thing...)


I have zero issues with Chrome running as root for this tool. I'm a realist rather than an idealist.

I want things that work and won't take over my machine. Of course, I might be wrong, or my previous post might have been written by Skynet to deceive more users into giving their machines over in service of the Borg.

Who knows. All I know is that it worked, and I sincerely wouldn't want to wait for an answer at some point in the future to get it sorted. Won't run it again so soon anyway. YMMV.


I have been really impressed with AnythingLLM as a no-fuss way to use LLMs locally and via APIs. For those of us who want to tinker, there's a solid range of choice for embedders and vector stores.

The single install desktop packaging is very slick. I look forward to the upcoming new features.


Have you tried Open WebUI? What are your thoughts on it vs. this?


I had a much smoother experience with the desktop version of AnythingLLM. There's more in Open WebUI, but the things that are in AnythingLLM are more polished (IMO).


We will have plugins for:

- Auth

- Agent skills

- Data connectors

Agent skills are coming first, and this should help plug the gaps between offerings, since we can't build everything for everyone.


Question on the AnythingLLM hosted plan. Is the $50/mo basically logging into some sort of remote desktop environment and running an AnythingLLM instance there that you manage? Everything else is still the same, i.e. it's still BYOK and all that, right? Or do you get some sort of AI/token usage with the monthly fee as well?

Oh, and the other question: in your hosted version, does the instance have access to a GPU that you provide? In that case we could just use a local model with (relative) ease, right?

And if this hosted service is as I have described, I feel like this sort of service, coupled with top-tier frontier open-source models, is the (not too distant) future for AI! At least in the short to medium term (which is probably a pretty short period, relatively, given the rapid speed of AI development).

Thanks


I guess I’ll send an email to find out more information re: the hosted plan.


I've got an OpenAI API key, and I pay for ChatGPT. I'd imagine switching to this and using OpenAI would end up costing quite a lot? How are people running it relatively cheaply?


One way people keep costs down when using OpenAI with an offline RAG system is by limiting the number of text snippets sent to the API. Instead of sending the whole database, they'll typically retrieve only the top 10 (or so) most relevant snippets from the vector database and send just those to OpenAI for processing. This significantly reduces the amount of data processed and billed by OpenAI.
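A rough sketch of that retrieval step, assuming a Chroma collection already populated with document snippets (the collection name and question are made up):

    # Sketch: answer a question using only the top-k snippets from the
    # vector DB, instead of stuffing the whole corpus into the prompt.
    import chromadb
    from openai import OpenAI

    llm = OpenAI()
    docs = chromadb.PersistentClient(path="./db").get_collection("regulatory_pdfs")

    question = "Which filings does the regulation require annually?"
    hits = docs.query(query_texts=[question], n_results=10)  # top 10 only
    context = "\n\n".join(hits["documents"][0])

    answer = llm.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": "Answer from this context:\n" + context},
            {"role": "user", "content": question},
        ],
    )
    print(answer.choices[0].message.content)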


OpenRouter... you get all the models, and it's not as expensive as you'd think. I spent $3 with Aider the other day in, like, the blink of an eye with Anthropic. I'm working on a FastHTML thingy and loaded all the docs, plus a few huge Replicate API files, into the vector database. Most of my back-and-forth usage averaged about $0.02 per turn with Claude 3.5 Sonnet. To give you an idea: my context + prompt was around 18,000 tokens, with completions around 1,500 tokens.


I noticed you put LiteLLM in your list of providers. Was that just marketing, or did you re-implement model support for all the models LiteLLM already supports separately?


You can use LiteLLM _as the LLM provider_, which is just a relay to a ton of other models. People who already have it set up to connect to x, y, z providers can keep that work and use it for inference as well.
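For anyone wondering what that looks like in practice: the LiteLLM proxy speaks the OpenAI wire format, so any OpenAI-compatible client can point at it. A rough sketch (the port is LiteLLM's default; the model alias depends on your proxy config):

    # Sketch: use a local LiteLLM proxy as a single relay to many providers.
    from openai import OpenAI

    client = OpenAI(
        base_url="http://localhost:4000",  # LiteLLM proxy's default address
        api_key="anything",                # real provider keys live in the proxy
    )

    resp = client.chat.completions.create(
        model="claude-3-5-sonnet",  # whatever alias the proxy config maps
        messages=[{"role": "user", "content": "hello"}],
    )
    print(resp.choices[0].message.content)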


    $ docker pull mintplexlabs/anythingllm
    Using default tag: latest
    Error response from daemon: Get "https://registry-1.docker.io/v2/": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)


This is a connection issue with your Docker CLI. We have seen this issue before, and it had to do with a client proxy.

https://stackoverflow.com/a/48066666


I am not behind a proxy, do not have any of those related variables set, and all other attempts at net connections (curl, telnet, browser, etc.) work fine...


What a coincidence: I just set up AnythingLLM yesterday to try it at an enterprise level. I'm super impressed with most of the stuff I've used so far.

I just wish there was an option to properly include custom CSS. The default interface looks a little bit... dated.

Keep up the amazing work!


+1 on custom CSS overrides.

A bigger feature that would add a lot of value might also be a prompt manager with a shareable prompt list.


> AnythingLLM packages as an AppImage but you will not be able to boot if you run just the AppImage.

The word 'boot' seems to indicate it will affect the computer's startup process; I think you meant to say you will not be able to 'start the application'.


This looks really great. Are you planning on adding shortcut keys anytime soon?


We actually are planning to really lean into the "desktop assistant" style of things, so yes, absolutely. Like how on Mac you can hit Cmd+Space to launch Spotlight, we can offer the same functionality alongside everything else.

WIP!


One way of doing this that might save you some effort is writing a plugin for Raycast, but I have no idea whether this covers the use cases you have in mind.

Seems doable: https://github.com/raycast/extensions/blob/main/extensions/o...

(Not affiliated, I just have Raycast installed and use it sometimes)


Can this roll into Home Assistant and provide an Alexa at home in any capacity?


I need this too. It doesn't need to be Alexa - I just want my Sonos systems at home to be smarter than just adding stuff to shopping lists with hit-and-miss transcriptions (why is "oat milk" always added as "oatmeal"?).


This is super sweet for developers (or anyone else) who like to have granular control over their LLM setup:

- ability to edit system prompt

- ability to change LLM temperature

- can choose which model to use (open-source or closed)


How does this differ from Chatbox AI?

https://github.com/Bin-Huang/chatbox


Chatbox AI is a client for multiple AIs like OpenAI, Claude, etc. It does not work on your local files and documents.


How do you ensure "privacy by default" if you are also providing cloud models?


It's not my position to "impose" a preference on a user for an LLM.

Privacy by default means "if you use the defaults, it's private".

Obviously, if your computer is low-end or you really have been loving GPT-4o or Claude, you _can_ use just that external component while still using a local vector DB, document storage, etc.

So it's basically opt-in for external usage of any particular part of the app, whether that be the LLM, embedder model, vector DB, or otherwise.


Good question - this was an excellent write-up and AnythingLLM looks great. But I'm really curious about that too.

Regardless of the answer, OP, you've done a hell of a lot of work, and I hope you're as proud as you should be. Congratulations on getting all the way to a Show HN.


It doesn't seem like they are "providing" cloud models. Their backend is able to interface with whatever endpoint you want, given you have access. It's plainly obvious when interacting with 3rd-party providers that it depends on their data/privacy policy. When has this ever not been the case?

I could just start up a vLLM instance with Llama 3.1 and connect their application to it just as easily, though. Perfectly secure (insofar as I am able to make it).

This seems like such a pedantic thing to complain about.


Where is the desktop app download?

Or do you need to install from the source code on GitHub?



What kind of PC does it need? RAM, etc.?


Depends on the model you want to run! That is the beauty of it. If you can muster the ability to run a Q4 Llama 3.1 8B, great; if your specs are really low-end, you can always outsource to a cloud provider.

Maybe you want to run privately, but can only manage a Gemma-2B - then great, you can use that also.
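For a concrete sense of what "run a Q4 Llama 3.1 8B" looks like, here's a rough sketch using Ollama's Python client (assumes the Ollama server is running and you've done `ollama pull llama3.1:8b`; the default tag is a Q4 quantization):

    # Sketch: chat with a locally running, 4-bit-quantized Llama 3.1 8B.
    import ollama  # pip install ollama; talks to the local Ollama server

    resp = ollama.chat(
        model="llama3.1:8b",  # default tag ships as a Q4 quant
        messages=[{"role": "user", "content": "Say hi in five words."}],
    )
    print(resp["message"]["content"])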


Finally.


Came here to say that I really like the content on your YouTube channel.



