I've accepted that LLMs are useful when taken with precautions. A few tools I've used recently have convinced me of this. Alongside that, I've heard first-hand anecdotes of LLMs providing great benefit, from asking about APIs all the way to practising languages or summarizing texts.
I'm now interested enough to try running one myself and see how it suits my personal workflow. So I have a few questions:
1) How can I set up an LLM locally with a good effort/reward ratio? I don't want to spend hours setting up something unreliable that needs constant tinkering - more something I can just interact with easily from a web UI/CLI when I need to.
2) Is there an easy way to keep up to date with LLMs, so I can switch to newer models as they become popular and keep getting the best results?
Note that I'm only looking for self-hosted, Linux-compatible solutions!
Though for "personal workflow", unless you want to be able to play with the internals of the models or are worried about privacy, I'd just use ChatGPT (in fact I do; despite having llama.cpp set up to run various models, I always use ChatGPT for personal stuff and programming questions).
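For what it's worth, once llama.cpp's bundled server is running, "interact with it from a CLI when I need to" can be a few lines of script. A rough sketch, assuming you've already started something like `llama-server -m some-model.gguf --port 8080` and that your build exposes the OpenAI-compatible chat endpoint (exact binary names and flags have shifted between llama.cpp versions, so treat this as illustrative):

```python
# Sketch: ask a question of a locally running llama-server instance.
# Assumes llama-server is listening on 127.0.0.1:8080 with a GGUF model loaded.
import requests

resp = requests.post(
    "http://127.0.0.1:8080/v1/chat/completions",
    json={
        "messages": [
            {"role": "user", "content": "Summarize what a GGUF file is in one sentence."}
        ],
        "temperature": 0.7,
    },
    timeout=120,
)
resp.raise_for_status()
# Print just the model's reply text.
print(resp.json()["choices"][0]["message"]["content"])
```

That's roughly the entire "integration" I have: a server started once, and a small script (or curl alias) pointed at it. But again, for day-to-day questions I still reach for ChatGPT.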