Use llamafile [1]; it can be as simple as downloading a file (for Mixtral, [2]), making it executable, and running it. The repo README has all the info. It's simple, and downloading the model is what takes the most time.
In my case I hit the runtime detection issue (explained in the README "gotchas" section). Solved by running "assimilate" [3] on the downloaded llamafile.
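The steps above, sketched as shell commands. The URL here is a placeholder, not the actual link from [2]; grab the real one from the llamafile releases page. The assimilate step is only needed if you hit the runtime-detection gotcha:

```shell
# Download a llamafile (placeholder URL -- use the real release link from [2])
curl -L -o mixtral.llamafile 'https://example.com/mixtral.llamafile'

# Make it executable and run it; it serves a local chat UI in your browser
chmod +x mixtral.llamafile
./mixtral.llamafile

# If your system refuses to execute the portable binary (the README "gotchas"
# case), convert it in place with assimilate [3] and run it again
# ./assimilate mixtral.llamafile
```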
Either https://lmstudio.ai (desktop app with a nice GUI) or https://ollama.com (command-line tool, more like a docker container, which you can also hook up to a web UI via https://openwebui.com) should be super straightforward to get running.
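For the ollama route, one common setup looks roughly like this (the install script is the documented Linux path; macOS/Windows use installers from the site, and the Open WebUI container command is the project's standard Docker quickstart, assuming Docker is already installed):

```shell
# Install ollama on Linux via the official install script
curl -fsSL https://ollama.com/install.sh | sh

# Pull a model and chat with it right in the terminal
ollama run mistral

# Optionally add Open WebUI as a browser front end on http://localhost:3000,
# pointed at the ollama server running on the host
docker run -d -p 3000:8080 \
  --add-host=host.docker.internal:host-gateway \
  -v open-webui:/app/backend/data \
  --name open-webui \
  ghcr.io/open-webui/open-webui:main
```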
I am the author of Msty [1]. My goal is to make it as straightforward as possible with just one click (once you download the app). If you try it, let me know what you think.
Looks great. Can you recommend what GPU to get to just play with the models for a bit? (I want it to perform fast; otherwise I lose interest too quickly.)
Are consumer GPUs like the RTX 4080 Super sufficient, or do I need anything else?
Why is this both free and closed source? Ideally, when you advertise privacy-first, I’d like to see a GitHub link with real source code. Or I’d rather pay for it to ensure you have a financial incentive to not sell my data.
There’s incredible competition in this space already - I’d highly recommend outright stating your future pricing plans, instead of a bait-and-switch later.
Check out PrivateGPT on GitHub. Pretty much just works out of the box. I got Mistral 7B running on a GTX 970 in about 30 minutes flat, first try. Yep, that's the triple-digit GTX 970.
I experienced acute lower back pain after walking for 30+ minutes. I searched the whole internet to find a solution but only found advice saying, "if you have back pain, walk!" which was ironic since my pain occurred while walking. Then it dawned on me: I sit most of the day, cycle a lot, and drive my car, but I wasn't walking regularly. Now, I walk the dog for 40 minutes a few days a week, and the pain has disappeared!
Windguru, which is partly or fully based on crowd-sourced weather stations, is already surprisingly accurate a few days in advance in many regions I've tried. For a few hours' forecast, nothing beats the rain radar.
I wonder if they have already or will put some AI in their models.
In my previous service company, we aimed for 10%, which was considered the norm. However, we were particularly bad at achieving this, hitting only 8-9%, though we did have a good atmosphere and happy employees.
Getting to profit through pure people power is hard. Every next person you add doesn't double your output; it maybe increases it by some %. (This includes overhead costs.)
When I began building bikes 10 years ago, Sheldon's site was already like an old book in the library.
It was a reference for older topics, such as recently when I needed to understand the internal workings of a Sturmey Archer hub to fix one.
However, it hasn't been of much help for the new developments from the past 15 years since his passing.
While less organized and sometimes messy, YouTube and many other sites can answer any bike mechanic question. Even ChatGPT generally provides good answers.
yeah, same here. I do have one cheap French threaded BB bike and another 20+y old road bike with Shimano 600, so I'm a regular visitor of Sheldon's. But for anything 10y old and less, it's straight to Park Tool's Youtube channel to admire Calvin Jones' moustache wax. However, even Park Tool doesn't have the sheer volume of reference data and simple tables of measurements that Sheldon has gathered over the years.
The people who run the site now are adding new material to it. I built a new bike this year and actually found a lot of useful info for modern components on the site. Maybe our bike types don't overlap much, but I've personally found it useful still.
This is the site where we can see who voted for what in the parliament: https://howtheyvote.eu/