Hacker News
LocalAI: Local models on CPU with OpenAI compatible API (github.com/go-skynet)
14 points by anton5mith2 on April 27, 2023 | 5 comments



LocalAI is an OpenAI-compatible API that lets you run AI models locally on your own CPU, so your data never leaves your machine. No need for expensive cloud services or GPUs: LocalAI uses llama.cpp and ggml to power your AI projects!

LocalAI supports multiple model backends (such as Alpaca, Cerebras, GPT4ALL-J and StableLM) and works seamlessly with the OpenAI API. Join the LocalAI community today and unleash your creativity!
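Because the API mirrors OpenAI's, talking to it is just a POST to an OpenAI-style endpoint. Here is a minimal sketch using only the standard library; the host/port, the `/v1/chat/completions` path, and the `ggml-gpt4all-j` model name are assumptions to adjust for your own setup:

```python
import json
import urllib.request

# Assumed local endpoint; change host/port to match your LocalAI instance.
BASE_URL = "http://localhost:8080"


def build_chat_request(model, prompt):
    """Build an OpenAI-style chat completion request body."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
    }


def chat(model, prompt):
    """POST the request to the (assumed) OpenAI-compatible endpoint."""
    body = json.dumps(build_chat_request(model, prompt)).encode()
    req = urllib.request.Request(
        f"{BASE_URL}/v1/chat/completions",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


if __name__ == "__main__":
    # chat() needs a running LocalAI instance; printing the payload
    # alone shows the request shape an OpenAI client would send.
    print(json.dumps(build_chat_request("ggml-gpt4all-j", "Hello!"), indent=2))
```

Since the request shape is the same one the official OpenAI clients emit, existing tooling can usually be pointed at the local base URL instead of api.openai.com.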

GitHub: https://github.com/go-skynet/LocalAI

We are also on Discord! Feel free to join our growing community:

https://discord.gg/uJAeKSAGDy


The privacy angle is really important — but just as important is avoiding all of the vulnerabilities that OpenAI seems to have.

Great to see the speed this is progressing and the collab with k8sgpt / prometheus / spectro cloud / etc. Community effort!


Here's a little example we put together on how to deploy it on edge Kubernetes using Kairos:

https://kairos.io/docs/examples/localai/


Interesting. What are the CPU/memory/storage requirements for running LocalAI?


At present, you need a CPU with AVX support. For memory, I'd say 16 GB or so.
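On Linux you can check for the AVX requirement yourself, since the kernel exposes CPU feature flags in /proc/cpuinfo. A quick sketch (Linux-specific by assumption; on other platforms the file is absent and the check just reports "unknown"):

```python
from pathlib import Path


def has_avx(cpuinfo_path="/proc/cpuinfo"):
    """Return True/False if AVX support can be determined, else None.

    Linux lists CPU feature flags on "flags" lines in /proc/cpuinfo;
    if the file doesn't exist (non-Linux), we return None (unknown).
    """
    path = Path(cpuinfo_path)
    if not path.exists():
        return None
    for line in path.read_text().splitlines():
        if line.startswith("flags"):
            # Tokenized, so "avx2" alone does not match plain "avx".
            return "avx" in line.split()
    return False


if __name__ == "__main__":
    print(f"AVX support: {has_avx()}")
```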



