
> You won't be able to run Skyvern unless you enable at least one provider.

Any plans to bundle a local LLM / support local LLMs?




We have an open issue for this right now -- we would LOVE some contributions here. The biggest problem until Llama 3.2 came out was that most (good) open-source LLMs were text-only, and Skyvern needs vision to perform well.

This isn't true anymore -- we just need to build and launch support for it.


In theory, to support Ollama all you should need to do is change the base URL that would otherwise point at OpenAI, and select the model. The only gotcha is that the llama3.2 builds for Ollama are currently text-only. However, they've just added support for pulling arbitrary Hugging Face GGUF models, so you're not limited to the officially supported ones.
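
For concreteness, here's a minimal sketch of what that looks like with the openai Python client pointed at Ollama's OpenAI-compatible endpoint -- the model tag below is illustrative, not official, so swap in whatever vision-capable model you've actually pulled:

    # Point the standard OpenAI client at a local Ollama server
    # instead of api.openai.com.
    from openai import OpenAI

    client = OpenAI(
        base_url="http://localhost:11434/v1",  # Ollama's OpenAI-compatible endpoint
        api_key="ollama",  # the client requires a key; Ollama ignores it
    )

    response = client.chat.completions.create(
        model="llama3.2-vision",  # illustrative tag; pull one first with `ollama pull`
        messages=[{"role": "user", "content": "Describe this page."}],
    )
    print(response.choices[0].message.content)

And pulling a model straight from Hugging Face is just `ollama run hf.co/{username}/{repository}` for any GGUF repo, if I remember the syntax right.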



