Ollama are 'try[ing to] achieve vendor lock-in' (github.com/ggerganov)
17 points by alexmorley 21 days ago | 5 comments



Ollama is giving people a lot of reasons to switch:

https://news.ycombinator.com/item?id=42886680


I don't understand. The quoted text appears nowhere in the linked page.


It's in one of the collapsed comments. Quoted below:

https://github.com/ggerganov/llama.cpp/pull/11016#issuecomme...

So Ollama are basically forking a little bit of everything to try and achieve vendor lock-in. Some examples:

    The Ollama transport protocol is just a slightly forked version of the OCI protocol (they are ex-Docker guys). Just forked enough that one can't use Docker Hub, quay.io, helm, etc. (so people will have to buy Ollama Enterprise servers or whatever).

    They have forked llama.cpp (I would much rather we upstreamed to llama.cpp than forked, like upstreaming to Linus's kernel tree).

    They don't use Jinja like everyone else; they use this:
https://ollama.com/library/granite-code/blobs/977871d28ce4

etc.

So we started a project called RamaLama to unfork all these bits.


The first two points do suggest Ollama is neglecting the community and not contributing upstream. As for the lack of Jinja templates, I would have thought that's simply because Ollama is written in Go and uses the Go templating engine instead.


Where there's a will, there's a way.

It's a bit silly but I rolled out my own(*) "no-deps" C++ Jinja template library (https://github.com/google/minja) just to add tool call support to llama.cpp (https://github.com/ggerganov/llama.cpp/pull/9639).

(*): I mean, technically my employer's :-D



