Hacker News
Go library for in-process vector search and embeddings with llama.cpp (github.com/kelindar)
114 points by kelindar 81 days ago | 29 comments



This library was created to provide an easy and efficient solution for embeddings and vector search, making it perfect for small to medium-scale projects that still need some vector search. It's built around a simple idea: if your dataset is small enough, you can achieve accurate results with brute-force techniques, and with some optimizations like SIMD, you can keep things fast and lean.
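The core idea is simple enough to sketch: score every stored vector against the query and keep the best matches. Below is a minimal, scalar Go version (names are hypothetical, not the library's actual API; a real implementation would replace the inner loop with SIMD kernels):

```go
package main

import (
	"fmt"
	"math"
	"sort"
)

// cosine returns the cosine similarity of two equal-length vectors.
// This scalar loop is the part SIMD optimizations speed up.
func cosine(a, b []float32) float64 {
	var dot, na, nb float64
	for i := range a {
		dot += float64(a[i]) * float64(b[i])
		na += float64(a[i]) * float64(a[i])
		nb += float64(b[i]) * float64(b[i])
	}
	return dot / (math.Sqrt(na) * math.Sqrt(nb))
}

// search brute-forces every stored vector and returns the indices of
// the k most similar ones -- exact results, no index structure needed.
func search(store [][]float32, query []float32, k int) []int {
	type hit struct {
		idx   int
		score float64
	}
	hits := make([]hit, 0, len(store))
	for i, v := range store {
		hits = append(hits, hit{i, cosine(v, query)})
	}
	sort.Slice(hits, func(i, j int) bool { return hits[i].score > hits[j].score })
	if k > len(hits) {
		k = len(hits)
	}
	out := make([]int, k)
	for i := range out {
		out[i] = hits[i].idx
	}
	return out
}

func main() {
	store := [][]float32{
		{1, 0, 0},
		{0, 1, 0},
		{0.9, 0.1, 0},
	}
	fmt.Println(search(store, []float32{1, 0, 0}, 2)) // [0 2]
}
```

For small datasets this O(n) scan is often faster in practice than maintaining an approximate index, and it never returns a wrong neighbor.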


I love that you chose to wrap the C++ with purego instead of requiring CGO! I wrapped Microsoft's Lightgbm library and found purego delightful. (To make deployment easier, I embed the compiled library into the Go binary and extract it to a temp directory at runtime. YMMV.)


This post led me to purego, and I've just finished moving my toy project that uses PKCS#11 libraries from cgo to it. It's so much better now! No need to jump through hoops for cross-compilation.


IME Linux and macOS users usually have a compiler available so CGO is mostly only a hassle for Windows, but on Windows this capability is built into the Go stdlib, e.g. `syscall.NewLazyDLL("msvcrt.dll").MustFindProc(...)`


Thank you for pointing out this option. Any idea why the Go stdlib doesn't offer this for Linux and macOS? I'd rather not add compiling other languages to my Go workflow.


How is the latency of calling purego bindings vs cgo? The latter seems prohibitively expensive for most of my projects.


IIRC, purego repurposes a lot of cgo machinery, so I don't think there would be much difference. For my purposes, it doesn't matter since the ML library does several seconds to minutes of work using multiple cores per call.


I haven't checked (I make maybe 10 calls per second at most). Intuitively, they should be similar.


Have you considered using HNSW instead of brute force?


Nice work! I wrote a similar library (https://github.com/stillmatic/gollum/blob/main/packages/vect...) and similarly found that exact search (with the same simple heap + SIMD optimizations) is quite fast. With 100k objects, retrieval queries complete in <200ms on an M1 Mac. No need for a fancy vector DB :)

That library uses `viterin/vek` for SIMD math: https://github.com/viterin/vek/
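The "simple heap" optimization referred to above keeps only the k best candidates while scanning, turning the O(n log n) full sort into O(n log k). A sketch with the standard `container/heap` (function names are illustrative, not from either library):

```go
package main

import (
	"container/heap"
	"fmt"
)

// scored pairs a stored-vector index with its similarity to the query.
type scored struct {
	idx   int
	score float64
}

// minHeap keeps the k best hits seen so far, worst on top so it can be
// evicted in O(log k) when a better candidate arrives.
type minHeap []scored

func (h minHeap) Len() int           { return len(h) }
func (h minHeap) Less(i, j int) bool { return h[i].score < h[j].score }
func (h minHeap) Swap(i, j int)      { h[i], h[j] = h[j], h[i] }
func (h *minHeap) Push(x any)        { *h = append(*h, x.(scored)) }
func (h *minHeap) Pop() any {
	old := *h
	x := old[len(old)-1]
	*h = old[:len(old)-1]
	return x
}

// topK scans all scores once, keeping at most k candidates in the heap.
func topK(scores []float64, k int) []int {
	h := &minHeap{}
	for i, s := range scores {
		if h.Len() < k {
			heap.Push(h, scored{i, s})
		} else if s > (*h)[0].score {
			(*h)[0] = scored{i, s}
			heap.Fix(h, 0)
		}
	}
	// Drain from worst to best, filling the result back to front.
	out := make([]int, h.Len())
	for i := len(out) - 1; i >= 0; i-- {
		out[i] = heap.Pop(h).(scored).idx
	}
	return out
}

func main() {
	fmt.Println(topK([]float64{0.1, 0.9, 0.5, 0.7}, 2)) // [1 3]
}
```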


Look what Go needs to mimic even a fraction of .NET’s SIMD power… ;)


Interesting choice to call llama.cpp directly, instead of relying on a server like Ollama. Nice!

I wrote a similar library which calls Ollama (or OpenAI, Vertex AI, Cohere, ...), with one benefit being zero library dependencies: https://github.com/philippgille/chromem-go


No need to use Ollama. llama.cpp has its own OpenAI-compatible server[0] and it works great.

[0] https://github.com/ggerganov/llama.cpp#web-server
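Because the server speaks the OpenAI wire format, hitting it from Go needs nothing beyond the standard library. A sketch, assuming a local llama.cpp server with embeddings enabled at `http://localhost:8080` (the base URL and model name are placeholders):

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

// embeddingRequest mirrors the OpenAI-style /v1/embeddings payload.
type embeddingRequest struct {
	Input string `json:"input"`
	Model string `json:"model"`
}

// newEmbeddingRequest builds (but does not send) the HTTP request, so
// it can be inspected without a running server; pass the result to an
// http.Client to actually fetch the embedding.
func newEmbeddingRequest(baseURL, model, input string) (*http.Request, error) {
	body, err := json.Marshal(embeddingRequest{Input: input, Model: model})
	if err != nil {
		return nil, err
	}
	req, err := http.NewRequest(http.MethodPost, baseURL+"/v1/embeddings", bytes.NewReader(body))
	if err != nil {
		return nil, err
	}
	req.Header.Set("Content-Type", "application/json")
	return req, nil
}

func main() {
	req, err := newEmbeddingRequest("http://localhost:8080", "local", "hello world")
	if err != nil {
		panic(err)
	}
	fmt.Println(req.Method, req.URL) // POST http://localhost:8080/v1/embeddings
}
```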


Thanks, I didn't know that.

Do you happen to know the reason to use Ollama rather than the built-in server? How much work is required to get similar functionality? Looks like just downloading the models? I find it odd that Ollama took off so quickly if llama.cpp had the same built-in functionality.


Yes, I'm aware. I was contrasting the general use of an inference server vs calling llama.cpp directly (not via HTTP request).

And among servers Ollama seems to be more popular, so it's worth mentioning when talking about support for local LLMs.


Nice! I would have needed something like this last year.


USearch has had GoLang bindings for a long time, but it's more low-level and you'd have to use something else for embeddings: https://github.com/unum-cloud/usearch/tree/main/golang


Could anyone recommend a similar library for Python?


I've used the Sentence Transformers Python library successfully for this: https://www.sbert.net/

My own LLM CLI tool and Python library includes plugin-based support for embeddings (or you can use API-based embeddings like those from Jina or OpenAI) - here's my list of plugins that enable new embeddings models: https://llm.datasette.io/en/stable/plugins/directory.html#em...

More about that in my embeddings talk from last year: https://simonwillison.net/2023/Oct/23/embeddings/


The languagemodels[1] package that I maintain might meet your needs.

My primary use case is education, as I and others use this for short student projects[2] related to LLMs, but there's nothing preventing this package from being used in other ways. It includes a basic in-process vector store[3].

[1] https://github.com/jncraton/languagemodels

[2] https://www.merlot.org/merlot/viewMaterial.htm?id=773418755

[3] https://github.com/jncraton/languagemodels?tab=readme-ov-fil...


Do these queries complete within 10ms?


> git submodule update --init --recursive

Nope. This looks cool, but Git submodules are cursed.


I think you mean recursed


What's a better option for linking 3rd party code?


Is this a joke? Go has built-in support for importing 3rd party code.


Go has built-in support for importing Go modules, but the submodule is for a C++ library not a Go module, so your suggestion isn't workable.


Dip it in a blessed clear potion.


Why?


Poor integration, mostly.

It’s fairly easy to get into an irrecoverably broken state using an intermediate-level Git operation such as an interactive rebase (as of a couple of years ago). (It’s probably recoverable by reaching into the guts of the repo, but given you can’t do the rebase either way I’m still taking off a point.) The distinguished remote URLs thing is pointlessly awkward—I’ve never gotten pushing to places where those remotes are inaccessible to work properly when the pushed commit updates the submodule reference. (I believe it’s possible, but given the amount of effort I’ve put into unsuccessfully figuring that out, I’m comfortable taking off a point here as well.)

I like git submodules, I think they’re fundamentally the right way to do things. But despite their age they aren’t in what I’d call a properly working state, even compared to Git’s usual amount of sharp edges.



