Currently, ES and Solr, both based on Lucene, can't really manage vector representations, as they are mainly built on inverted indexes over terms and n-grams.
ANN's potential applications extend to audio, bioinformatics, video, and any other modality that can be represented as a vector. All you need is an encoder! How nice.
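A minimal sketch of what "all you need is an encoder" looks like for text, using the sentence-transformers library (the model name is just one common choice, not something from this thread):

    # Any encoder that maps an item to a fixed-size vector will do;
    # this model name is an illustrative choice, not a recommendation.
    from sentence_transformers import SentenceTransformer

    model = SentenceTransformer("all-MiniLM-L6-v2")   # text encoder, 384-d output
    vecs = model.encode(["a query", "a document"])    # numpy array, shape (2, 384)
    print(vecs.shape)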
Faiss is definitely powerful. I have been running experiments using 80 million vectors that map to legal documents, and vectorizing protein folds (using AlphaFold). While it's an interesting technology, at this moment, at least for my use cases, I see it more as a lib or tool than a full-featured product like ES or Solr.
For instance, at the moment, updating a Faiss index is a non-trivial process, and many of the workflow tools you would expect in ES are missing. There is also the problem of encoding the input into vectors, which takes a few milliseconds per item (do you batch, do you parallelize, are you OK with eventual consistency?).
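To make the "lib, not product" point concrete, here's a minimal sketch of what mutating a raw Faiss index looks like in Python. The dimensions and ids are made up, and note that there is no in-place update:

    import faiss
    import numpy as np

    d = 300                                          # embedding dimensionality (assumed)
    index = faiss.IndexIDMap(faiss.IndexFlatL2(d))   # flat index, addressable by id

    vecs = np.random.rand(10000, d).astype("float32")
    ids = np.arange(10000, dtype="int64")
    index.add_with_ids(vecs, ids)

    # "Updating" a vector means removing the stale one and re-adding it
    index.remove_ids(np.array([42], dtype="int64"))
    index.add_with_ids(np.random.rand(1, d).astype("float32"),
                       np.array([42], dtype="int64"))

    # Search: distances and ids of the top-10 neighbours
    D, I = index.search(np.random.rand(1, d).astype("float32"), 10)

And that's the easy case: with trained index types (IVF etc.) you also have to think about when and on what data to retrain.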
I recently came across pgvector (Postgres + vector support): https://github.com/ankane/pgvector. Perhaps less performant, but easier for teams to work with, with support for migrations, ORMs, sharding, and all the other Postgres goodies.
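For anyone curious, the pgvector workflow is plain SQL. A rough sketch from Python (the table and column names are mine):

    import psycopg2

    conn = psycopg2.connect("dbname=demo")   # connection string is made up
    cur = conn.cursor()

    cur.execute("CREATE EXTENSION IF NOT EXISTS vector")
    cur.execute("CREATE TABLE IF NOT EXISTS docs "
                "(id bigserial PRIMARY KEY, embedding vector(3))")
    cur.execute("INSERT INTO docs (embedding) VALUES ('[1,2,3]'), ('[4,5,6]')")

    # Nearest neighbours by L2 distance, via pgvector's <-> operator
    cur.execute("SELECT id FROM docs ORDER BY embedding <-> '[2,3,4]' LIMIT 5")
    print(cur.fetchall())
    conn.commit()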
Another interesting, production-ready alternative is https://jina.ai.
And Google's ScaNN: https://www.youtube.com/watch?v=0SvrDtnUgV4
Lucene does have an ANN implementation due in 9.0, based on HNSW - see https://issues.apache.org/jira/browse/LUCENE-9004 for details. See also https://issues.apache.org/jira/browse/SOLR-12890 and https://issues.apache.org/jira/browse/SOLR-14397 for Solr.
Yeah, those are all good use cases. I was wondering about a different thing: will the demand for a distributed vector search service concentrate in a few big companies, since smaller companies can get by with a simpler solution and don't really need to pay for the technology?
> Currently, ES and Solr, both based on Lucene, can't really manage vector representations, as they are mainly built on inverted indexes over terms and n-grams.
ES has a kNN plugin, which stores vectors separately in each segment of the Lucene index. Plus, it can use better storage formats and algorithms than the inverted index.
I guess it depends on what you mean by "simple". The algorithms are complex, but there are good tools that implement them. I would imagine smaller companies would use off-the-shelf tooling, and I would argue that's simpler. Vector embeddings are unbelievably powerful, and with one of the good tools plus pretrained embeddings they often yield better results than classical methods.
Specifically for search, I use them to completely replace stemming, synonyms, etc. in ES. I match the query's embedding against the document embeddings and take the top 1000 or so, then ask ES for the BM25 scores of those 1000. I combine the embedding match score with BM25, recency, etc. for the final rank. The results are so much better than with stemming and friends, and it's overall simpler, because I can use off-the-shelf tooling and the data pipeline is simpler.
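Roughly, the shape of it (the index name, field names, weights, and client calls are illustrative, not my exact pipeline; score normalisation is deliberately naive):

    from elasticsearch import Elasticsearch   # e.g. es = Elasticsearch()

    def hybrid_search(query_text, query_vec, ann_index, id_map, es, k=1000):
        # Stage 1: top-k candidates by embedding similarity, e.g. a Faiss
        # inner-product index, so higher score = more similar
        sims, rows = ann_index.search(query_vec.reshape(1, -1), k)
        vec_score = {id_map[r]: float(s)
                     for r, s in zip(rows[0], sims[0]) if r != -1}
        cand = list(vec_score)

        # Stage 2: BM25 from ES, restricted to those candidates
        resp = es.search(index="docs", body={
            "query": {"bool": {
                "must":   {"match": {"body": query_text}},
                "filter": {"ids": {"values": cand}},
            }},
            "size": k,
        })
        bm25 = {h["_id"]: h["_score"] for h in resp["hits"]["hits"]}

        # Blend the signals (recency etc. would slot in the same way)
        blend = {d: 0.7 * vec_score.get(d, 0.0) + 0.3 * bm25.get(d, 0.0)
                 for d in cand}
        return sorted(blend, key=blend.get, reverse=True)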
I assume the documents are relatively small? Otherwise a single document may contain too many different topics, which makes it hard for one embedding to differentiate between queries.
Also, Pinecone (disclosure: I work there) has usage-based billing that starts at $72/month, so "paying for the technology" is not that scary.
I've also done similarity search over spiky vectors (most of the weight in a few dimensions, even if they aren't strictly sparse) with a conventional database (MySQL). In that case I could search a small fraction of the database and still have a "proof" that no remaining document could make it into the top N.
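The trick is in the spirit of Fagin's threshold algorithm. A toy version, assuming non-negative query weights that are zero outside a handful of dimensions (all names here are illustrative):

    import numpy as np

    def top_n_dot(query, vectors, n=10):
        dims = np.nonzero(query)[0]            # the query's few active dimensions
        # One sorted "posting list" per active dimension (a DB index per column)
        posting = {d: np.argsort(-vectors[:, d]) for d in dims}
        scores = {}
        for depth in range(len(vectors)):
            # Upper bound on the score of any document not yet seen
            threshold = sum(query[d] * vectors[posting[d][depth], d] for d in dims)
            for d in dims:
                doc = int(posting[d][depth])
                if doc not in scores:          # "random access": compute full score
                    scores[doc] = float(query[dims] @ vectors[doc, dims])
            best = sorted(scores, key=scores.get, reverse=True)[:n]
            if len(best) == n and scores[best[-1]] >= threshold:
                return best                    # proof: no unseen doc can beat these
        return sorted(scores, key=scores.get, reverse=True)[:n]

With spiky vectors the threshold drops fast, so the scan stops after a small fraction of the rows.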
Generally, though, the hyperdimensional indexes just aren't as good as 1-d indexes, just as the 1-d indexes are a bit better than 2-d and 3-d indexes. (E.g. consider the problem of deciding which lines to draw when you're drawing a map of a small piece of the New Mexico-Arizona border, where the border is a long straight line with both of its vertices far away.)
That's not a high threshold for any enterprise software company handling customer data, or for any consumer tech company with >10M users. Google, Elastic, and AWS thought so too: they have all introduced vector search in their platforms (or, in Elastic's case, are planning to).
It’s true that it might be more of a feature than a product. But it’s a feature that’s currently missing from a lot of products one would usually turn to.
I get that they're applying "non-traditional", machine-learned magnitudes and scoring schemes here, but couldn't you do the same thing on, say, Solr, given a little magic in the indexing and query pipelines?
Performance in terms of indexing time and throughput was awful, but searches were quick enough, and performance in terms of "write a paragraph about your invention and find related patents" was fantastic.
Consider word embeddings, where finding the most similar words to some input requires a nearest-neighbour search over 300 dimensions. That's a common task, and it's largely unsolved. Postgres, for example, lets you create indices over its "cube" datatype, but those are slower than the naive brute-force approach.
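For a sense of scale, the brute-force baseline is just a matrix-vector product, which is part of why it's so hard to beat. A minimal NumPy sketch with made-up sizes:

    import numpy as np

    emb = np.random.rand(400000, 300).astype("float32")   # e.g. a word-vector table
    emb /= np.linalg.norm(emb, axis=1, keepdims=True)     # pre-normalise once

    def most_similar(word_vec, k=10):
        q = word_vec / np.linalg.norm(word_vec)
        sims = emb @ q                                    # one pass over all rows
        top = np.argpartition(-sims, k)[:k]               # k best, unordered
        return top[np.argsort(-sims[top])]                # k ids, best first

A single pass like this streams the whole table through the CPU's vector units, while a cube-style tree index pays random access costs on every probe.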