Yeah! Our FE is React/Next and our BE is a mix of Supabase, Pinecone, BigQuery, and QStash, with a mix of HF endpoints and PoplarML in our batch pipeline for some embedding models
Are you using it to store the embeddings (pgvector)? I'm asking to see where you're drawing the line between Supabase and Pinecone. We get this question often, and it's interesting to hear what others are doing and where they feel pgvector isn't appropriate
Hey! Loving the experience so far. We started the project before pgvector was supported on Supabase, and we also want to support hybrid search (I don't think pgvector supports this?)
Yes, I believe it can be done, but it requires a bit of work on your side. I'll see if we can come up with a demo and/or a Postgres extension that handles this for you
Looping back here. The LangChain community just released a version of hybrid search that uses Postgres full-text search. It's clever, and IMO probably a better approach than just sparse/dense vectors:
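For anyone curious how the two result sets can be merged: one common technique for fusing a full-text top-K list with a vector top-K list is Reciprocal Rank Fusion (RRF). This is just a sketch of that general idea, not LangChain's actual implementation — the function name and the `k=60` default are assumptions:

```python
def rrf(rankings, k=60):
    """Fuse several ranked lists of doc ids via Reciprocal Rank Fusion.

    Each list contributes 1 / (k + rank) per document, so a doc that
    appears near the top of both lists outranks one that tops only one.
    """
    scores = {}
    for ranked in rankings:
        for rank, doc_id in enumerate(ranked, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical results: dense (pgvector) top-K and sparse (full-text) top-K
dense_hits = ["a", "b", "c"]
sparse_hits = ["b", "d", "a"]
fused = rrf([dense_hits, sparse_hits])
# "b" wins: it ranks well in both lists, even though "a" tops the dense list
```

The nice property is that RRF works on ranks alone, so you never have to put cosine distances and full-text relevance scores on a comparable scale.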
Will check this out. It seems like it's doing two separate searches where you specify two separate top-K values. Curious what the trade-offs are between this approach and weighting sparse/dense vectors
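For contrast with the two-separate-searches approach, the weighted sparse/dense alternative usually means normalizing each retriever's raw scores and blending them with a single weight (often called alpha). A minimal sketch, assuming min-max normalization and higher-is-better scores on both sides — the names and example numbers are made up:

```python
def weighted_hybrid(dense_scores, sparse_scores, alpha=0.5):
    """Blend normalized dense and sparse scores: alpha*dense + (1-alpha)*sparse.

    dense_scores / sparse_scores: dict of doc_id -> raw score (higher is better).
    Docs missing from one retriever get 0 for that component.
    """
    def minmax(scores):
        lo, hi = min(scores.values()), max(scores.values())
        if hi == lo:
            return {d: 0.0 for d in scores}
        return {d: (v - lo) / (hi - lo) for d, v in scores.items()}

    dn, sn = minmax(dense_scores), minmax(sparse_scores)
    ids = set(dn) | set(sn)
    combined = {d: alpha * dn.get(d, 0.0) + (1 - alpha) * sn.get(d, 0.0)
                for d in ids}
    return sorted(combined, key=combined.get, reverse=True)

# Hypothetical scores: cosine similarities vs. full-text rank scores
dense = {"a": 0.9, "b": 0.5, "c": 0.1}
sparse = {"b": 3.0, "c": 1.0}
ranked = weighted_hybrid(dense, sparse, alpha=0.5)
```

The trade-off in a nutshell: a single alpha gives you one tunable knob over a combined score, but it's sensitive to how you normalize two very different score distributions, whereas two separate top-K searches fused by rank sidestep normalization at the cost of two queries and two K values to tune.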
Thanks for sharing this, everyone. It really helps the community figure out which architectures to focus on and keep up with interesting new developments.