The Vectara platform combines embedding, a vector store, and a state-of-the-art retrieval model, making it a valuable component for building LangChain applications.
HN community: what other pain points do you experience when building scalable LLM-powered applications?