It makes terrible operational sense. What are the HA/DR, sharding, replica, and backup strategies and tools for pg_vector? What are the embedding integration and relevance tools? What are the reindexing strategies? What are the scaling, caching, and CPU thrashing resolution paths?
You're going to spend a bunch of time writing integrations that already exist for actual search engines, and you're going to be stuck and need to back out when search becomes a necessity rather than an afterthought.
What if you don't need those things yet and you just have some embeddings you want to query for cosine similarity? A dedicated vector database is way, way overkill for many people.
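For example, here's a minimal sketch of that simple case (the items table, its vector(3) embedding column, and the connection string are all hypothetical; it just uses psycopg and pgvector's cosine distance operator <=>):

    import psycopg

    # Embedding we want nearest neighbors for, as a pgvector text literal.
    query_embedding = "[0.1, 0.2, 0.3]"

    # Hypothetical database and table; adjust names to your own schema.
    with psycopg.connect("dbname=mydb") as conn:
        rows = conn.execute(
            "SELECT id, embedding <=> %s::vector AS cosine_distance "
            "FROM items "
            "ORDER BY embedding <=> %s::vector "  # order by distance, nearest first
            "LIMIT 5",
            (query_embedding, query_embedding),
        ).fetchall()

    # rows is a list of (id, cosine_distance) tuples.

If that's the whole requirement, it's one query against a database you already run, with the HA/DR and backup story you already have.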