"During a query that references any remote tables on a foreign server, postgres_fdw opens a transaction on the remote server if one is not already open corresponding to the current local transaction. The remote transaction is committed or aborted when the local transaction commits or aborts. Savepoints are similarly managed by creating corresponding remote savepoints."
The mental model is slightly different. FDWs are basically server-to-server linkages, not literally clustering. As a result, data consistency between servers is less of an issue: the data likely lives in only one place (read replicas and failovers notwithstanding). There may be some error handling and try/rollback around remote transactions baked into postgres_fdw, but I'm not familiar enough to say.
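To make the docs quote above concrete, here is a rough sketch of that linkage (server name, host, and table are hypothetical placeholders), with comments marking where postgres_fdw mirrors the local transaction on the remote side:

```sql
-- Hypothetical setup: a foreign server "shard_b" with one mapped table.
CREATE EXTENSION IF NOT EXISTS postgres_fdw;

CREATE SERVER shard_b FOREIGN DATA WRAPPER postgres_fdw
    OPTIONS (host 'shard-b.internal', dbname 'app');

CREATE USER MAPPING FOR CURRENT_USER SERVER shard_b
    OPTIONS (user 'app', password 'secret');

CREATE FOREIGN TABLE remote_orders (id int, total numeric)
    SERVER shard_b OPTIONS (table_name 'orders');

BEGIN;                                            -- local transaction
UPDATE remote_orders SET total = 0 WHERE id = 1;  -- FDW opens a remote transaction here
SAVEPOINT s1;                                     -- mirrored as a remote savepoint
ROLLBACK TO SAVEPOINT s1;                         -- rolls back the remote savepoint too
COMMIT;                                           -- commits the remote transaction as well
```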
An interesting PR piece about building a recommendation engine.
I fully believe this is a PR piece to drive traffic, because Postgres has very little to do with AI beyond being a database.
My firm looked into Postgres extremely deeply, down to the source code, around text search and AI.
I would highly recommend against using this; stick to more standard ways of doing NLP.
Postgres still has open bugs around text search, and limits flagged five years ago still haven't been lifted. We concluded we couldn't trust anything but the core ecosystem around it.
Couldn't you implement this in "pure" SQL by using pgsql-http https://github.com/pramsey/pgsql-http (from Crunchy) for the webservice calls to the OpenAI API?
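Something like this, I'd guess. A sketch assuming the extension is installed and using its `http()` function with a composite `http_request` (the `app.openai_key` setting and the input text are placeholders; check the pgsql-http README for the exact request shape):

```sql
CREATE EXTENSION IF NOT EXISTS http;

-- Hypothetical: fetch an embedding from the OpenAI API, in SQL.
SELECT content::jsonb -> 'data' -> 0 -> 'embedding' AS embedding
FROM http((
    'POST',
    'https://api.openai.com/v1/embeddings',
    ARRAY[http_header('Authorization',
                      'Bearer ' || current_setting('app.openai_key'))],
    'application/json',
    json_build_object('model', 'text-embedding-ada-002',
                      'input', 'some text to embed')::text
)::http_request);
```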
Another way I'm experimenting with is combining AI and Postgres to build SQL queries and then run them directly. That way users can gain valuable data insights (essentially building data dashboards themselves) without bugging the data analyst [1]. Still in beta.
And as the article notes "Postgres is equipped" to query, format and return the data needed.
I made an app like that for myself using OpenAI (codex). While it works pretty well for basic stuff (near 100%), it fails hard when you need more advanced queries.
E.g., I asked it to write a query to check how many times "user A" triggered "event X" before triggering "event Y" (using a column of timestamps).
Beyond the prompt as you see above, I included the events table with id, userName, eventType and date fields. Including your table definitions in the prompt gives (in my experience) high accuracy.
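For what it's worth, one hand-written version of that query, reading "before" as "before the user's first event Y" and using the schema described above:

```sql
-- Count "event X" rows for user A that occurred before that
-- user's first "event Y".
SELECT count(*)
FROM events e
WHERE e."userName"  = 'user A'
  AND e."eventType" = 'event X'
  AND e.date < (SELECT min(y.date)
                FROM events y
                WHERE y."userName"  = 'user A'
                  AND y."eventType" = 'event Y');
```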
I've been doing the cosine similarity in plain Ruby while I await the availability of pgvector on AWS RDS. It scales well enough for small prototype datasets (talking fewer than 100 rows here, though I haven't load tested it yet).
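Once pgvector lands, the Ruby-side loop can likely move into the database. A sketch of the usual pgvector pattern (table and dimension are made up; `<=>` is pgvector's cosine distance operator, so similarity is `1 - distance`):

```sql
CREATE EXTENSION IF NOT EXISTS vector;

CREATE TABLE items (
    id bigserial PRIMARY KEY,
    embedding vector(3)   -- dimension depends on your embedding model
);

-- Top 5 rows most similar to a query embedding, by cosine similarity.
SELECT id, 1 - (embedding <=> '[1, 0, 0]') AS cosine_similarity
FROM items
ORDER BY embedding <=> '[1, 0, 0]'
LIMIT 5;
```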