This replayability is discussed in Designing Data-Intensive Applications (DDIA), a book by Martin Kleppmann. Essentially, you capture the change stream from your primary Postgres database (Change Data Capture, or CDC), pipe it through Kafka, and replay it on any other data store.[0] This is also the basis of traditional database replication, where the change logs are replayed on other databases.
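For concreteness, here's a minimal sketch of the replay side in Python, assuming a Debezium-style CDC topic is already flowing through Kafka. The topic name, event shape, and the in-memory dict standing in for a downstream store are all illustrative, not from the linked post:

    # Replay a CDC change log from Kafka into another store.
    # Assumes Debezium-flavored JSON events (hypothetical topic/shape).
    import json
    from kafka import KafkaConsumer  # pip install kafka-python

    consumer = KafkaConsumer(
        "pgserver.public.users",          # hypothetical CDC topic
        bootstrap_servers="localhost:9092",
        value_deserializer=lambda v: json.loads(v.decode("utf-8")),
        auto_offset_reset="earliest",     # replay from the start to rebuild state
    )

    replica = {}                          # stand-in for Elasticsearch, Redis, etc.
    for msg in consumer:
        event = msg.value
        op = event.get("op")              # Debezium: c=create, u=update, d=delete, r=snapshot
        if op in ("c", "u", "r"):
            row = event["after"]
            replica[row["id"]] = row
        elif op == "d":
            replica.pop(event["before"]["id"], None)

Because the consumer starts from the earliest offset, pointing a fresh consumer group at the same topic rebuilds the downstream store from scratch, which is exactly the replay property being described.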
Is this architecture common? Well, I suspect it is overkill for most smaller organizations due to increased infrastructure complexity. I wouldn't do this just for the sake of doing it -- you may find yourself saddled with an increased maintenance workload just keeping the infrastructure running.
But if you truly have this use case, this is a well-known method for syncing data across different types of data stores (so-called polyglot persistence).
[0] https://www.confluent.io/blog/bottled-water-real-time-integr...