This is great, but you might want to have multiple postgreses for the different workloads. DB postgres != rate-limit PG != search PG. It's pretty hard to optimize one DB for every workload.
Counterpoint: most people operate on workloads so trivial that they don't need optimizing.
I think the most important line in the article is the "let's see how far it gets us." It is absolutely trivial to invent situations where an architecture wouldn't work well, or scale, or "be optimal." It's far, far harder to just exist in reality, where most things are boring, and your "bad" architecture is all you ever need.
Replication works across the whole instance, though. I'm working on a PBX that uses two PostgreSQL instances: one for configuration, one for call logs. I can replicate the configuration database everywhere and keep only one copy of the call logs database.
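To be fair, that's only true of physical (streaming) replication. Logical replication (Postgres 10+) is scoped to a single database, so you could keep both databases in one instance and still replicate only the configuration tables. A rough sketch, with made-up publication, database, and user names:

```sql
-- On the primary, connected to the configuration database only:
CREATE PUBLICATION config_pub FOR ALL TABLES;

-- On each replica (connection string is illustrative; adjust to your setup):
CREATE SUBSCRIPTION config_sub
    CONNECTION 'host=primary dbname=config user=replicator'
    PUBLICATION config_pub;
```

The call-logs database never appears in any publication, so it stays on the primary. The trade-off is that logical replication doesn't carry DDL, so schema changes have to be applied on both sides.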
Multiple databases in a single Postgres instance fundamentally share the same underlying infrastructure (most importantly the WAL), so they don't offer much in terms of scalability or blast-radius isolation compared to putting all tables in the same database.