Today, we see a lot of shared-nothing systems, where database slaves are kept in sync over the network. That's basically how Google stores data at scale. When you get big enough, you almost have to go that way.
The funny thing is, I am still waiting for these tools. We still have DBAs tuning DBMSes by hand today. Since the early '90s, I have been asking why the DBMS wasn't doing automated tuning for every database being created, especially on the major vendors' systems.
Every DBA tuning guide I read over the years described techniques that should have been automated and run by the DBMS itself.
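To make the point concrete, here is a toy sketch (in Python, with a made-up query-log format) of the kind of advice those guides walk DBAs through by hand: count which columns keep showing up in WHERE clauses and suggest indexes for the hot ones. This is an illustration of the idea, not any vendor's actual advisor.

```python
import re
from collections import Counter

def suggest_indexes(query_log, threshold=2):
    """Toy index advisor: scan a list of SELECT statements, count the
    columns used in WHERE equality predicates, and suggest an index on
    any (table, column) pair seen at least `threshold` times."""
    usage = Counter()
    for sql in query_log:
        m = re.search(r"FROM\s+(\w+)\s+WHERE\s+(\w+)\s*=", sql, re.IGNORECASE)
        if m:
            table, column = m.group(1), m.group(2)
            usage[(table, column)] += 1
    return [f"CREATE INDEX idx_{t}_{c} ON {t}({c});"
            for (t, c), n in usage.items() if n >= threshold]

# Hypothetical workload: customer_id is queried often, status rarely.
log = [
    "SELECT * FROM orders WHERE customer_id = 42",
    "SELECT * FROM orders WHERE customer_id = 7",
    "SELECT * FROM orders WHERE status = 'open'",
]
print(suggest_indexes(log))
# → ['CREATE INDEX idx_orders_customer_id ON orders(customer_id);']
```

A real DBMS already has far better information than a log scraper: it sees every query plan, every cardinality estimate, and every scan it performs. That is exactly why this belongs inside the engine, not in a DBA's runbook.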
The physical characteristics of a database should not be the concern of its designers or implementers. They should focus on the logical design and leave it to the DBMS to decide how that logical design is physically laid out.
Whether the system uses shared memory, shared disks, or shared nothing should be a characteristic of the DBMS, not our concern. Our concern should be whether the DBMS runs our logical design efficiently.
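That separation is easy to demonstrate with any relational engine. Here is a small sketch using Python's built-in sqlite3 module (the schema is made up): the logical query never mentions storage, and adding an index, a purely physical change, alters only how the engine may execute the query, never what it returns.

```python
import sqlite3

# In-memory database; the logical design is just the schema.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER)")
con.executemany("INSERT INTO orders (customer_id) VALUES (?)",
                [(42,), (7,), (42,)])

query = "SELECT COUNT(*) FROM orders WHERE customer_id = ?"
before = con.execute(query, (42,)).fetchone()[0]

# A physical change: the planner may now use the index instead of a full scan.
con.execute("CREATE INDEX idx_orders_customer ON orders(customer_id)")
after = con.execute(query, (42,)).fetchone()[0]

# The logical answer is unchanged; only the access path can differ.
print(before, after)  # → 2 2
```

The same argument applies one level up: whether the rows behind that query live in shared memory, on a shared disk, or spread across a shared-nothing cluster should be equally invisible to the query.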
Closest thing I know of at the moment.