To the grandparent: you typically solve this application consistency issue by using something like immutable object store semantics with versioned references. The object store is capable of answering requests for multiple versions of the asset, and the application metadata store tracks individual versions. You can sequence the order in which you commit to these stores, so the asset is always available before it is published in the metadata store. Alternatively, you can make consumers aware of temporary unavailability, so they can wait for the asset to become available even if they stumble on the new version reference before the content is committed.
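A minimal sketch of that commit ordering, with in-memory dicts standing in for the object store and the metadata store (all names here are hypothetical, not from any particular system):

```python
# Sketch of the ordering described above: the asset bytes are written
# under an immutable, versioned key first, and the metadata "pointer"
# is updated only after that write succeeds, so readers never see a
# published version whose content is missing.

object_store = {}   # versioned, immutable asset store (stand-in for S3 etc.)
metadata = {}       # application metadata: asset id -> published version

def publish(asset_id: str, version: int, content: bytes) -> None:
    key = f"{asset_id}/v{version}"
    assert key not in object_store, "versions are immutable, never overwritten"
    object_store[key] = content          # step 1: commit the asset content
    metadata[asset_id] = version         # step 2: publish the version reference

def fetch(asset_id: str) -> bytes:
    version = metadata[asset_id]         # readers follow the published pointer,
    return object_store[f"{asset_id}/v{version}"]  # which is always fully written

publish("logo", 1, b"old")
publish("logo", 2, b"new")
assert fetch("logo") == b"new"           # old version remains retrievable by key
```

If the process dies between step 1 and step 2, the worst case is an orphaned asset version, not a dangling reference, which is why the ordering matters.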
You can also find hybrids where the metadata store is used to track the asset lifecycle, exposing previous and next asset version references in a sort of two-phase commit protocol at the application level. This can allow consuming applications to choose whether they wait for the latest or move forward with older data. It also makes it easier to handle failure/recovery procedures such as canceling an update that is taking too long and reclaiming resources.
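One way such a hybrid might look, sketched with a record that tracks both the committed and the pending version (the record fields and function names are illustrative assumptions, not a reference to any specific system):

```python
# Sketch of the hybrid described above: the metadata record exposes both
# the current (committed) version and the pending (next) version, so
# consumers can choose to read stable data or wait for the update, and a
# stalled update can be cancelled and its resources reclaimed.

from dataclasses import dataclass
from typing import Optional

@dataclass
class AssetRecord:
    current: Optional[int] = None   # last fully committed version
    pending: Optional[int] = None   # version being written, not yet visible

def begin_update(rec: AssetRecord, version: int) -> None:
    rec.pending = version           # phase 1: announce the upcoming version

def commit_update(rec: AssetRecord) -> None:
    rec.current = rec.pending       # phase 2: flip the published reference
    rec.pending = None

def cancel_update(rec: AssetRecord) -> None:
    rec.pending = None              # e.g. update took too long; reclaim storage

rec = AssetRecord(current=1)
begin_update(rec, 2)
# At this point a consumer can read rec.current (v1) or wait for v2.
commit_update(rec)
assert rec.current == 2 and rec.pending is None
```

The cancel path is the failure/recovery hook: since the pending version was never published, dropping it leaves consumers unaffected.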
As such, splitting the database may introduce significant consistency issues that a backup doesn't.
I believe this splitting technique is not a good one, except perhaps for a narrow set of use cases.
Are you saying that even a single-database backup is not atomic?