That's reasonable. Also consider going the other way: keep the per-tenant logical databases and split up some or all of the compute layer so it has single tenancy or bounded tenancy. For example, if your compute layer is a web server, running multiple sets of web servers behind something that routes requests to a given set based on a tenant identifier can chunk your multiple-noisy-neighbors problem into at least multiple noisy "neighborhoods", with the (expensive) extreme of a server per tenant. If your compute layer is e.g. a service bus/queue worker, the same principles apply: multiple sets of workers deciding what to work on based on a tenant ID, or per-tenant/per-group topics or queues. You can put the cross-cutting/weird workloads onto their own areas of hardware as well.
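To make the web-routing half concrete, a minimal sketch of picking a "neighborhood" from a tenant identifier might look like the following; the pool names and pick_upstream are made up for illustration, and in practice this logic would live in whatever sits in front of the servers (reverse proxy, load balancer, API gateway):

```python
import hashlib

# Known-noisy tenants get pinned to their own pool ("neighborhood");
# everyone else is hashed into one of the shared pools.
PINNED_TENANTS = {
    "bigcorp": "pool-heavy-1",
}
SHARED_POOLS = ["pool-a", "pool-b", "pool-c"]

def pick_upstream(tenant_id: str) -> str:
    """Return the server pool that should handle this tenant's requests."""
    if tenant_id in PINNED_TENANTS:
        return PINNED_TENANTS[tenant_id]
    # Stable hash so a given tenant always lands in the same pool.
    bucket = int(hashlib.sha256(tenant_id.encode()).hexdigest(), 16) % len(SHARED_POOLS)
    return SHARED_POOLS[bucket]
```

The server-per-tenant extreme is just every tenant being pinned.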
I propose this because I think having database instances split up by tenant (even if multiple DBs share the same physical server) is actually a pretty good place to be, especially if you can shuffle per-tenant databases around onto new hardware and play "tetris" with the noisiest tenants' DBs. Moving back to multitenant-everything seems like a regression, and using (message|web|request) routing to break the compute layer up into per-tenant or per-domain clusters of hardware can often unlock some of the main benefits of microservices without a massive engineering effort.
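One thing that keeps the "tetris" cheap is a small catalog mapping each tenant to whichever server currently holds its database, so moving a noisy tenant's DB becomes a data migration plus a catalog update rather than a code change. The names and schema below are just assumptions to sketch the shape:

```python
from dataclasses import dataclass

@dataclass
class TenantDbLocation:
    tenant_id: str
    host: str       # DB server currently hosting this tenant's database
    database: str   # per-tenant logical database name

# In practice this lives in a small, heavily cached catalog store,
# not a hard-coded dict.
CATALOG = {
    "acme":    TenantDbLocation("acme",    "db-shared-01", "tenant_acme"),
    "bigcorp": TenantDbLocation("bigcorp", "db-heavy-02",  "tenant_bigcorp"),  # moved off the shared box
}

def connection_string(tenant_id: str) -> str:
    loc = CATALOG[tenant_id]
    return f"postgresql://app@{loc.host}/{loc.database}"
```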
>That's reasonable. Also consider going the other way: keep the per-tenant logical databases and split up some or all of the compute layer so it has single tenancy or bounded tenancy. For example, if your compute layer is a web server, running multiple sets of web servers behind something that routes requests to a given set based on a tenant identifier can chunk your multiple-noisy-neighbors problem into at least multiple noisy "neighborhoods", with the (expensive) extreme of a server per tenant. If your compute layer is e.g. a service bus/queue worker, the same principles apply: multiple sets of workers deciding what to work on based on a tenant ID, or per-tenant/per-group topics or queues. You can put the cross-cutting/weird workloads onto their own areas of hardware as well.
This pretty much describes exactly where we are right now. We've been able to migrate the big customers to a new, less overloaded database server, and we could keep doing that. I believe it's what you call a "bridge" architecture: the compute layer is stateless and can serve any tenant, and there's a queue/service bus to offload a lot of work that the web servers shouldn't be doing. That stuff all runs on autoscaling, but even that's not a panacea.
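On the queue/service bus side, I suppose the bounded-tenancy version of this would be routing the noisiest tenants' jobs to their own queues so those workers can scale (or be pinned to hardware) independently of the rest, something like the sketch below; the queue names and in-memory "bus" are placeholders for whatever the real service bus provides:

```python
from collections import defaultdict

HEAVY_TENANTS = {"bigcorp"}      # the tenants worth isolating
_bus = defaultdict(list)         # stand-in for the real queue/service bus

def queue_for(tenant_id: str) -> str:
    return "jobs-heavy" if tenant_id in HEAVY_TENANTS else "jobs-default"

def enqueue(tenant_id: str, job: dict) -> None:
    _bus[queue_for(tenant_id)].append(job)

# Each worker deployment would subscribe only to its own queue
# (e.g. "jobs-heavy") and autoscale separately from the default workers.
```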