Most people don't have tens of thousands of webservers and could run for a very long time with a more efficient in-process database that syncs to a networked read-only replica. I'm very surprised that MySQL and Postgres still don't have an in-process mode, because it's such an obvious win, and at a technical coding level it should (naively) be easy to switch from network calls to direct calls.
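To make the "direct calls" point concrete, here's a minimal sketch using SQLite (the canonical in-process database) in Python; the table and data are made up for illustration. Tools like Litestream already do roughly the replication half of this for SQLite, streaming the database out to read-only copies.

```python
import sqlite3

# SQLite runs inside the application process: every query is a
# direct library call into the same address space, with no socket,
# no wire protocol, and no network round trip.
conn = sqlite3.connect("app.db")
conn.execute("CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (name) VALUES (?)", ("alice",))
conn.commit()

# This lookup returns at memory speed; a networked database would
# pay a full round trip here even for a trivial primary-key fetch.
row = conn.execute("SELECT name FROM users WHERE id = ?", (1,)).fetchone()
print(row)
```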
And syncing databases is just waaay harder than having a dedicated one. It really is a serious pain to configure, debug, manage storage and performance requirements for, scale, and handle failure modes for, and so forth. That's assuming you can even find someone with the skills to do it properly so you never lose data.
It's far, far easier to just write normal SQL queries that don't make hundreds of requests than to deal with syncing a database across all your webservers.
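For the sake of illustrating "hundreds of requests": the classic failure mode is an N+1 loop, where every iteration is a separate round trip to the database, and the fix is a single JOIN. The schema and names below are hypothetical:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER PRIMARY KEY, user_id INTEGER);
    CREATE TABLE users  (id INTEGER PRIMARY KEY, name TEXT);
""")

# N+1 pattern: one query for the orders, then one more query per
# order. Against a networked database, each of these is a round trip.
orders = conn.execute("SELECT id, user_id FROM orders").fetchall()
for order_id, user_id in orders:
    conn.execute("SELECT name FROM users WHERE id = ?", (user_id,))

# The same data in one round trip: let the database do the join.
rows = conn.execute("""
    SELECT orders.id, users.name
    FROM orders JOIN users ON users.id = orders.user_id
""").fetchall()
```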
And because designing for 2 is really different from designing for 1, but not much harder if you do it from the beginning. Expanding from 1 to 2 when you unexpectedly need to because of sudden traffic, and having to re-architect everything, can be really hard.
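One concrete instance of "designing for 2 from the beginning" is keeping the application layer stateless. This is a hypothetical sketch, not anyone's actual setup; the point is that per-user state lives in shared storage rather than in one server's memory, so putting a second webserver behind a load balancer needs no re-architecture:

```python
import sqlite3  # stand-in for any shared store (Postgres, Redis, ...)

# Designing for 1: state in process memory. Works until server #2
# arrives, because the second process can't see this dict.
sessions_in_memory = {}

# Designing for 2: the same state in shared storage. Either webserver
# can serve any request, so scaling out is just adding a machine.
store = sqlite3.connect("shared.db")
store.execute(
    "CREATE TABLE IF NOT EXISTS sessions (token TEXT PRIMARY KEY, user_id INTEGER)"
)

def save_session(token: str, user_id: int) -> None:
    store.execute("INSERT OR REPLACE INTO sessions VALUES (?, ?)", (token, user_id))
    store.commit()

def load_session(token: str):
    return store.execute(
        "SELECT user_id FROM sessions WHERE token = ?", (token,)
    ).fetchone()
```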
I'm not talking about hobby sites here; I'm talking about anything designed for a business, where a day of downtime is unacceptable.
But doesn't the same logic apply to the database? Is the webserver/application layer really so much more CPU-intensive (and/or failure-prone) than the database in sites "designed for business" that you must design the webservers for load balancing from day one, yet can get away with never designing the database for sharding?