ORMs aren’t bad, but learn their escape hatches, or you’ll struggle once you need to do anything complicated.
I agree that it's very important to not let the physical schema leak into the rest of the system, and to have a strong conceptual model (aka entities and relations). This has been well understood for almost half a century: https://en.wikipedia.org/wiki/Three-schema_approach
But I don't think ORMs are in any special position to help with this. They typically introduce so much other confusion that they tend to divert attention from designing both a good physical schema and a good conceptual model, and from maintaining a sensible mapping between the two. This can be done with RDBMS views, for example, at a fraction of the overhead of an ORM. Most ORM-based code bases I've seen leak tons of db-level details.
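To make the views point concrete, here's a minimal sketch using SQLite through Python's stdlib: the table and column names are made up for illustration, but the pattern is the one described above - the physical schema uses its own internal layout, while a view presents the conceptual model to the rest of the system.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- physical schema: internal naming, split name columns
    CREATE TABLE usr_tbl (
        usr_id    INTEGER PRIMARY KEY,
        fname_txt TEXT,
        lname_txt TEXT,
        mail_txt  TEXT
    );

    -- conceptual model exposed to the rest of the system
    CREATE VIEW users AS
    SELECT usr_id                        AS id,
           fname_txt || ' ' || lname_txt AS full_name,
           mail_txt                      AS email
    FROM usr_tbl;
""")
conn.execute("INSERT INTO usr_tbl VALUES (1, 'Ada', 'Lovelace', 'ada@example.com')")

# Application code queries the view; the physical layout can change
# (rename columns, split tables) as long as the view is updated to match.
row = conn.execute("SELECT id, full_name, email FROM users").fetchone()
print(row)  # (1, 'Ada Lovelace', 'ada@example.com')
```

The view is the mapping layer: application code never names `usr_tbl`, so the physical schema can be refactored without touching callers.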
> switching to a completely different storage system in the future
Designing for this eventuality is not healthy IMO. If you get there, it will be a so-called "good problem to have" and you will have to deal with whatever unique challenges you face at that level. We might as well be writing code with the possibility of "switching to a completely different programming language in the future" in mind. Yes, clean, modular code will help, but beyond that, not committing to the capabilities of the tools you have chosen will harm your system.
I think there are plenty of bad ORMs and plenty of ways to use the good ones badly, but that doesn’t mean they aren’t providing the value I mentioned. For instance, Entity Framework Core with code-first migrations has you designing the data models themselves, then wiring up relationships and other metadata (indexes, keys, etc.) in the DB context - your actual entities are completely portable and have nothing to do with the db outside of being used by it.
And sure, needing to switch to another storage system may be a good problem to have... that doesn’t mean you should explicitly tie all of your code to one particular RDBMS. If a user is a user is a user, it shouldn’t matter to anything else in your codebase how or where it is stored, it should still be the same entity. Moving those users from your SQL Server to Mongo or to a third party like Auth0 or an Azure/AWS/etc. federated directory service doesn’t change the fact that every user has an ID, an email, a name, etc.
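The same decoupling can be sketched without any particular ORM. This is not EF Core specifically - just an illustrative stand-in, with made-up names, showing the shape of the argument: a plain entity that knows nothing about storage, plus interchangeable repositories, so a user is the same entity whether it lives in memory or in a SQL database.

```python
import sqlite3
from dataclasses import dataclass

# The entity: no persistence details at all.
@dataclass
class User:
    id: int
    email: str
    name: str

# Two interchangeable storage backends; callers only ever see User.
class InMemoryUserRepo:
    def __init__(self):
        self._users = {}
    def save(self, user):
        self._users[user.id] = user
    def get(self, user_id):
        return self._users[user_id]

class SqliteUserRepo:
    def __init__(self, conn):
        self.conn = conn
        conn.execute(
            "CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY, email TEXT, name TEXT)")
    def save(self, user):
        self.conn.execute("INSERT OR REPLACE INTO users VALUES (?, ?, ?)",
                          (user.id, user.email, user.name))
    def get(self, user_id):
        row = self.conn.execute(
            "SELECT id, email, name FROM users WHERE id = ?", (user_id,)).fetchone()
        return User(*row)

# Swapping the backend does not change what a User is.
for repo in (InMemoryUserRepo(), SqliteUserRepo(sqlite3.connect(":memory:"))):
    repo.save(User(1, "ada@example.com", "Ada"))
    print(repo.get(1))  # User(id=1, email='ada@example.com', name='Ada')
```

Moving from one backend to another means writing a new repository, not rewriting everything that touches users.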
Code for today, but design for tomorrow.
I don't hear people talk about "coding for the web, but design so that you can easily switch to deploy as a Windows desktop app." Or "write it in Python, but in such a way that we can easily swap to OCaml." It seems to me databases are uniquely treated this way, as some kind of disposable, simple piece of side equipment. Again, modular code will always be easier to migrate, but I prefer to take full advantage of db capabilities, as it results in much less code and frees up time and mental space to focus on a good conceptual model and physical schema, among other things.
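As one small example of "taking full advantage of db capabilities," consider a running total. Done in application code, it's a loop and mutable state; pushed into the database it's one window-function query. This sketch uses SQLite via Python's stdlib (assuming a SQLite build new enough for window functions, i.e. 3.25+); the table is made up for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE payments (day INTEGER, amount INTEGER)")
conn.executemany("INSERT INTO payments VALUES (?, ?)",
                 [(1, 100), (2, 250), (3, 50)])

# Running total computed in SQL instead of application code.
rows = conn.execute("""
    SELECT day,
           amount,
           SUM(amount) OVER (ORDER BY day) AS running_total
    FROM payments
    ORDER BY day
""").fetchall()
print(rows)  # [(1, 100, 100), (2, 250, 350), (3, 50, 400)]
```

The same applies to the bigger features (CTEs, constraints, triggers, full-text search): the more of this the database handles, the less code you maintain yourself.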
I've never used EF, so I might not see what you are seeing.
This is exactly right - lots of people are still cargo-culting rules of thumb that no longer make any sense.
This was an artifact of the last generation's commercial DB market. Open source DBs weren't "there" yet; a combination of real limitations and risk-conservatism kept companies shoveling huge amounts of money at vendors for features and stability now provided by `apt-get install postgresql-server`.
If you just lit seven figures on fire for a database license, you're not hungry to do it again, so you wanted all your software to be compatible with whichever vendor you just locked yourself in to. And certain DB vendors are very well known for brass-knuckle negotiation; if you could credibly threaten to migrate to $competition instead of upgrading, it was one of the few actually useful negotiating levers available.
Today, open source DBs are better than the commercial ones in many situations, certainly not worse in general use, and the costs of running a bunch of different ones are far lower. Not to mention, the best way to win a software audit is to run zero instances of something.
When you have a small team working on a given tool that only really needs to manage its own data, it really doesn't matter. But at some point, you do need expert gatekeepers to tell engineers when they're Doing It Wrong - when there are many heterogeneous clients accessing large datastores for different purposes, complex audit requirements, etc.
The number of teams who eventually move to a different data store? Almost zero.
Getting "locked in" to a database is a non-issue. In fact you should get locked into a database system, provided you picked a good one to start with. Most teams never even scratch the surface of what a powerful DB like Postgres can do for them and it breaks my heart every time.