I've become skeptical of this advice, of late. If the database is just a dumb object store, then I suppose it's fine. But I've also seen a fair bit of rework resulting from tests not catching a bug that was only reproducible when working against a real database. You then end up needing additional tests to cover that sort of stuff, which takes extra time to develop and maintain.
Between that and the whole "test-induced design damage" thing, there's some amount of test slowness I'll accept in return for reducing the development effort needed to maintain the tests. On the general principle that computers' time is way, way cheaper than humans' time.
As far as keeping tests that rely on stuff like the database running quickly goes, my new preferred plan of attack is to configure those components to use a RAM disk in the local dev and CI environments.
The real database is only slow if the only one you can use is the production one (it should be obvious why that's a bad idea). If you copy the schema and import it into SQLite with just the data you need for the test, it's fast.
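A minimal sketch of that approach in Python, using the stdlib sqlite3 module. The schema, table name, and fixture rows are hypothetical stand-ins for whatever you'd dump from your real database:

```python
import sqlite3

# Hypothetical schema and fixture rows - in practice you'd dump the schema
# from the real database (e.g. pg_dump --schema-only) and translate anything
# SQLite doesn't understand. Amounts are integer cents to avoid float noise.
SCHEMA = """
CREATE TABLE orders (
    id          INTEGER PRIMARY KEY,
    status      TEXT NOT NULL,
    total_cents INTEGER NOT NULL
);
"""

FIXTURES = [
    (1, "paid", 1999),
    (2, "pending", 500),
    (3, "paid", 1250),
]

def open_test_db():
    # ":memory:" keeps the whole database in RAM, so per-test setup
    # and teardown are close to free
    conn = sqlite3.connect(":memory:")
    conn.executescript(SCHEMA)
    conn.executemany("INSERT INTO orders VALUES (?, ?, ?)", FIXTURES)
    return conn

def paid_revenue_cents(conn):
    # the query under test
    (total,) = conn.execute(
        "SELECT COALESCE(SUM(total_cents), 0) FROM orders WHERE status = 'paid'"
    ).fetchone()
    return total

conn = open_test_db()
print(paid_revenue_cents(conn))  # 3249
```

Each test gets a fresh in-memory database, so there's no shared state between tests and no cleanup step.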
Mocks do have their place, but people get confused about the place. The purpose of a test is to assert "This will never change". That is a very strong statement. With a mock you are asserting that you will always call that function with those parameters - this is rarely what you want.
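To make that concrete, here's a sketch using Python's unittest.mock; the gateway client and the charge function are hypothetical. The final assertion is exactly the kind of "this will never change" statement described above:

```python
from unittest import mock

# Hypothetical code under test: charges a customer via some payment gateway.
def charge(gateway, customer_id, amount_cents):
    return gateway.charge(customer_id, amount_cents, currency="USD")

gateway = mock.Mock()
charge(gateway, "cust_42", 1999)

# This assertion encodes "charge() will *always* be called exactly like this".
# Rename a parameter, add a retry, or pass currency positionally, and the
# test breaks - even though the behavior users care about hasn't changed.
gateway.charge.assert_called_once_with("cust_42", 1999, currency="USD")
```

The mock verifies the shape of a call, not the outcome of the call - which is usually the wrong thing to pin down.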
First, it places big constraints on your development, because you get stuck limiting yourself to the subset of SQL that is supported by both databases. That usually means trapping yourself in the 1990s, which, given everything that has been implemented in the major RDBMSes over the past 20-25 years, implies accepting an artificially low ceiling on the performance, and possibly the semantics, that you can get out of your data store.
Second, even if you're fine with sticking to a small subset of your database's features, you're still not guaranteed that the two DBMSes will behave the same way given the same input. This is particularly true if one of them is SQLite, which is dynamically typed.
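SQLite's dynamic typing is easy to demonstrate. The insert below would be a type error in Postgres, but SQLite accepts it, so a test running against SQLite can happily pass on data the real database would reject:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (n INTEGER)")

# Postgres would reject this with a type error; SQLite stores the string
# as-is, because column types are only "affinities", not constraints.
conn.execute("INSERT INTO t (n) VALUES ('not a number')")

(stored_type,) = conn.execute("SELECT typeof(n) FROM t").fetchone()
print(stored_type)  # text
```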
With the possible exception of key/value stores, databases generally exhibit such complex behavior that, if you're looking to conclusively verify that logic involving your database actually works, you probably shouldn't accept any substitutes.
I understand that sometimes the extensions are worth using, but good engineering means limiting those uses to the places where they actually make a significant difference.
Sticking to 1990s SQL gives you the ability to swap among 5 or 6 different databases, all of which have been around for a day or two, none of which have seen a whole lot of change in their relative strengths and weaknesses in a good long while. It's just not super likely that you'll find a compelling reason to switch among them, especially if the people who made the original choice did their homework first.
Keeping a well-isolated data layer, on the other hand, gives you a much wider universe of things you can attainably switch to - document stores, triplestores, graph DBs, Cassandra, Hadoop/Hive, stuff that hasn't been invented yet, etc. I've lived through my fair share of database technology migrations.
So far, every single one of them has been this kind, not a SQL -> SQL migration. So, while you're at it, might as well get your money's worth out of the SQL DB.
IMO the purpose of a test is to assert "this is behavior that will likely happen in production", not "this will never change". I try to use data that's as real as possible within resource constraints (e.g. slashing out boring data to get dump times down from 1 hr to 5 mins).
>If you copy the schema and import it into sqlite with just the data you need
omg no. I've had undiscovered bugs caused by using a different version of Postgres in test than the one in prod, never mind using an entirely different database.
Local Development is the dev's own machine. For performance reasons, and so we can work offline when required, we usually try not to touch any 'real' cloud-hosted components here; instead we run a local database and a blob storage emulator in Docker containers.
Development is a cloud-hosted environment where devs can deploy at any time, for example to test that some code they've written works with real PaaS components.
Test is where our CI pipeline automatically deploys code after every successful build, unit test run, and integration test run, so as well as testing the code, this stage also tests our database migrations and deployment. Despite hitting a real database, we usually manage to keep our integration tests fast.
QA is where our testers and pilot users can test new features. We only deploy stuff here when we have something worth showing - the point is to have a relatively stable environment to test in.
Staging isn't a 'real' environment for us - when we reach here, database migrations are executed against the production database, and a Staging version of the web app is deployed that runs against it. This means we can do smoke tests before the final deployment to Production.
As an aside, our web apps are all hosted on Azure, using their App Service PaaS. When we move from Staging to Production, what we are actually doing is just swapping the slots around, so we get zero-downtime deployments to Production.
As another aside, we generally don't run a replica of every cloud service for each environment, mainly for cost reasons. All the test/non-critical environments (Development, Test, QA) are hosted on a single instance of each cloud component: a single App Service with a slot for each environment, a single database with a separate schema for each environment, etc.
This takes a lot of discipline to make work, but in essence you need to make sure that any database changes are backwards compatible, since the Production web app must be able to run against the newly migrated database - so no columns or tables can be dropped, new columns must have default values, and so on.
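A sketch of what "additive only, with defaults" means in practice. SQLite is used here only because it's easy to run inline, and the users table is hypothetical; the same expand-only pattern applies to any RDBMS:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT NOT NULL)")
conn.execute("INSERT INTO users (name) VALUES ('alice')")  # written by old app

# The Staging-time migration: additive only, with a default, so that rows
# written by the still-running Production app remain valid.
conn.execute("ALTER TABLE users ADD COLUMN locale TEXT NOT NULL DEFAULT 'en'")

# Old Production code, which knows nothing about `locale`, keeps working:
conn.execute("INSERT INTO users (name) VALUES ('bob')")
rows = conn.execute("SELECT name, locale FROM users ORDER BY id").fetchall()
print(rows)  # [('alice', 'en'), ('bob', 'en')]
```

Dropping the old column, if it's ever safe to do, becomes a separate migration after every consumer has moved off it.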
Deploy your new service alongside and mirror traffic. Treat your production system as a living organism that you're running tests inside of. Assess the impact/risk of your changes and decide your test strategy. Make large changes in small increments. And most importantly monitor all your changes with metrics, do canary releases when you're rolling out the real deal.
Spend more time testing and validating in production, and less time building extensive and usually flaky e2e suites; focus instead on well-pointed smoke tests that you run in production.
Reduce the time it takes from writing code to seeing it in production to close to zero, and painfully waiting for thousands of unit, integration, and e2e tests - and getting signoff in n different and yet never-quite-like-production environments - suddenly seems like an ancient practice. "How do you know it works?" "The metrics, logs and traces tell me it works."
Of course, it's a living organism, you don't want to screw it up, but there are plenty of ways to isolate your services under test. It definitely isn't something you can just start doing overnight because you'll need tooling and infrastructure to make it happen.
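A well-pointed smoke test can be tiny. Here's a minimal Python sketch; the /healthz endpoint, the expected "ok" body, and the timeout are all assumptions to adapt to your own service:

```python
import urllib.request

# Hypothetical smoke test: one well-pointed check against the running service.
# The /healthz path and the "ok" body are assumptions - point this at
# whatever health or status endpoint your service actually exposes.
def smoke_check(base_url, timeout=5):
    with urllib.request.urlopen(f"{base_url}/healthz", timeout=timeout) as resp:
        body = resp.read().decode()
        return resp.status == 200 and "ok" in body
```

Run it against production right after a deploy (or continuously from a scheduler), and let metrics, logs and traces carry the rest of the verification load.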
One technique in particular that has worked well is to identify the small infrastructural problems that prevent me from writing integration tests that replicate real production/QA bugs, and then, when it's cost effective, to build out the infrastructure that lets me write a test mimicking those conditions.
E.g. take the Selenium tests that only run against Chrome and get them running on Firefox too, to catch those Firefox-only bugs.
This often allows me to uncover whole classes of bug before the code even leaves the testing environment, never mind hitting prod.