The idea is to use block-level copy-on-write (ZFS by default; LVM2 is an option too, though less powerful) to get thin clones really fast. It currently supports only Postgres. The engine is open source. Result: a single machine with a Postgres DB of any size, you pay 1x for storage and compute, and you can have dozens of independent DBs; provisioning takes a few seconds.
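To make the copy-on-write mechanism concrete, here is a minimal sketch in Go that just builds the two ZFS commands involved (the helper names are hypothetical; the real Database Lab Engine automates this, and LVM2 would use different commands):

```go
package main

import "fmt"

// A ZFS snapshot is instant and consumes no extra space; a clone of that
// snapshot is a writable dataset that shares all unchanged blocks with the
// original, so a multi-terabyte Postgres data directory "copies" in seconds.

// thinCloneCommands returns the zfs commands that would create a writable
// thin clone of the dataset holding PGDATA (dry-run: commands are returned,
// not executed).
func thinCloneCommands(dataset, snapName, cloneName string) []string {
	return []string{
		// Instant, space-free point-in-time snapshot of the data directory.
		fmt.Sprintf("zfs snapshot %s@%s", dataset, snapName),
		// Writable clone backed by the snapshot; only changed blocks cost space.
		fmt.Sprintf("zfs clone %s@%s %s", dataset, snapName, cloneName),
	}
}

func main() {
	// Hypothetical dataset and clone names for one CI run.
	for _, cmd := range thinCloneCommands("dbpool/pgdata", "ci-run-42", "dbpool/clone-42") {
		fmt.Println(cmd)
	}
}
```

Each clone then gets its own Postgres instance pointed at the cloned dataset, which is why dozens of independent DBs fit on one machine's storage budget.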
This is used in a CI/CD context, so getting a temporary full-size database for any test is not a problem any more.
We are constantly adding new features and look forward to new use cases.
Some case studies with numbers are here: https://postgres.ai/resources
Here is how GitLab uses it for DB testing: https://docs.gitlab.com/ee/architecture/blueprints/database_...
(disclaimer: I'm the founder of Postgres.ai, the company that develops Database Lab).
We use Buildkite for CI/CD. It runs the same docker compose stuff as we do locally. It runs unit and acceptance tests. If green, it will auto deploy to our k8s staging cluster (though it worked similarly when it deployed to VMs). If staging is green, it can auto promote to a canary node in prod and, depending on team setup, will auto roll out or allow manual gating for any additional QA. Version numbers are auto bumped by some internal tooling tied into the build system.
When we push to staging, the staging env has DB instances, but they are usually 1-2 nodes, whereas in prod we will have more (3-30 nodes or more depending on the service). In staging, we run a suite of tests that exercise the end-user experience. We also regularly run a slimmed-down version of those tests in prod.
When it comes to unit tests, most of our teams do not mock the DB. We use Go, and we test with test fakes: a struct that satisfies the interface is created in the test file and passed around. These tests validate error handling, log generation, sometimes metric generation, and, lastly, the happy path.
TL;DR - docker compose and scripts.
IAC defines the database hardware + setup
Migrations make sure that the data you need to start with is in there and that any changes are captured and applied on each deployment
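The migration step above can be sketched as follows (hypothetical types; real setups typically use a tool like golang-migrate or Flyway, with applied versions recorded in a schema_migrations table rather than a map):

```go
package main

import "fmt"

// Migration is one versioned schema change.
type Migration struct {
	Version int
	SQL     string // in a real runner this statement is executed against the DB
}

// Apply runs, in order, every migration whose version has not yet been
// applied, records it, and returns the statements it "executed". Running
// it on every deployment makes the schema converge regardless of which
// version the environment started from.
func Apply(applied map[int]bool, migrations []Migration) []string {
	var ran []string
	for _, m := range migrations {
		if applied[m.Version] {
			continue // already applied on a previous deployment
		}
		ran = append(ran, m.SQL)
		applied[m.Version] = true
	}
	return ran
}

func main() {
	applied := map[int]bool{1: true} // version 1 ran on the last deploy
	migrations := []Migration{
		{1, "CREATE TABLE users (id serial PRIMARY KEY)"},
		{2, "ALTER TABLE users ADD COLUMN email text"},
	}
	for _, stmt := range Apply(applied, migrations) {
		fmt.Println(stmt) // only the version-2 statement runs this deploy
	}
}
```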