Yeah, I ran into very much the same issues. Setting up a database for unit tests can be even more of a bother though (especially for CI pipelines, where I don't want to have to run a full database), so I took a middle road and use my ORM to throw up a temporary in-memory SQLite database that is mostly similar to our actual database. Each unit test scaffolds its own database and deletes it when it's done.
That allows me to somewhat mock the database within unit tests, while still being able to test database-specific errors that might occur, like foreign key errors.
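In case it helps, the per-test setup looks roughly like this (an NUnit-style sketch; `AppDbContext` is a stand-in for our real context):

```csharp
using Microsoft.Data.Sqlite;
using Microsoft.EntityFrameworkCore;
using NUnit.Framework;

[TestFixture]
public class OrderTests
{
    private SqliteConnection _connection;
    private AppDbContext _db;   // stand-in for our real DbContext

    [SetUp]
    public void CreateDatabase()
    {
        // The in-memory database lives as long as this connection stays open.
        _connection = new SqliteConnection("Data Source=:memory:");
        _connection.Open();

        var options = new DbContextOptionsBuilder<AppDbContext>()
            .UseSqlite(_connection)
            .Options;

        _db = new AppDbContext(options);
        _db.Database.EnsureCreated();   // scaffold every table from the EF model
    }

    [TearDown]
    public void DropDatabase()
    {
        _db.Dispose();
        _connection.Dispose();          // closing the connection discards the in-memory DB
    }
}
```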
Why a separate DB for each test? Just use one DB, and have each test open a transaction and roll it back when it's done. That way you also get isolation between tests, so they don't interfere with each other.
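Roughly what I have in mind, as a sketch in your stack (NUnit-style; `TestDatabase` and `AppDbContext` are made-up names):

```csharp
using Microsoft.EntityFrameworkCore;
using Microsoft.EntityFrameworkCore.Storage;
using NUnit.Framework;

public abstract class TransactionalTestBase
{
    protected AppDbContext Db;              // made-up DbContext type
    private IDbContextTransaction _tx;

    [SetUp]
    public void BeginTransaction()
    {
        Db = TestDatabase.CreateContext();  // made-up factory for the one shared DB
        _tx = Db.Database.BeginTransaction();
    }

    [TearDown]
    public void RollBack()
    {
        _tx.Rollback();                     // throw away everything the test wrote
        _tx.Dispose();
        Db.Dispose();
    }
}
```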
The code being tested also uses transactions internally at times, so it'd mean additional complexity in the code being tested just to allow for unit testing, which is not great. In my experience, throwing up a database including all its tables in an in-memory SQLite DB is extremely fast, so it's not really a major concern.
OK, but is your production DB also SQLite? If not, I would not. I found the differences between it and PostgreSQL too big and was getting too many false positives.
Also, the code complexity of a few levels of nested transactions is an easy thing to abstract away with almost zero performance penalty -- depending on your programming language of choice.
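To illustrate, here's a stripped-down sketch of the "ambient transaction" idea in C#, since that seems to be your stack (a real version would also track whether inner scopes vote to commit):

```csharp
using System;
using System.Data.Common;

// Only the outermost scope actually begins/commits the transaction;
// nested scopes just reuse it, so the calling code never has to care how deep it is.
public sealed class TransactionScopeLite : IDisposable
{
    [ThreadStatic] private static DbTransaction _current;

    private readonly bool _isOutermost;
    private bool _completed;

    public TransactionScopeLite(DbConnection connection)
    {
        _isOutermost = _current == null;
        if (_isOutermost)
            _current = connection.BeginTransaction();
    }

    public DbTransaction Current => _current;

    public void Complete() => _completed = true;

    public void Dispose()
    {
        if (!_isOutermost) return;          // inner scopes defer to the outer one
        if (_completed) _current.Commit(); else _current.Rollback();
        _current.Dispose();
        _current = null;
    }
}
```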
The production DB is MSSQL, and we scaffold it in SQLite through EF Core. The resulting SQLite DB is close enough to our production DB that we are able to catch invalid defaults, missing foreign keys, etc. in unit tests instead of later on in our testing pipeline, which helps massively in accelerating our development. It could be even better if SQLite would actually tell you which foreign key constraint failed instead of its somewhat unhelpful 'foreign key constraint failed' error, but as it is we at least know something is wrong in our code.
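As a concrete example of the kind of failure this catches (this test would sit in the same fixture as my earlier snippet; `Order` and `Customer` are made-up entities):

```csharp
[Test]
public void Order_with_unknown_customer_is_rejected()
{
    // Points at a customer row that doesn't exist, so the FK is violated.
    _db.Orders.Add(new Order { CustomerId = 12345 });

    var ex = Assert.Throws<DbUpdateException>(() => _db.SaveChanges());

    // SQLite only tells us *that* a foreign key failed, not which one.
    Assert.That(ex.InnerException?.Message, Does.Contain("FOREIGN KEY constraint failed"));
}
```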
And sure, we could probably refactor it to use transactions to shave a few seconds off our test suite's runtime, but it'd add some additional mental complexity to our codebase. In general, I prefer to keep the actual running code as simple as possible and have any complexity that is required for unit testing be handled by the unit tests. By just recreating the database, the unit tests currently handle all of that complexity.
The idea is definitely not a bad one though, and if your scaffolds are big enough to actually cause performance issues in your unit tests, not recreating the database every time might well be worth considering.
Well, if you found a productive workflow then who am I to judge, right?
However, if I were hired onto your team tomorrow, you'd have to fight fiercely with me over this:
> And sure, we could probably refactor it to use transactions to shave a few seconds off our test suite's runtime, but it'd add some additional mental complexity to our codebase.
Various languages and frameworks demonstrate that abstracting this away behind convenient wrappers is more or less trivial, and definitely a solved problem. And the resulting code is simple; you are (mostly) none the wiser that you are actually running inside a temporary transaction that will ultimately be rolled back.
...Though that is extremely sensitive to which programming language and DB library you are using, of course.
Your approach works, so keep using it, but IMO it's a temporary one and it could stop being fit-for-purpose Soon™. You're already aware of its limitations yourself, so just keep an eye out for that moment and be decisive when the time comes to migrate away from it.
Could you give an example or two of languages/frameworks that have demonstrated abstracting the transaction blocks away? I'm not sure I'm following so I think this will help.
I lost the link to a Golang library that I liked very much, but here's the link to Elixir's Ecto: https://hexdocs.pm/ecto_sql/Ecto.Adapters.SQL.Sandbox.html (not the perfect resource, sorry, but I can't be bothered to look for a better one). The TL;DR is as above: it uses the same DB but opens parallel connections to it, and in each one your test runs inside a transaction that's ultimately rolled back.