I can clarify one point regarding migration rollbacks. We indeed chose not to implement down migrations as they exist in most other migration tools. Down migrations are useful in two scenarios: in development, when you are iterating on a migration or switching branches, and in deployment, when something goes wrong.
- In development, we think we already have a better solution. Migrate will tell you when there is a discrepancy between your migrations and the actual schema of your dev database, and offer to resolve it for you.
- In production, currently, we will diagnose the problem for you. But indeed, rollbacks are manual: you use `migrate resolve` to mark the migration as rolled back or forward, but the act of rolling back is manual. So I would say it _is_ supported, just not as convenient and automated as the rest of the workflows. Down migrations are somewhat rare in real production scenarios, and we are looking into better ways to help users recover from failed migrations.
No question that there are cases where down migrations are a solution that works. In my experience though, these cases are more limited than you might think. A lot of preconditions need to hold for a down migration to "just work":
- The migration is reversible in the first place: it did not drop or irreversibly alter anything (a table or a column) that the previous version of the application code was using (see the sketch after this list).
- The up migration ran to completion; it did not fail at some step in the middle.
- The down migration actually works. Are your down migrations tested?
- You have a small enough data set that the rollback will not take hours/bring down your application by locking tables.
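To make the first point concrete, here is a minimal sketch (using sqlite3 and a made-up `users` table purely for illustration) of an up migration whose down migration can restore the schema but never the data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
conn.execute("INSERT INTO users (email) VALUES ('alice@example.com')")

# Up migration: drop the column (DROP COLUMN needs SQLite 3.35+).
conn.execute("ALTER TABLE users DROP COLUMN email")

# Down migration: the *schema* is restored, but the data is gone for good.
conn.execute("ALTER TABLE users ADD COLUMN email TEXT")
print(conn.execute("SELECT email FROM users").fetchall())  # [(None,)]
```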
There are two major avenues for migration tools to be more helpful when a deployment fails:
- Give a short path to recovery that you can take without messing things up even more in a panic scenario
- Guide you towards patterns that make deploying and recovering from a bad deployment painless, e.g. forward-only thinking, the expand-and-contract pattern, etc.
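To illustrate the second point, here is a hedged sketch of expand-and-contract (sqlite3 and the `users` table are stand-ins; the real thing would be spread over several deploys):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, fullname TEXT)")
conn.execute("INSERT INTO users (fullname) VALUES ('Ada Lovelace')")

# Expand: add the new column alongside the old one and backfill it.
# Old application code keeps working untouched.
conn.execute("ALTER TABLE users ADD COLUMN name TEXT")
conn.execute("UPDATE users SET name = fullname")

# ...deploy code that writes both columns and reads the new one...

# Contract: drop the old column only once nothing reads it (SQLite 3.35+).
conn.execute("ALTER TABLE users DROP COLUMN fullname")
```

At every step the previous application version still runs against the current schema, so "recovery" from a bad deploy is just staying on the current phase instead of rolling the database back.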
We're looking into how we can best help in these areas. It could very well mean we'll have down migrations (we're hearing from users who want them, and we definitely want these concerns addressed).
So future features notwithstanding, is the typical Prisma workflow that if a migration fails during a production deploy, the developer has to manually work out how to fix it while the application is down?
As of now the tool is not strongly opinionated here: you could absolutely maintain a set of down migrations to recover from bad deployments next to your regular migrations, and apply the relevant one (manually, admittedly) in case of problems.
Sure, we do that before every release as well. But a rollback is a much less invasive, surgical fix compared to a full database restore: you're down while the new db instance spins up, and any writes in the meantime are lost.
You can also test migrations against a restored prod database snapshot, but again, there's no guarantee that some incompatible data hasn't been inserted in the meantime.
The first thing I looked at on your site was how migrations work, because honestly, I think that's one of the best things about Django. They just got it right, and as you say, not many other tools come close.
I wonder if you have looked at how it works, because they have put in something like a decade of work to get it right, and it's very powerful and a joy to use.
Down migrations are indeed very useful and important once you get used to them. First and foremost, they give you very strong confidence in changing your schema. The last time I told someone I was helping with Django to "always write the reverse migration" was yesterday.
There's no way you can automatically resolve the discrepancies you can get with branched development, partly because migrations can be used to migrate data, not just to update the schema. It's pretty simple as long as we're only talking about adding a few tables or renaming columns: you just hammer the schema into whatever format the migrations on that branch expect. But even that can go wrong: what if I introduced a NOT NULL constraint on a column in one of the branches and want to switch over? Say my migration set a default value to deal with it. Hammering won't help here (see the sketch below).
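As a concrete (hypothetical) sketch of that scenario, this is roughly the Django migration the branch would carry; the backfill logic lives in the migration, which is exactly what a pure schema diff cannot reconstruct:

```python
# A migration from one branch: it makes `nickname` NOT NULL by backfilling
# existing rows with a one-off default. A tool that only diffs schemas sees
# "this column must become NOT NULL" but knows nothing about the backfill,
# so hammering a database that contains NULLs into that state simply fails.
from django.db import migrations, models

class Migration(migrations.Migration):
    dependencies = [("accounts", "0007_previous")]

    operations = [
        migrations.AlterField(
            model_name="profile",
            name="nickname",
            # preserve_default=False: the default is used once to fill
            # existing rows and is not kept on the column afterwards.
            field=models.CharField(max_length=64, default="anonymous"),
            preserve_default=False,
        ),
    ]
```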
The thing is that doing it the way Django does is not that hard (assuming you want to write a migration engine anyway). Maybe you've already looked at it, but just for the record:
- they don't use SQL for the migration files, but python (would be Typescript in your case). This is what they generate.
- the python files contain the schema change operations encoded as python objects (e.g. `RenameField` when a field gets renamed and thus the column has to be renamed too, etc.).
- they generate the SQL to apply from these objects
Since the migration files themselves are built of python objects representing the needed changes, it's easy for them to have both a forward and a backward migration for each operation. Now you could say that this doesn't allow for customization, but they have two special operations. One is for running arbitrary SQL (called RunSQL, which takes two params: one string for the forward and one for the backward migration), and one is for arbitrary python code (called RunPython, which takes two functions as arguments: one for the forward and one for the backward migration).
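For readers who haven't seen one, a generated migration file looks roughly like this (app, table, and field names are made up); note how each operation carries both directions:

```python
from django.db import migrations, models

class Migration(migrations.Migration):
    dependencies = [("shop", "0004_previous")]

    operations = [
        # Reversible by construction: the backward direction is simply
        # "rename it back".
        migrations.RenameField(
            model_name="order", old_name="total", new_name="total_cents"
        ),
        # Escape hatch: arbitrary SQL, with the reverse spelled out by hand.
        migrations.RunSQL(
            sql="ALTER TABLE shop_order ADD CONSTRAINT total_positive CHECK (total_cents >= 0);",
            reverse_sql="ALTER TABLE shop_order DROP CONSTRAINT total_positive;",
        ),
    ]
```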
One would usually use RunSQL to do the tricky things that the migration tool can't (e.g. add db constraints not supported by the ORM) and RunPython to do data migrations (when you actually need to move data around due to a schema change). And thanks to the above architecture, you can actually use the ORM in the migration files to do these data migrations. Of course, you can't just import your models from your code, because they will have already evolved by the time you replay older migrations (e.g. to set up a new db or to run tests). But because the changes are encoded as python objects, they can be replayed in memory, and the state that was valid at the time of writing the migration can be reconstructed.
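A sketch of such a data migration (names are hypothetical): `apps.get_model` hands you the historical model reconstructed from that in-memory replay, not the current one from your code:

```python
from django.db import migrations

def split_name(apps, schema_editor):
    # Historical model, rebuilt by replaying the earlier migrations.
    Author = apps.get_model("library", "Author")
    for author in Author.objects.all():
        author.first_name, _, author.last_name = author.name.partition(" ")
        author.save()

def join_name(apps, schema_editor):
    # The backward direction, so even this data migration is reversible.
    Author = apps.get_model("library", "Author")
    for author in Author.objects.all():
        author.name = f"{author.first_name} {author.last_name}".strip()
        author.save()

class Migration(migrations.Migration):
    dependencies = [("library", "0009_add_split_name_columns")]

    operations = [migrations.RunPython(split_name, join_name)]
```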
And when you create a new migration after changing your model, you are actually comparing your model to the result of this in-memory replay, not to the db. Which is great for a number of reasons.
Yep, we looked at the Django ORM as an inspiration. I unfortunately don't have the bandwidth right now to write a lengthy, thoughtful response, but quickly, on a few points:
- The replaying of the migrations history is exactly what we do, but not in-memory, rather directly on a temporary (shadow) database. It's a tradeoff, but it lets us be a lot more accurate, since we know exactly what the migrations will do, rather than guessing from an abstract representation outside of the db (see the toy sketch after this list).
- I wrote a message on the why of no down migrations above. It's temporary — we want something at least as good, which may be just (optional) down migrations.
- The discrepancy resolution in our case is mainly about detecting that schemas don't match, and how they differ, rather than actually migrating them (we recommend hard resets + seeding in a lot of cases in development), so data is not as much of an issue.
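For what it's worth, a toy illustration of the shadow-database idea (not Prisma's actual implementation; sqlite3 stands in for the throwaway database): replay the migration files against a scratch database, then introspect it, so the known schema is exactly what the migrations produce rather than a guess from an abstract model:

```python
import sqlite3

migration_history = [  # in reality, read from the migrations directory
    "CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT);",
    "ALTER TABLE users ADD COLUMN email TEXT;",
]

shadow = sqlite3.connect(":memory:")  # the disposable "shadow" database
for sql in migration_history:
    shadow.executescript(sql)

# Introspect the replayed schema; diffing this against the dev database
# is what surfaces drift.
for (table_sql,) in shadow.execute(
    "SELECT sql FROM sqlite_master WHERE type = 'table'"
):
    print(table_sql)
```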
Well, of course I don't know about the internals, but having used Django migrations for a decade now (they started as a standalone solution called "South" back then), I haven't really run into any inaccuracies and can't really imagine how those could happen. As far as I can see, the main difference is that they store an intermediate format (one they can map to SQL unambiguously) while you immediately generate the SQL.
Django doesn't try (too hard) to validate your model against the actual DB schema. Because why would it? You either ran all the migrations and then it matches or you didn't and then you have to. (Unless you write your own migrations and screw them up. But that's rare and you can catch it with testing.) While your focus then seems to be to check if the schema (whatever is there in the db) matches the model definition. Based on my experience (as a user) this latter is not really something that I need help with.
Data is actually an issue in development, and hard resets + (re)seeding is pretty inconvenient compared to what Django provides. E.g. in my current project we're using a db snapshot that we pulled from production about two years ago (after thorough anonymization, of course). New dev environments are initialized from it and then migrated forward. That probably takes about half a minute to run, as opposed to the roughly 2 seconds it takes to migrate back 2-3 steps.
It makes a lot of sense. I have a fair amount of Rails experience with ActiveRecord, and it was also my impression that the database schema drifting in development is rarely a problem, but I now think that's a bit of a fuzzy feeling, and discrepancies definitely sneak in. The main sources of drift in development would be:
- switching branches, and more generally version control with collaborators
- iterating on or editing migrations
- manual fiddling with the database
One assumption with Prisma Migrate is that since we are an abstraction over the database, and support many of them, we'll never cover 100% of the features (e.g. check constraints, triggers, stored procedures, etc.), so we have to take the database as the source of truth and let users define what we don't represent in the Prisma Schema. On SQL databases, we let you write raw SQL migrations, for example, so you have full control if you need it.