Hacker News
Database Version Control with Liquibase (turtle-techies.com)
102 points by asafg6 3 months ago | 79 comments

It's frustrating watching this space.

The problem is so trivially solved that there's almost no product to build. But people want to build something to solve it, so they build a much more complicated solution that can be configured to solve any potential database migration scenario, regardless of whether anybody in history has ever needed to handle that scenario.

Here's how you actually version databases:

  alter table buns add column seed_count int
  update version_number set version = 31
Check that in to source control and you're 90% done.

Extra points for building the tool that spins through all the files in source control and executes the ones that come before the current version_number on the server in question. And for automatically creating a change_0031-0032.sql file for the next guy to use (with that update line pre-filled.)
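That spin-through-the-files tool really is small. A minimal sketch in Python against SQLite (the change_NNNN-MMMM.sql naming and the version_number table follow the scheme above; everything else here is made up for illustration):

```python
import re
from pathlib import Path

def migrate(conn, migrations_dir):
    """Run every change_NNNN-MMMM.sql whose target version is above
    the version the database currently reports."""
    conn.execute("CREATE TABLE IF NOT EXISTS version_number (version INTEGER)")
    row = conn.execute("SELECT version FROM version_number").fetchone()
    if row is None:
        conn.execute("INSERT INTO version_number VALUES (0)")
    current = row[0] if row else 0

    # change_0031-0032.sql -> target version 32; run in ascending order.
    def target(path):
        return int(re.search(r"-(\d+)$", path.stem).group(1))

    for script in sorted(Path(migrations_dir).glob("change_*.sql"), key=target):
        if target(script) > current:
            # The script itself carries the "update version_number" line,
            # per the scheme above; the runner only picks which files to run.
            conn.executescript(script.read_text())
            current = target(script)
    conn.commit()
    return current
```

The scripts record their own version in the database, so the runner's only job is deciding which files still need to run.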

You'll be hard pressed to spend an entire day building this, but sadly nobody does. Everybody downloads some overcomplicated monstrosity like the one in the article that describes changes in XML and whatever else. Then spends two days fiddling with config files to get it working.

I'd give up complaining about it if I didn't keep ending up working with shops that use stuff like this.

>> and you're 90% done.

Sure but as with just about any problem in this space, the first 80% is the easy part and the final 20% will occasionally just flat out kill you.

How do you handle versioning stored procedures and triggers? What if your dev envs don't use partitioning but your QA / prod envs do? How do you rollback? Perhaps you're using the expand-contract pattern to avoid the need for rollback, great, how do you manage the complexity of inflight (expanded) but not yet contracted changes?

For basic use cases you definitely don't need liquibase but there comes a point where something like flyway makes a difference without adding much complexity at all.

Certainly, but none of the things you mention seem all that scary.

> versioning stored procedures and triggers?

Procedures and the like get their own .sql files and live under source control like everything else. At build time, they all get blown away and rebuilt.
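That rebuild step is a short loop over the checked-in files. A sketch, using SQLite views as a stand-in for stored procedures (SQLite has no procedures, and the directory layout is hypothetical):

```python
from pathlib import Path

def rebuild_views(conn, views_dir):
    """Blow away every view, then recreate them all from the .sql files
    under source control, so those files stay the single source of truth."""
    for (name,) in conn.execute(
            "SELECT name FROM sqlite_master WHERE type = 'view'").fetchall():
        conn.execute(f'DROP VIEW "{name}"')
    for path in sorted(Path(views_dir).glob("*.sql")):
        conn.executescript(path.read_text())
    conn.commit()
```

Because the step is drop-everything-then-recreate, it is idempotent: running it twice leaves the same objects in place.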

> What if your dev envs don't use partitioning but your QA / prod envs do?

SQL still works though, right?

> How do you rollback?

Nobody has ever needed to rollback. It's the scary ghost story that convinces people to build big ugly database build systems.

In short, most shops just plain won't ever need any of the stuff they're paying for (in dev hours) to run these big tools.

Hopefully somebody will build this small thing as a tool, so that people can download it and therefore be "using a tool" and thus not running up against Invented Here Syndrome.

As someone who built (and sold) a company in this space, your solution works for the vast majority of relatively simple databases. However, lots of legacy/enterprise/custom apps have freakishly complicated schemas running on databases that have been so over-engineered as to create problems for themselves (cough Oracle *cough).

In practice, you end up having lots of complex restrictions - you have to change your schema definition slightly between dev and prod, because your dev box is linux and your prod machine is AIX and there's a bug in your dbms on AIX. You can't just drop stored procedures, because rebuilding them takes 30 minutes of downtime you can't allow for. Or you have security policies on data in prod that change the table definition.

In an ideal world you wouldn't need these things, but once you have a complicated enough system, you basically have to have a tool that does your database release management.

> Nobody has ever needed to rollback.

This is at the same time the truest and the wrongest sentiment.

True, because experience teaches that db rollbacks are exceedingly rare for production. The cost of maintaining them is larger than the costs of doing it by hand. Moreover, since the rollback path typically is much less tested and validated, you run significant risk of a blind rollback messing up your schema or data even further. With GP programming we typically choose to hide new functionality behind a feature flag and toggle it instead of relying on downgrading the software version.

Wrong, because there should be the ability to perform provably lossless schema and data transformations. A lossless transformation would by its nature be amenable to a safe rollback. Just like a unified patch(1) patch file is a lossless transformation of a file and thus can be reversed, a lossless database change could be. The database engine could even offer explicit support for checking, validating, and enforcing lossless (reversible) transformations, in a similar spirit to the foreign key constraints guarding against mistaken deletion of relational data. Come to think of it, it would be very similar to a long-running transaction that encompasses both data and schema change, and that has side-effects visible to other transactions before it is closed and has its metadata discarded.

In my opinion, the best place to implement the ability to rollback a schema change is in the database engine itself. In terms of how that would actually be implemented that's outside my wheelhouse. I think the ability to have some _standard_ method of rolling back schema changes across all the popular database engines would be significant.

Some engines implement some support for some things. Many DDL commands can be run in a transaction in SQL Server. I believe Oracle creates a savepoint of some kind after every commit and DDL commands typically imply a commit. But it varies across vendors and it's a lot to untangle.

In practice a lot of rollback scripts tend to make the assumption that the change script succeeded without issue. When the change script fails for an unknown reason often the rollback script will fail as well, in the worst case also having partially succeeded.

> Nobody has ever needed to rollback.

I did a rollback several times, but not a DB schema rollback. The potential to lose data is too big. I try to write code and migrations that support rolling back at least one deployment without rolling back the migration. Most of the time you need this anyway to support zero downtime deployments.

Writing and testing backward migrations is ... problematic.

“Nobody ever needed rollback” is a very naive statement. Nobody needs rollback, until they do. And the one time you need it, having it will make up for ALL of the time spent on getting it set up correctly.

Properly having rollback and other safeguards in place allows you to move faster and make mistakes with fewer consequences. Everything is a trade-off.

On the project I supported we did, but due to nonexistent database version control I had to do restores instead. Thankfully this was on Dev/Test envs, so things got ironed out before production releases. With no rollbacks, I cannot imagine running a full-fledged CI/CD pipeline on either of those envs.

This is what Liquibase does. It does some other stuff for you as well, quite simple ones, but time has proven (to me/us) that they're all useful in real-life scenarios:

- exclusive lock to prevent multiple instances running at the same time

- rollbacks

- forcing immutability

> Then spends two days fiddling with config files to get it working.

Look, dozens and dozens of developers have been using this for many years at our company without problems. If it takes two days to set it up: let it be.

And we're writing our migration scripts in SQL, not XML.

So, having used both, I've got to say, I had fewer problems with the homegrown solution.

Liquibase has its share of gotchas, especially in the SQL syntax. My sense is that most of the problems I experienced had to do with weird edge cases introduced by trying to handle some complex enterprisey scenario that I would rather deal with by just not letting things get that complicated in the first place. That exclusive lock mechanism, for example, isn't foolproof, and I suspect that it has bitten me more times than it's helped me.


Roll-your-own is not necessarily the easy option. If your goal is to get something robust enough to use in production, a day to implement is a very optimistic estimate. I'd say it's one to steer clear of unless you're sure you know what you're doing.

Which one is better depends, as usual, on your situation. I think my rubric would probably be: If the XML schema definition language is at all attractive to you, use Liquibase (or similar). That's far and away its most unique contribution. If, on the other hand, the knowledge that anyone would even consider defining their SQL schema in a language other than SQL fills you with holy rage, maybe choose an option that sits closer to the metal. This doesn't necessarily need to be a DIY project, but, thanks to Zawinski's Law, that might be the only way to ensure that your solution also stays close to the metal.

I like alembic because you label your migrations as nodes in a graph so that I can do work in my branch, but properly navigate backwards in the migration history to a common ancestor with my coworker so I can migrate forwards to their branch and pair with them on something.

I'm frustrated to see people make a bunch of new tools that ignore the lessons from old tools, but for me this has been solved for a while.

It looks like a simple problem until you have multiple developers/branches/supported DBs/application instances. Then you start to appreciate a tool that is more descriptive, can do rollbacks and auto upgrades at application start, populate a test database and so on.

Are those all things you've actually done? Or are you making a list of "potential database migration scenarios" like I described above?

As a datapoint, I've been a backend guy for coming up on 25 years now, and not only have I never needed to roll back a database migration, I'm not aware of that ever having needed to be done anywhere, even anecdotally. It just never happens in the reality we live in.

Yet all migration tools support it, require reverse migration scripts, etc. And you listed it in your "must have" list.

These are features I actually use liquibase for. That's why I know about them.

About rollbacks: yes, they are not always useful and you can think of ways to avoid them. But these days, when you can roll back a docker application upgrade with a few commands/clicks, this way you can also roll back the DB changes. Not all changes require this, but there are many ways a schema upgrade can go wrong. Datapoint: changing stored procedures/functions that must match the calling application's version. Yes, not everybody uses logic in the DB, but sometimes you need that performance.

Merging branches with migration files (which means you can't just use "30, 31, 32, etc.") is definitely something people do, all the time.

I've used a couple of these tools and have ended up using the ones that do less, rather than more. Our team has settled on Flyway for our Java projects and Entity Framework Migrations for C# work. These aren't as full featured as Liquibase but, in my opinion, are somewhat easier for members of the team to reason about.

Automatic rollbacks strike me as somewhat risky. My thinking is that if the migration (or upgrade) failed it was likely due to something that hadn't been thought about when the migration script was written. In that case it seems unlikely to me that another automated process (the rollback) will be any more successful.

I've built that. This works on small, well synchronized teams, on small databases. Once you have to merge a branch, you start fighting over version ids or renumbering like it's BASIC. And remember that any long-lived dev session is effectively a branch. Heaven forbid you have environmental conditionals to apply.

I've personally not spent too much time fighting any migration tool. Configuring Alembic from scratch, for instance, takes about 5 minutes for separate dev and production databases (MySQL/SQLite). Code-generation comes as a bonus. There's 0 chance I'd trade a day of developer time to build an in-house tool re-creating 90% of what Alembic does, with its large audience use-testing it daily, stability and documentation.

See my sibling comment about Django. You can spend a day to build this. And maybe a few hours later for small fixes for corner cases found in production. And then explain to your colleagues.

Or you can spend a few hours to learn a tool that has tutorials online (where you can point newcomers). When you move to another project, you can take your little tool with you, but you will need to start again teaching and convincing teammates.

There's a lot more here. The idea is to put the odds on your side in a possibly long and time-constrained migration. How do you handle a run that stops in the middle? How do you avoid rerunning a long process that finished correctly but exited in the step after? How do you revert the change to some previous point in time in case it is really, really needed? And all that in an environment you may have no access to, where you need to guide another person through steps defined in advance? How do you decide which errors are fatal and stop the process, and which just pass with a warning printed? Need to run some actions on every deployment? How do you log and verify which steps were run before? What process do you use when multiple teams need to deploy to different environments, and maybe different database types (different syntax, different rollback strategies)?

Even this is vastly over-engineered.

Just have a target schema definition somewhere, and use a schema diff tool to generate the changes you need on demand.

No need for version numbers at all. No need to run through change scripts to set up a database from scratch. Almost no need to manually write scripts.
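A toy version of that diff approach, using SQLite's introspection to emit only the ALTERs that are actually missing (additive changes only; drops are deliberately left to a human):

```python
def diff_columns(conn, table, target_columns):
    """Compare a normative (name, type) column list against the live
    table and return only the ALTER statements needed to catch up.
    Additive only: dropping stays a human decision, as data loss should."""
    existing = {row[1] for row in conn.execute(f"PRAGMA table_info({table})")}
    return [f"ALTER TABLE {table} ADD COLUMN {name} {coltype}"
            for name, coltype in target_columns if name not in existing]
```

Real diff tools also cover indexes, constraints, and type changes, but the principle is the same: compare the live catalog to the target and generate only the delta.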

Just got through consulting for a shop that thought a similar bespoke migration system was all they needed. Since the team was quite large and the CI/CD pipeline so slow, tons of time got wasted every sprint resolving the merge conflicts, getting rid of duplicate includes from auto-merging, etc.

I agree with the other replies - PLEASE don't do this. Use liquibase, flyway, or any other tool. They both let you write everything in SQL if that's your preference. The initial setup time will be saved later on if the team grows.

I'm not a fan of those DSLs either. We use liquibase with plain old SQL...we write in the dialect of the database. No rollbacks either - always forward.

One thing that is nice from liquibase is that you get a nice CLI that lets you do some fun stuff like update the database directly and then auto-generate a diff (your example above). See https://docs.liquibase.com/commands/home.html
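For reference, that flow looks roughly like this (command names per the Liquibase docs; connection flags elided):

```
# Snapshot the difference between two databases as a changelog file
liquibase diffChangeLog --changeLogFile=changes.xml

# Apply pending changesets, or roll the last one back
liquibase update
liquibase rollbackCount 1
```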

31? damn, I also used 31.

This is way better than random ids. You still have order-of-operation conflicts, but now your source control can tell you at the merge level.

Wouldn't be a viable solution for an old project I was working on: we had to support different SQL back ends.

Liquibase abstracts it away and allows you to use plain SQL where you need it. Your solution looks simple at first glance, but does not scale very well. For small projects your approach will work, but it's an instance of "not invented here".

> Then spends two days fiddling with config files to get it working.

Not really the case in the project I've used liquibase with.

Without out of order processing of migrations I would not want to use it. When you work on several branches you will at some point need out of order processing of migrations.

Also: what about not having version numbers (as those may clash between branches) but using migration creation datetimes to line them up?

Exactly. For a Python project the solution can be https://pypi.org/project/nomad/ and you will even save yourself that one day of development.

Help me understand what your rationale is. Are you saying that SQL + source control is good enough? Do you think it's a fool's errand to build tools like this which add additional abstraction layers for schema revision?

Flyway (https://flywaydb.org/getstarted/how) does pretty much that using plain SQL files.

I even think that this is what Django does automatically with its migrations, isn't it?

What about data migrations for destructive changes?

I hear you, but I'd like to give a shout-out to my favorite database deployment mechanism, which is admittedly SQL Server-only: SQL Server Database Projects in Visual Studio with DACPAC deployments. It is quite a different solution than just about anything out there. (TL;DR: there are no hand-written migrations.)

Basically, you have a project with all of your tables, stored procedures, etc. as separate .sql files per object, but they are always "CREATE" statements, never ALTER/DROP/etc. So your project defines a normative schema of how you want it to be. What makes this really interesting is that you can then "compile" your database schema, and you actually get "compiler errors" for things like invalid syntax. (I'm putting quotes around that because it's not 100% the same as a programming language compiler error, but close enough.) And you get warnings and static analysis, that helps you detect issues before you even deploy to your local database. My understanding of how this works during compilation is that it basically spins up a temporary database, deploys your fresh schema to it, and validates that it deploys successfully. It can even validate against specific server target versions for compatibility. Plus, you get IntelliSense since it knows about your full schema, making editing the .sql files easier.

Where it becomes very helpful is with changes. There are no migration scripts that you code by hand. Need to add a column? Just add it to the MyTable.sql file's CREATE TABLE statement. No ALTER necessary. You get nice git history showing the column was added to the table, without having to track back through migration history. But for new developers coming on your team, or when you want to just start your local database over again, you don't need to run through years of history of migrations. You just start with the current normative schema and deploy it as only CREATE statements. Of course, you can also add pre- and post-deployment scripts as well for seeding data or whatnot.

How this works when it comes time to deploy changes to an existing database is that it builds the temporary database of the normative schema, then does a schema comparison to the target database, and checks for any changes that would cause data loss such as dropping or dangerously altering columns. If no such changes are needed (i.e. if you're only adding a column, or changing a sproc), it can safely go ahead and generate and run that DDL script (i.e. ALTER TABLE ADD whatever, plus your pre- and post-deployment scripts). If you want to proceed allowing the data loss, you can do so if you feel it is safe. But you have that safety check to prevent screw-ups. And all of this is generated into a nice .dacpac file that you can pass around in your CD pipeline, or hand off to a DBA for deployment. You can also generate that script without running it directly on the target if you prefer, and hand that script off as a SQL file.

Also, I've found this is quite compatible with git branching, due to the normative-schema nature, whereas with migrations you can easily end up with conflicting migration order when i.e. two branches create migration #101. In the SQL Server Database Project case, if the changes are not conflicting in a git merge (i.e. columns added to two different tables), there's no problem and you don't have to touch any files at all. If they are compatible changes to the same file (i.e. two columns added to the same table), you can usually do an auto-merge and you'll likely end up with a valid schema. And in the rare case of a conflict, you can merge that however you see fit.

Is it perfect? No. Is it complex? Yes. But once you get used to how it works, I've almost never had to script out any DDL changes by hand (I often forget the ALTER syntax), I don't think I've ever had to roll back (rollbacks are usually as simple as re-deploying the older .dacpac file), and I've found so many issues due to compiler errors and static analysis that have saved a huge amount of effort. Not having something like this is the biggest thing I miss when I have to work with PostgreSQL or MySQL. If anyone knows of something similar for those databases, I'd love to know!
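For anyone curious what the deployment side looks like, the SqlPackage CLI drives both the script-generation and publish paths described above (parameter names per the SqlPackage documentation; server and database names are placeholders):

```
# Generate the DDL script for review/handoff instead of running it
SqlPackage /Action:Script /SourceFile:MyDb.dacpac \
    /TargetServerName:prod-sql /TargetDatabaseName:MyDb \
    /OutputPath:deploy.sql

# Or publish directly, with the data-loss safety check on
SqlPackage /Action:Publish /SourceFile:MyDb.dacpac \
    /TargetServerName:prod-sql /TargetDatabaseName:MyDb \
    /p:BlockOnPossibleDataLoss=true
```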

Although part of a bigger stack, this can be used independently for postgresql


Internally it uses sqitch and apgdiff.

The process is almost exactly like the one you described for your tool.

I recommend checking out Dbmate as an alternative to Liquibase. I've previously used Liquibase, Flyway, Django's built in migration tools, and others, but Dbmate is my favorite tool for migrations by far due to its simplicity.

It's completely language agnostic, super easy to embed into any deployment pipelines, and works using regular SQL (so you don't need to learn a new syntax).


FYI Liquibase supports SQL too. Not sure what you mean about "language agnostic" - Liquibase does have a lot of plugins and stuff in the Java world, and it's written in Java, but it doesn't require Java. The biggest Java-specific thing that I like is that you can install it as a project-specific dependency rather than a global CLI, which keeps it clean.

Thanks for this... It ticks all my boxes (timestamps, schema.sql file, plain SQL migrations, etc.), it is ActiveRecord inspired (and that was the migration tool I liked most), it seems to integrate really well with CI/CD, and I have never heard of it!

nice, similar to tern which I've been using lately after using flyway for years.


>doesn't support Oracle or MS-SQL

Not exactly a Liquibase alternative for anybody working in the enterprise space.

Looks like a solid basic implementation, along the same lines as Django migrations or Alembic in the Python world.

I always find these tools a bit lacking though. Over the long term, things that I find mattering are:

- Correctness of changes being made

- Support in getting to Zero-downtime migrations

- Support in CI around testing against the right schemas

- Squashing changes/garbage collecting after a given point

- Support for applying the same change to multiple production environments (where you can't rollback one because another failed).

These are all hard problems! Some are quite specific to certain kinds of application too. But companies always seem to end up building custom tooling for this sort of stuff. I'm sure there's a place for a tool that gets the right balance between support for these complex use-cases and defining a way of working.

Using a schema diff tool can help with a number of the problems you mention - you can test correctness, autogenerate changes, and because you can diff from current -> target without version numbers, no need to keep a chain of migration files hanging around.

That's true, it would solve some of these problems. I do think the ultimate solution needs an awareness of both the diff and the migration plan though, to be able to test for things like whether changes can be made without downtime in a particular order.

Unfortunately I suspect that the tool _also_ needs runtime instrumentation of the database in some cases to understand behaviour enough to address all of these concerns.

I realized only a while ago how very powerful Django is - when I was choosing technology for a new client's project.

Just for context - over the last 13 years, I developed professionally in PHP, Java, Scala and Python, together with some code in Go and Rust. I can patch C and C++ code. By saying professionally, I mean being paid to do it as a main job.

The integrated nature of Django (ORM, admin, migrations) is a killer, but very rare to see. Spring + Hibernate doesn't even come close. Play framework seems to go in this direction but misses a lot of stuff. Maybe it's matched in modern PHP or in Microsoft land but I don't have much experience there. You can collect all the components and tie them together, but then you're focusing on assembling the tooling when you could be focusing on the real problem. Being able to develop alone or in a very small team means real agility and low overhead. The time to start matters very much.

I know Django has its own pain points (for me, it's mostly performance). Each part of the stack has a better alternative as a standalone library. But I'm willing to sacrifice using the top tools in favor of using the top stack. Most of the web apps have a core product that's unique, but otherwise they all need to deal with database, authentication, authorization, CRUD admin interface etc.

>Maybe it's matched in modern PHP or in Microsoft land but I don't have much experience there

Yes, Laravel has something like you're describing: https://laravel.com/docs/5.3/migrations

Nice. Does it also have the mostly autogenerated admin component?

Django has a set of views where you can manage users and their permissions, plus any data of yours. You mostly just specify what classes and what fields are visible in the admin interface. There are extension points so you can override some behaviors with code. Eventually, you can replace parts with your own views.

The integrated nature is beautiful. You already have the ORM models. With a few lines of code, you now have a GUI for them.

It's been a while since I tried Django.

Does Django admin provide a way to model the schema in a GUI and create migrations automatically?

No GUI but the migrations are created automatically once you define the models.

Have a look at http://forestadmin.com. It's a fully GUI admin component. I found it when looking for Django alternatives when developing in Java.

Nope, still done through code, at least in the base framework.

I've been using it in production and it's quite good, just two things I've learned:

* XML is a lot better for configuration if you don't have very good YAML validation. Wrong indentation and such can be easy to miss

* Make sure your k8s probes work with long-running SQL operations.

Yes. If you're using XML with Liquibase, it's definitely worth using an editor that supports XML schemas for autocomplete and validation as you type.

I 100% agree. I started using liquibase a couple of weeks ago, and the autocomplete has saved my ass plenty of times already.

+1 on avoiding yaml, it becomes horrible to maintain when the files get big

If you're parsing it dynamically, I agree. If you're validating it against a static data structure after parsing it, it's not bad.

What's the benefit of Liquibase over Flyway? I've been using Flyway and find it pretty nice. I enjoy being able to write migration as properly named sql scripts and not dealing with XML.

I haven't used Flyway but the XML format has some benefits like being DB agnostic and defining context (e.g. production, testing, etc.)

You can also include multiple change sets in one xml file, this helps with keeping the number of files down.

If you want, you can just write pure SQL inside the XML with the <sql> tag.
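For illustration, a changeset wrapping plain SQL, with an explicit rollback (the <sql> and <rollback> elements are standard Liquibase; the table and author names are made up, echoing the example upthread):

```xml
<changeSet id="31" author="someone">
    <sql>alter table buns add column seed_count int</sql>
    <rollback>
        <sql>alter table buns drop column seed_count</sql>
    </rollback>
</changeSet>
```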

Flyway works fine for us too and is super simple to use, simple sql files... Nothing wrong with flyway.

Note: We only implemented rolling forward. Rolling back is hard. Especially for DROPping columns.

Btw https://github.com/amacneil/dbmate also seems cool.

https://sqitch.org/ is a good tool in this space as well -- it allows you to write your schemas in plain SQL files, manages verification, and ensures that you can commit your schema changes in any order and it will resolve dependencies for you.

Initially, I thought it was about data versions within a database.

However, this seems like the traditional database migrations which have been used since forever, doesn't it?

For our Ruby shop, we use Sequel Migrations[0]. For generic migrations, using pure SQL, MyBatis[1] migrations is fantastic.

[0]: https://github.com/jeremyevans/sequel/blob/master/doc/migrat...

[1]: https://mybatis.org/migrations/

I've been using Sqitch https://sqitch.org/ lately with really good results. It's also a good alternative to consider.

This page doesn't really demonstrate the product well. There are links to some other pages that show somewhat more complicated changes, but they still don't demonstrate any complex transformations or migrations. Maybe it can do these things; if so, and the author wants to sell people on its capabilities, they should be front and center.

XML is a non-starter for most people. I know this is a Java product, and Java developers have a perverse love affair with XML, but literally no one else uses it. The page mentions that changesets can be written in JSON and YAML, but there are no examples. XML is a horrid abomination and I have a viscerally negative reaction to seeing it. At least show something that doesn't cause PTSD to the viewer.

I've used Rails DB migrations (written in Ruby with ActiveRecord's DSL), Alembic (written in Python with SQLAlchemy's DSL), and roll-my-own pure SQL, and nothing beats SQL. You're working with a database, why not work with its language? Any migration of significant complexity requires the use of actual SQL, and trying to abstract that away just results in mixing languages and obscuring the change being made. It only takes a few lines of code to manage the versioning.

My organization is a big fan of Liquibase. We execute Liquibase using Jenkins. It has a lot of awesome features, such as the ability to re-execute changesets if a file's SHA signature has changed. You can also decide to re-run some SQL files on every deployment. For example, we run ANALYZE on our Postgres RDBMS at each deployment, just to ensure that as part of deployment the stats are updated for the query planner.

I too am a fan. The `updateTestingRollback` command is great for testing to ensure the complete schema is defined and no manual changes are needed. I run it before running the unit and integration tests. The `validate` command is also good, but doing the full update, rollback, and update again is much better at finding issues in a controlled environment. (i.e. don't run it in prod).

> updateTestingRollback tests rollback support by deploying all pending changesets to the database, executes a rollback sequentially for the equal number of changesets that were deployed, and then runs the update again deploying all changesets to the database. [0]

0: https://docs.liquibase.com/commands/community/updatetestingr...

Thanks, that is really great. I didn't know about that one.

I am convinced that https://github.com/golang-migrate/migrate is enough. We've used it for ~3 years now and if it is enough for a platform with hundreds of scaled containers, including migration from docker swarm to gcp kubernetes and managed sql then it should be enough for all kinds of cases.

Most databases have introspection capability, via sql. These tools should be using that. We shouldn't be making delta files, just check in schema defs along with normal code. The "migration" will consume the schema defs and generate a script that checks it against the db, and only execute schema commands that are needed. Most dbs can support drop/create of procs/triggers/other things quickly, some have alter support to just re-set them. Just never drop columns or tables, those require a human anyway, unless they're tiny. I wrote a tool that did this for mysql, and can be adapted for others as needed. Every branch could generate a database boot script easily, no crazy up down things all over the place and guessing on order.

"Just never drop columns or tables, those require a human anyway, unless they're tiny."

In a lot of common multi-tenant designs you could have thousands of schemas. You can't have a human doing this. Additionally, the hard part of schema versioning is really the data migration and rollback steps which you failed to mention.

If you need a GUI tool that creates schema migrations automatically as you change schema - try XgeneCloud [1].

The GUI database client generates up and down SQL files automatically when you click and change schema.

Currently, it supports MySQL, MSSQL, PostgreSQL, SQLite.

Here is a simple demo how this GUI works : https://youtu.be/IKvraKy0S90

[1] : https://github.com/xgenecloud/xgenecloud

(Full disclosure : I'm the creator)

Liquibase is great, been using it for about 6 years in production.

Liquibase also allows you to run Java-based changesets, not only XML or YAML, which is very good for more complex migration logic. But be careful not to use your DAOs or some service that might mutate in the future to support a schema change; instead, use pure SQL hardcoded into that changeset. Since changesets run in order, you can be sure of the current state of your schema.

Are there any similar, open-source libraries for database schema introspection and documentation? For example, I would like to provide a UI for users to document database tables and columns. At a higher level, it would also be useful to audit views to get an idea of which tables and columns are in high demand and might need indexes.

I've been using it for quite some time now and I love it. Very versatile command set, and I love that I can have the exact same database structure automatically with H2 for tests and postgres for production.

Relatively flat learning curve also

I have yet to work at a company where DBAs weren't vehemently opposed to using tooling such as Liquibase. Old habits die hard.

How does liquibase compare to flyway?

The main difference: in Liquibase you write migrations using XML and in Flyway you write migrations in SQL. At least that was the original differentiator, things might have changed in the last couple of years.

I've always preferred Flyway since SQL is the natural language to interact with your RDBMS. I would only consider writing migrations in XML if I had to support multiple database implementations, e.g. when you build a product like Jira or SonarQube that supports different databases (but think twice whether you want to go down that rabbit hole).

You can write migrations in SQL using Liquibase as well. Just use the <sql> tag and you can write your favorite dialect. You can also set the SQL dialect on a migration so you can support multiple DB engines.


I think the issue here is that the <sql> tag is embedded in a larger XML document. IMHO, XML is no fun to work with by hand.

Is it compatible with clickhouse?

Seems there is a spelling error within the first line?

"Introduction to managing DB shcema changes with Liquibase"

There is a typo in the subtitle :)
