
Rails migrations with no downtime - craigkerstiens
http://pedro.herokuapp.com/past/2011/7/13/rails_migrations_with_no_downtime/
======
morganpyne
There is another problem, unmentioned in this article, with his simple table
modification example: when you modify a table, the DB needs to rewrite the
entire table to disk, locking it the whole time. Depending on how much data is
in the table and how fast your I/O is, this can mean hours and hours of
downtime on the live database, for a migration that took only a few seconds
locally on your test database.

Running migrations on live databases with lots of data in them is hard. There
are many strategies to work around this problem, but generally running an
'alter table' on your primary db server on a huge table while it is in use
should be your very last option.

~~~
newhouseb
This is true, but note that the article was written in the context of
PostgreSQL instead of MySQL. PostgreSQL can do many types of table migrations
with no downtime (dropping columns, renaming, adding columns that default to
NULL, and creating indexes). Before I made the switch from MySQL to Postgres I
did some quick benchmarks which support these claims on a 5 million row test
table: <https://gist.github.com/1620133>.

~~~
morganpyne
Guilty as charged for immediately assuming that the problems of the MySQL
family apply to other databases. Thank you for pointing out that PostgreSQL
does not necessarily have all the same limitations.

------
dan_manges
If you're writing your code to work against multiple schemas, how do you test
it? Do you run your test suite twice, once with the pre-migration schema, and
again with the post-migration schema?

~~~
pedrobelo
For sure.

Most of the time migrations are developed in a feature branch, so the owner is
responsible for splitting his branch into two or three before sending pull
requests (as many as required to address issues with hot compatibility).

This means that we have two or more merges (and deploys) from master, each
requiring a full run of the test suite.

------
thommahoney
Andy Delcambre wrote a post two months ago about how Engine Yard handles
zero-downtime deploys with migrations. Link here:
<http://www.engineyard.com/blog/2011/zero-downtime-deploys-with-migrations/>

------
c3
This is old news, but we successfully use
<https://github.com/freels/table_migrator> in production (not on Heroku). It
creates a copy of the table, performs the schema changes, copies the data
over, then renames the tables for almost no downtime (i.e. 1-2 seconds).

~~~
morganpyne
How does this deal with foreign key references? I thought that if you rename a
table, the FK references from other tables will still point at the renamed
table? Discussion here:
<http://dev.mysql.com/doc/refman/5.0/en/rename-table.html>

The Percona pt-online-schema-change tool goes to great lengths to avoid this
kind of problem.

~~~
c3
Basically, it doesn't - because it doesn't need to. Rails/ActiveRecord doesn't
use those either. Problem solved :)

~~~
morganpyne
Interesting. As you have probably guessed by now, I'm not a Rails developer,
so I didn't know this. I was surprised to read up on this after your comment
and find that Active Record went the lowest-common-denominator route here,
giving up any native foreign key integrity support.

~~~
JonnieCache
There is nothing to stop you from using foreign key constraints with Rails; the
ActiveRecord migration API includes methods to create them for the major db
adapters, and there are plugins to automate the process to some extent. It's
not the Rails Way™ because it sacrifices some database agnosticism, and
therefore almost no one does it. People achieve the same behaviour with
application-level validations in the model.

I'd wager that a lot of the big professional rails deployments are doing FK
constraints though.
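
A toy sketch of the application-level approach (all names are hypothetical; a
real Rails model would use `validates` rather than a hand-rolled check):

```ruby
# Application-level referential check, the usual Rails substitute for a
# database foreign key constraint.
USERS = { 1 => "ann" }.freeze
posts = []

# The rule lives in Ruby, so anything that bypasses this method (raw SQL,
# another app sharing the database) can still insert orphaned rows --
# a DB-level foreign key would reject those writes too.
def create_post(posts, user_id)
  raise ArgumentError, "no such user" unless USERS.key?(user_id)
  posts << { user_id: user_id }
end

create_post(posts, 1)              # passes the presence check

orphan_rejected = begin
  create_post(posts, 99)           # no such user: rejected in Ruby
  false
rescue ArgumentError
  true
end
```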

~~~
jaylevitt
The gem 'foreigner' makes them painless, and supports Postgres, Oracle, and I
think MSSQL.

------
baghali
I use the Percona Toolkit's pt-online-schema-change
(<http://www.percona.com/doc/percona-toolkit/2.0/pt-online-schema-change.html>)
and wrote a custom gem for it a while ago.

------
lgierth
At SoundCloud we developed Large Hadron Migrator [1] to work around the
problem of locked tables during ALTER TABLE in MySQL. It creates a table with
the new schema, copies the data from the "old" table to the new one in chunks
(using INSERT IGNORE and triggers), and then switches the two tables. The
switch is not yet atomic, as there was a bug with RENAME TABLE.

[1] <https://github.com/soundcloud/large-hadron-migrator>

Edit: URL
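
A rough in-memory sketch of the chunked-copy-with-journal idea (names and
sizes are made up; real LHM uses MySQL triggers to capture writes that land
during the copy, and INSERT IGNORE to skip rows already copied):

```ruby
# Toy model of Large Hadron Migrator: copy in chunks while a "trigger"
# journal records concurrent writes, then replay the journal and swap.
old_table = (1..5).map { |i| { id: i, title: "post #{i}" } }
new_table = []
journal   = []   # stands in for the trigger-maintained changes table

# Chunked copy; partway through, a concurrent write arrives and is
# captured by the journal instead of being lost.
old_table.each_slice(2).with_index do |chunk, i|
  chunk.each { |row| new_table << row.merge(body: nil) }
  journal << { id: 6, title: "post 6" } if i == 0  # simulated live write
end

# Replay journaled writes, skipping ids already present (INSERT IGNORE).
journal.each do |row|
  next if new_table.any? { |r| r[:id] == row[:id] }
  new_table << row.merge(body: nil)
end

# Final step (not shown): swap the tables with RENAME TABLE -- the part
# that was not yet atomic in the LHM version discussed here.
```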

------
AznHisoka
Does it make sense to add extra dummy columns to your tables for the future?
That way, if you do need an ALTER to add a column, you can just reuse one of
those spare columns instead.

------
kmfrk
What's the Django equivalent of this?

~~~
woj
You pretty much have to do the same thing: release code that works with the
new models before migrating the db. Django tends to spell out all columns in
its SQL statements, so if you delete one from the db but fail to upgrade the
code first, you will run into a db exception.

I'd also recommend using South for migrations <http://south.aeracode.org/>

