This is a pretty normal way to handle things: double writes, active sync/migration, double reads, disable old writes, finish sync, disable old reads.
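The phased cutover described above can be sketched roughly as follows. This is a minimal illustration using plain dicts as stand-in stores; all class and phase names here are hypothetical, not from any particular system:

```python
from enum import Enum

class Phase(Enum):
    DOUBLE_WRITE = 1  # write to both stores, still read from the old one
    DOUBLE_READ = 2   # write to both, read from new with fallback to old
    NEW_ONLY = 3      # old writes disabled, sync finished, old reads disabled

class MigratingStore:
    """Routes reads and writes between an old and a new store by phase."""

    def __init__(self, old, new, phase=Phase.DOUBLE_WRITE):
        self.old, self.new, self.phase = old, new, phase

    def write(self, key, value):
        # Old store only receives writes until the cutover phase.
        if self.phase is not Phase.NEW_ONLY:
            self.old[key] = value
        self.new[key] = value

    def read(self, key):
        if self.phase is Phase.DOUBLE_WRITE:
            return self.old[key]
        if self.phase is Phase.DOUBLE_READ:
            # Fall back to the old store for records not yet migrated.
            return self.new.get(key, self.old.get(key))
        return self.new[key]
```

Between phases, a background job would copy any records that exist only in the old store; flipping the phase is then a config change rather than a deploy.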

Over the last 12 months we've migrated our entire system, which handles hundreds of millions of requests a day, through two different database systems. It just requires careful testing and good release management.

Also, it would be nice to have real numbers instead of "bajillions"... what does that even mean? It doesn't sound like much more than a few gigabytes of data, in which case this transition could take seconds using an in-memory system.
