The Android app has the server URL hardcoded to the DNS name of my instance. If the server goes down and I bring up a backup, all my users will still be sending data to the old, dead server (assuming Sandy takes out AWS in that region). I could update the app and give it a new server URL, but I'd lose many hours of valuable hurricane data while users take time to get the update, etc.
Does anyone have any thoughts on how I can handle this?
I realize my mistakes and know how to fix them for next time, and my current data is obviously backed up. But for incoming data...am I screwed?
Edit: I have an idea. I'll update the app to use a backup URL, but only if the main one is unresponsive. Then I'll publish the update and cross my fingers.
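The fallback idea above boils down to "try the primary, then the backup." A minimal sketch of that logic (the URLs, timeout, and helper names here are placeholders, not the app's real values):

```python
import urllib.request
import urllib.error

# Hypothetical endpoints -- the real primary/backup URLs would be baked
# into the app's configuration.
PRIMARY_URL = "https://primary.example.com/report"
BACKUP_URL = "https://backup.example.com/report"

def first_responsive(urls, is_responsive):
    """Return the first URL that passes the health check, else None."""
    for url in urls:
        if is_responsive(url):
            return url
    return None

def http_health_check(url, timeout=5):
    """A simple liveness check: does the server answer at all?"""
    try:
        urllib.request.urlopen(url, timeout=timeout)
        return True
    except (urllib.error.URLError, OSError):
        return False

# In the app, the upload path would pick its endpoint with something like:
# endpoint = first_responsive([PRIMARY_URL, BACKUP_URL], http_health_check)
```

One design note: checking responsiveness on every upload (rather than once at startup) means clients fail over automatically mid-storm, without waiting for a restart.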
The delay on the app side should only be 15-30 minutes. Use database replication (much like Heroku's Postgres follower system) to ensure no loss of data and you should be fine.
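If the data lives in Heroku Postgres, the follower setup mentioned above looks roughly like this (a sketch only: the plan name, app name, and the color-coded config var are illustrative, and your Heroku CLI version may use `addons:add` instead):

```shell
# Create a read-only follower that replicates from the primary database.
heroku addons:create heroku-postgresql:standard-0 --follow DATABASE_URL --app my-app

# If the primary is lost, promote the follower so DATABASE_URL points at it.
heroku pg:promote HEROKU_POSTGRESQL_SILVER_URL --app my-app
```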
Finally! Excited multi-AZ support is coming to Heroku.
This is a good summary of data centers at risk: http://readwrite.com/2012/10/29/hurricane-sandy-vs-the-inter...
RedisToGo - http://blog.togo.io/status/redistogo-hurricane-preparation/
MongoHQ - http://blog.mongohq.com/blog/2012/10/29/monitoring-the-weath...
Not only is the first floor 6 feet above the ground, but it is at the top of a gentle slope that naturally drains all water away to lower areas. The local water level would have to be about 20 feet for the first inch of water to push its way through the doors.
There is absolutely NO reason that any datacenter in VA should be having problems, aside from either shoddy facilities management or poor initial choice of the site. Don't let anyone tell you differently.
You know, except for hurricanes and terrorism, but whatever, it's got a fat pipe.
Availability zones in US-East-1 each comprise at least one data center, but may span multiple facilities in close geographic proximity.