

Heroku Sandy Situation Room - ujeezy
https://status.heroku.com/incidents/460

======
callmevlad
I really feel for the Amazon employees working in the us-east data center this
week. Not only do they have to worry about the safety of their families, but
half the internet will be watching them if anything goes down. I can't even
imagine all the contingencies they have to prepare for (e.g. engineer gets
paged, but has no power at home, or can't drive in due to flooded streets).
Best of luck to them.

------
cryptoz
I have pressureNET running on EC2 in US-East. It's collecting data about Sandy
from a bunch of Android users in the region, and I only realized a few hours
ago that...it's gonna get hit by Sandy. I'm not sure what to do.

The Android app has a hardcoded server URL pointing to the DNS name of my
instance. If the server goes down and I prepare a backup, all my users will
still be sending to the old, dead server (assuming Sandy takes out AWS in that
region). I could also update the app and give it a new server URL, but then
I'm going to lose many hours of valuable hurricane data as users take time to
get the update, etc.

Does anyone have any thoughts on how I can handle this?

I realize my mistakes and know how to fix them for next time, and my current
data is obviously backed up. But for incoming data...am I screwed?

Edit: I have an idea. I'll update the app and give it a backup URL to use only
if the main one is unresponsive. Then I'll publish the update and cross my
fingers.
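
To sketch what I mean (host names are made up, not the actual pressureNET
code), the app-side fallback could look something like this in Kotlin:

    import java.net.HttpURLConnection
    import java.net.URL

    // Hypothetical endpoints: the primary is the existing us-east instance,
    // the backup is a standby in another region.
    val endpoints = listOf(
        "https://primary.example.com/readings",
        "https://backup-us-west.example.com/readings"
    )

    // Try the primary first; if it times out or errors, fall back to the
    // backup so readings keep flowing even if the us-east server is down.
    fun postReading(payload: ByteArray): Boolean {
        for (endpoint in endpoints) {
            try {
                val conn = URL(endpoint).openConnection() as HttpURLConnection
                conn.requestMethod = "POST"
                conn.connectTimeout = 5_000   // fail fast on an unreachable host
                conn.readTimeout = 10_000
                conn.doOutput = true
                conn.outputStream.use { it.write(payload) }
                if (conn.responseCode in 200..299) return true
            } catch (e: Exception) {
                // Connection refused / timed out: try the next endpoint.
            }
        }
        return false // both failed; the app can queue the reading and retry later
    }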

~~~
timdorr
Edit: Rewrote my entire comment...

The delay on the app side should only be 15-30 minutes. Use database
replication (much like the Postgres follower system on Heroku) to ensure no
loss of data, and you should be fine.

------
jbredeche
"We recommend customers with production databases (Crane and up) create a
follower running in us-west-2 using the --region flag (this is an alpha
feature we are exposing ahead of schedule to prepare for this incident)"

Finally! Excited that multi-AZ support is coming to Heroku.

~~~
bgentry
Regions are composed of multiple Availability Zones. Heroku has always been
running in multiple AZs.

~~~
jbredeche
Thanks for the clarification. Meant to say regions.

------
btilly
Those on EC2 might want to read <http://alestic.com/2010/10/ec2-ami-copy> for
their own disaster preparations. (Though ideally you should have been on it
before now...)

------
mmahemoff
I was hoping <http://blog.linode.com/> would mention risk assessment/strategy
for their Newark data center.

This is a good summary of data centers at risk:
<http://readwrite.com/2012/10/29/hurricane-sandy-vs-the-internet-in-the-path-of-frankenstorm>

~~~
JshWright
You were hoping for transparency from Linode? I've been a long-time Linode
customer (and don't have any plans to change that any time soon), but Linode
has never been great about transparency, and that has only gotten worse over
the past few years...

------
benarent
A few Heroku add-ons also have high-availability options, along with a new
add-on status page: <http://status.addons.heroku.com/>

RedisToGo - <http://blog.togo.io/status/redistogo-hurricane-preparation/>

MongoHQ - <http://blog.mongohq.com/blog/2012/10/29/monitoring-the-weather-situation-with-hurricane-sandy/>

------
xbryanx
Why aren't there more data centers in the solid craton part of the continent,
far from these inevitable coastal problems? Genuinely curious as to why
Minnesota or somewhere isn't the datacenter capital of the US.

~~~
patrickgzill
The datacenter where most of my servers are is not too far from Philly, but I
have 0 worries about it. Why? It was properly designed and maintained from the
start.

Not only is the first floor 6 feet above the ground, but it is at the top of a
gentle slope that naturally drains all water away to lower areas. The water
level locally would have to be about 20 feet for the first inch of water to
push its way through the doors.

There is absolutely NO reason that any datacenter in VA should be having
problems, aside from either shoddy facilities management or poor initial
choice of the site. Don't let anyone tell you differently.

------
dholowiski
I never thought of that. They don't just have AWS availability to worry about;
they have the availability of 79 external service providers, totally out of
their hands, to worry about.

~~~
samratjp
It wouldn't matter as much. If your primary app and workers on Heroku are down
(which presumably are the ones using the add-ons), none of the end users are
going to notice the add-ons being down.

~~~
dholowiski
On the other hand, Heroku could do everything in its power to keep its main
system up, but the failure of a heavily used add-on, totally out of their
control, could cause nearly as much of a problem for them.

------
patrickgzill
So we are complimenting companies for not having redundant setups but telling
us about it, when they have been claiming all along that they have redundant
setups?

~~~
patrickgzill
It is funny to see the points on the parent comment oscillate, going from 2
or 3 down to its current zero.

------
herge
Is US-east the least reliable of all AWS data centres? Arguably, Virginia is
more susceptible to natural disasters than Oregon or Northern California
(while we wait for that 1-in-100-year earthquake).

~~~
bradleyland
I prefer Dallas (Colo4, specifically). They've been incredible. Only one
significant outage in 7 years. They're nearly halfway between the coasts too,
so they're a great latency compromise if you're stuck with a single DC.

~~~
callmevlad
Rackspace also has a big data center in DFW. I only remember one or two
outages in the last several years.

------
JakeSc
It's great that Heroku is doing this. It's nice to know that your cloud
provider is aware of and preparing for disasters like this.

~~~
dholowiski
Heroku has had enough really high-profile failures due to their reliance on
AWS in the past that I would expect nothing less. If they haven't learned from
Amazon's mistakes yet, then there is something wrong.

