No, it doesn't. It underscores the vulnerability of not understanding your hosting and of accepting the "no outages" slogans of ANY cloud. A single data centre is always susceptible to outages like this; it doesn't matter who owns it. If any of those sites had owned a single data centre that was hit by storm damage, the impact would have been the same. I know this is supposed to be the year of the cloud backlash, but even so...
Either way, that quote is ridiculous.
I was able to finish out the episode, so their CDN was working for the actual media, but everything else was dead for me.
Another useless anecdote: a coworker was watching on his Xbox, and it apparently cut out mid-stream for him.
Is Heroku resilient against a single-AZ failure (so only some subset of customers goes down, and then service restarts), or is it exposed such that if any AZ goes down, core infrastructure goes down too? The sites I care about on Heroku seem to go down whenever any US-East badness happens at all, even when Amazon says it's "limited to a single AZ".
Other AWS services also see frequent, heavy use. AWS use is strongly encouraged for any new projects.
> "I also wanted to clarify that Route 53 is an Amazon-built and operated service. It is not a re-branding of a third party DNS service. Over time you'll see various parts of Amazon move over to use Route 53."
I run applications on EC2 and RDS, using Oracle. AWS recently introduced Multi-AZ Oracle, but I haven't enabled it yet. Before it was available, though, I set up a poor man's procedure that consists of running data exports and dropping them on S3.
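For the curious, the backup side is nothing fancy. Here's roughly the shape of it; the bucket name, credentials, and schema are all placeholders, and I'm sketching it with boto3 for illustration rather than pasting my actual script:

    # Nightly Oracle export shipped to S3. All names here are
    # placeholders; adjust the connection string, schema, and bucket.
    import datetime
    import subprocess

    import boto3  # illustrative; any S3 client works the same way

    BUCKET = "my-db-backups"   # hypothetical bucket
    DUMP_FILE = "nightly.dmp"

    # 1. Classic Oracle export runs client-side and writes the dump
    #    file locally, which is what makes this trick work with RDS.
    subprocess.check_call([
        "exp", "admin/secret@mydb",
        "FILE=" + DUMP_FILE, "OWNER=APP",
    ])

    # 2. Ship it to S3 under a dated key so older exports are kept.
    key = "oracle/%s/%s" % (datetime.date.today().isoformat(), DUMP_FILE)
    boto3.client("s3").upload_file(DUMP_FILE, BUCKET, key)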
Now, when everything went to hell in the east, I lost an RDS instance. I couldn't do a point-in-time restore, and I couldn't snapshot (both have been pending since 7 AM or so).
Luckily, I was able to spin up an RDS instance in the west, pull down the latest data from S3, and do an import. I repointed my apps at the new database, and now I'm back up.
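The restore is the same thing in reverse. Again, a rough sketch with placeholder names rather than my actual script:

    # Restore sketch: pull the newest dump out of S3 and import it
    # into the replacement us-west RDS instance. Names are placeholders.
    import subprocess

    import boto3

    BUCKET = "my-db-backups"  # same hypothetical bucket as above

    s3 = boto3.client("s3")

    # Keys are date-prefixed, so the lexicographically greatest key
    # is the most recent export.
    objs = s3.list_objects_v2(Bucket=BUCKET, Prefix="oracle/")["Contents"]
    latest = max(o["Key"] for o in objs)
    s3.download_file(BUCKET, latest, "restore.dmp")

    # Classic import into the fresh west-coast instance, then repoint
    # the apps at the new endpoint.
    subprocess.check_call([
        "imp", "admin/secret@mydb-west",
        "FILE=restore.dmp", "FULL=y",
    ])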
The whole process took about 45 minutes. Setting up the backup scripts took about 20 minutes, two years ago. Now I'm just sitting on my hands waiting for the AWS ops team to fix everything, work I'd normally be scrambling to do myself. I'm quite happy to let those talented folks deal with it. When it's all back up and running, I'll check integrity and consistency, and I might have to restore some interim data, but for now I'm operational.
I'm sure there are worse scenarios, but both last year's major outage and the one in the past 24 hours were quite easily mitigated.
There's something to be said for being part of a giant machine. AWS really is utility computing, so even the small guys get the benefit by virtue of standing next to the big guys.
This already works well for email providers.