Edit: One more point. In the SLA, you'll find the following: “Region Unavailable” and “Region Unavailability” means that more than one Availability Zone in which you are running an instance, within the same Region, is “Unavailable” to you. The implication is that if you don't spread across multiple Availability Zones, you aren't guaranteed 99.95% uptime at all. Spreading across AZs should still reduce your downtime; the guarantee just doesn't extend beyond that 99.95%.
"Availability Zones are distinct locations that are engineered to be insulated from failures in other Availability Zones and provide inexpensive, low latency network connectivity to other Availability Zones in the same Region. By launching instances in separate Availability Zones, you can protect your applications from failure of a single location."
That's the spec that everyone was building to, but that isn't what is happening. Of course you're right, multiple AZs can fail at the same time, but I read the above as saying that they should fail independently/coincidentally (until the entire Region fails).
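If that independence reading is right, the payoff from spreading across AZs is easy to quantify. A quick sketch - the per-AZ availability figure here is made up, since Amazon publishes no such number:

```python
# Sketch: combined availability if AZs really do fail independently.
# The 99.5% per-AZ figure is a made-up illustration; Amazon publishes
# no per-AZ availability number.
p_az_down = 1 - 0.995  # hypothetical chance a single AZ is down

for n in (1, 2, 3):
    availability = 1 - p_az_down ** n  # all n AZs must be down at once
    print(f"{n} AZ(s): {availability:.5%} available")
# 1 AZ(s): 99.50000% available
# 2 AZ(s): 99.99750% available
# 3 AZ(s): 99.99999% available
```

The whole question, of course, is whether the independence assumption actually holds - which is exactly what this outage calls into doubt.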
Even if Amazon breach their SLA, I think they only have to refund 10% of one month's bill per year - i.e. roughly a 1% annual discount. I suspect they'd make a good profit even if they paid out a full 10% refund every month.
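Back-of-envelope, with a made-up $1,000/month bill:

```python
# What a 10%-of-one-month SLA credit means over a year.
# The $1,000/month bill is a made-up illustration.
monthly_bill = 1000.0
annual_bill = 12 * monthly_bill

max_credit = 0.10 * monthly_bill            # 10% of one month's bill
print(f"Max SLA credit: ${max_credit:.2f}")
print(f"As a share of annual spend: {max_credit / annual_bill:.2%}")  # 0.83%
```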
Unless an SLA is accelerated - i.e. >100% refund - I don't think it's worth taking particularly seriously.
Of course if an SLA only guarantees 95% uptime, that's probably a big hint to design for failure!
It's like the hard disk maker that gives you a 1 year warranty vs a 5 year warranty... which one believes in their product more? :)
Suppose it's the same hard disk with a black sticker instead of a blue sticker. Drive with 1-yr warranty @ $100, 5-yr warranty @ $150, 20% additional failure rate over the extra 4 years, 50% redemption rate on failed drives. Expected warranty cost per drive = 20% * 50% * ($100 + $30 processing costs) = $13, so the extra $50 premium nets $37 profit.
Totally fictitious numbers to try to prove my point, of course :-) But as the SLA becomes increasingly low in value, the signalling value decreases in my book.
(Edit - fixed my math!)
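Spelling that arithmetic out (same fictitious numbers as above - a sketch, nothing more):

```python
# Re-running the parent comment's fictitious warranty numbers.
price_1yr = 100.0          # drive with 1-year warranty
price_5yr = 150.0          # same drive with 5-year warranty
extra_failure_rate = 0.20  # additional failures over the extra 4 years
redemption_rate = 0.50     # share of failed drives actually RMA'd
processing_cost = 30.0     # handling cost per replacement

expected_cost = extra_failure_rate * redemption_rate * (price_1yr + processing_cost)
profit = (price_5yr - price_1yr) - expected_cost

print(f"Expected warranty cost per drive: ${expected_cost:.2f}")  # $13.00
print(f"Profit on the $50 premium:        ${profit:.2f}")         # $37.00
```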
One of them may be planning to be out of business, sell the HD business unit in 2 years, shove off the risk via financial wizardry, etc.
My guess is the great majority of users will not RMA a dead hard drive after 4.5 years regardless of the stated warranty. Even if they did, it would only mean replacement with whatever the smallest-capacity drive on the market is by then.
While agreeing it's not about the money it's about my site being up, I nevertheless was pretty shocked by this statement.
As far as I know we've heard nothing to the contrary from Amazon - it's totally possible that multiple AZs happened to fail independently/coincidentally. Perhaps it was simultaneous equipment failure? Or maybe one AZ failed and a sufficient number of people attempted to "fail over" to another AZ causing a chain reaction of failure?
The one bit of information we have suggests that the root cause was a networking issue, which hints at a shared single point of failure (SPOF).
Now if you understand the SLA and still choose not to do cross-region deployments, then you've made a cost/complexity vs. uptime trade-off, which may well be right for you. quora.com is probably OK - who cares if it's down for a day?
So anything that is beyond commercially reasonable is outside the SLA.
In truth, as with all businesses, the reputation for uptime weighs more heavily than the written contract. It will be interesting to see how the AWS people attempt to make amends.
It's kind of unfair to describe these as "weasel words" when it's unlikely that any decent lawyer would let them sign up to something that exposes them to more liability than this. Customers who are using any cloud service provider have to expect reasonable steps to maintain availability, not an absolute promise.
I've argued vigorously (in previous comments) for using cloud servers like EC2 over dedicated hosting like SoftLayer. I'm less sure about that now. The issue is that EC2 is still beholden to the traditional points of failure (power, cooling, network issues), but it has the additional problem of Amazon's management software. I don't want to sound too down on Amazon's ability to make good software. However, Amazon's status site shows that EBS and EC2 also had issues on March 17th, for about 2.5 hours each (at different times), and Reddit has also just been experiencing trouble on EC2/EBS. I don't want this to sound like "Amazon is unreliable", but it does seem more hiccup-y.
The question I'm left with is what one gains from the management software Amazon is introducing. Well, one can launch a new box in minutes rather than a couple of hours; one can dynamically expand a storage volume rather than dealing with the size of physical disks; one can template a server so that you don't have to set it up from scratch when you want a new one. But if you're a site with 5 boxes, does that help you much? SoftLayer's pricing is competitive against EC2's 1-year reserved instances, and SoftLayer throws in several TB of bandwidth and persistent storage. Even if you have to over-buy on storage because you can't just dynamically expand volumes, it's still competitively priced. And if you're only running 5 boxes - say, 3 app servers and a database replicated across two boxes - server templates aren't much help either.
I'm still a huge fan of S3. Building a replicated storage system is a pain until you need to store huge volumes of assets. Likewise, if you need 50 boxes for 24 hours at a time, EC2 is awesome. I'm less smitten with it for general purpose web app hosting where the fancy footwork done to make it possible to launch 100 boxes for a short time doesn't really help you if you're looking to just have 5 instances keep running all the time.
Maybe it's just bad timing that I suggested we look at Amazon's new live streaming and a day later EC2 is suffering a half-day outage.
One fallacy that I think many people commit in the whole cloud debate is the idea that a given cloud provider is any more or less failure-prone than a given dedicated server host.
We have assets on Amazon, Slicehost, and Linode. Sometimes these go down - whether it's our fault, software's fault, hardware's fault, or a construction crew hitting a fiber drop, things happen. If you're not backed up in a fully tested way on not just another server or availability zone, but a whole different hosting infrastructure (preferably in a different time zone), then you're not really backed up. Being on a host like Amazon, or even on a fully managed host like a Cadillac Rackspace plan, doesn't remove the need for good business continuity planning (BCP).
What these cloud services allow you to do, in theory, is have that backup infrastructure ready to go on relatively short notice _without_ keeping it running all the time. We can't reasonably afford to replicate all of our servers and hot data to the Western Region or the Rackspace cloud 24/7. We can, however, afford to set up the infrastructure and spin it up on the fly within an hour with slightly stale data - once a month to test it, and whenever things break. Requisitioning that kind of hardware and then dumping it for only a few tens of dollars a month is difficult if not impossible on a virtual host.
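As a concrete sketch of that spin-up step - using today's boto3 API, with a hypothetical pre-baked AMI, key pair, and security group:

```python
# Sketch: bring up standby capacity in another region from a pre-baked AMI.
# Assumes boto3 credentials are configured; the AMI ID, key pair, and
# security group are hypothetical placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-west-1")  # the backup region

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # hypothetical pre-baked app image
    InstanceType="m5.large",
    MinCount=3,
    MaxCount=3,
    KeyName="dr-keypair",                       # hypothetical
    SecurityGroupIds=["sg-0123456789abcdef0"],  # hypothetical
)

for inst in response["Instances"]:
    print(inst["InstanceId"], inst["State"]["Name"])
```

The instances bill only while they run - which is the economic point being made above.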
The big question is not "Is the cloud more reliable?", but "Do I need what only the cloud can offer?". If your current infrastructure can handle getting drudged or reddited fine, and you're only on a few servers, you're probably better off just paying to keep a hot spare up at SoftLayer.
On the other hand if you have 1) Occasional traffic bursting that you don't want to pay to handle most days and 2) Can accept a few minutes of downtime, then the solutions offered by cloud hosts blow the competition out of the water. I guess what you're gaining is not the management software, it's the ability to turn off & on quickly when something goes wrong (or, in the case of a redditing, right).
Part of figuring out the right hosting solution involves asking the right questions.
(...and for reference, we were all ready to go with a backup... and then we learned that our hosting company was storing our nightlies on S3 and couldn't retrieve them, and that our offsite DB solution was having an unrelated issue). Had we run proper tests (I'm brand new to the job), we would've been ready for this one. I also worry big time about DNS and load balancing being a big SPOF, but that's a plan for another day.
If an AWS data center goes down it gets a lot of press, but does it actually outweigh the sum of all dedicated/shared/vps hosting issues on the equivalent volume?
I can order machines online and SSH in 3-4 hours later. Even exotic stuff they turn around just as fast - we saw that speed on a quad octo-core box with a RAID 10 of Intel SSDs.
That's real metal too, with real I/O (most of my work is I/O-bound, so VMs and the cloud are not options). You get to pick the exact CPUs, disks, etc., and they slot them into solid Super Micro boards and use good Adaptec disk controllers. You pay monthly and can spin down the box at any time (though you must pay for full months - no per-minute pricing like AWS).
That's the dedicated hardware side; you can also spin up compute instances, and those can be cloned and fired up in bulk. But they also have the I/O problems that all other VMs have.
In any case, just wanted to mention they are a decent middle ground. Not as automated and polished as Amazon on the VM side but you can spin up mixtures of metal and VMs to get combinations that make sense - pushing compute or RAM-only stuff to VMs and keeping DBs and persistence layers on real metal. They have a few different datacenters too so you can spread gear around physical locations.
Problems are just not as common if you're running on a handful of dedicated machines, and a single dedicated machine at a good host can handle a LOT without all the crazy reliability engineering that running on AWS requires. You need backups, but you don't need the standing assumption that you must be able to fail over instantly or face guaranteed downtime sometime soon. I don't think that difference can be overstated, since it lets you focus on more important things.
Or is it a better option when you are starting up, and want to be able to quickly throw hardware at a problem, should the need arise?
Apologies if this sounds like a pretty ignorant question, but I haven't implemented cloud-based services before. It seems like there's a hardware-cost vs. people-cost trade-off due to the newer nature of AWS and the like, and that needs to be factored into development/maintenance time.
Saving people time by relying on a known quantity like arrays of Linux servers with failure tolerance seems preferable.
"I've argued vigorously (in previous comments) for using cloud servers like EC2 over dedicated hosting like SoftLayer. I'm less sure about that now."
An issue at Amazon, or Rackspace, or Linode, or Slicehost need not imply failure at other providers and cloud as an alternative to dedicated is still as viable as ever. Amazon tanking does not mean everybody needs to run back to dedicated, and my pet peeve is that when one provider takes a crap everyone paints the cloud as toxic.
When ThePlanet's facility exploded a few years ago I did not hear lamenting that dedicated hosting was doomed. When an airliner crashes we do not say air travel is doomed. I do not understand why people rush to paint cloud as a toxic choice in light of a failure of a certain player. Admittedly a big one but there are others too and you can move.
Providers like Linode are almost exactly equivalent to dedicated hosting. They just administer the hardware for you and pay the remote hands bills. Same for Slicehost and Rackspace. It is simply far easier to wipe your instance and start over and for all intents and purposes it acts like a dedicated box. You need to administer it like one too. Most failures of the "cloud" are really designing your application in violation of the fallacies linked elsewhere.
Basically, if you're running a database that does not completely fit in memory you should be on dedicated hardware.
I'd also point out that a lot of advantages that people routinely cite as cloud strengths are more about cloud vs. traditional hosting or colocation, as opposed to cloud vs. a place like SoftLayer. SoftLayer can provision a custom build in a few hours (yeah, vs. minutes, but who really cares that much) and you pay month-to-month without a contract.
You mean like newservers.com, SoftLayer Bare Metal Cloud, stormondemand, or one of the other metal clouds?
Disclaimer: I'm a director at orionvm.
You are correct, I/O is the challenge in administering systems in a virtual environment. My database, which does not fit in memory, does fine on a high-load site because I cache it responsibly. For comparison, here are awful results from a new player called ChunkHost, whom I signed up with for the purpose of testing.
The sequential write throughput there is troubling. This comparison from a couple of years back is interesting too.
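(The linked numbers aren't preserved here, but for reference, a crude sequential-write test of the kind those comparisons report looks something like this - path and size are arbitrary:)

```python
# Crude sequential-write throughput test, the kind of number disk
# benchmarks report. Path and size are arbitrary; point the path at
# the volume you want to measure.
import os
import time

path = "/tmp/seqwrite.bin"
block = b"\0" * (1 << 20)  # 1 MiB per write
total_mib = 1024           # write 1 GiB in total

start = time.time()
with open(path, "wb") as f:
    for _ in range(total_mib):
        f.write(block)
    f.flush()
    os.fsync(f.fileno())   # ensure the data actually hits the disk
elapsed = time.time() - start

print(f"Sequential write: {total_mib / elapsed:.1f} MiB/s")
os.remove(path)
```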
I've linked this URL before, but it really does the best job of breaking it down. What cloud providers have you tried? In my experience there are vast gaps between certain ones, Amazon no exception. It's hard to stereotype the cloud with gaps like those.
Even if SoftLayer could provision me a new box in ten minutes the improvement to my sleep from not waking up for every disk failure and submitting a remote hands ticket at who knows how much per pop far outweighs anything else.
In general, if you really believe what you're saying, you either (1) have a very poorly designed application, (2) have a very poorly designed database environment, or (3) are speaking to a specialized application that wouldn't reflect the majority of environments operating in real life. This isn't to say it isn't a combination of these options, mind you. I didn't even start on utilizing caching in applications, because it's clear there are other hurdles to overcome first.
The application would be transparently mirrored to another region, and if an event such as this occurs, the mirror would be spun up.
The customer would choose the desired snapshot frequency, and would pay for it.
Certain sites, with less dynamic content, would be mirrored and continue to operate as normal with minimal impact or cost.
Other sites, where content creation is fairly real-time from its users, would pose more complex and costly mirroring situations (à la reddit).
But the option should be there.
Also, remember to think of the evolution of Amazon's services, say, 24 months from now, when this type of offering will likely become more of a reality.
As too many others have noted, it is best not to be 100% reliant on Amazon for your entire service - but at this point in time it's a little hard to spread the load between competing offerings to AWS/EC2 etc.
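For what it's worth, cross-region EBS snapshot copying did eventually become a first-class API. A minimal sketch of the mirroring step with today's boto3, assuming a hypothetical snapshot ID:

```python
# Sketch: mirror an EBS snapshot into another region so a standby
# can be spun up there. Snapshot ID and regions are placeholders.
import boto3

# copy_snapshot is called against the *destination* region's client.
dest = boto3.client("ec2", region_name="us-west-2")

copy = dest.copy_snapshot(
    SourceRegion="us-east-1",                   # region that had the outage
    SourceSnapshotId="snap-0123456789abcdef0",  # hypothetical snapshot
    Description="DR mirror of app data volume",
)
print("Copy started:", copy["SnapshotId"])
```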
The option IS there. I know, because I had zero downtime today and am 100% on AWS.
I've seen a lot of misinformation about this, with people suggesting that the sites (reddit/foursquare/heroku/quora) are to blame. I believe that the sites were designed to AWS's contract/specs, and AWS broke that contract.
"Each availability zone runs on its own physically distinct, independent infrastructure, and is engineered to be highly reliable. Common points of failures like generators and cooling equipment are not shared across Availability Zones. Additionally, they are physically separate, such that even extremely uncommon disasters such as fires, tornados or flooding would only affect a single Availability Zone."
Yet what Amazon guarantees, by way of their SLA, is only 99.95% for a region[2,3]:
"The Amazon EC2 SLA guarantees 99.95% availability of the service within a Region over a trailing 365 day period."
Of course, they're not even meeting that right now. :-(
In fact, the first bit you quoted provides an even stricter technical contract than the one on the main EC2 page - it states some degree of natural disaster tolerance, heavily suggesting separate datacenters (not just different floors). Thanks for pointing that out.
Whatever the common point of failure turns out to be, it does seem to have been shared across AZs, in violation of their FAQ.
We're down to 3 nines so far. A few more days of this and we're at 2 nines.
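For reference, the downtime budgets over the SLA's trailing 365-day window:

```python
# Downtime allowed per "nines" level over a trailing 365-day window.
hours_per_year = 365 * 24  # 8760

for availability in (0.9995, 0.999, 0.99):
    budget_hours = (1 - availability) * hours_per_year
    print(f"{availability:.2%} uptime -> {budget_hours:5.1f} hours down/year")
# 99.95% uptime ->   4.4 hours down/year
# 99.90% uptime ->   8.8 hours down/year
# 99.00% uptime ->  87.6 hours down/year
```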
The cloud is not for all businesses.
"The cloud" is not (and never has been) a cure-all for reliability issues. It's just as easy to have single points of failure as any other hosting strategy, and is just as easy (or difficult) to plan for. Companies that have planned for high availability with multi-region or multi-provider strategies will continue to be available, regardless of whether or not they are using "the cloud".
That implies something about reliability. The downtime today is real data about that availability.
Use this as an example of the reliability of EBS (or if you want to broaden the scope, Amazon Web Services) all you want, but this says nothing about "the cloud" as a concept.
That's a nonsensical question to ask.
If your business is amongst the chosen few that can justify the cost to guarantee any number of nines then your availability strategy involves multiple vendors anyways.
The cloud is not for all businesses.
Whether Amazon can be part of an availability strategy has nothing to do with the number of nines.
Cloud is vulnerable? Of course it is. So plan accordingly.
If I'm engineering the same steps in the cloud as I am in the data center, then I'm going to skip a step and just engineer the data center, because adding machines on demand is not rocket science. But maybe that's just me.
If someone says to you, "We need to improve the efficiency of our IT by adopting a cloud-based strategy," then rather than asking them the 'meta' question of what sort of reliability guarantees they have, have an actual and honest talk about what IT costs and why. Perhaps they will relax their uptime requirement, which will let you reduce your costs, or they will come to understand what the costs are for the level of uptime you're providing. Annual reviews of those questions (how much downtime can we tolerate? how much are we paying to achieve our current availability?) should be de rigueur.
"The cloud is not for all businesses."
Of course it isn't. However, it can (and does) run some businesses more efficiently. And while Quora might be down for a day while folks at Amazon scramble to fix whatever it is they did that brought it down, their "business" won't change all that much. There will be no mass exodus of users because they couldn't get their questions answered for one day. Now, if you take someone's email away for a day, that is real money - or if you take away their ability to connect to the Internet, period.
For something like icanhascheezburger even two 9s is probably good enough. That would mean being offline for 3.6 days a year.
But what people forget is: AWS has a world class team of engineers first fixing the problem, and second making sure it will never happen again. Same with Heroku, EngineYard, etc.
Host stuff on dedicated boxes racked up somewhere and you will not go down with everyone else. But my dedicated boxes on ServerBeach go down for the same reasons: hard drive failure, power outages, hurricanes, etc. And I don't have anyone to help me bring them back up, nor the interest or capacity to build out redundant services myself.
My Heroku apps are down, but I can rest easy knowing that they will bring them back up without any action on my part.
The cloud might not be perfect but the baseline is already very good and should only get better. All without you changing your business applications. Economy of scale is what the cloud is about.
Do we have reason to believe that it will only get better? I think it's possible the complexity of the systems we are building and the traffic they encounter will outpace our ability to manage them. Not saying I think it's the most likely outcome, but I don't feel as confident as you.
But do we believe in "economy of scale" for computer and Internet systems in this age? Google, Amazon, Facebook, etc. have already proven to me that they have enough human and financial capital to architect and run systems that show economies of scale.
It's a bit scary to think about what it will mean when this runs out, but for now I personally feel confident that things are getting much better, and will continue to do so.
Perhaps for Quora and the like, engineering for the amount of availability needed to withstand this kind of event was simply not cost effective, but I seriously doubt the possibility didn't occur to them. It's not even obvious to me that there are many people who did follow the contract you reference who had serious downtime. All of the cases I've read about so far have been architectures that were not robust to a single AZ failure.
As for multi-az RDS, it's synchronous MySQL replication on what smell like standard EC2 instances, probably backed by EBS. Our multi-az failover actually worked fine this morning, but I am curious how normal that was.
I'm curious if anyone has any predictions about what the landscape will be like in a few years? Will these be solved problems? Will cloud services lose favor? Will everything just be designed more conservatively? Will engineers finally learn to read the RTFSLA?
I could eventually see, with help from functional languages like Lisp or Erlang, an intra-company cloud running on and between networks. CPU could be bought from 3 providers and storage from 4 providers, with GPU-acceleration clusters for when big data needs to be crunched quickly.
Or, right now, companies can make their own clouds via Eucalyptus. Don't want Amazon to hold your keys? Load balance between your cloud and Amazon's.
It's ironic.
"reddit is in "emergency read-only mode" right now because Amazon is experiencing a degradation. they are working on it but we are still waiting for them to get to our volumes. you won't be able to log in. we're sorry and will fix the site as soon as we can."
thx for the explanation.
I would upvote your cheap shot if you administered a top-200 site with a technical staff of three.
I don't have much karma to begin with so I didn't mean to "offend" anyone. Just thought it would be interesting discussion since I tried to post to Reddit today and realized it wasn't possible. Thanks for helping me recover some of my karma back :)
Well they certainly weren't reading Reddit.
A cheap shot is making fun of a person for having a birth defect or a dead mother, but ok, I guess I came a little close with this last one. I'll be sure to be nicer to the $10 million corporation.