Anyone else experiencing a problem?
Their uptime is much higher on average than that of any IT team I've ever been involved with.
Our internal uptime is 99.985% in production. We move fast and roll out changes every day, we run mainline kernels, and all of our 350-odd servers and ~800 containers run on completely vendor-independent, open-source software.
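For scale (my own back-of-the-envelope arithmetic, not the parent's): 99.985% uptime corresponds to roughly 79 minutes of downtime per year.

```python
# Back-of-the-envelope: downtime implied by a given uptime percentage.
MINUTES_PER_YEAR = 365.25 * 24 * 60  # ~525,960

def downtime_minutes_per_year(uptime_pct: float) -> float:
    """Minutes of allowed downtime per year at the given uptime %."""
    return MINUTES_PER_YEAR * (1 - uptime_pct / 100)

print(round(downtime_minutes_per_year(99.985), 1))  # ~78.9 min/year
print(round(downtime_minutes_per_year(99.99), 1))   # ~52.6 min/year
```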
I'm not saying it's easy, but the middle man is there to help you if you can't find or afford up front good operational engineers, or to take your money because their advertising has made you believe that they are always the best decision.
We perform a detailed yearly cross-cost comparison between AWS and our own operated datacentre: the cost to run and maintain the same uptime, processing power (and yes, we take into account spinning down instances at night, etc.), bandwidth between zones, backups, and customer traffic. It really hasn't improved at all over the past 3 years. This year the review came back that our yearly expenditure on operational expenses would increase from approximately $500,000 (including human resources) to well over $3,000,000 a year. (Not kidding.) The margin of error was estimated at between 10% and 20%.
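Running the quoted figures through quickly (my arithmetic; the dollar amounts are the parent's, and applying the margin of error only to the AWS estimate is my assumption):

```python
# Ratio between the quoted on-prem and AWS yearly cost estimates,
# with the stated 20% margin of error applied to the AWS figure.
on_prem = 500_000        # yearly, including staff (figure from the comment)
aws_estimate = 3_000_000

ratio = aws_estimate / on_prem
low = aws_estimate * 0.8  # most favourable end of the stated margin

print(f"AWS is ~{ratio:.0f}x the on-prem cost")      # ~6x
print(f"Even at the low bound: {low / on_prem:.1f}x")  # ~4.8x
```

So even the friendliest end of the error bar leaves a nearly 5x gap.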
You sound genuinely very smart and knowledgeable in this area. But the other 90% of the workers in this sector are not.
> Amazon itself claims that you must have your hosts across various zones to get decent uptime - that's like saying "oh yes - the Toyota Corolla is really reliable, it works 99.99% of the time... As long as you buy a second one for when it's not available".
Wait, you don't have a second data center for your mission critical systems in case your primary fails?
> We perform a detailed yearly cross-cost comparison between AWS and our operated datacentre...bandwidth between zones, backups and customers and it really hasn't improved at all over the past 3 years
I totally agree. If you have the right resources, a good data center partner, and well-defined processes, then "the cloud" isn't for you. For the other 90% of people out there who simply don't have the know-how or resources to find and retain talented IT operations people, AWS totally makes sense.
Thank you for the kind words there. I think one major thing for us is that we've hired a small number of just the right people, each with quite different backgrounds, and we work VERY closely with our developers. Every bit of configuration is kept in Git and we CI/CD whatever we can.
That's all that Multi-AZ is mate ;)
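For what it's worth, the arithmetic behind the "second Corolla" jab actually favours redundancy, assuming the two failures are independent (a big assumption, which a region-wide storm like this one violates):

```python
# Combined availability of N independent replicas, each with the same
# single-instance availability. Independence is the key (and shaky)
# assumption: correlated failures, e.g. a storm over both AZs, break it.
def combined_availability(single: float, replicas: int) -> float:
    return 1 - (1 - single) ** replicas

print(combined_availability(0.9999, 1))  # four nines
print(combined_availability(0.9999, 2))  # roughly eight nines
```

Two 99.99% Corollas only fail simultaneously about one hour in every ten thousand years, on paper.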
Most of the time people here might be seeing good portions of their infra go away, but the affected numbers aren't statistically significant to overall region health, so AWS doesn't post an outage.
Don't ask me what those numbers are, but that is the way it is determined.
You might have an incident affecting just 2% of API calls and less than 2% of the user base (even that would be unusually large, and a source of big drama internally). The service could be super stable and extremely reliable, but the other 98% could get completely the wrong idea if they saw a service status change (and of course, from a PR perspective, the same goes for anyone evaluating the platform).
A service dashboard is an extremely blunt tool for communicating service status. It renders what is an extremely nuanced situation down to "All good, maybe, no, DEAD".
To give a rough example, one service I was familiar with had a "page everyone in the team" level of incident. API availability tanked, badly. It looked atrocious, and it seemed like hardly any requests were getting through successfully. You'd have every expectation that they should at least post a yellow alert, if not approaching red. It turned out to be one single customer whose requests were failing (I forget the reason why), but due to a bug in the customer's software consuming the API, every time it got a 500 response it would immediately resend the request, every single time, with no timeout or retry limit. It reached such a terrific pace that this one customer made up the huge majority of all requests hitting the endpoint. Every other customer using the service was completely fine. If you'd looked at the API graphs you'd think "POST YELLOW, POST YELLOW, NOW NOW NOW!", but because they took time to figure out the actual impact, they found that would have been totally the wrong thing to do.
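The failure mode in that story (retrying immediately on every 500, forever) is exactly what capped exponential backoff with jitter exists to prevent. A minimal generic sketch, not the customer's actual client:

```python
import random
import time

def call_with_backoff(request_fn, max_retries=5, base_delay=0.1, cap=10.0):
    """Retry request_fn on failure with capped exponential backoff + jitter.

    Contrast with the bug in the story above: no retry limit and no delay,
    so every 500 response immediately became another request.
    """
    for attempt in range(max_retries + 1):
        try:
            return request_fn()
        except Exception:
            if attempt == max_retries:
                raise  # give up instead of hammering the endpoint forever
            # "Full jitter": sleep a random amount up to the exponential cap,
            # so a crowd of failing clients doesn't retry in lockstep.
            time.sleep(random.uniform(0, min(cap, base_delay * 2 ** attempt)))
```

AWS's own SDKs ship a variant of this by default; the point is that the client, not the server, decides how bad a retry storm gets.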
Service health dashboards are a neat idea, but one that is in desperate need of a rethink and overhaul. It has some value when you're a smaller service, but it just doesn't accurately scale with the platform.
I'm not sure what the real solution is. They've somehow got to pull together TB of logs and/or metrics to make an accurate assessment of the scenario, and do it in a matter of minutes, so as to provide accurate updates, and not needlessly panic customers.
Red's for heat death of the universe.
Another thing to look into is EC2 Auto Recovery. I don't know if it would've kicked in with today's event, but it's worth setting up as an extra safety net.
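For reference, auto recovery is driven by a CloudWatch alarm on the system status check with the `recover` action. A sketch using the AWS CLI (the instance ID and thresholds are placeholders; check the current docs before relying on this):

```shell
# Create a CloudWatch alarm that auto-recovers an instance when the
# underlying host fails its system status check (placeholder instance ID).
aws cloudwatch put-metric-alarm \
  --alarm-name "auto-recover-i-0123456789abcdef0" \
  --namespace AWS/EC2 \
  --metric-name StatusCheckFailed_System \
  --dimensions Name=InstanceId,Value=i-0123456789abcdef0 \
  --statistic Minimum \
  --period 60 \
  --evaluation-periods 2 \
  --threshold 0 \
  --comparison-operator GreaterThanThreshold \
  --alarm-actions "arn:aws:automate:ap-southeast-2:ec2:recover"
```

Note that recover only works for certain instance types and only for host-level failures; a whole-AZ power event like this one may well be outside its scope, which is why I'd treat it strictly as an extra safety net.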
edit: I'm basing this off the status page which indicated that only one AZ was impacted.
Both AZs are directly under the deluge and I don't believe only one AZ is affected for a second.
The size of the storm can be seen here http://www.bom.gov.au/products/IDR713.loop.shtml#skip
The classic irony for me was a service manager in just such an environment resisting a cloud move "because it's someone else's computer" - even though his (ancient) application was running on a rented partition of a remote, IBM owned & operated S/390...
No surprise therefore that the big clouds have country resources dedicated to moving the needle on cloud awareness in highly regulated environments.
(obdisclosure: I am former .au AWS manager)
No one ever seems to be able to refer to a specific law, but then, it's an IT person talking to lawyers, so there are some battles you just don't fight.
10:47 PM PDT We are investigating increased connectivity issues for EC2 instances in the AP-SOUTHEAST-2 Region.
11:08 PM PDT We continue to investigate connectivity issues for some instances in a single Availability Zone and increased API error rates for the EC2 APIs in the AP-SOUTHEAST-2 Region.
11:49 PM PDT We can confirm that instances have experienced a power event within a single Availability Zone in the AP-SOUTHEAST-2 Region. Error rates for the EC2 APIs have improved and launches of new EC2 instances are succeeding within the other Availability Zones in the Region.
Jun 5, 12:31 AM PDT We have restored power to the affected Availability Zone and are working to restore connectivity to the affected instances.
I'm joking of course, but that's what ran through my mind while reading that timeline.
What a mess.