AWS Service Interruptions
78 points by codingninja on June 5, 2016 | 53 comments
AWS is currently having issues with a bunch of our instances, and the console reports the following error: "An error occurred fetching instance data: The service is unavailable. Please try again shortly."

Anyone else experiencing a problem?




It would take a direct nuclear hit on the datacenter for Amazon to change the icon to red on the service status page.


Yeah, we monitor lots of Amazon & Microsoft 'cloud' services, and we observe much, much higher downtime / number of outages than they ever report - on the order of 50 to 1 or more. What do you expect though? Both companies are known for lying through their teeth to convince the IT community (or more likely the IT managers) that their services are reliable for everyone, that their uptime is amazing, and that they're not only a good option but the only option.


> What do you expect though? Both companies are known for lying through their teeth to convince the IT community (or more likely the IT managers) that their services are reliable for everyone and that their uptime is amazing

Their uptime is much higher, on average, than that of any IT team I've ever been part of.


Oh wow, really? That's really bad - you must have worked with some really poor ops teams in the past. Last year we measured less than 97% uptime on AWS Sydney, and a shocking 96% uptime for Office 365 Exchange Online. When we investigated the problems out of interest, most were due either to internet routing issues within their networks (or the first ISP hop), or to hosts outright failing. The 'cloud' is just outsourced hardware with a provided toolset (APIs etc...); Amazon itself claims that you must have your hosts across various zones to get decent uptime - that's like saying "oh yes - the Toyota Corolla is really reliable, it works 99.99% of the time... as long as you buy a second one for when it's not available".

Our internal uptime is 99.985% in production, we are fast moving and roll out changes every day, we run mainline kernels, and all of our 350-odd servers and ~800 containers run on completely vendor-independent, open source software.

I'm not saying it's easy, but the middle man is there to help you if you can't find or afford good operational engineers up front, or to take your money because their advertising has made you believe they are always the best decision.

We perform a detailed yearly cost comparison between AWS and our own datacentre: the cost to run and maintain the same uptime, processing power (and yes, we take into account spinning down instances at night etc...), bandwidth between zones, backups and customers - and it really hasn't improved at all over the past 3 years. This year the review came back that our yearly operational expenditure would increase from approximately $500,000 (including human resources) to well over $3,000,000 a year. (Not kidding.) The margin of error was estimated at 10-20%.


> Oh wow, really? That's really bad - you must have worked with some really poor ops teams in the past.

You sound genuinely very smart and knowledgeable in this area. But the other 90% of the workers in this sector are not.

> Amazon itself claims that you must have your hosts across various zones to get decent uptime - that's like saying "oh yes - the Toyota Corolla is really reliable, it works 99.99% of the time... as long as you buy a second one for when it's not available".

Wait, you don't have a second data center for your mission critical systems in case your primary fails?

> We perform a detailed yearly cost comparison between AWS and our own datacentre...bandwidth between zones, backups and customers - and it really hasn't improved at all over the past 3 years

I totally agree. If you have the right resources, a good data center partner and well-defined processes, then "the cloud" isn't for you. For the other 90% of people out there who simply don't have the know-how or resources to build talented IT operations, AWS totally makes sense.


Yes, we have two datacentres and we do have a few VPSes, mostly for triangulation of monitoring, but honestly, in four years we haven't had to fail over once, although we practise it with our applications almost every single day.

Thank you for the kind words. I think one major thing for us is that we've hired a small number of just the right people, each with quite different backgrounds, and we work VERY closely with our developers. Every bit of configuration is kept in Git and we CI/CD whatever we can.


> Yes we have two datacentres

That's all that Multi-AZ is, mate ;)


Those icons don't change unless a certain percentage of the overall count of instances in an AZ or region is affected.

Most of the time people here might be seeing good portions of their infra go away, but the number isn't statistically significant relative to overall region health, so AWS doesn't post an outage.

Don't ask me what those numbers are, but that is the way it is determined.


Sounds interesting. Is that data available?


From my experience at AWS, part of the problem is scope of impact. It's easy to lose track of just how many active customers there are at any time, and it's easy to see the platform as a cohesive whole, i.e. "if it's affecting you, it must be affecting everyone else". In reality almost every customer-impacting event affects only a tiny percentage of the active users at any one time. I know it can be hard to believe or see this as an external customer, because after all the service appears to be down to you. Take, for example, when people start saying "us-east-1a" is down. What is "us-east-1a"? If you've watched some of the re:Invent talks you'll know that it actually describes numerous data centres in close proximity (within a certain millisecond network target). If one of those has an incident, it might look to some customers like "us-east-1a" is down, when the reality might be that 95%+ of the data centres are still fully functional and most customers aren't seeing an impact.

You might have an incident affecting just 2% of API calls and less than 2% of the user base (even that would be unusually large and a source of big drama internally). The service could be super stable and extremely reliable, but the other 98% could get completely the wrong idea if they saw a degraded service status (and of course, from a PR perspective, the same goes for anyone evaluating the platform).

A service dashboard is an extremely blunt tool for communicating service status. It renders an extremely nuanced situation down to "all good, maybe, no, DEAD".

To give a rough example, one service I was familiar with had a "page everyone in the team" level incident. API availability tanked, badly. It looked atrocious, and it seemed like hardly any requests were getting through successfully. You'd have every expectation that they should at least post a yellow alert, if not approach red. It turned out that a single customer's requests were failing (I forget the reason why), but due to a bug in the customer's software consuming the API, every time it got a 500 response it would immediately resend the request, every single time, with no timeout or retry limit. It reached such a terrific pace that those retries made up a huge majority of all the requests hitting the endpoint. Every other customer using the service was completely fine. If you'd looked at the API graphs you'd think "POST YELLOW, POST YELLOW, NOW NOW NOW!", but because they took time to figure out the actual impact, they found out that would have been totally the wrong thing to do.
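
(As an aside, the usual client-side fix for that pattern is a capped, jittered retry instead of an immediate resend. A rough Python sketch, purely illustrative and not anything from the actual incident:)

    import random
    import time
    import requests

    MAX_RETRIES = 5

    def post_with_backoff(url, payload):
        # Retry 5xx responses with exponential backoff plus jitter, then give up,
        # instead of hammering the endpoint on every 500.
        for attempt in range(MAX_RETRIES):
            resp = requests.post(url, json=payload, timeout=10)
            if resp.status_code < 500:
                return resp  # success, or a client error that retrying won't fix
            time.sleep((2 ** attempt) + random.random())
        raise RuntimeError("gave up after %d attempts" % MAX_RETRIES)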

Service health dashboards are a neat idea, but one in desperate need of a rethink and overhaul. The idea has some value when you're a smaller service, but it just doesn't scale accurately with the platform.

I'm not sure what the real solution is. They've somehow got to pull together terabytes of logs and/or metrics to make an accurate assessment of the situation, and do it in a matter of minutes, so as to provide accurate updates without needlessly panicking customers.


Amazon's own criteria are yellow for one AZ down (which this one was), red for multiple AZs down.


Nah, that'd be green with the ! icon.

Red's for heat death of the universe.


Not sure if relevant to this issue, but Sydney is currently being hit with one of the biggest storms I can remember in the past few years. Probably not crazy enough to take down a DC, but might be a contributing factor in this outage.


I realize that some systems may need to have all of their servers located close together in a single AZ. But barring that, if this took you offline, you should really consider spreading your instances across AZs. It's so easy there's no excuse not to do it.

Another thing to look into is EC2 Auto Recovery [1]. I don't know if this would've kicked in with today's event, but it's worth setting up as an extra safety net.

[1] https://aws.amazon.com/blogs/aws/new-auto-recovery-for-amazo...
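
For reference, setting up auto recovery is just a CloudWatch alarm on the system status check with the built-in recover action. A minimal boto3 sketch (the instance ID is made up, and the thresholds are only an example):

    import boto3

    REGION = "ap-southeast-2"
    INSTANCE_ID = "i-0123456789abcdef0"  # hypothetical

    cloudwatch = boto3.client("cloudwatch", region_name=REGION)
    cloudwatch.put_metric_alarm(
        AlarmName="auto-recover-" + INSTANCE_ID,
        Namespace="AWS/EC2",
        MetricName="StatusCheckFailed_System",
        Dimensions=[{"Name": "InstanceId", "Value": INSTANCE_ID}],
        Statistic="Minimum",
        Period=60,
        EvaluationPeriods=2,
        Threshold=0,
        ComparisonOperator="GreaterThanThreshold",
        # Built-in EC2 action that recovers the instance onto healthy hardware.
        AlarmActions=["arn:aws:automate:%s:ec2:recover" % REGION],
    )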

edit: I'm basing this off the status page which indicated that only one AZ was impacted.


The site I manage is load-balanced across both AZs, ap-southeast-2a and ap-southeast-2b, which did not save it. At the moment EC2 statuses are not being updated, which is preventing ELBs from registering instances as healthy.

Both AZs are directly under the deluge, and I don't for a second believe only one AZ is affected.

The size of the storm can be seen here http://www.bom.gov.au/products/IDR713.loop.shtml#skip


This is the most concerning thing to me. The Multi-AZ, redundant setup is worthless if the ELB can't do its job properly. I've seen some funky behavior from the ELBs when it comes to instance state. They really need to make this better.
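
When the console gets flaky I've found it useful to poll instance health straight from the API rather than trusting the dashboard. A minimal boto3 sketch for a classic ELB (the load balancer name is made up):

    import boto3

    elb = boto3.client("elb", region_name="ap-southeast-2")
    health = elb.describe_instance_health(LoadBalancerName="my-prod-elb")  # hypothetical name
    for state in health["InstanceStates"]:
        # State is "InService" or "OutOfService"; Description explains why.
        print(state["InstanceId"], state["State"], state.get("Description"))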


Sadly our use case (private data etc.) prevents us from leaving the local availability zone, meaning when it went down today we were left totally unavailable. The recovery itself is ongoing but our applications are resilient enough to detect the restored connections and automatically add themselves back into the cluster.


Availability zones are different from regions. You can still be in multiple AZs within the Sydney region.


That's interesting. Is it an Australian regulation? Curious that they'd make it in such a way that the data had to reside in the same building/zone.


Indeed it is; it was a massive struggle getting approval to move to a cloud service in the first place.


I almost hate to point this out, then, but you did consider that there's no guarantee that an AZ is a single DC, right?


It's pretty much guaranteed not to be the case.


Really? Pointing the local gov department's officer to AWS's IRAP compliance cert was all that was needed to move quite a lot of their stuff onto AWS.


Yeah, and I'm curious about which sector or agency is the culprit here. Even APRA (the financial regulator) are cloud-friendly now, if you engage them at the start of an adoption process. My wild guess is health insurance, being a sector where IT is notoriously hidebound, but it could just be a case of overzealous/interfering/uncomprehending lawyers. A security policy that precluded cross-site service or data replication would likely be in contradiction with DR/BCP plans.

The classic irony for me was a service manager in just such an environment resisting a cloud move "because it's someone else's computer" - even though his (ancient) application was running on a rented partition of a remote, IBM owned & operated S/390...

No surprise therefore that the big clouds have country resources dedicated to moving the needle on cloud awareness in highly regulated environments.

(obdisclosure: I am former .au AWS manager)


    lawyers
I've supported multiple legal firms who have assured me they cannot legally host their data in the cloud.

No one ever seems to be able to point to a specific law, but then, it's an IT person talking to lawyers, so there are some battles you just don't fight.


I would not call APRA cloud-friendly. Systems of record cannot be in the cloud, and I don't know of any bank that is actually storing data in the cloud.


From AWS status page for Asia Pacific:

10:47 PM PDT We are investigating increased connectivity issues for EC2 instances in the AP-SOUTHEAST-2 Region.

11:08 PM PDT We continue to investigate connectivity issues for some instances in a single Availability Zone and increased API error rates for the EC2 APIs in the AP-SOUTHEAST-2 Region.

11:49 PM PDT We can confirm that instances have experienced a power event within a single Availability Zone in the AP-SOUTHEAST-2 Region. Error rates for the EC2 APIs have improved and launches of new EC2 instances are succeeding within the other Availability Zones in the Region.

Jun 5, 12:31 AM PDT We have restored power to the affected Availability Zone and are working to restore connectivity to the affected instances.


It took them an hour to figure out that their connectivity issues were caused by losing power to an entire Availability Zone? Maybe they should add an alert for "AZ has no power" or put it on a dashboard...

I'm joking of course, but that's what ran through my mind while reading that timeline.


Wasn't quite that simple. I lost connectivity to instances that did not reboot so I'm guessing it took out some network elements.


Been having issues from Sydney AP-Southeast-2, probs from bigger-than-usual storm that's been going on here for the past few days.


I recently switched to Google Compute Engine. It's cheaper and so far more reliable than AWS. Might be another option for some people here.


Even GCE has had (global) outages (interesting post-mortem here [1]), no provider is really safe from these sorts of issues.

[1] https://news.ycombinator.com/item?id=11489791


I am trying to convince people at my work to move to GCP from AWS, but AWS truly has become the Microsoft of cloud computing. Many people have no idea there are other providers like GCP, Azure, DigitalOcean, etc.


Azure might be great in a year or so, but it makes me uneasy as it is. Some of the services are great, but a lot of them are pretty fragmented. I've had so many instances where our billing/usage data has just "disappeared" for a few days, undocumented changes have been made to the formats of reports/exports/APIs, or the official documentation has been plain wrong, that I just can't recommend Azure to anyone. Not to mention they have the most expensive infrastructure costs of the major players (even with an EA and a decent monetary commitment); their premium for Windows licensing is the lowest by far though (not surprising), so it does end up being a cheaper option for super Windows-heavy shops.


Although I am quite optimistic about Azure, GCP seems like the best bet at the moment, considering factors like reliability, performance, availability, cost and longevity.


Our Sydney EC2 DB instance is stuck spinning in the "stopping" state, so we are basically offline right now. The team is working on getting a new DB instance set up, but I read that our payment provider, Westpac, is also having issues. So even if we do get back online, users might not be able to purchase.

What a mess.
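
(For anyone in the same boat: a forced stop via the API can sometimes clear an instance wedged in "stopping", assuming the control plane is actually responding. A boto3 sketch with a made-up instance ID:)

    import boto3

    ec2 = boto3.client("ec2", region_name="ap-southeast-2")
    # Force=True skips the normal graceful shutdown; only worth trying when the
    # instance has been stuck in "stopping" for a long time.
    ec2.stop_instances(InstanceIds=["i-0123456789abcdef0"], Force=True)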


For all intents and purposes, we're completely offline at the moment. It's clearly some serious issue because the icon for EC2 in Sydney on AWS' status page is yellow, rather than the usual green tick with the small 'i'.


Still down 5 hours later. ELB won't register instances. Ugh


The ELB control plane woke up for us about 90 minutes ago; back to flying on all engines again now.


Multiple ELBs came alive around that time, but our primary ELB has remained unable to re-register instances. Creating a new ELB as a test and trying to register new instances from the affected ASG has also failed.


Another confirmation here, all services in our Sydney AZ are down. AWS Support last mentioned a power failure or similar in AZ1, but some of ours are coming back online now.


I see lots of people using ELB for load balancing. Anyone tried using DNS on top of ELB to spread the load? That might just save you from the extended downtime.
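
What I had in mind is something like Route 53 weighted records pointing at two ELBs. A rough boto3 sketch (hosted zone ID, domain and ELB DNS names are all made up):

    import boto3

    route53 = boto3.client("route53")
    route53.change_resource_record_sets(
        HostedZoneId="Z123EXAMPLE",  # hypothetical
        ChangeBatch={"Changes": [
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "www.example.com.",
                    "Type": "CNAME",
                    "SetIdentifier": "elb-a",  # each weighted record needs a unique identifier
                    "Weight": 50,
                    "TTL": 60,
                    "ResourceRecords": [{"Value": "elb-a-123.ap-southeast-2.elb.amazonaws.com"}],
                },
            },
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "www.example.com.",
                    "Type": "CNAME",
                    "SetIdentifier": "elb-b",
                    "Weight": 50,
                    "TTL": 60,
                    "ResourceRecords": [{"Value": "elb-b-456.ap-southeast-2.elb.amazonaws.com"}],
                },
            },
        ]},
    )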


Generally the ELB should have instances in different availability zones, which are data centers miles apart. If your ELB went down, creating a new one should be simple if you can access the region. The problem with high availability and spreading load is how to deal with your database and recovery.


Works for me - Sydney Region


I cannot tell if this is an amazing joke or not.


Yes, AWS EC2 (Sydney) is completely offline from what we see. We have almost 10 servers there that have been inaccessible for over an hour.


Yes, it seems that zone A is completely down. However, load balancers seem to be affected as well.


Your zone A might be another customer's zone B. AWS maps availability zones per account.

See http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-reg...


Still having issues with accessing apps hosted on Elastic Beanstalk on AP-SOUTHEAST-2 Region. Restarting app servers / rebuilding the environment doesn't make a difference.


Curious if you are all based in Australia, or if the Sydney outage is affecting other regions?


Anyone having problems with BJ servers? My site, https://www.weisisheng.cn, is not running; I cannot SSH into the machine or access the AWS dashboard login page.


We appear to be back online, however all machines have rebooted.


ap-southeast-2 EC2 appears to be completely offline for us




