1) For three whole days, it was questionable whether or not a user would be able to launch a node pool (according to the official blog statement). It was also questionable whether a user would be able to launch a simple compute instance (according to statements here on HN).
2) This issue was global in scope, affecting all of Google's regions. Therefore, in consideration of item 1 above, it was questionable/unpredictable whether or not a user could launch a node pool or even a simple node anywhere in GCP at all.
3) The sum total of information about this incident consists of a few one- or two-sentence blurbs on Google's blog. No explanation or outline of the scope of affected regions and services has been provided.
4) Some users here are reporting that other GCP services not mentioned by Google's blog are experiencing problems.
5) Some users here are reporting that they have received no response from GCP support, even over a time span of 40+ hours since the support request was submitted.
6) Google says they'll provide some information when the next business day rolls around, roughly 4 days after the start of the problem.
I really do want to make sure I'm understanding this situation. Please do correct me if I got something wrong in this summary.
When things stop working, GCP is the worst. Slow communications and they require way too much work before escalating issues or attempting to find a solution.
They already have the tools and access, so most issues should take them minutes to gather diagnostics, but instead they keep sending tickets back for "more info", inevitably followed by a hand-off to another team in a different time zone. We have previously spent days just convincing them an issue existed, which seems unacceptable.
I can understand support costs, but there should be a test (with all vendors) where I can officially certify that I know what I'm talking about and don't need to go through the "prove it's actually a problem" phase every time.
For the Government organizations I have dealt with, the issue with outages is rarely the outage itself; it's whether there is strong communication about what is occurring, realistic approximate ETAs, and options for mitigation.
Being able to tell the Directors/Senior managers that issues have been "escalated" and providing regular updates are critical.
If all I could say was that a "support ticket" had been logged and we were still waiting on a reply hours later, I guarantee the conversation after the outage would be about moving to another solution provider with strong SLAs.
Sure, we use support tickets with vendors for small things. Console button bugging out, etc. But for large incidents, every vendor has a representative within an hour driving distance and will be called into a room with our engineers to fix the problem. This kind of outage, with zero communication, means the dropping of a contract.
Communication is critical for trust, especially if we're running a business off it.
You need failovers to different providers, and ideally your own hardware for general workloads.
And suddenly the CEO doesn't care anymore if one of your potential failovers behaves flakily in specific circumstances.
Not saying it's good as it is... communication as a SaaS provider is, as you said, one of the most important things. But this specific issue was not as bad as some people in this thread insinuate.
Don't get it wrong: AWS is the exact same as Google here. All you will do is log a ticket and receive an automated ack by the next day.
When I worked at GoDaddy, around two-thirds of the company was customer support.
At my current company, a cryptocurrency exchange, our support agents frequently hear that customers prefer our service over others because of our fast support response times (crypto exchanges are notorious for really poor support).
All of my interactions with Amazon support have been resolved to my satisfaction in 10 minutes or less.
Companies really ought to do the math on the value that comes from providing fast, timely, and easy (don't have to fight with them) customer support.
Google hasn't learned this lesson.
They have, though; they've just drawn the conclusion that they'd rather put massive amounts of effort into building services that users can use without needing support. This approach works well once the problems have been ironed out, but it's horrible until then. Google's mature products like Ads, Docs, Gmail, etc. are amazing. Their new products ... aren't.
Google Ads and such also have a terrible support reputation, even with clients spending 8 figures.
Until something goes wrong and the only recourse is to post an angry Hacker News thread or call up people you personally know at Google to get it fixed. For example https://techcrunch.com/2017/12/22/that-time-i-got-locked-out....
We actually got to a point where we had a couple of spare parts onsite (sticks of RAM, HDs, etc.), so we could repair immediately and then request the replacement. This was on a large HPC cluster, so we had almost daily failures of some kind (most commonly a stick of RAM that would repeatedly fail ECC checks).
Isn't that the case with basically every support request, no matter the company or severity? The first couple of emails from 1st and even 2nd level support are mostly about answering the same questions about the environment over and over again. We've had this ping-pong situation both with production outages (which we eventually analysed and worked around by ourselves) and with fairly small issues like requesting more information about an undocumented behavior which didn't even affect us much. No matter how important or urgent the initial issue was, eventually most requests end up closed unresolved.
GCP does have role-based support models with a flat-rate plan, which is really great, but the overall quality of the responses leaves much to be desired.
So far GCP is the best, hands down, in terms of stability. We have never had a single outage or maintenance downtime notification until now. We are power users, but our monitoring didn't pick up any anomaly, so I don't think this issue had a widespread impact on other services.
But I find it concerning that they provided very little detail on what went wrong. I also think it's better to expect nil support out of any big cloud provider if you don't have paid support. Funny how all these big cloud providers consider you ineligible for support de facto. Sigh.
If you are an early-stage startup, can you afford their $200/month support when your entire GCP bill is under $1? However, that doesn't mean they don't have to support you.
If you've got VC money to blow so you can pretend your SaaS toy can feed 500 people while having money left to throw at things, that's cool. Just remember that other people might be running sustainable businesses.
And just like that you turned a $200/month bill into a $10k/month strawman.
> Just remember that other people might be running sustainable businesses.
Why are you pretending that a startup that can't afford $200/month is a "sustainable business"?
I mean sure, they could go and probably afford to waste $200 extra on something random that will be useless to them most of the time, but that money is going straight out of their paycheck.
You don't remain profitable though by repeatedly making bad decisions like that. Which was my point.
Running a (small) profitable business is about making the right decisions consistently, and if you're likely to waste money on one thing, you're also likely to waste it on the 19 other similar things.
Maybe speak to literally anyone you know who is running a small business if you want to know more. Yes, that includes the local small stores on your street.
At the end of the day you probably pissed off quite a few people on here when you called their livelihood a hobby project.
This is akin to saying that a mom and pop laundromat can’t afford insurance, or shouldn’t because they won’t frequently need it.
You’re trying to equate small businesses with hobbies. You’ve now resorted to straw men, slippery slopes, and false equivalency. Maybe consider that if you have to distort the situation this much to make your point, you might just be wrong.
> At the end of the day you probably pissed off quite a few people on here when you called their livelihood a hobby project.
I didn’t say anything about anyone’s livelihood. You’re the one pretending that small businesses bringing home $120k/year can’t afford a $200 monthly support bill.
I bet the guy who started this thread about GCP’s support cost has made a sum total of <$1000 from his “startup”. Likely <$10. Hobby.
I don’t care if “quite a few people” got pissed about my comment. People with egos that delicate shouldn’t use social media.
I was trying to tell you that most small businesses can't go around spending hundreds of bucks on things that provide little value, whether that's a business support plan on services they use or something else. That's true regardless of whether you're a brick-and-mortar store or some online service.
> This is akin to saying that a mom and pop laundromat can’t afford insurance, or shouldn’t because they won’t frequently need it.
Speaking of false equivalencies...
> You’re the one pretending that small businesses bringing home $120k/year can’t afford a $200 monthly support bill.
First off, I spoke of businesses making generally less than that.
Also (I already said this, good job ignoring it!), paying $200 for a single useless thing is survivable for even a small business. But you know what's better than making only one bad business decision? Making no bad ones at all. Making too many will quickly break the camel's back.
Which was my whole argument and it's also what people generally refer to when they say they can't afford something.
For instance you may say "I can't afford to go to this restaurant", even though you'd have enough money to do it without going immediately bankrupt. But it'd be a bad decision, too many of which quickly add up.
And I'm telling you that if you built your business on top of GCP, a support contract is probably not "low value". You'd happily pay $200 for support on your critical infrastructure, just as you'd happily pay $200 for a repairman to fix your washing machine if you owned a laundromat.
If you don't need support, then sure, don't pay for the plan. If you do need support, $200 seems pretty reasonable.
> Speaking of false equivalencies...
Signing up for a monthly recurring support plan in case you need it is literally insuring your business.
> For instance you may say "I can't afford to go to this restaurant", even though you'd have enough money to do it without going immediately bankrupt. But it'd be a bad decision, too many of which quickly add up.
A support plan for your critical infrastructure probably isn't "useless". Which is the point. If your need for support is that low, then either you've built your own redundant systems to protect you or more likely you aren't running a real business.
I'm glad AWS's free tier is working for you, but complaining that Google doesn't want to give you free capacity for your business and then also provide you free support for that business is pretty absurd.
I am not saying it's feasible at vast scale, but the big cloud providers don't even give you the opportunity to raise a ticket when it's their fault. There is an extra price you pay when you opt for any one of them, but many don't realize it. Having said that, almost all the time our skilled in-house expertise is better than their first two levels of support staff. We realized this early, so we handle it better by going over the documentation and making our code resilient, since all cloud platforms have one limit or another; overselling in a region is something they can't avoid. Going across multiple regions while handling these exceptions is the only way through.
"We are investigating an issue with Google Kubernetes Engine node pool creation through Cloud Console UI."
So it's a UI console issue; it appears you can still manage node pools through the CLI:
"Affected customers can use gcloud command  in order to create new Node Pools. "
Similarly, it was actually resolved on Friday, but they forgot to mark it as such.
"The issue with Google Kubernetes Engine Node Pool creation through the Cloud Console UI had been resolved as of Friday, 2018-11-09 14:30 US/Pacific."
The items I put down in my comment are based largely on user reports, though (there isn't much else to go on). And I mean these items as questions (i.e. "is this accurate?"). Folks here on HN have definitely been reporting ongoing problems and seem to be suggesting that they are not resolved and are actually larger in scope than the Google blog post addressed.
Someone from Google commented here a few hours ago indicating Google was looking into it. And other folks here are reporting that they don't have the same problems. So it's kind of an open question what's going on.
I'm in the evaluation phase too. And I've found a lot to like about GCP. I'm hoping the problems are understandable.
Edit: I finally got my cluster up and running by removing all nodes, letting it process for a few minutes, then adding new nodes.
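For anyone in the same boat, that drain-and-recreate workaround can be scripted; a rough sketch (cluster/pool/zone names are placeholders, and the flags are from memory, so verify against gcloud's --help):

    # scale the pool to zero, wait a bit, then scale it back up
    gcloud container clusters resize my-cluster --node-pool default-pool \
        --num-nodes 0 --zone us-central1-a --quiet
    sleep 300
    gcloud container clusters resize my-cluster --node-pool default-pool \
        --num-nodes 3 --zone us-central1-a --quiet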
I created a new instance in us-west2-c, which worked briefly but began to fail midday Friday, and kept failing through the weekend.
On Saturday I created yet another clone in northamerica-northeast1-b. That worked Saturday and Sunday, but this morning, it is failing to start. Fortunately my us-west2-c instance has begun to work again, but I'm having doubts about continuing to use GCE as we scale up.
And yet, the status page says all services are available.
What blog statement are you referring to? I don't see any such statement. Can you provide a link?
The OP incident status issue says "We are investigating an issue with Google Kubernetes Engine node pool creation through Cloud Console UI". It also says "Affected customers can use gcloud command in order to create new Node Pools."
So it sounds like a web interface problem, not a severely limiting backend-systems problem with global scope.
Also, the report says "The issue with Google Kubernetes Engine Node Pool creation through the Cloud Console UI had been resolved as of Friday, 2018-11-09 14:30 US/Pacific". So the whole issue lasted about 10 hours, not three whole days.
> Some users here are reporting that other GCP services not mentioned by Google's blog are experiencing problems
I don't see much of that.
> So it sounds like a web interface problem, not a severely limiting
Depends who you ask as to whether this is "severely" limiting, but yes, there is a workaround via an alternate interface.
a) Google had a global service disruption that has impacted Kubernetes node pool creation and possibly other services since Friday. They had a largely separate issue for a web UI disruption (what this thread links to) which they forgot to close on Friday. They still have not provided any issue tracker entry for the service disruption, and it's possible they only learned about it from this Hacker News thread.
b) People are having various unrelated issues with services that they're mis-attributing to a global service disruption.
What exactly is your point?
...and I'm a happy GCP customer.
OK. So on AWS we were* paying for putting systems across regions, but honestly I don't get the point. When an entire region is down, what I have noticed is that all things are fucked globally on AWS. Feel free to pay double, but it seems* if you are paying that much, you should just pay for an additional cloud provider. Looks like it's the same deal on GCP.
Do you have an example on this?
That being said, I really do think there is a difference between who is working at Google today and the Google we all fell in love with pre-2008.
I am sure there are amazing people still working at Google, but nowhere near as many as there were.
The way I like to think about Google is that some amazing people made an awesome train that builds tracks in front of itself -- you could call them gods, maybe -- but those people are gone, or at least the critical mass required to build such a train has dwindled to dust. What we have left is an awesome train full of people pulling the many levers left behind.
To make things even worse, my last interview as an SRE left me wondering whether even the people who are there know this as well, and whether they are actually working hard to keep out those who might shine a light on it. I don't say that because I did not get the job -- I am actually happy I was not extended an offer.
I say this with one exception: the old-timer who was my last interviewer. I could tell he was dripping in knowledge and eager to share it with anyone who would listen. I came out of his 45-minute session having learned many things -- I would actually pay to work with a guy like that.
I would also like to point out that the on-call ethic was not what I expected. I was told that when on call, my duty was to figure out whether the root cause was in the segment I was responsible for. I don't know about you, but if my phone rings at night I am going to see things through to a resolution and understand the problem in full, even if it is not in the segment I was assigned.
At the same time, you ignored the massive complexity and size of Google today compared to what it was at the beginning.
This is voodoo organisational analysis.
That being said -- when you are on call -- dropping everything is exactly what is expected.
During my time on the GCE team (note I don't work at Google now) I knew multiple full-time Google employee support reps, including some still at the company. They have the good attitude and deep knowledge you'd hope for.
The problem is simply about how Google scales their GCP support org. To be completely clear, AWS support is by and large not great either.
If you're a big or strategically important customer, of course, you can get a good response from either company.
20% of my support experiences are amazing.
Fortunately, I don't require decent support to keep my service running. My sales rep tells me that he's aware of the problem.
I speculate it's simply the result of GCP trying to grow the org very quickly.
Perhaps if you explained it on a whiteboard...
From my personal experience, I think the first two levels of support staff at all the big cloud providers are no good unless it's an obviously dumb mistake on your part. I always prefer to forgo support and go through every bit of their documentation to figure things out on our own. This saves a huge amount of time. But if you have developer support, it can help expedite things a little.
That's my favorite.
- Someone at Google right now, probably.
(I work at Google, on GKE, though I am not a lawyer and thus don't work on the deprecation policy)
for any reason
at any time
> 7.1 Discontinuance of Services. Subject to Section 7.2, Google may discontinue
> any Services or any portion or feature for any reason at any time without
> liability to Customer.
Let's take a look at Section 7.2:
> 7.2 Deprecation Policy. Google will announce if it intends to discontinue or
> make backwards incompatible changes to the Services specified at the URL in
> the next sentence. Google will use commercially reasonable efforts to continue
> to operate those Services versions and features identified at
> https://cloud.google.com/terms/deprecation without these changes for at least
> one year after that announcement, unless (as Google determines in its
> reasonable good faith judgment):
> (i) required by law or third party relationship (including if there is a change
> in applicable law or relationship), or
> (ii) doing so could create a security risk or substantial economic or material
> technical burden.
> The above policy is the "Deprecation Policy."
To me that looks like a reasonable deprecation policy.
It might be, until they jack up the prices 15x with limited notice (looking at you, Google Maps). No deprecation needed; just force users off the platform unless they're willing to pay a massive premium.
The fact that they're all Google makes reputation damage bleed across meaningfully different parts of what's in truth now a conglomerate under the umbrella name Google.
The bit of Maps Platform integration for managing the billing and API layer was called out in the announcement blog specifically as an integration with the console. The docs and other branding around Maps Platform remain distinct from GCP, in ways subtle enough that Googlers pay more attention to them than everyone else: hosting the docs on developers.google.com instead of cloud.google.com, and having "Platform" in its name separately from "Cloud Platform".
This stuff makes sense to Googlers not only because of the org chart but also because Google has a pretty unified API layer technology and because Google put in a lot of work to unify billing tech & management. Reusing that is efficient but not always clear.
But you're right to be confused. Their branding is a mess and always has been. This is the same company that thought Google Play Books makes sense as a product name.
Google's product / PR / comms / exec people are very bad at understanding how external people who don't know Google's org chart and internal tech will perceive these things, or at least bad at prioritizing those concerns.
They live and breathe their corporate internals too much to realize this. Some Google engineers and tech writers realize the confusion but pick other battles to fight instead (like making good quality products).
They do at least document which services are subjected to the GCP Deprecation Policy (Maps is not there): https://cloud.google.com/terms/deprecation
As for what products are actually part of GCP: it's the parts of this page that aren't an external partner's brand name, aren't called out separately like G Suite or Cloud Identity or Cloud Search, and aren't purely open source projects like Knative and Istio (as opposed to the productized versions within GCP), with the caveat that how far Google acquisitions like Apigee, Firebase, and Stackdriver have so far been integrated into GCP varies with per-company specifics: https://cloud.google.com/products/
G Suite and Cloud Identity accounts can be used with GCP, just like any other Google accounts. They are part of Google Cloud but not Google Cloud Platform.
Hope I waded through the mess correctly for you. :)
If they ever do deprecate something people have built on though they're gonna get absolutely crucified. That's probably better protection than any terms of service.
They do this all the time, and they get crucified every time. I built a Google Hangout App and a Chrome App, both of which were platforms eventually shut down.
This is where the meme came from, and it's why I personally stopped building on top of Google products. A 1-year deprecation policy is no assurance to me if I plan for my app to live longer than that.
By the way, the GCP terms are so full of loopholes where Google can get out of its obligations that it's laughable. So it's not even that clear-cut that GCP is really a better alternative.
And even when it turns out to be legally sound, when stuff like this happens, who's going to sue Google over it? Nobody, and they know it.
But as I say in another comment, the contract is less important than both trust and reality. Keep in mind nobody focuses on how AWS doesn't even have a public deprecation policy.
I'm right there with many people in this thread in agreeing that Google has a trust problem, due mostly to real perception issues stemming from Google's habits outside GCP, which can and do impact people's perceptions of what they'll do with GCP.
The reality of what Google has done and will do with GCP, though, is pretty good. Sure they do sometimes deprecate things in ways Amazon never would. But not nearly as often or as abruptly as they do on the consumer side - that would be commercial suicide - and they do other things better than Amazon. Tradeoffs.
No. It's just words. Actions speak louder than words. Google's actions in the last couple of days spoke pretty loudly. No amount of words will change that.
Are you working for Google PR or something?
I'm still a fan of GCP as a suite of products and services, as much as I recognize many of Google's organizational failings and disagree with plenty of their product decisions in other areas of Google.
Google (including GCP) has been bad at external communication as long as I've paid attention, and that includes external communications around incidents. What actions are you referring to, beyond poor and confusing communication (i.e. words) around what is or isn't broken or fixed at what points during the incident? That's most of the problem I'm aware of from this incident.
With that said, part of the reason people notice GCP's outages more than AWS's is that GCP publicly notes their outages way more than AWS does. In other words, among the outages that either cloud has, Google much more often creates an incident on their public status page and Amazon much more often fails to.
My "reality of [...] GCP" comment was about the bigger picture of the cloud platform offering, not any one specific incident.
If a service Google runs is losing money, what reason would they have to not shut it down?
If I pay you for a service that would take time to migrate off of, and you are making money off me now, I am going to be ripshit if you decide to just turn it off because it's suddenly not making money for you in the short term. Google's done this a lot, and the fact that don't provide concrete time lines in their contract gives even less reason to trust them
People look at AWS's track record, and trust that. People look at Google's track record, overlook what to an inside-the-company Googler perspective are dramatically significant organizational boundaries or product lifecycle definitions that are very poorly communicated outside the company, mentally apply reputational damage from one part of Google (or from a preview-stage GCP product) to a different part of the company (or to a generally available GCP product), and don't trust that.
Google has always been worse at externally facing PR than at the internal reality, even when I worked there (2011-2015). Major company weakness.
But the internal reality inside GCP, perceptions aside, is pretty good even now.
If it's costing them money, they haven't yet figured out a model that works in their favour.
Which is what the deprecation policy is for. (I mean, I share your frustration with Google's apparently haphazard policy of shutting down services instead of trying to gain traction. But let's not misrepresent what they say.)
I don't think it's wrong - they can deprecate any service they want to do whenever they want, unless people have paid for and signed a contract that says otherwise which I guess people aren't doing.
But the policy doesn't really guarantee anything at all does it, due to the reference escape-hatches? It might as well not exist?
"Subject to the deprecation policy [which says that Google will give at least 1 year notice before cancelling services], Google may discontinue..."
In other words, at any time, google can give you a years notice.
(I work at Google, but am not a lawyer and this isn't official in any capacity).
Please don't selectively quote things out of context to give a misleading impression.
> commercially reasonable
> substantial economic or material technical burden
Is one engineer working on an old service to keep it alive commercially reasonable or a substantial burden? I don't know. Do you?
In practice, this policy lets them shut off anything they want, any time they want. Again, it's their playground; they can do what they want unless they signed a contract saying they'd do something else for you, so I don't have a problem with it.
To be clear, that policy is a contract. And those things would be decided by a jury. And if my understanding is correct, the reasonable person standard applies. So you can answer this yourself, do you think a reasonable person would believe that your interpretation is valid?
If not, why mention it?
Because it makes more people feel comfortable enough to use your services and pay you, without actually binding you towards any sort of behavior that would cost you money.
There's a direct financial incentive here to use legalese to give the semblance of reliability without having to deliver on it
Caveat emptor, folks.
b. To the Agreement:
Google may make changes to this Agreement, including pricing (and any linked documents) from time to time.
Google will provide at least 90 days’ advance notice for materially adverse changes to any SLAs by either: (i) sending an email to Customer’s primary point of contact; (ii) posting a notice in the Admin Console; or (iii) posting a notice to the applicable SLA webpage. If Customer does not agree to the revised Agreement, please stop using the Services. Google will post any modification to this Agreement to the Terms URL.
Note how I never stated the inference. This is because I wanted to share a way of thinking without feeling the responsibility to reply to people attempting to force me to prove some prescriptive, arbitrary inference rule by exhaustion. I do not participate in such practices casually. I also consider it rude to subject people to such practices without consent. I also believe it is a practice that kills online discussion platforms. See this community’s thought provoking guidelines :)
> Please respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize. Assume good faith.
It looks like the UI issue was actually fixed, and that we just didn't update the status dashboard correctly. But we're double checking that and looking into some of the additional things you all have reported here.
As another comment pointed out, what's the point of having so many zones and redundancy around the globe if such a global failure can still happen? I thought the "cloud" was supposed to make this kind of failure impossible.
Never mind, S3 has regions; it's the bucket names that are global.
If you look at the docs now, new buckets are regionalized and the region is in the URL for non-us-east-1 regions.
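For illustration, the URL shapes look roughly like this (bucket and key names are made up):

    # legacy global-style URL (us-east-1):
    https://my-bucket.s3.amazonaws.com/my-key
    # region-qualified URL:
    https://my-bucket.s3.us-west-2.amazonaws.com/my-key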
I've been creating GCP instances in us-central1-a and us-central1-c today without issue. Which zone were you using in NA?
I have been noticing unusual restarts, but I haven't been able to pin down the cause yet (may be my software and not GCP itself).
You have to remember that you're trying to have access to backend platforms and infrastructure at all times, which almost no public utility does (assuming "the cloud" is "public utility computing"). Power plants go into partial shutdown, water treatment plants stop processing, etc. Utilities are only designed to provide constant reliability for the last mile.
If there's a problem with your power company, they can redirect power from another part of the grid to service customers. But some part of your power company is just... down. Luckily you have no need to operate on all parts of the grid at all times, so you don't notice it's down. But failure will still happen.
Your main concern should be the reliability of the last mile. Getting away from managing infrastructure yourself is the first step in that equation. AppEngine and FaaS should be the only computing resources you use, and only object storage and databases for managing data. This will get you closer to public utility-like computing.
But there's no way to get truly reliable computing today. We would all need to use edge computing, and that means leaning heavily on ISPs and content provider networks. Every cloud computing provider is looking into this right now, but considering who actually owns the last mile, I don't think we're going to see edge computing "take over" for at least a decade.
If set up properly, yeah. But it's not a perfect world.
People who respond here could be employees of Google who care about it and respond because they know about it.
What he can say (that a lot of people are working on it) is what you'd expect to hear when something is down. All the other cloud providers do the same.
edit: The posted incident is also only about the UI issue, and there's no issue tracker entry for the broader non-UI disruptions going on since Friday.
There is a reason why Google has been having a hard time making inroads in the enterprise cloud. There's a kind of impedance mismatch between enterprise and the Google style. That two-stories-high "We heart API" sign on the Google Enterprise building facing 237 just screams it :)
Why do you guys suffer global outages? This is your 2nd major global outage in less than 5 years. I’m sorry to say this, but it is the equivalent of going bankrupt from a trust perspective. I need to see some blog posts about how you guys are rethinking whatever design can lead to this - twice - or you are never getting a cent of money under my control. You have the most feature rich cloud (particularly your networking products), but down time like this is unacceptable.
The problem is that running a global SDN like this means if you do something wrong, you can have outages that impact multiple regions simultaneously.
This is why AWS has strict regional isolation and will never create cross-region dependencies (outside of some truly global services like IAM and Route 53 that have sufficient redundancy that they should (hopefully) never go down).
Disclaimer: I work for AWS, but my opinions are my own.
Disclaimer: Google employee in Ads who has worked on many, many fires throughout the years, but speaking from my personal perspective and not for my employer. I am sure we strive for zero, but realistically I have seen enough to say that things happen. Learn, and improve.
Then there was the global load balancer outage in July.
Looking though the incident history, there were essentially monthly multi-region or global service disruptions of various services.
(Disclosure: I worked for Google, including GCP, for a few years ending in 2015. I don't work or speak for them now and have no inside info on this outage.)
GKE being the exception, since it was launched a couple years before EKS. AWS clearly has way more services, and the features are way deeper than GCP.
Just compare virtual machines and managed databases, AWS has about 2-3x more types of VMs (VMs with more than 4TB of RAM, FPGAs, AMD Epyc, etc.), and in databases, more than just MySQL and PostgreSQL. When you start looking at features you get features that you just can't get in GCP, like 16 read-replicas, point in time recovery, backtrack, etc.
Disclaimer: I work for AWS but my opinions are my own.
Some of GCP's unique compelling features include live VM migration that makes it less relevant when a host has to reboot, the new life that has recently been put into Google App Engine (both flexible environment and the second generation standard environment runtimes), the global load balancer with a single IP and no pre-warming, and Cloud Spanner.
In terms of feature coverage breadth I started my previous comment by agreeing that AWS was ahead, and I still reaffirm that.
But if you randomly select a feature that they both have to a level which purports to meet a given customer requirement, the GCP offering will frequently have advantages over the AWS equivalent.
Examples besides GKE: BigQuery is better regarded than Amazon Redshift, with less maintenance hassle. And EC2 instance, disk, and network performance is way more variable than GCE which generally delivers what it promises.
One bit of praise for AWS: when Amazon does document something, the doc is easier to find and understand, and one is less likely to find something out of date in a way that doesn't work. But GCP is more likely to have documented the thing in the first place, especially in the case of system-imposed limits.
To be clear, I want there to be three or four competitive and widely used cloud options. I just think GCP is now often the best of the major players in the cases where its scope meets customer needs.
Disk and network performance is extremely consistent with AWS so long as you use newer instance types and storage types. You can't reasonably compare the old EBS magnetic storage to the newer general purpose SSD and provisioned IOPS volume types, and likewise, newer instances get consistent non-blocking 25gbps network performance.
I'm not so sure I would praise our documentation; it is one of the areas that I wish we were better at. Some of the less used services and features don't have excellent documentation, and in some cases you really have to figure it out on your own.
GCP is a pretty nice system overall, but most of the time when I see comparisons, when GCP looks better it's because the person making the comparison is comparing the AWS they remember from 5-6 years ago with the GCP of today, which would be like comparing the GAE of 2012 with today's.
You're right that Athena seems like the current competitor to BigQuery. This is one of those things that are easy to overlook when people made the comparison as recently as a couple of years ago (before Athena was introduced) and Redshift vs BigQuery is still often the comparison people make. This is where Amazon's branding is confusing to the customer: so many similar but slightly different product niches, filled at different times by entirely different products with entirely unrelated names.
When adding features, GCP would usually fill an adjacent niche like "serverless Redshift" by adding a serverless mode to the existing product, or something like that, so the behavior would be mostly similar. Harder to overlook and less risky to try.
Meanwhile, when Athena was introduced, people who had compared Redshift and BigQuery and ruled out the former as too much hassle said "ah, GCP made Amazon introduce a serverless Redshift. But it's built on totally different technology. I wonder if it will be one of the good AWS products instead of the bad ones." (Yes, bad ones exist. Amazon WorkMail is under the AWS umbrella but basically ignored, to give one example.)
And then they go back to the rest of their day, since moving products (whether from Redshift or BigQuery) to Athena would not be worth the transition cost, and forget about Athena entirely.
On the disk/network question, no I didn't see performance problems with provisioned IOPS volume types, but that doesn't matter: for GCE's equivalent of EBS magnetic storage, they do indeed give what they promise, at way less cost than their premium disk types. There's no reason it isn't a fair comparison.
And for the "instance" part of my EC2 performance comment, I was referring to a noisy neighbor problem where sometimes a newly created instance would have much worse CPU performance than promised and so sometimes delete and recreate was the solution. GCE does a much better job at ensuring the promised CPUs.
I'm glad AWS and GCP have lots of features, improve all the time, and copy each other when warranted. But I don't think the general thrust of my comparison has gone invalid, even if my recent data is more skewed toward GCP and my AWS data is skewed toward 2-3 years old. Only the specifics have changed (and the feature gap narrowed with respect to important features).
Most of what you can read of Google's approach will teach you their ideal computing environment is a single planetary resource, pushing any natural segmentation and partitioning out of view.
It's the opposite really: the expectation that service providers have no unexpected downtime is unrealistic, and it's strange this idea persists.
I agree, in general, outages are almost inevitable, but global outages shouldn't occur. It suggests at least a couple of things:
1) Bad software deployments, without proper validation. A message elsewhere in this HN post suggests that problems have been occurring for at least 5 days, which makes me think this is the most likely situation. If so, presumably rolling back isn't an option this many days into the issue. That doesn't say good things about their testing or deployment stories, and possibly their monitoring of the product. Even if the deployment validation processes failed to catch it, you'd really hope alarming would have.
2) Regions aren't isolated from each other. Cross-region dependencies are bad, for all sorts of obvious reasons.
Are software development and release processes improving to mitigate these outages? We don't know. You have to trust the marketing. Will regions ever be fully isolated? We don't know. Will AWS IAM and console ever not be global services? We don't know.
Blah blah blah "We'll do better in the future". Right. Sure. Some service credits will get handed out and everyone will forget until the next outage.
Disclaimer: Not a software engineer, but have worked in ops most of my career. You will have downtime, I assure you. It is unavoidable, even at global scale. You will never abstract and silo everything per region.
> Facebook Platform Appears to be down
> A check of https://developers.facebook.com/status/dashboard/ returns an error and I'm unable to login with facebook to some of my mobile apps.
If you're stuck implementing a suboptimal solution, that's not your fault, and not the intent of my above comment.
Nothing to see here, move along.
Edit: I saw your point a bit late. It was limited to GKE, which makes my initial comment about "service" incorrect, and it was global, which keeps my comment about "region" correct. On a related note, an SRE from GKE posted on Slack that GCE was out of resources and so GKE faced resource exhaustion as well  - so it _might_ have been a multi-service outage.
Affected customers can use gcloud command  in order to create new Node Pools.  https://cloud.google.com/sdk/gcloud/reference/container/node...
That led me to believe that only the web UI was affected.
No one would ever ask why you chose AWS. The old “no one ever got fired for buying IBM”.
Even if you chose Azure because you're a Microsoft shop, no one would question your choice of MS. Besides, MS is known for their enterprise support.
From a developer/architect standpoint, I’ve been focused the last year on learning everything I could about AWS and chose a company that fully embraced it. AWS experience is much more marketable than GCP. It’s more popular than Azure too, but there are plenty of MS shops around that are using Azure.
- Security posture. Project Zero is class leading, and there's absolutely a "fear-based" component there, with the open question of when Project Zero discovers a new exploit, who will they share it with before going public? The upcoming Security Command Center product looks miles ahead of the disparate and poorly integrated solutions AWS or Azure offers.
- Cost. Apples to apples, GCP is cheaper than any other cloud platform. Combine that with easy-to-use models like preemptible instances which can reduce costs further; deploying a similar strategy to AWS takes substantially more engineering effort.
- Class leading software talent. Google is proven to be on the forefront of new CS research, then pivoting that into products that software companies depend on; you can look all the way back to BigQuery, their AI work, or more recently in Spanner or Kubernetes.
- GKE. It's miles ahead of the competition. If you're on Kubernetes and it's not on GKE, then you've got legacy reasons for being where you're at.
Plenty of great reasons. Reliability is just one factor in the equation, and GCP definitely isn't that far behind AWS. We have really short memories as humans, but too soon we seem to forget Azure's global outage just a couple months ago due to a weather issue at one datacenter, or AWS's massive us-east-1 S3 outage caused by a human incorrectly entering a command. Shit happens, and it's alright. As humans, we're all learning, and as long as we learn from this and we get better then that's what matters.
Or you have legitimate reasons for running on your own hardware, e.g. compliance or locality (I work at SAP's internal cloud and we have way more regions than the hyperscalers because our customers want to have their data stay in their own country).
But, whether it is right or not, as an architect/manager, etc, you have to think about what’s not just best technically. You also have to manage your reputational risks if things go south and less selfishly, how quickly can you find someone with the relevant experience.
From a reputation standpoint, even if AWS and GCP have the same reliability, no one will blame you if AWS goes down if you followed best practices. If a global outage of an AWS resource went down, you’re in the same boat as a ton of other people. If everyone else was up and running fine but you weren’t because you were on the distant third cloud provider, you don’t have as much coverage.
I went out on a limb and chose HashiCorp's Nomad as the basis of a make-or-break project I was the dev lead/architect for, hoping like hell things didn't go south, because the first thing people were going to ask me was why I chose it. No one had heard of Nomad, but I needed a "distributed cron" type system that could run anything, and it was on-prem. It was the right decision, but I took a chance.
From a staffing standpoint, you can throw a brick and hit someone who at least thinks they know something about AWS or Azure; GCP, not so much.
It’s not about which company is technically better, but I didn’t want to ignore your technical arguments...
Native integration with G-Suite as an identity provider. Unified permissions modeling from the IDP, to work apps like email/Drive, to cloud resources, all the way into Kubernetes IAM.
You can also do this with AWS - use a third party identity provider and map them to native IAM user and roles.
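A rough sketch of that setup via the AWS CLI (the provider name, role name, and file paths are made up for illustration):

    # register the external IdP's SAML metadata with IAM
    aws iam create-saml-provider --name GSuite \
        --saml-metadata-document file://gsuite-metadata.xml
    # create a role that federated users can assume
    aws iam create-role --role-name FederatedDevs \
        --assume-role-policy-document file://saml-trust-policy.json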
Cost. Apples to apples, GCP is cheaper than any other cloud platform. Combine that with easy-to-use models like preemptible instances which can reduce costs further; deploying a similar strategy to AWS takes substantially more engineering effort.
The equivalent would be spot instances on AWS.
From what (little) I know about preemptible instances, it seems kind of random when they get reclaimed, but Google tries to be fair about it. The analogous thing on AWS would be spot instances, where you set the amount you want to pay.
Class leading software talent. Google is proven to be on the forefront of new CS research, then pivoting that into products that software companies depend on; you can look all the way back to BigQuery, their AI work, or more recently in Spanner or Kubernetes.
All of the cloud providers have managed Kubernetes.
As for BigQuery, the equivalent would be Redshift.
Reliability is just one factor in the equation, and GCP definitely isn't that far behind AWS
Things happen. I never made an argument about reliability.
They're equivalent in the sense that you have nodes that can die at any time, but it's more complicated than that. You could technically get a much lower cost on AWS by aggressively bidding low, but we've had a few instances where the node only lived a few minutes.
Preemptible nodes live at most 24h, and from our stats they really do live around that long. I think the lowest we've had was a node dying after 22h.
You also save out of the box, because they apply a sustained-use discount when your instance runs for a certain number of hours.
You can get an even bigger discount by agreeing to committed use, which you pay for per month instead of one-shot, unlike AWS.
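If it helps, creating a preemptible VM is a one-flag change (instance name, zone, and machine type below are placeholders):

    # same as a regular instance, plus --preemptible; it can be reclaimed
    # at any time and lives at most 24 hours
    gcloud compute instances create batch-worker-1 \
        --zone us-central1-b \
        --machine-type n1-standard-4 \
        --preemptible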
I'm going to add a few more reasons to the above reply:
- The UI and CLI are so much better in GCP
I don't have to switch between 20 regions to see my instances/resources. From one screen, I can see them all and filter however I like (see the sketch after this list).
- GCP encourages creating different projects with the same billing applied
It's doable in AWS too, of course, but coupled with the fact that you have different projects and regions, and you can't see all instances of a project at once, it makes for a super bad experience.
- Networks are so much better in GCP
Out of the box, your regions are connected and have their own CIDR. Doing that in AWS is complicated.
- BigQuery integration is really good
A lot of logs and analytics can be exported to BigQuery, such as billings, or storage access. Coupled with Data Studio and you have non technical people doing dashboards.
- Kubernetes inside GCP is a lot better than AWS'
- Firewall rules > EC2 Security Group
- A lot of small quality of life that makes the experience a lot better overall
... like automatically managing SSH keys for instances, instead of having a master ssh key and sharing that.
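A taste of what that looks like in practice; a minimal sketch, assuming instances named web-1, web-2, etc. (names and zone are placeholders):

    # one view across every region and zone in the project
    gcloud compute instances list --filter="name~^web-"
    # SSH in without hand-managing key distribution
    gcloud compute ssh web-1 --zone us-central1-a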
Here's the thing though, a lot of GCP can be replicated, just like what you linked for the identity provider. With GCP, there's a lot of stuff out of the box -- so dev and ops can focus on the important stuff.
Overall, AWS is just a confusing mess and offers a very bad UX. Moving to GCP was the best move we've made.
Moved for bizdev reasons, and really appreciated the improved quality of life.
"Cloud" is not a thing one buys and one's reputation has nothing to do with the reliability of the services consumed, but the reliability of the services provided.
To put it more succinctly, "you own your availability".
In the end, "cloud" is a commodity and all cloud providers are trying to get vendor lock-in. My goal as a manager is not to couple my business revenue linearly to any particular product or service.
Cloud is only an interchangeable commodity if you’re treating it like an overpriced colo and not using it to save costs on staff, maintenance, and helping deliver product faster.
Doable, but it’s a hell of a lot of hassle and that CapEx is huge for a startup.
I’d go bare metal in a second for any kind of cost conscious business that needed scale and had established revenue.
GCP can be a fair bit cheaper than AWS and Azure for certain workloads. Raw compute/memory is about the same. Storage can make a big difference: GCP persistent SSD costs a bit more than AWS GP2 with much better performance, and is way cheaper than IO2. Local SSD is also way, way cheaper than I2 instances.
Most folks deploying distributed data stores that need guaranteed performance are using local disk, so this can be a really big deal.
However, I could see doing a multicloud solution where I took advantage of the price difference for one project.
There are lots of little things to like about GCP that are superior to AWS. Network IO, some of the bigdata products. Not having to deal with IAM. In the end it would be some combination of those things that should drive the decision. Basic enterprise IT shops moving to "cloud" should choose AWS 90% of the time.
Anyone starting from scratch on kubernetes or considering shifting all of their infrastructure to it should absolutely choose GKE. Anyone currently in EKS or AKS should sign up for GCP today and evaluate the differences to see what they're missing.
It's not like 5 years ago when everyone was ramping up their offerings with a yearly price drop and a new generation.
The AWS console is wildly inconsistent. I'll give you that. But any projects I am doing are usually defined by a CloudFormation template, and I can see all of the related resources by looking at the stack that was run.
Theoretically, you could use the stack price estimator, I haven’t tried it though.
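That stack view is scriptable too; something like this (the stack name is a placeholder):

    # list every resource created by a given CloudFormation stack
    aws cloudformation describe-stack-resources --stack-name my-project-stack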
>We will provide more information by Monday, 2018-11-12 11:00 US/Pacific.
Wait, did the people tasked with fixing this just take the weekend off?
My working assumption is that 18006 should have closed out 18005. But now it sounds like there's a different issue, which we're working to get to the bottom of.
And this is likely a major incident with significant customer impact.
The way Google is handling all this gives a pretty poor impression. It seems like this Kubernetes offering is just a PoC.
The person updating that status dashboard may or may not be an engineer, the IM certainly is.
That's a "yes, it's still being investigated"?
You may feel that's a bad decision, but I doubt people are in a panic because they can't push out an update that would not be noticeably different from the last one.
Just to clarify, what should this update contain?
"We're working on it", possibly an ETA for a fix or some details. Technically it's fluff but people are not machines and the update is for people. We like the feeling that people are working on a fix, that people care and that the end is in sight. It makes the situation less stressful and, as for why Google should care, less stressed engineers won't bad mouth Google as much after the fact.
What's the better communication plan: detailed, hourly updates or terse, one-line blog posts scattered across several days?
And I'm confused about what was good about that response. That article is about how the s3 outage caused so many issues that Amazon couldn't update their status dashboard to inform users at all.
Let's hope you don't have a life threatening medical emergency that can't wait near an affected healthcare facility while that silly software is down.
If your ability to operate an ER is dependent on a remote data center, you have no business being a public health provider.
Am I the only one that finds this slightly humorous?
It's likely the fix is checked in and will start roll out on Monday.
Disclaimer: I work on Google Cloud, and while I believe we could use more words here, this doesn't seem like a huge problem. It's embarrassing that the issue with the UI was shipped, and I'm sure the post mortem will address this, as well as whether it could have been mitigated faster than a roll-forward.
Based on comments in this thread, even gcloud is failing, and so are other non-Kubernetes services. That may be inaccurate, but a lot of people are saying the same thing, so maybe it's real.
You're right, however, that the linked issue is only about the UI. So Google isn't even tracking the service disruption issue in its issue tracker, much less updating people on it. I personally think that's even worse...
Looks like this time Mary took the whole week off without telling Josephine :)