Hacker News
Google Cloud Platform (googlecloudplatform.blogspot.com)
82 points by luu on Sept 27, 2015 | 60 comments

Less than an hour ago I got 10+ alerts that my monitoring agents on GCE machines were unreachable for 5 minutes and networking was down, again. I see this at least once a month; separately, Cloud SQL has weird outages about once or twice a month. I also run some real hardware machines in a datacenter and have none of these problems. I am not impressed with Google's cloud.

Sorry to hear that; which zone are you in? I don't see any notice of an outage, but I'd be happy to investigate for you (I work on Compute Engine).

To your other issue with Cloud SQL, there is sadly a mandatory downtime with every release they do (often weekly). The team is working on it and knows it's unacceptable for serving workloads.

Edit: Here's a link to the maintenance note in the FAQ for reference - https://cloud.google.com/sql/faq#maintenancerestart

Very strange, you should be able to observe these in almost any monitor. I see this in multiple projects/networks.

Today's outage was in us-central1-a. I got alerts from multiple agents (New Relic, Zabbix, and an SSH monitor), so I don't think it can be blamed on the tools or anything like that. It happened in two bursts; this is the first one: http://imgur.com/XdzuJ2r with messages from both Zabbix and New Relic (the time zone in the emails is EEST). These are all different machines, no duplication.

This is what I get for not being in SRE (I don't get paged, I just poll our incident lists, etc.): there was definitely a blip at that time, people are on it right now looking into the root cause (but it was automatically remediated within just a couple minutes as you saw). Regardless, sorry for the blip and for being wrong about the all clear.

Google's not alone in this; we got a lot of unexplained, seemingly random outages when trying Azure. It ended up being so unpredictably unreliable that we found it unusable for anything important, not to mention they never updated their status page.

Just remember: when you contract something out to a third party, you're subject not to their SLA but to their general level of competence. That can be quite a bit lower than the SLA, even if they are high profile or expensive. If you're lucky they'll bung you a discount as compensation, but don't rely on that.

I speak from experience with Google, AWS, and Rackspace.

Wow. Kind of surprised such a borderline-desperate sounding post was allowed on the official Google blog.

I basically read this as "we're so technically superior in several ways ... wtf is everyone still using AWS instead of us?"

It reminds me of the attitude of one Google engineer I know, which is basically Google is such an amazing place, with such visionary leaders, and across-the-board smart engineers, that they can wander into any domain they want, and just assume they'll succeed/print money like AdWords.

I'm paraphrasing and also probably putting words into his mouth, so apologies for phrasing it in the extreme to make a point.

Part of me wishes it were that way: as long as you had the smartest people + the biggest war chest (which they basically do), victory would be pre-determined. But capitalism has a way, if anything, of handicapping the existing giants, and you can't just wander into a field "because we're really smart" and assume you'll win.

</armchair business insight>. Feel free to correct my assumptions/biases.

disclaimer: I am the author, and work at Google Cloud Platform

Stephen, thanks for the direct feedback, seriously! We weren't shooting for desperate; if anything I had a bit of geek pride in my voice when saying it at our GCP Next events, and then writing it up for this post. As I've been helping folks onboard and understand the platform, I found myself explaining these facts pretty frequently, so I figured having them written down might be helpful for others. Any suggestions on what kinds of content you'd like to see us deliver? Things you'd want to know? I'm game!

# Why I think it was bad

Saying "we're the better cloud" is a frail argument. Both GCP and AWS have strengths and weaknesses; people know that. Saying "we're the best" feels simplistic (and insincere).

Second, some of the things you mention (live migration, scalability, etc.) are things that AWS also provides.

# What is good

The last part of your post, "So, how can you optimize", was good. It wasn't bashing AWS; it was general, sound advice on how to build things for a cloud platform. More of that would have made for a better post.

Nearline is another good offering. You got great publicity on it when it was released recently. I think many felt it had better and simpler pricing, and a better structure for retrieval, than Glacier. That's what you should focus on.

There's no need to criticize your competitors or argue that you're better than them in every dimension. Instead, build good products and let the customers decide.

disclaimer: I'm the OP and work for GCP

Sandstrom, thanks for the detailed feedback. Spending most of my day talking with customers, I've found that many, many folks don't really know those strengths and weaknesses very well. This post was written really in direct response to questions we get every day about "why should I choose GCP?"

A great example: in your comment you describe AWS as having live migration, which it does not. It's exactly this kind of misinformation that we were aiming to clear up.

I hear you loud and clear on more technical guidance; my team has been building http://cloud.google.com/solutions for the last year, let me know if that helps or if you have any ideas on what we should build next!

Thanks again!

Yes, you are right regarding live migration. I thought that's what they used to avoid reboots for one of the Xen vulnerabilities earlier this year, but it wasn't; it was a hot patch of some sort. Thanks for clarifying.

Style in writing is about standpoint. Where are you coming from?

Leading with:

> Google Cloud Platform delivers the industry's best technical and differentiated features

Is clearly the standpoint of someone trying to convince you of something and the tone certainly put my hackles up. It's often more effective [1] to adopt the position of someone sharing what they find most interesting about a topic.

Instead of:

> How can a cloud platform be both Better AND Cheaper?
> Easy: it turns out that the same technology ...

Maybe something like:

> What's fascinating is that the same technology that allows cloud platform to perform better for these workloads also allows it to be cheaper.

[1] http://www.amazon.com/Clear-Simple-Truth-Writing-Classic/dp/...

disclaimer: i'm the OP and work for GCP

That completely makes sense to me, and yet runs in mild conflict with the feedback below where at least some users want us to do a bit more "selling". I've heard numerous times from customers and prospects that they'd like for GCP folks to just "spell it out and tell me why I should pick you", so this was my take at doing exactly that.

I'm definitely trying to convince you of something, but I don't want my/our tone to have you come away with a negative impression, or for you to read it as anything other than an excited nerd happy about his tools. Thanks for the feedback!

You're backfilling your standpoint and it's working!

But yup, different forms for different audiences and this works well as supporting material for an investment in GCP.


Would a little fun autobiography from folks on my team give the context/backfill required for us to make sense while being a little assertive?

Thanks for the book link, ordered up!

> if anything I had a bit of geek pride in my voice

:-) Cool. It is impressive to beat AWS at a game that everyone assumes they are the best at.

> Things you'd want to know?

This is kind of weird... but no, not really. For my team, I think we're zealously happy with AWS, and it's their game to lose.

And "only" 40% cheaper / x% faster isn't enough, given the switching costs of moving everything over, as long as AWS remains good enough.

So, basically my "it doesn't really matter that you're cheaper" attitude is what made the connection to "a great business/adoption rate doesn't come solely from technical excellence".

Which, as a programmer/technologist, is frustrating. If I knew what it did come from, I'd be off doing that, instead of kibitzing on HN. :-)
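For what it's worth, the switching-cost point above ("only 40% cheaper isn't enough") can be made concrete with a back-of-the-envelope break-even calculation. Every number here is hypothetical, just for illustration:

```python
# Hypothetical break-even sketch for a cloud migration. All inputs
# (monthly bill, savings fraction, one-off migration cost) are made up;
# plug in your own numbers.

def months_to_break_even(monthly_bill, savings_fraction, migration_cost):
    """Months of savings needed to recoup a one-off migration cost."""
    monthly_savings = monthly_bill * savings_fraction
    return migration_cost / monthly_savings

# Example: a $20k/month bill, 40% cheaper elsewhere, but roughly two
# engineer-months (~$50k) of migration work.
print(months_to_break_even(20_000, 0.40, 50_000))  # 6.25 months
```

The interesting part is less the arithmetic than the risk term it leaves out: the payback period only matters if the destination stays "good enough" for that long.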

First off, I'm super happy that you're having a great experience in the cloud: given where we are in the overall adoption cycle for public cloud tech any win for cloud is a win for all of us.

> given the switching costs

Perhaps that's the crux of the question right there: what if it was trivially easy? What if you couldn't tell, beyond a different (lower) bill? Wouldn't it be awesome to run on several providers, load balance between them, and up your availability game by being redundant across providers? This is an area of real focus for my team; let me know if it's something of interest!

> what if it was trivially easy?

Yes, in theory that would be awesome...I think in practice, the limitations would be:

- Cross-provider latency for micro-services that are interdependent (e.g. right now we assume everything is in the same AWS region)

- API compatibility/codebase coupling, as our code is littered with "new AmazonXxxClient"

- Feature parity would have to be near-100%. E.g. we use the regular things like EC2, S3, Dynamo, RDS, but also IAM and Kinesis (...and, and...). It's not that "you just need IAM & Kinesis"; it's that any feature, current or future, without a direct counterpart across all providers makes going cross-provider harder.

- Increased devops complexity; different ways to get into machines, different places to look for logs, different places to look for metrics (CW), different logins, different admin UIs...

If your team is focusing on this, these are likely things you've already thought of...

I would not be surprised if "cross-provider" becomes like "cross-platform", technically possible but rarely done.

E.g. in the 90s cross-platform Microsoft/Apple/Linux (or today cross-iOS/Android/web) applications existed, if the programmers were very diligent, smart, and worked behind API abstractions that hid any platform-specific, platform-differentiating features.

...but in reality, 95% of programmers didn't, and were coupled to one platform or the other. And that was basically fine/just the way things were.
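The "API abstractions that hide platform-specific features" approach mentioned above can be sketched roughly like this. The provider adapters here are in-memory fakes; none of the class or method names correspond to real SDK calls, they just show the shape of the pattern:

```python
# Sketch of the cross-provider abstraction idea: application code talks
# to a neutral interface, and each cloud gets an adapter behind it.
# Both adapters below are in-memory fakes; a real one would wrap the
# provider's SDK (these names are illustrative, not actual SDK APIs).
from abc import ABC, abstractmethod

class ObjectStore(ABC):
    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...

    @abstractmethod
    def get(self, key: str) -> bytes: ...

class FakeS3Store(ObjectStore):
    def __init__(self):
        self._objects = {}
    def put(self, key, data):
        self._objects[key] = data
    def get(self, key):
        return self._objects[key]

class FakeGCSStore(ObjectStore):
    def __init__(self):
        self._blobs = {}
    def put(self, key, data):
        self._blobs[key] = data
    def get(self, key):
        return self._blobs[key]

def archive(store: ObjectStore, key: str, payload: bytes) -> bytes:
    # Application code never mentions a concrete provider.
    store.put(key, payload)
    return store.get(key)

for store in (FakeS3Store(), FakeGCSStore()):
    assert archive(store, "backup/db.dump", b"hello") == b"hello"
```

The catch, as the comment says, is exactly the discipline this requires: the abstraction only works if nobody reaches around it for a provider-specific feature.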

GCP would probably have been better off if the "AWS marketplace" had worked out...

E.g. instead of "GCP runs machines + offers PaaS databases/caches/etc." and "AWS runs machines + offers PaaS databases/caches/etc.", where both the "run machines" part and the PaaS offerings must be identical, it'd be much easier if cloud providers got out of the PaaS business.

That way both GCP and AWS just "run machines", and instead there is some sort of vendor / marketplace API where database vendor X can run in both AWS and GCP, and caching vendor Y can run in both AWS and GCP.

And so you use the db APIs and the cache APIs, with some sort of service discovery within each cloud, and ideally the management UIs for the db and cache are both hooked in the GCP/AWS consoles.

So, it's not "use Dynamo in AWS" vs. "use Bigtable in GCP" and they're kinda/sorta the same, it'd be "use (say) Voldemort in both" and they really are the same, and managed by the same vendor.

(And not "you start vendor X's AMIs/images in your account and then play sysadmin", but "you use vendor X's already-running/multi-tenant APIs from your app".)

That said, it's not in AWS's interest to let others provide the PaaS part of the machine in a generic way.

And, realistically, I don't think other (non-cloud) vendors could have successfully provided the PaaS offerings that AWS has, at the same multi-tenant scale. E.g. pre-AWS, all of the vendors (like a DB startup) were focused on "scale single-tenant solutions", and not "scale multi-tenant solutions", which is likely an order of magnitude or more harder. So it made sense for AWS to do it instead (plus the obvious benefit to them of more AWS lock in).

Although GCP has some distinguishing technical features like live migrations, based on my own independent availability monitoring, it hasn't been any more reliable than other cloud services including Azure and DigitalOcean. In fact, over the past year, EC2 has been much more reliable than GCE. The most common problems I've observed are network blips that may or may not get reported on the GCP status page.

https://cloudharmony.com/status-1year-for-google
https://cloudharmony.com/status-1year-for-aws
https://cloudharmony.com/status-1year-for-azure
https://cloudharmony.com/status-1year-for-digitalocean

I particularly love Cloud Platform's Nearline storage for personal media archival. Previously I was using Amazon's Glacier, but the interface was clunky and the load times were too slow. Cloud Platform's web UI, comparatively, has been an absolute joy to use at a roughly similar price point.

Note that while Nearline's pricing is identical to Glacier's, its response time is almost immediate, compared to ~4 hours for Glacier.

The pricing is actually no longer identical. Amazon's latest price refresh dropped Glacier below Nearline and introduced a new S3 storage class (alongside the likes of Reduced Redundancy Storage) called Infrequent Access that's a lot more like Nearline.

I like the new Infrequent Access, but I'd like to understand the tradeoffs better. What's the downside compared to regular S3?

It mentions 99.9% availability instead of 99.99%, but it would be useful to understand how that is manifested in reality, and what the underlying architectural difference is.

I tried to ask jeffbarr here[1], but got no answer.

[1] https://news.ycombinator.com/item?id=10230937

Sorry I missed that, I have been working 16+ hours per day to get ready for re:Invent.

Please take a break when you can!

disclaimer: I am OP and work at GCP.

Watch out for the minimum object size on Standard-IA: it's 128 KB, and if you write an object smaller than that they round up. It's little gotchas like this that can really burn you...
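To see how that rounding bites when you have lots of small objects, here's a quick sketch. The 128 KB threshold is the one from the comment; the helper name and the example sizes are just illustrative:

```python
# The Standard-IA minimum-object-size gotcha, numerically: objects
# smaller than 128 KB are billed as if they were 128 KB.

MIN_BILLABLE = 128 * 1024  # 128 KB minimum billable object size

def billable_bytes(object_size: int) -> int:
    """Bytes you are actually charged for, given the real object size."""
    return max(object_size, MIN_BILLABLE)

# A million 4 KB objects: ~3.8 GB of actual data, billed as ~122 GB.
actual = 1_000_000 * 4 * 1024
billed = 1_000_000 * billable_bytes(4 * 1024)
print(actual / 2**30, billed / 2**30)
```

In other words, a workload dominated by tiny objects can pay ~32x the "per GB" headline price.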

I really don't know how they do it, but hubiC from OVH is an order of magnitude cheaper. Works well for me with git-annex; fsck reports no data loss.

(Not affiliated except for being a customer for 1.5 years.)

Using both GCP and AWS, the one place GCP really stands out is Container Engine; it's much better than Elastic Beanstalk for deploying and managing apps.

Otherwise: Cloud Storage has a more limited API than S3 (S3 lets you query by start key, Cloud Storage only by prefix), Cloud SQL doesn't support Postgres (which is a necessity for geospatial apps), and there is something weird with their network that makes server-sent events unusable from Compute Engine instances.
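The start-key vs. prefix listing difference can be illustrated over a plain in-memory key list (no real S3/GCS calls are made here; the helper names are made up for the illustration):

```python
# Simulating the listing difference over an in-memory key list:
# a prefix query only matches keys that literally begin with the prefix,
# while a lexicographic "start key" can resume a listing anywhere,
# including mid-"directory".

keys = sorted([
    "logs/2015-09-25", "logs/2015-09-26", "logs/2015-09-27",
    "images/cat.png", "images/dog.png",
])

def list_by_prefix(keys, prefix):
    return [k for k in keys if k.startswith(prefix)]

def list_after_key(keys, start_after):
    # Everything strictly after this key, in sorted order.
    return [k for k in keys if k > start_after]

print(list_by_prefix(keys, "logs/"))
print(list_after_key(keys, "logs/2015-09-25"))  # resumes mid-listing
```

The practical consequence is pagination: with only prefix matching, resuming a huge listing from an arbitrary point takes extra work that a start-key API gives you for free.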

You should try EC2 Container Service. It's superior to Beanstalk.

Can we fix the title? The official title is "Google Cloud Platform delivers the industry's best technical and differentiated features". "Google Cloud Platform" is a meaninglessly vague title, especially considering the source is the official GCP blog.

That'd be like posting an article titled "Washington Post Delivering News to Best New Platforms" as simply "Washington Post". I realize that the original title is too sales-y, but if I knew it was marketing, I wouldn't have clicked.

I gave GCP a try recently because one of my workloads on DigitalOcean started to become unreasonably expensive. I use AWS S3 for object storage, but while their other offerings seem pretty powerful, they're more complicated than what I need. As a hobby-level user, I'm really liking what I see with GCP so far.

One GCP offering that I think deserves more attention is Managed VMs (part of App Engine). You provide your app + runtime in a Docker container, and App Engine will run that container in a Compute Engine VM and do a bunch of cool stuff like health checking, autoscaling, and log aggregation. It's limited in a lot of respects, but it's a basic cluster manager like Kubernetes or ECS, and easier to use for simple services IMO.

Google doesn't realize that this isn't why AWS is winning. AWS has better support, more services, more options and is easier to use.

I completely disagree with "easier to use". GCE's web UI is vastly superior, and their gcloud CLI is a pleasure to use. I like their networking/projects a lot better than VPCs/security groups.

AWS does seem to have fewer problems (excluding this past week), and they do offer more (and sometimes better) services.

Compared to GCP, perhaps, but AWS's support is pretty atrocious too. The real reason AWS wins is because they know how to sell platforms: as you said, it's about more services, more options, and yes, more _selling_. You can't just build the best components and wait for people to try them. You have to go listen to customers and propose how to build what they need using your platform. This is by far the biggest difference between GCP and AWS: the technical salesperson mindset.

disclaimer: I'm the original author, and a technical salesperson of sorts at GCP

What could we do to make this better? I'm super serious; I talk to customers every day, and we're working hard to deliver a platform (all of it - product, docs, support, sales, customer advocacy, OSS, etc) that exceeds expectations and delivers results. What could we do to impress you from a customer engagement standpoint?

Your experience with AWS support is generally correlated with how much you pay. After many years of grumbling, we're finally at the highest level and support is pretty great. As it should be, given the price.

You're exactly right about the "technical salesperson mindset". It's awesome to see new features come out several times a year that feel like a direct response to our feedback. Google is impressive, but I don't get the feeling they would be as responsive to customer requests.

"Your experience with AWS support is generally correlated with how much you pay"

Can't say I've found this to be the case. We spend a minor fortune each month, have premium support, and get really poor help. I thought mostly everyone agreed AWS support stinks.

Part of the problem, I've realized, is that we are not contacting support for trivial, common items. Rather, we engage with them on outages in their service (ahem, "elevated error rates"). On these and other similarly complex issues, their front-line support staff is ill-equipped, and they mostly resort to saying "I've escalated to the service team."

- better support

Need to prove this with some data

- more services - more options

Sure, because of earlier start.

- and is easier to use

How do you measure the ease of use? APIs? Documentation? Management UI?

> Sure, because of earlier start.

So? AWS will always have that head start. If GCP can't catch up to or beat AWS on features, or find some way to differentiate from AWS, that's always going to be an AWS advantage. Customers buy based on what's best for them, not by comparing what two services look like at the same point in their lifespan.

Quite big claims made in the post. AWS is the unquestioned standard at my workplace. I am inclined to give GCP a try. Do they have a migration doc?

One of the folks in Developer Relations made this side-by-side service "mapping": https://cloud.google.com/free-trial/docs/map-aws-google-clou... (some aren't really 1:1, but close enough). It's not a "migration doc", but it at least gives you a sense of what pieces to put together. Some services have really easy mappings (say S3 => GCS, which supports the S3 XML API except multipart uploads) while others have really obvious "conceptual" mappings (say EC2 => GCE).

Does that help?

That does give me a direction, thanks a lot!

I tried both Google Container Engine and Amazon EC2 Container Service recently for a Docker deployment. Google was the clear winner: they support private registries (Amazon requires you to use a third party or set it all up yourself), Google gives you a preconfigured, Debian-based base image so you don't have to do any setup or maintenance (with keys already exchanged that allow login with the gcloud command, no work required), and Kubernetes itself (which Google Container Engine uses for orchestration) is really nice. Also, Google Cloud Logging (currently free) just works out of the box with their container engine and streams your container's stdout to their logs. Kubernetes itself is open source, so you always have the option to move onto your own metal if your deployment grows to that size. Would highly recommend Google Container Engine.

Also check out Joyent, you can launch containers natively from your laptop on their cloud bare metal, no underlying instances for you to manage are required. Plus their 128m size container is only .003c/hr and is billed by the minute.

It's also worth noting that it's shared CPU prioritization, so you will get at least your allocation but can burst much higher for CPU-bound loads in practice... though I find that Joyent's storage pricing is pretty far out of line compared to GCE, AWS, and Azure, and that their pricing all around is a bit higher. That said, their technology stack for Docker is pretty damned cool.

> their 128m size container is only .003c/hr...

$0.003 / hr, 0.3c / hr


Somewhat off-topic, but what service do you guys recommend for someone who just wants a cheap server to play around with (maybe getting something like an rsync script or BTsync and a website running)?

Some people might say Digital Ocean or Linode or Ramnode or whatev, but I would go straight to AWS.

The 't2' class of servers is competitive with Digital Ocean in terms of pricing (roughly the same), especially if you do a 'reserved instance', and you get a ton of other benefits "for free", like a sane firewall and network, better monitoring tooling, snapshot tooling, and an easier path forward if you do need to expand to other things like databases or S3.

How do you figure? The t2.micro is ~$10 a month with 1 GB of transfer. The comparable ~$10/month plan at DO includes 2 TB of transfer. And Amazon doesn't have a $5/month plan, which is probably enough?

Apparently 2 TB out of a t2.micro comes to an additional charge of 0.09 * 1999 ≈ 180 USD. What am I missing? That you probably won't use the bandwidth because no one will use your hobby projects?
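That arithmetic, spelled out (rates as quoted in this thread, not current prices):

```python
# Reproducing the parent's arithmetic: a t2.micro's bundled transfer is
# tiny, so 2 TB of egress at the quoted $0.09/GB rate dwarfs the ~$10
# instance cost. Both figures are the ones quoted above, not current ones.

EGRESS_PER_GB = 0.09   # USD per GB, rate quoted in the thread
INCLUDED_GB = 1        # transfer bundled with the instance

def egress_cost(total_gb):
    """Egress charge beyond the bundled allowance, in USD."""
    return max(total_gb - INCLUDED_GB, 0) * EGRESS_PER_GB

print(egress_cost(2000))  # 179.91 USD on top of the instance price
```

So at flat-rate-transfer providers the bill is bounded, while metered egress makes a traffic spike an open-ended cost: that's the real asymmetry for hobby projects.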

OVH provides very cheap vServers starting at $3.49 / month in both France and Canada (the latter provides good connectivity to the US East Coast): https://www.ovh.com/us/vps/vps-ssd.xml

In Europe, also Hetzner is a good choice with good connectivity and vServers starting at EUR 3.90 / month: https://www.hetzner.de/us/hosting/produkte_vserver/cx10

I would not host my production-quality business on these machines, but for playing around they work very well. And BTsync should not be a problem, I have BTsync running on a Cubietruck and it works flawlessly.

I don't think there's anything wrong with Digital Ocean, but other options are virtual servers from OVH, leaseweb, ramnode.

I haven't tried Hetzner's VPS/cloud offering, but their budget "used" servers are nice if you want to play with your own hardware. People often forget how much overhead virtualization can end up having due to disk contention, and especially in the network layer/stack. A dedicated server is more expensive than a (low-end) VPS, but can be more fun! :-)


While the listings are mostly self-explanatory, a couple of points: if you want to experiment with cloud storage/ZFS, you probably want ECC RAM (you probably want ECC RAM. Full stop.). That'll be more expensive. You might also want to search for servers with at least one SSD, as e.g. ZFS can use it to speed up the file system (Linux also has various flash caches, or you might just want to use it for DB storage, etc.).

- DigitalOcean - Good, popular, easy to use, pretty stable, $10 for 1GB of RAM

- Vultr - (Much) faster CPUs than DigitalOcean, less popular, a little less nice UI, more OS options, roughly the same price as DigitalOcean

Oh and if you're a student, you can get $100 credit at DigitalOcean for free: https://education.github.com/pack

Personally, I would recommend you set up your own server, so you can learn how stuff works.

If you don't want that for some reason, I have used Digital Ocean and found their service very simple to use. I have also tried Amazon's stuff and found it expensive and not very user friendly. But when it comes to price, I know there are others cheaper than Digital Ocean.

GCP micro

just grab something from lowendbox or vpsboard's offers section.

Can confirm: $5 a year for a 1 GB RAM VPS (with NAT IPv4) and something like $10 for one with a dedicated IP. Search for deals; they are out there.

You could get a cheap Raspberry Pi and use that.

I would use it, but there is still no data center in South America.

disclaimer: I work for GCP

Would connecting to our network locally scratch the itch? https://cloud.google.com/interconnect/
