Google nabs Apple as a cloud customer (businessinsider.com)
471 points by ra7 on March 16, 2016 | 216 comments



We've been doing tests in GCE in the 60-80k core range:

What we like:

- slightly lower latency to end users in USA and Europe than AWS

- faster image builds and deployment times than AWS

- fast machines, and live migration blackouts are getting better too

- per-minute billing (after the first 10 minutes), and lower rates for sustained use vs. AWS RIs where you need to figure out your usage up front

- projects make it easy to track costs w/o having to write scripts to tag everything like in AWS; the downside is that project discovery is hard since there's no master account

What we don't like:

- basic lack of maturity, AWS is far ahead here. E.g. we've had 100s of VMs get rebooted w/o explanation, the op log UI forces you to page through results, log search is slow enough to be unusable, billing costs don't match our records for the number of core hours and they simply can't explain them, quota limit increases take nearly a week, support takes close to an hour to get on the phone and they make you hunt down a PIN to call them

- until you buy premium support (aka a TAM), they limit the number of people who can open support cases, which caused us terrible friction since it's so unexpected, esp. when it's their bugs you're trying to report and they could mature from fixing them


Sorry to hear about your troubles. Are you running with onHostMaintenance set to terminate, or are you losing "regular" VMs? If you want to ping me with your project id (my username at google), I'd like to investigate. 100s of VM failures is well outside of our acceptable range.

Also, if it's been a while since your last quota request, we've drastically improved the turnaround time. All I can say is, your complaints were heard and we've tried to fix it. Keep yelling if something is busted! (And yes, I see the irony of the support ticket statement; out of curiosity which support are you on?)

Disclosure: I work on Compute Engine.


Maybe there is something special for members of the GCE startup program, but for us quota requests take between 1 minute and 1 hour, whereas the same requests on AWS took a few days and endless discussions.

Our whole experience with the folks over at Google has been amazing compared to the poor level of service we had with AWS.

Granted we are on a range way lower than yours.


Ditto -- we've had about five quota requests handled within an hour or two. AWS took about a week for each of two requests.


Thanks for sharing your experience. It's really helpful!


Can someone explain to me why traffic is still so damn expensive with every cloud provider?

A while back we managed a site that would serve ~700 TB/mo and paid about $2,000 for the servers in total (SQL, web servers and caches, including traffic). At Google's $0.08/GB pricing we would've ended up with a whopping $56,000 for the traffic alone. How's that justifiable?
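A quick back-of-the-envelope check of that figure (a minimal sketch, assuming a flat $0.08/GB rate; real cloud egress is tiered and varies by destination):

    tb_per_month = 700
    price_per_gb = 0.08                  # assumed flat rate, USD per GB
    gb_per_month = tb_per_month * 1000   # decimal TB -> GB
    print(gb_per_month * price_per_gb)   # -> 56000.0 USD per month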


Traffic's a luxury tax (along with RAM) that cloud providers assume that big companies can afford to pay if they're getting that much traffic.

Outside of the cloud providers, traffic is dirt cheap. Hetzner includes 30TB of traffic with their dedicated server (i7 quad-core Skylake, 64GB DDR4 RAM, 2x250 GB SATA 6 Gb/s SSD) for 39 euro/month:

https://www.hetzner.de/us/hosting/produkte_rootserver/ex41ss...

If you don't want to be shaped after you exceed 30TB, Hetzner charges €1.17 per additional TB, so 700TB would come to €783.90 total.

Whereas Scaleway includes unlimited traffic in their bare metal servers starting from 12 euro/month:

https://blog.scaleway.com/2016/03/08/c2-insanely-affordable-...


It sounds kind of inefficient though, since different business types have extremely different bandwidth needs. So it's going to tax businesses by sector rather than by their ability to sustain it.


How many people share that Hetzner server for 39euro/month?


It's fascinating for me to see again and again people accustomed to cloud pricing hear about bare metal hosting offerings and not believe the prices could be that low. BTW this applies not only to traffic, but also to processing power and storage.


If you are looking for a bargain with little commitment, you might want to take a look at Hetzner's auction.

https://robot.your-server.de/order/market/sortcol/ram/sortty...

The gotcha is that the allowable bandwidth for their auction machines is lower than for their normally priced servers.

https://www.hetzner.de/us/hosting/produktmatrix/rootserver

I got lucky and found a 32GB machine with 4 Samsung SSDs in their auction and it has served me well for testing. I write about it on my blog below:

http://gitsense.github.io/blog/benchmarking-march-14-2016.ht...

I think I'm paying about 68 euros a month for it. The Canadian dollar has taken a beating, so it's not as good a deal as it used to be, but it's still a good deal nonetheless.


It's a dedicated bare metal machine for you. The tradeoff with Hetzner is that it's not expensive server hardware, so you will encounter hardware problems more often than with a Dell or HP server.


You just have to build high-availability into your software. I've been using six Hetzner servers for over 1.5 years now and the only problem I had was one disk failure - support needed 10 minutes to swap it. I can highly recommend them! I pay them ~200 euros per month for what would cost me $2,000+ on AWS...


I'm curious about this statement: is that as opposed to cloud apps? Wouldn't you need to build high-availability into your apps whether they're running in the cloud or on dedicated hardware?

If you mean that you can have a load balancer in front of it managed by Amazon, that's true for dedicated as well (Akamai, CDNetworks, Limelight, even Leaseweb). Managed databases are available from most providers (usually without an API, but you can find them with an API as well).


Failures will happen no matter what. About the only difference I think you'll see is that most of their servers don't use ECC memory, so you're technically more likely to hit a problem there.

I've had one server with them for about 3 years, and another for 2 years, and haven't run into a hardware issue yet. Obviously a hardware issue could happen at any time, so anything I can't live with being offline until I can restore from a backup is configured with redundancies, including a Digital Ocean VPS just in case the datacenter my servers are in goes offline.

From my monitoring, however, I tend to see a short network blip about every other month, but it's less than a minute at a time. All other outages I've had were my own fault.


Hetzner also has options for Xeons and Dell PowerEdge servers for a bit more a month, but I've also had a great experience with their best value hosting servers; I ran a site on one for a couple of years without running into any h/w issues before moving to AWS for its easy managed RDS, S3 and SES services. But if I just needed a single dedicated server with great specs I'd use Hetzner in a heartbeat.


I picked up one of their new ex41-ssd machines and I'm actually kind of nervous about it. I've bought a few auction machines and they all came with Samsung SSDs, but these new EX line machines are using Crucial, hence the low price. I have zero confidence in Crucial and I'm not sure if I will buy any more EX machines, unless somebody tells me Crucial has a different reputation now.


Ok, cool. The price just seemed low.


Hetzner is towards the very low end of pricing (downside: latency if your users aren't in Europe), but dedicated servers from most providers end up far cheaper than AWS or GCE.


None.


Actually, one. :)


It's a full root server. In fact you can get an older i7 root server for 20-25 euro.


I don't get it. Google says they're going after the big fish in the industry by claiming they have amazing pricing. The servers look good, I'm ready to jump on board.

$120-$230 for the first TB of egress bandwidth depending on where it goes. No thanks, I can get 2 TB for < $20 elsewhere.

These bandwidth costs leave small businesses, and individuals like myself, staying with the smaller competition. I suppose their reasoning is they can chase after that single $400-600 million contract. One major client like that is worth as much as ten million of us little guys paying $50 each. The big cloud providers exist to serve gigantic enterprises. The rest of us are a drop in the bucket and not worth the effort.


When pricing a value-add you want to price it linearly, with a volume discount, but such that after the volume discount the line is still steeper than the base cost curve. That way growing customers feel like they are getting a deal vs small fish, and are incentivized to use as much as they need, but you still drive your margins towards what the market will bear, provided your volume is growing. That curve will eventually squeeze out some of your biggest customers, but you can avoid this by cutting deals for them, e.g. Google with Apple.
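A minimal sketch of that kind of pricing curve - marginal per-GB rates that fall with volume but never drop to the provider's cost floor. The tier boundaries and rates below are made up for illustration, not any provider's actual price list:

    # Hypothetical egress tiers: (upper bound in GB, $ per GB within that tier)
    TIERS = [(1_000, 0.12), (10_000, 0.11), (float("inf"), 0.08)]

    def egress_cost(gb):
        """Each GB is billed at the rate of the tier it falls into."""
        cost, prev_cap = 0.0, 0
        for cap, rate in TIERS:
            if gb <= prev_cap:
                break
            cost += (min(gb, cap) - prev_cap) * rate
            prev_cap = cap
        return cost

    # Growing customers pay more in total, but at a falling average rate:
    for gb in (500, 5_000, 50_000):
        print(gb, round(egress_cost(gb), 2), round(egress_cost(gb) / gb, 4))
    # -> 500 60.0 0.12 | 5000 560.0 0.112 | 50000 4310.0 0.0862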


Traffic is not important for every use case. If you run a store for physical items, how much traffic are you going to use? This is probably going to be less than 5% of your AWS bill, so you don't worry too much about it. If you host heavy images, big JS files (which you shouldn't do anyways) or offer downloads, you should probably use a CDN anyways. For big downloads, latency is not really that important as long as you get proper download speeds, so the CDN is going to be a lot cheaper.

Not everybody wants to run the next Netflix or Dropbox in terms of bandwidth consumption. Even if you did, keep in mind that Netflix does not host the videos in the cloud.


Cloud, especially AWS, is 10+ times more expensive than hosting the same stuff on DO, Vultr or bare metal. And you still need administrators; EC2 instances are just VPSes like any other provider's.


They are pricing themselves out of the market for traffic-intensive small-fish operations that way though.


Why do you think they want any of that action? I think their pricing conclusively demonstrates that they don't. Some of those customers are waaaay more trouble than they are worth. Also Google and AWS have "premium" bandwidth - massive redundancy and lots of peering relationships.


> traffic-intensive small-fish operations

Do you have any examples? It seems like it's always been a grow-and-become-profitable-or-die-fast niche.


I run a few websites with video content which leads to 50TB+ per month. The business is profitable, but clearly I would not waste my money on expensive bandwidth.


Google clearly isn't trying to be a porn CDN


Which is funny because through YouTube they have to have the cheapest raw bandwidth in the world.

They need two traffic prices: fast, low-latency web traffic at the current 10 cents per GB, and slower, more laggy CDN-type bandwidth for like 10 cents per TB.


Doesn't Google have a CDN service?

Edit: yes. (https://cloud.google.com/compute/docs/load-balancing/http/cd...) But it's more of a CloudFlare competitor—a distributed caching reverse-proxy with a 4MB object cacheability limit. Costs $0.008/GB, which is cheap compared to a real CDN, but expensive compared to CloudFlare's "free."


You're missing something there, I think. The $0.008/GB is for the load balancing. On top of that, you still pay for network egress depending on whether the traffic stays within GCE or goes to the internet. Those rates are from $0.20 to $0.08 depending on location. (EDIT: For traffic to the public internet)

And those rates are still in crazy territory compared to most alternatives other than Azure and AWS which have equally messed up bandwidth pricing last I checked.

I build caching solutions for customers that want to store their data in S3 or Google Cloud Storage, because the bandwidth prices at the big cloud providers are so out of whack that as soon as someone uses lots of egress (a few TB a month or more), you can often cut your bandwidth costs by 80% or more by getting some dedicated cache servers to put between your users and your cloud storage. That is after the rental and management costs for those cache servers are included.

(the reason for this rather than building storage solutions is that if the above fails you don't lose data. If you trust your abilities or your service provider, building a multi-location storage setup with 3+ times redundancy that beats S3 etc. on cost by a large margin is fairly straightforward... But it's often easier to sleep at night if you have other people do the risky stuff..)
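As a rough illustration of why that cache layer can pay for itself - all numbers below are assumptions, not quotes from any provider, and the cache servers' own bandwidth is assumed to be included in their rent, Hetzner-style:

    monthly_egress_tb = 50
    cloud_egress_per_gb = 0.08     # assumed cloud storage egress rate
    cache_hit_ratio = 0.95         # assumed: most bytes served straight from cache
    cache_servers_rent = 400       # assumed monthly rent for a few dedicated boxes

    direct = monthly_egress_tb * 1000 * cloud_egress_per_gb
    with_cache = direct * (1 - cache_hit_ratio) + cache_servers_rent
    print(direct, with_cache, round(1 - with_cache / direct, 2))
    # -> 4000.0 600.0 0.85   (~85% saving under these assumptions)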


Funny guess, but wrong.


If you have time, I'd recommend purchasing dedicated servers in multiple geographical areas and setting up a custom CDN. It's much cheaper, however much less reliable and much more time intensive to manage and diagnose.


Which DCs would you recommend?


You can try finding good deals in the areas you are interested in (be certain to ask for "test" IPs, then look up their connectivity via _multiple_ looking glasses) on www.oneprovider.com, and then pair that with a robust DNS provider such as NSOne and you've got yourself a pretty decent, bespoke CDN - provided you already know how to do reverse caching proxies and all the other "magic" a CDN needs to work.
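For what a "reverse caching proxy" amounts to in practice, here's a minimal sketch (Python stdlib only, hypothetical origin URL; no eviction, TLS, or cache-control handling, so nothing you'd run as-is):

    import urllib.request
    from http.server import BaseHTTPRequestHandler, HTTPServer

    ORIGIN = "https://origin.example.com"   # hypothetical upstream, e.g. a storage bucket
    CACHE = {}                              # path -> (status, body); unbounded for brevity

    class CachingProxy(BaseHTTPRequestHandler):
        def do_GET(self):
            if self.path not in CACHE:      # miss: fetch from origin once, then reuse
                with urllib.request.urlopen(ORIGIN + self.path) as resp:
                    CACHE[self.path] = (resp.status, resp.read())
            status, body = CACHE[self.path]
            self.send_response(status)
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        HTTPServer(("0.0.0.0", 8080), CachingProxy).serve_forever()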


Wow, ~700 TB/mo? That does sound like a lot.

What kind of site would serve that volume of traffic and not have 56k for operating expenses? I mean, I can think of a few examples like Wikipedia maybe, since they are non-commercial and such, but for a commercial business? Maybe 4chan moves that much without a lot of revenue I would think, or maybe... imgur? but not really sure, I mean, it would seem like they could get that amount easily via ads alone.

What was the use case here?

Also, I think that 56k for traffic alone kind of depends on context. I mean, how much does Netflix pay for serving their volume of traffic?

What I'm saying is, isn't 700 TB a month something that would probably be very expensive no matter the context? Just storing 700TB would cost a lot, no?

I'm really curious about your use case here.


Image hosting community site - notably without shady popup/layer/scam ads, which probably was the reason for the relatively small income. For a two person team that only worked part time on it, it still made good money.

The total dataset was just about 3TB, so storing it was not an issue.


I see.

It does make sense. Thanks for satisfying my curiosity :)


700TB/mo is about 2Gbps - on the open market that should be under $1000/mo. Netflix's total cost is probably below $0.25/Mbps. $56,000/mo would get you over 100Gbps of committed capacity from any major provider (or a mix).


> 700TB/mo is about 2Gbps - on the open market that should be under $1000/mo

Is that a fixed-cost sustained pipe though? I was under the impression that (at least at the backbone level) those contracts got more costly the closer to full that your pipe was.


Yes, $1000/mo would get you a 2Gbps commit on a 10Gbps pipe. If you used over 2Gbps (95th percentile) for the month, you would pay probably $0.60/Mbps for that excess ($0.10 over the commit price). Some providers don't charge more than the commit price for overage traffic.
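In code form, commit-plus-overage billing at the 95th percentile works roughly like this (the $0.50/Mbps commit rate is just the quoted $1000 for 2Gbps, and the $0.60 overage rate is from the comment above; both are illustrative):

    commit_mbps = 2000
    commit_rate = 0.50      # $/Mbps/month, i.e. $1000 for a 2Gbps commit
    overage_rate = 0.60     # $/Mbps/month for usage above the commit

    def monthly_bill(p95_mbps):
        base = commit_mbps * commit_rate                        # paid even if under-used
        overage = max(0, p95_mbps - commit_mbps) * overage_rate
        return base + overage

    print(monthly_bill(1500))   # -> 1000.0 (under the commit, you still pay the minimum)
    print(monthly_bill(2500))   # -> 1300.0 (500 Mbps over, billed at the higher rate)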


Interesting data. Thanks!

It's obvious that I had no idea about costs. I honestly thought it would be much more expensive.


Why is it "not have 56k for operating expenses"? Something that can be had for $2k is not something a healthy business spends $56k on. You should be able to find a better use for those $650k that year.


Well, mainly because I had no idea about the actual costs. I kinda spoke too soon.

I just thought that moving that much data would cost some serious money. Apparently that's not really moving "that much data".

Wrong assumptions on my part :/


The engineer time to reimplement the other AWS services you're using may be substantially more than the $54k difference in bandwidth costs.


Depends on what they're using AWS for. OP seems to be running a simple and straightforward setup that just happens to use a lot of bandwidth. It doesn't take five full-time engineers to maintain a handful of LEMP servers.


Nothing prevents you from mixing and matching; there are actually AWS services you can't find cheaper equivalents to elsewhere. My experience is that those of my consulting clients that want to migrate off AWS rarely have any problems replacing it. The cost savings usually pay back any development costs and the overall migration effort in 2-3 months at most.


If there were 2 such companies, for that $108k I could write a services backend compatible with AWS, so after a year you could transparently switch to that system on your bare metal, and sell it to have your services for free.

I still haven't made my own "clone", because I can't afford the machines to start selling it.


Hmm, very unlikely. That's 5 full-time people.


More like 2 or 3 if you include overheads. Depends on location, of course.


It's where they make their money. Like when a restaurant pushes the desserts on you - the desserts have the highest markup by far on the menu.

AWS has a lot of 'free' services, which still have to be paid for. Some of those free services are things that benefit both the client and AWS, but would be avoided by many if folks had to pay for them (like IAM credentialling)


1TB/mo is roughly a constant 3 Mbit/s, so 700TB/mo is an estimated 2.1 Gbit/s. I recently had a 1 Gbit line from he.net quoted at $500 in Seattle.


One popular high traffic site I know built their own CDN to serve the large majority of their data by renting dedicated machines at OVH, Hetzner, etc. I can't remember which datacenters they actually use for their own CDN, but they were not CloudFront or Google Cloud Platform.

Supposedly this has saved them immense amounts of money.


If your servers are efficient enough (and this is not hard to do these days), it's easy to get bandwidth-limited on a per server basis, i.e. your server could handle more traffic, but you've maxed out the bandwidth available to that particular server.

If you can load balance at the client, then you can "talk" to any server at the edge and don't need a router or proxy, so the net result is that you are only paying for whatever bandwidth comes with your OVH (or whatever) boxes. Effectively, you're buying bandwidth and the compute/storage/power/rackspace/etc. that comes with that bandwidth is free.

And yeah, it's ridiculously cheaper than AWS or Google's Cloud Platform to do things this way.
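A minimal sketch of what "load balance at the client" can look like - the client picks an edge server itself and fails over on its own, so no central proxy has to carry (and be billed for) all the bytes. Hostnames are hypothetical:

    import random
    import urllib.request

    EDGE_SERVERS = [
        "https://edge1.example.net",
        "https://edge2.example.net",
        "https://edge3.example.net",
    ]

    def fetch(path, timeout=5):
        """Try edge servers in random order until one answers."""
        for host in random.sample(EDGE_SERVERS, len(EDGE_SERVERS)):
            try:
                with urllib.request.urlopen(host + path, timeout=timeout) as resp:
                    return resp.read()
            except OSError:
                continue  # that box is down or saturated - try the next one
        raise RuntimeError("all edge servers unreachable")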


> Can someone explain to me why traffic is still so damn expensive with every cloud provider?

Because The Cloud(tm) IS cheaper--when you start and don't have any real bandwidth or CPU usage.

Whereas, every colocation facility I have quoted wants you to commit to a minimum of $500 for some partial cabinet. So, The Cloud(tm) wins the contract and gets to bill in increasing amounts when usage finally goes up.

Finally, how many real system administrators still exist who can provision your systems, configure the network, and understand how to connect everything to the network without getting p0wn3d? If you don't have that person, you can't escape The Cloud(tm) even if you wanted to.


> Finally, how many real system administrators still exist

... a lot? Has there been some shortage of network/infrastructure people lately?


Well, considering how many small/startup shops expect the developers to also do IT chores, "the cloud" makes the most sense... spending time learning the insides of systems they don't care to truly maintain comes at a cost... time to do other things, or cost to pay someone else to do it.

In the end, the cloud makes sense in a lot of scenarios.


"The cloud" does not mean you don't need real system administrators. I see time and time again companies get bitten by this. Overall devops efforts to run this well on AWS or GCE in my experience tends to be higher than provisioning dedicated systems because you have so many artificial limits imposed on you by the providers that makes things harder.

E.g. your example: Understanding how to connect everything to the network without getting hacked is far easier when your private network is physically wired to a separate switch, and your public network is physically behind a firewall and there's no configuration mistake in the world you could do that would change that, so the problem-space to get basic levels of security is reduced to configuring the firewalls correctly.

Still plenty of room to shoot yourself in the foot, but in my experience far less so than having people configure their own networking on AWS.

As for pricing, yes, if you want to do colo, the initial costs are higher. But dedicated rented servers with monthly contracts are also typically far cheaper than AWS for anything that stays up for more than ~1/3 or so of the time (obviously depends on the hosting provider). If you regularly spin up lots of instances for a short period of time, you should use AWS. But the moment you stop spinning them down again, it's time to rent capacity somewhere else.


Perhaps it is like the gas stations that sell gas for $4.99/gal when others sell it for much less. It's only worth their while to sell it if they make a healthy margin so they only sell to people willing to pay that much.


Storage is also a lot more expensive from 'cloud' providers, people often forget to look at the performance and redundancy and simply look at 'per gb' costs.


Indeed. The IOPS numbers for the cheaper VMs are not so great.

You need IOPS? Suddenly you are paying for a premium instance type.

You want replication and/or geo-redundancy with that? Now we're talking $$$ :D


To clarify, we don't do that on Compute Engine. The number of IOPS you get is tied to the volume size for Persistent Disk. You choose between the two flavors (SSD and regular) and then size your disk. That does mean you have to buy more GiB than you "need" if you want to go faster, but PD is much cheaper than "bigger VM" in most cases.

Disclosure: I work on Compute Engine.
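To make the "IOPS scales with volume size" model concrete, a small sketch; the per-GB rates below are rough 2016-era figures and should be treated as placeholders rather than official numbers:

    PD_SSD_IOPS_PER_GB = 30       # assumed read IOPS per GB for PD-SSD
    PD_STD_IOPS_PER_GB = 0.75     # assumed read IOPS per GB for standard PD

    def min_size_gb(target_iops, iops_per_gb):
        """Smallest volume you'd have to provision to reach the target IOPS."""
        return target_iops / iops_per_gb

    print(min_size_gb(15_000, PD_SSD_IOPS_PER_GB))   # -> 500.0 GB of PD-SSD
    print(min_size_gb(15_000, PD_STD_IOPS_PER_GB))   # -> 20000.0 GB of standard PD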


So let's say... how much would a 2TB volume cost that provides a consistent minimum of 100,000 random 4k write IOPS, is available across multiple VMs at once, and must be highly available at, say, 99.9%?

* Note: I went to use Compute Engine's cost calculator but it appeared the site was down / under heavy load?


Seems like the obvious cynical answer is that they do that to encourage you to use more of their services.


Their CDN interconnect lowers that pricing to ~$0.04/GB (US).


That's still very expensive. Wholesale rates for bandwidth are a fraction of a penny per GB.


That is still about $13/Mbps, or 26x transit pricing.


This isn't wholesale, but at least it's half what they quoted. Additionally, you're only paying to update the assets and CDN fees.


How much do staff salaries and data center rentals add to the cost per server and per GB?


Why do you assume you need a staffed data center to get cheaper bandwidth?

Just buy dedicated servers or VPSes, no datacenters or staff needed. The hosting provider takes care of the servers, staff and the datacenter.


If I ever got to where my bandwidth fees were even a hundred a month for personal projects, I'd switch over at least part of it to a VPS...

As a business, I wouldn't do it until the cost of the bandwidth+hosting exceeded the cost of an extra, dedicated employee to manage the VPS server(s).


> As a business, I wouldn't do it until the cost of the bandwidth+hosting exceeded the cost of an extra, dedicated employee to manage the VPS server(s).

Why do you think you don't need that extra person to manage the instances in a cloud setup?

My experience is the reverse: It tends to take more man-hours per instance to manage a large cloud setup, because there are many more spinning wheels. The overall complexity is often vastly larger. In fact, I have clients I manage physical servers for where the time taken per server is on average still far lower than for cloud instances even including the 2+ hours lost on travel per visit to one of the data centres if someone has to physically go in (rather than rely on "remote hands").

This is before factoring in typically higher utilization rates for the dedicated hardware, because it's easier to customize it to get the right balance of RAM, CPU and IO for your workload. The result is usually fewer dedicated servers than you would have cloud instances.


If I'm using RDS or Azure-SQL, I'm not managing a database server... The list goes on, but when starting, you may only have one person or two working on actual development... features are important... actual customers and actual revenue may well be more important than scaling to millions of users.

Working is better than not working perfectly.


How do you figure you need extra, dedicated employees to manage VPSes compared to cloud VMs?

The hosting company takes care of the VPS servers, just like Amazon takes care of the AWS servers.


Once you need to scale, you need that expertise... if I can use RDS, Azure SQL, or a number of other options to manage database services, or other systems without dedicated staff, that buys time to keep the lights on while actual solutions and features are created... an MVP needs to work... And "wasting" a few hundred a month on hosted services while trying to get something working is better than having to spend that time becoming experts on infrastructure, databases, or any number of other systems.

I'm not saying don't optimize, but I am saying that you shouldn't switch infrastructures unless you are saving enough to cover additional talent.


That is a key question I have been pondering myself.

One theory of mine (perhaps uninformed; I'm not really a networking expert) is that because of the dynamically configurable nature of their systems, they need to use routers rather than relatively dumb and cheap switches at almost every level - in order to have flexible networking and still maintain isolation between customers.

This could get quite expensive if you have to pay Cisco/Juniper for this. If this is true Google will have quite an edge with their software defined networking here, I would guess.


No, they use whitebox switches and software-defined networking to control them. See https://www.youtube.com/watch?v=n4gOZrUwWmc [Edit: oops, fixed!]


That's Google. They have put their cost levels somewhat below Amazon's. Maybe they don't see the need to be 5-10x cheaper than the market leader in traffic costs even if they could...



Lol, sharing multiple youtubes with multiple people at once. Fixed.


SDN is changing the model here, and Google is way ahead. In an enterprise, you can use VMware to do a lot of the stuff you are blowing big bucks on Cisco/Juniper for, and use switches with higher density.

SDN is going to turn the cost structure on its head -- I wouldn't want to be a network guy now, easily 60% of tasks are getting vaporized in the datacenter.


As a network guy, it's a _great_ time to be an experienced network person. The only mature aspect of the ill-defined SDN sphere is OpenFlow and that will only get you so far. Try as they might, controllers like OpenDayLight and the various things that plug into Neutron/OpenStack aren't plug and play for those w/o significant network knowledge.

From my vantage point, it's going to be at least another five years before the cost structure really does turn over on its head for folks below the hyperscale level.


No doubt. Any change is great news for smart people. But the average joe churning out firewall changes and similar are screwed.


Google's really ahead on the networking front, and other cloud providers are following suit. Networking hardware is super cheap now. When you couple that hardware with open source software, networking gets cheap.


Large networks like Level3, Cogent, Telia, etc all use big-iron routers (Cisco/Juniper) and will sell you traffic for under $1/Mbps.


Yep, and once you're at the multi-gigabit per second level, the price drops much lower than that pretty quickly.


Indeed - Internet Transit at scale (10 Gigabit+ ports) goes for around $0.63/Mbps at 95th percentile. [1] - for the above quoted 700 Terabytes/month, that works out to $1341/month, if it's evenly spread out on the lower 95th percent of the circuit at around 2.129 Gigabits/second.

[1] http://drpeering.net/white-papers/Internet-Transit-Pricing-H...


They (Level3, Cogent, Telia) don't have millions of ports though...


This is true, but I can't imagine Google/Amazon/Microsoft are using Cisco/Juniper routers at every level of their network.


Hosts play a big part in SDN in that they support the dvswitches along with the guest VMs. Not everything is a Cisco/Juniper. Switching hardware is still common in TOR and egress.


VLANs and virtual appliances in the same environment as the guest machines to facilitate routing should allow for scale without costing these virtualization providers too much.


$1/Mbps per what (unit time)?


Per month, usually (billed 95%ile). That price will decrease by quite a bit with more volume.


Thanks. How long has it been roughly $1/Mbps/Month ? Do you know of any sources with historical data?


Average was $0.63/mbit in 2015, and I personally haven't seen lower than $0.40/mbit.

http://drpeering.net/white-papers/Internet-Transit-Pricing-H...


That really depends on volume, location, and provider. For large volumes and cheaper providers (Cogent, HE.net, etc) it's been that way for 2-4 years or more. HE.net will now sell a full 10GbE port for $2600/mo, and Cogent isn't too far behind. Sub $0.40/Mbit at >25Gbps volumes in major locations is doable.


Pretty sure (at least) AWS builds their own network hardware. I remember reading something a while back that said they found it orders of magnitude less expensive than buying enterprise hardware, with better performance, as they went about the affair as scientifically as you'd expect them to.


Old CoyotePoint routers were just a commodity x86 motherboard with an ancient SSD instead of spinning rust. Junipers use a duo of x86 (routing engine) and ASIC (packet forwarding engine). Cisco has supposedly moved from that architecture to an ARM and ASIC pairing.

The ASIC is just a hardware offload for known routes. Unknown routes, admin work, and Ping packets are handled by the x86/ARM CPU. It's not too different from offloading graphics work to the ASIC on your graphics card, or your mining to your Bitcoin ASIC.


> How's that justifiable?

What, morally?


Sounds like you should start your own cloud hosting service! I bet you could make a killing.


We are consolidating all of our cloud services at Google Cloud and couldn't be happier. We've had north of a thousand virtual machines scattered across ~6 2nd and 3rd tier providers and switching to gcloud has been a game changer for us.


> We've had north of a thousand virtual machines scattered across ~6 2nd and 3rd tier providers and switching to gcloud has been a game changer for us.

All of the success stories I've heard about Google Cloud are from companies using significant resources. Why hasn't Google gone after startups? Perhaps I'm missing something, but a turnkey package of computing, analytics, and advertising seems like a no-brainer.


We are! We give $100k to vetted startups that aren't already big: https://cloud.google.com/startups


Oof, that page renders really poorly on mobile Safari: http://i.imgur.com/bCxvJmO.jpg


Do you guys do anything for bootstrapped companies? =<


Send me a note, please? Aronchick (at) google

Disclosure: I (obviously) work at Google on Kubernetes & GKE


I use it for a bunch of personal projects and get billed between $15-$30/mo.


How is the reliability? I want to like GCP but I have never trusted their services in general.


I can't speak for the OP ... but from what I've seen, it's extremely good. Consistent fast performance, and their proprietary "live migration" really stands out. Besides really good raw machine speed, the inter-networking is also far superior.


I can absolutely second that. It's far superior to anything we've seen so far!


How did the change impact you? More control, lower cost?


They've been a heavy Azure user too. Probably more than AWS.

I'm glad there's now at least 2 and probably 3 competitors for public cloud infrastructure. So many things were at risk, including adoption of public cloud in general, when it was a sole source monopoly from Amazon (OpenStack/Rackspace/etc. was basically stillborn, and VPSes aren't the same thing, nor was VMware ever really credible for public cloud).

Neither GC nor Azure are as comprehensive as AWS, but together at least one of them is usually a viable alternative for any given deal.


Google has some really interesting features closer to Docker, so some better mobility options from private/VPS to Google and back. They seem to have some of the best compute options out there, and tend to perform above the others in a lot of ways.

Azure's services are imho a bit easier to use, at least from my limited experience, mostly vm's, queues, tables and hosted sql.

AWS has so many options and services it's hard to keep some of them straight... Lambda is really interesting imho, and some of their options for data storage are compelling to say the least.

Joyent's Triton/Docker option is really interesting, but their pricing model just seems too much for what they're offering. I do hope that they have success in terms of selling/setting up private clouds though... there's a lot of big companies that would be much better off with their solutions.


>>OpenStack/Rackspace/etc. was basically stillborn

What's wrong with Openstack/Rackspace?


Feature creep if I recall correctly. Though Openshift is an interesting implementation.


Yea big news. We all benefit from competition here


Can someone provide a little context towards this exodus from AWS to Google Cloud? I understand in DropBox's case that they (questionably) need their own infrastructure for cost saving. But then there's Apple and Spotify suddenly changing over. What's the advantage?

I have a fear that this trend among large companies is going to trickle down to smaller ones and independent devs. Considering these "Cloud Wars", I can see stories like this continuing with different providers. Ultimately, a scenario could occur where one year, one provider is king. Then the next, everyone decides they need to migrate to the next big thing. That would be irritating for us contractors. We would have to learn new interfaces and APIs at the same rate as JS frameworks.


There is no exodus. There are a lot of companies moving to multi-cloud, which makes sense from a disaster recovery perspective, a negotiating perspective, and possibly from cherry picking the best parts of each platform.

This is what Apple is doing. They use AWS and Azure already in large volume. This move adds the #3 vendor in cloud to mix and isn't really a surprise.


Thanks for the answer. That makes a lot of sense. I guess to some degree I did know this. But the media has been portraying these moves as a complete move, hence the whole "exodus" hype. It bothers me still, because this rhetoric may lead to the scenario I described above for smaller companies.


The media is sensationalist as ever — I would worry about any CTO or Engineering Lead who based such a huge important decision on a Business Insider article.


Is that a bad thing? Competition good.


Mmm... I think you'll be seeing them push more AWS/Azure stuff onto GCP. :)


This.

If you can afford it, multi-cloud makes sense. Reduced risk to outages, etc.

Personally I've seen smaller companies also doing the same.


It's more catch-up than an exodus, but also overtaking in some ways. Short version: I'd say pricing and data processing (Dataflow, Dataproc and especially BigQuery). Their core network infrastructure is more advanced, and live migration is pretty nice too.

Long version: the recent posts about Spotify's and Quizlet's moves to GCP dive deep into their reasons why.

https://cloudplatform.googleblog.com/2016/02/Spotify-chooses...

https://cloudplatform.googleblog.com/2016/03/free-online-lea...


> That would be irritating for us contractors. We would have to learn new interfaces and apis at the same rate of JS frameworks.

Heaven forbid cloud computing move beyond the current 1960s "You only buy from IBM" model, especially if it's "only" benefiting the customer.


I'm so sick of EC2's rogue 'underlying hardware issues' and EBS volumes dropping dead... The AWS Console status will say everything is 'Okay' even when there are major problems - it's a joke... I wonder to myself, is it because I recently migrated over (December 15) and they are starting to buckle? Really a bad experience. At this rate I'll be looking at Google next month, or going back to colo (25 servers, 100TB) - so not much, but still worth doing right.


I've had ~25-30 instances running for the past 3 years and only had 1 or 2 instances have hardware issues, never had issues with EBS. Running on us-west-2 but it seems like more issues happen in us-east-1.


I'm on Virginia zone D... the other day a 15TB EBS went down, even with status as good. Their explanation, which took a lot of time/energy to get, was that the 2nd replicated copy had a failure, and when rebuilding from the 1st good replicated copy (primary) it suffered an unknown error taking down that copy as well... I was upset to say the least.


1 or 2 failures out of 30 is a really high failure rate for HW.


I challenge you to build enterprise grade hardware, run it hard and have a hardware failure rate of ~1% a year.


Challenge accepted. Been building carrier grade equipment with significantly lower failure rates than that for >15 years. Gear I designed in my first year out of grad school is still in field use today.


New challenge: build your machines for low cost from commodity hardware, rent your machines out to millions of customers and never have a single customer have > 1% of the hardware they land on fail.


That's moving the goalpost too far for my taste. But assuming this is true and AWS/GCE are doing this then why are their prices so high?


We run bare metal dedicated servers at SoftLayer with up times in the 3-4 year range. Our failure rate is less than 1%.

Not that hard...


Agreed. Similar results here over the years. I shouldn't bash EC2 so hard, but they also shouldn't keep the same uptime estimates when they have degraded over time. My developer made it all pretty clear to me with a statement: "if you tell me the strengths and weaknesses of the system, I'll code accordingly." Great developer, can overcome EBS failing with 3-9's, but only if they state that and not 5-9's!


I just find it weird that every time "the cloud" comes up on HN, people defend it as hard as they can, like running servers yourself is some voodoo magic to be shunned. Usually with examples of "well, X is only saving $56,000/month with this switch away from the cloud! surely they're making a terrible tradeoff in an increase in employees!".

The answer is no. People do these calculations before moving stacks. The cloud is where VC money goes to pad Amazon's bottom line. AWS is insanely overpriced if you actually sit down and do the numbers. I'm our company's part time sysadmin on a bunch of bare metal servers, I spend maybe 1-2 hours total per month kicking things/filing hardware replacement tickets/etc.

I don't understand this mindset against learning the entire stack. You should understand hardware, network and OS. Maybe I'm too old.


Would be interesting to know what kind of discounts Apple got on this. It's a massive PR win for Google, the kind I expect they could give $100m for. Apple is also notorious for getting a very sharp price from their suppliers, so the combination suggests there were some steep discounts.


The public cloud prices bear no relation whatsoever to what large customers pay.

I know people spending less than $1m/month that are paying ~25% of the public prices on one of the top three cloud providers. Frankly, I'd be surprised if Apple is paying more than 10%-15% of the public pricing.

The reason is that anything above that, and you can save massively by going to more traditional dedicated hosting.


Apple was very happy that Google gave S. Korea a very public smackdown at their own game with the AlphaGo AI software. If only I were kidding.


My guess is that it's pretty much just BigQuery. No one else seems to be able to compete, and that's a big deal. The companies moving their analytics stacks to BQ and thus GCP probably make up the majority (in terms of revenue) of customers for GCP


Given how cheap bigquery is, there would have to be a lot more bq-only customers than customers that use other services. And given how seamlessly the different products work with each other, any beachhead product like bq will quickly garner more product usage.


I doubt it. Not only does Apple (maybe?) run one of the largest Cassandra clusters in the world, but surely they wouldn't leverage cloud provider features over open source alternatives for fear of vendor lock-in.


Cassandra and BigQuery are not at all comparable. BigQuery's open source competitors are Impala, Presto, and Drill.


So it makes sense for Dropbox to build its own infra but it doesn't for Apple.

Also wondering why Apple isn't hosting exclusively with IBM, they seem to have the best geographical coverage.


Apple does operate its own infrastructure. It has numerous, exceptionally large data centers in the USA, Europe and China. Most notable is the $1 billion, 500,000 square foot facility in Maiden, North Carolina.

Apple probably augments their own infrastructure with cloud providers for various reasons, e.g. increasing geographic diversity, allowing for progressive growth, and to handle comparatively small jobs (e.g. merely a few hundred VMs).

I imagine it would also be a waste of Apple's time to tool up their own data centers to offer general purpose cloud computing services.


Apple doesn't have "numerous" facilities. Compared to Google or Amazon they have very few. They really only have two worth mentioning, and having only two is the most expensive thing you can do, with 50% natural overhead.


A trivial Google search shows that you are entirely incorrect.

And comparisons to Google or Amazon are unreasonable, as both these companies sell cloud services. Apple does not.


Apple has at least 5 data centers in the US alone.

And it makes sense that the overall size of their facilities would be much smaller than Amazon, Google, or Microsoft... They're not running a major search engine or offering anything like AWS, GC, or Azure.


How much diversifiable overhead does a data center have?

My spidey-sense is telling me that it's very little.


I think Apple does a combination. Both (from the article) of hosting on AWS, Google, & Microsoft, but also on its own data centers.

I suppose it also depends on what is being hosted. If you look at Netflix & Dropbox, they both took control of their core piece (CDN & Storage) - not the entire end to end platform. I'd imagine Apple does something similar.


In Netflix's case, I believe owning specialized and custom built systems to handle CDN & storage is essential. I think this CDN is a content/media CDN, not the web tier, which I believe is still on Amazon. But feel free to correct me.


I wonder what would constitute Apple's core?


I'd venture to guess it's about the same -- CDN (for their App Stores, OS updates, etc) and storage (iCloud backups, etc). My guess is their cloud compute needs are relatively low compared to storage and content delivery.

edit: I should note that yes, as the other poster said their core business is hardware, but their core cloud needs are what I posted.


Apple's core is consumer hardware. Anything they're doing on the cloud is ancillary.


I think you could make the case that iCloud is core to Apple's business.


They do operate data centers already and are building more[1] but the lead times on such large infrastructure investments are not insignificant. By using cloud providers they can meet the current demand while still investing for the future and later repatriating those workloads back to their own infrastructure when it is ready.

[1] http://appleinsider.com/articles/15/10/02/apple-inc-massivel...


IBM may have the best coverage, but I believe their resiliency leaves something to be desired.

That said, if you're Apple, you could probably get IBM to do whatever you want.

Anecdotal, but lead infra guy for a global top 20 bank told me that IBM installed their choice of routers, and ran custom fiber into SoftLayer for them, to fix some of the more pressing SPOF issues.


Keep your friends close, but keep your enemies closer.

The same deals they do with Samsung, for example.

Playing nice with Microsoft, Amazon and Google means they will also play nice with Apple.


Because if Apple hosted exclusively with IBM, IBM wouldn't have the capacity?


They've been using Google Cloud Storage for blob storage of iMessage attachments for a little while now. They seem to use a combination of Amazon S3 and GCS (just watching connections coming out of the app on OS X).


God damn Diane Greene hit it out of the park with this one! Amazing work getting Apple to migrate so much away from Amazon.


I guess the article does say it was attributed to her, but whenever I read an executive-focused press article, I just think of the team that worked hard for months to get to this point, and suddenly the newly-hired senior executive marches in, attends a few meetings and reviews, makes a few phone calls, and then winds up getting all the credit. Seen it so many times at big companies.

Especially irksome is whenever a product launches or a deal is signed, the exec replies-all to the mass internal celebration email with a "So proud of this team!" message. Ok, thanks for smiling upon us peons with your lordly approval, after the 4 hours total you personally put into the effort.

Sorry... slightly bitter :-)


Consider the possibility that the team doesn't mind the executive getting the credit, or perhaps just enjoys doing great work regardless. I also used to view myself as a lowly peon, but that overshadowed the satisfaction of a job well done.

Also, consider Greene's (no relation) Law #1: Never outshine the master.


Same as when you write a report for your boss and they just stick their name on it and present it.


She does seem to have something of a knack for getting things done. I watched her startup school talk just the other day https://www.youtube.com/watch?v=zSEeFxq2X_c


It is reported that Apple accounts for 9% of Amazon's AWS revenue. If that is true, this move by AAPL is a serious dent in the financials of AMZN.


From my understanding, and I could be wrong, Apple does more on Azure than they do AWS. Also they aren't leaving AWS or Azure, but are diversifying to other cloud providers for scalability and uptime.


Any chance you have a link with that number? Would be interesting to read more about apple using such a chunk of Amazon's services.


If you run little snitch on your mac and have your photos sync with apple, you'd notice the photos agent going to google for quite a while now. Maybe it was a trial?

I say this is why icloud is about 2x the price of other cloud providers, because they don't run it themselves and want a profit margin.


iCloud Drive pricing is equal to that of Google Drive: $3.99 for 200GB (Google doesn't offer 200 but 100GB at $1.99). At 1TB, both iCloud and Google prices are $9.99.


Last I remembered it was $20/month for 1TB. OneDrive is $7/mo/1TB and Amazon is $5/month equivalent.


I don't think someone at Apple looked at Amazon's pricing table and Google's pricing table and decided to move to Google.

Very likely the sales teams of Azure, Amazon and Google did the mating dance for a few months, sharing their future plans etc. Very probably the government's stance on encryption was one of the things discussed.

Some people must have played golf together and eventually made some decision. Also, very likely Apple will be well invested in all these three players and will remain so for a long time.


I'd be super interested to know what their backend looks like (at least the new stuff, not WebObjects), I wish they were as open as Facebook with regard to tech.

Unfortunately that's probably a wish that will forever be unfulfilled.


Depends on what you mean by backend, but they do publish a lot of papers and give a lot of conference presentations.



Sorry, I guess I wasn't super clear. I meant Apple's backend.


I can't see this as anything but a good thing for us lowly consumers. Competition in the marketplace is a great thing.


Does anyone enjoy working at AWS? Maybe the Zon will have to up its game to compete, but they're so mired in employee-thrashing it seems unlikely. Is it getting better there or worse? This seems to question that.


I've heard their layer 1 network is a mess and they have a small army of PhDs troubleshooting basic problems at layer 1. Sounds like misery to me.


This seems to be good for everyone but Amazon, can anyone offer some insight otherwise?


Good for Amazon too: it'll make them compete better on innovation and price. They have been quick to introduce products, but their technical infrastructure and abstractions thereof seem to lag Azure and GCP, and investment in those take a long time to pay off.


> Good for Amazon too: it'll make them compete better on innovation and price

That sounds like it's good for consumers (of the cloud services)


Whoever wins... we lose. But really, I'm glad that Google has stepped up with their cloud services (they will be revealing more awesome stuff at the GCP Next 2016). And looks like they have the best "cloud core": https://quizlet.com/blog/whats-the-best-cloud-probably-gcp


Side note, but I'm impressed the article didn't try and put a positive spin on it given Jeff Bezos' interest in Business Insider.


Would it even have gotten coverage in business insider if he had not had an interest in it?


Does anyone know if GCE offers discounts or grants to graduate students doing research?


Doesn't look as broad as Amazon's program, but Google does fund research, at least in Computer Science and related fields: http://research.google.com/research-outreach.html#/research-...


I expect they want a multi-cloud presence for HA now that there is good tooling to support that such as Spinnaker ( http://spinnaker.io/ )


"It's been only four months since Google convinced enterprise queen Diane Greene to lead its fledgling cloud-computing business, but she's already scored a second huge coup for Google"

Who was the first?



Spotify, as mentioned in TFA.


I assume Spotify.


Perhaps Pandora!

Wait...


Spotify was, as of late February.


Spotify, I think.


Maybe Spotify?


I love aws fanboy


Have the google PR guys been working a lot of OT lately?


This should be read as: "In exchange for keeping Android crappy, Apple to reward Google on his Cloud efforts."

(being downvoted? little sense of humor)


Because of the # of trolls here, sometimes it's hard to differentiate between sarcasm and trolling.


Huge difference between trolling and sarcasm.

As my old professor used to say "Sarcasm is a closed number class".


This move will be a "GAME CHANGER" for the Cloud industry.


Why do you think so?


I can clearly see Google Cloud winning the cloud industry. It's only a matter of time and not a matter of if. Cases like this and Spotify will make the shift happen sooner rather than later.


I don't see any evidence for your assertion.

There are quite a few very powerful players in this segment and I don't see anybody 'winning' to the point where they will exclude the others. Just a lot of secret sauce and attempts at locking in the customers.

What you will see is a shift from dedicated hosting providers to cloud providers, which is one of the reason why almost every large dedicated hosting provider now has their own cloud offering.

And that is borne out by evidence. In fact, if Google 'won' the cloud battle and, let's say, Amazon ended up as a Google customer, we'd all lose. I don't think that's even a remote possibility at this point.


Yes, Google will not "win" at the total expense of Amazon & Microsoft, but I would bet a good deal of money that they'll become the market leader within the next five years, and likely sooner. The rate at which Google has been open-sourcing things, too, will further expedite this, and the fact that they just joined OCP will give them better industry credibility on the data center / computing side.


However, Google seems to be trailing in third in the cloud, at least for enterprise users. And it seems to be falling well behind AWS and Azure. See, for example, https://www.gartner.com/doc/reprints?id=1-2G2O5FC&ct=150519 and http://www.spiceworks.com/marketing/diving-into-IT-cloud-ser... and http://www.techinsider.io/why-amazon-is-so-hard-to-topple-in...

Be interested to see any reports/surveys/data that show Google leading in cloud services, but Google didn't find me any ;-)


Apple vs FBI in Encryption Lawsuit.

Pentagon grabs former Google CEO Eric Schmidt to head technology board.

Google nabs Apple as cloud customer.

i put on my robe and tinfoil hat



So, it will be nearly impossible to buy a phone in the United States that isn't designed to send your data to a Google datacenter?


"Each file is broken into chunks and encrypted by iCloud using AES-128 and a key derived from each chunk’s contents that utilizes SHA-256. The keys, and the file’s metadata, are stored by Apple in the user’s iCloud account. The encrypted chunks of the file are stored, without any user-identifying information, using third-party storage services, such as Amazon S3 and Windows Azure." (https://www.apple.com/business/docs/iOS_Security_Guide.pdf)

Although your IP address and some other connection metadata will be known to Google.
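The scheme the guide describes amounts to convergent encryption: each chunk's key is derived from the chunk's own contents, so the storage provider only ever sees ciphertext plus opaque identifiers. A minimal sketch of that idea (the mode, nonce handling and exact key derivation below are assumptions for illustration, not Apple's actual implementation):

    import hashlib
    from cryptography.hazmat.backends import default_backend
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    def encrypt_chunk(chunk):
        key = hashlib.sha256(chunk).digest()[:16]                 # AES-128 key from chunk contents
        nonce = hashlib.sha256(b"nonce:" + chunk).digest()[:16]   # deterministic, illustrative only
        enc = Cipher(algorithms.AES(key), modes.CTR(nonce), backend=default_backend()).encryptor()
        ciphertext = enc.update(chunk) + enc.finalize()
        # key + file metadata stay in the user's iCloud account; only the
        # ciphertext chunk goes to the third-party storage service
        return key, ciphertext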


That's not too bad then. As long as the 'iCloud account', where Apple likes to store the keys, are never third party hosted.


Ever seen an analysis of the traffic and breakdown of the metadata you speak of? If an account or device or advertising or other unique ID is sent to Google, it could help Google to track the user's IP Address changes and locations.



