We've been doing tests in GCE in the 60-80k core range:
What we like:
- slightly lower latency to end users in USA and Europe than AWS
- faster image builds and deployment times than AWS
- fast machines, and live-migration blackouts are getting shorter too
- per-minute billing (after the first 10 minutes), and lower rates for continued use vs. AWS RIs, where you need to figure out your usage up front (rough sketch of how the discount compounds below)
- projects make it easy to track costs without having to write scripts to tag everything like in AWS; the downside is that project discovery is hard since there's no master account
What we don't like:
- basic lack of maturity; AWS is far ahead here. E.g. we've had 100s of VMs get rebooted without explanation, the operations-log UI forces you to page through results, log search is slow enough to be unusable, billed costs don't match our records of core-hours and they simply can't explain the difference, quota-limit increases take nearly a week, and support takes close to an hour to get on the phone and makes you hunt down a PIN just to call them
- until you buy premium support (i.e. a TAM), they limit the number of people who can open support cases. This caused us terrible friction, since it's so unexpected, especially when it's their bugs you're trying to report, bugs they'd mature from fixing
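As a rough, hedged sketch of how GCE's sustained-use discount compounds (the tier multipliers below are my recollection of the pricing page at the time, so treat them as illustrative rather than quoted figures):

    # Rough sketch of GCE sustained-use discounts circa 2016: each successive
    # quarter of the month an instance keeps running is billed at a lower rate,
    # with no up-front commitment (unlike AWS RIs). Tier multipliers are
    # illustrative recollections, not quoted figures.
    TIERS = [1.00, 0.80, 0.60, 0.40]  # price multiplier per quarter-month used

    def effective_multiplier(fraction_of_month: float) -> float:
        """Average price multiplier for an instance run this fraction of a month."""
        billed, remaining = 0.0, fraction_of_month
        for rate in TIERS:
            chunk = min(remaining, 0.25)
            billed += chunk * rate
            remaining -= chunk
        return billed / fraction_of_month

    print(effective_multiplier(1.0))  # -> 0.70, i.e. ~30% off list for a full month
    print(effective_multiplier(0.5))  # -> 0.90 for half a month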
Sorry to hear about your troubles. Are you running with onHostMaintenance set to terminate or are you losing "regular" VMs. If you want to ping me with your project id (my username at google), I'd like to investigate. 100s of VM failures is well outside of our acceptable range.
Also, if it's been a while since your last quota request, we've drastically improved the turnaround time. All I can say is, your complaints were heard and we've tried to fix it. Keep yelling if something is busted! (And yes, I see the irony of the support ticket statement; out of curiosity which support are you on?)
Maybe there is something special for members of the GCE startup program, but for us quota requests take between 1 minute and 1 hour, whereas the same requests on AWS took a few days and endless discussions.
Our whole experience with the folks over at Google has been amazing compared to the poor level of service we got from AWS.
Can someone explain to me why traffic is still so damn expensive with every cloud provider?
A while back we managed a site that would serve ~700 TB/mo and paid about $2,000 for the servers in total (SQL, web servers and caches, including traffic). At Google's $0.08/GB pricing we would've ended up with a whopping $56,000 for the traffic alone. How's that justifiable?
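For reference, the arithmetic behind that figure, assuming the flat $0.08/GB tier quoted above (real egress is tiered by destination and volume):

    # Back-of-the-envelope check of the egress figure above, assuming a flat
    # $0.08/GB and 1 TB = 1000 GB.
    monthly_tb = 700
    price_per_gb = 0.08
    print(f"${monthly_tb * 1000 * price_per_gb:,.0f}/month")  # -> $56,000/month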
Traffic's a luxury tax (along with RAM) that cloud providers assume that big companies can afford to pay if they're getting that much traffic.
Outside of the big cloud providers, traffic is dirt cheap. Hetzner includes 30 TB of traffic with their dedicated server (quad-core Skylake i7, 64 GB DDR4 RAM, 2x 250 GB SATA 6 Gb/s SSD) for 39 euro/month:
It sounds kind of inefficient though, since different business types have extremely different bandwidth needs. So it's going to tax businesses by sector rather than by their ability to sustain it.
It's fascinating for me to see, again and again, people accustomed to cloud pricing hear about bare-metal hosting offerings and not believe the prices could be that low.
BTW this applies not only to traffic, but also processing power and storage.
I think I'm paying about 68 euros a month for it. The Canadian dollar has taken a beating, so it's not as good a deal as it used to be, but it's still a good deal nonetheless.
It's a dedicated bare-metal machine for you. The tradeoff with Hetzner is that it's not expensive server hardware, so you will encounter hardware problems more often than with a Dell or HP server.
You just have to build high availability into your software. I've been using six Hetzner servers for over 1.5 years now and the only problem I've had was one disk failure - support needed 10 minutes to swap it. I can highly recommend them!
I pay them ~200 euros per month for what would cost me $2,000+ on AWS...
I'm curious about this statement: is that as opposed to cloud apps? Wouldn't you need to build high availability into your apps whether they're running in the cloud or on dedicated hardware?
If you mean that you can have a load balancer in front of it managed by Amazon, that's true for dedicated as well (Akamai, CDNetworks, Limelight, even Leaseweb). Managed databases are available from most providers (usually without an API, but you can find them with an API as well).
Failures will happen no matter what. About the only difference I think you'll see is that most of their servers don't use ECC memory, so you're technically more likely to hit a problem there.
I've had one server with them for about 3 years, and another for 2 years, and haven't run into a hardware issue yet. Obviously a hardware issue could happen at any time, so anything I can't live with being offline until I can restore from a backup is configured with redundancies, including a Digital Ocean VPS just in case the datacenter my servers are in goes offline.
From my monitoring, however, I tend to see a short network blip about every other month, but it's less than a minute at a time. All other outages I've had were my own fault.
Hetzner also has options for Xeons and Dell PowerEdge servers for a bit more per month, but I've also had great experience with their best-value hosting servers; I ran a site on one for a couple of years without running into any h/w issues before moving to AWS for its easy managed RDS, S3, and SES services. But if I just needed a single dedicated server with great specs I'd use Hetzner in a heartbeat.
I picked up one of their new EX41-SSD machines and I'm actually kind of nervous about it. I've bought a few auction machines and they all came with Samsung SSDs, but these new EX-line machines are using Crucial drives, hence the low price. I have zero confidence in Crucial and I'm not sure I will buy any more EX machines, unless somebody tells me Crucial has a different reputation now.
Hetzner is towards the very low end of pricing (downside: latency if your users aren't in Europe), but dedicated servers from most providers end up far cheaper than AWS or GCE.
I don't get it. Google says they're going after the big fish in the industry by claiming they have amazing pricing. The servers look good, I'm ready to jump on board.
$120-$230 for the first TB of egress bandwidth depending on where it goes. No thanks, I can get 2 TB for < $20 elsewhere.
These bandwidth costs leave small businesses, and individuals like myself, staying with the smaller competition. I suppose their reasoning is they can chase after that single $400-600 million contract. One major client like that is worth as much as ten million of us little guys paying $50 each. The big cloud providers exist to serve gigantic enterprises. The rest of us are a drop in the bucket and not worth the effort.
When pricing a value-add you want to price it linearly, with a volume discount, but such that after the volume discount the line is still steeper than the base cost curve. That way growing customers feel like they are getting a deal vs small fish, and are incentivized to use as much as they need, but you still drive your margins towards what the market will bear, provided your volume is growing. That curve will eventually squeeze out some of your biggest customers, but you can avoid this by cutting deals for them, e.g. Google with Apple.
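As a purely hypothetical illustration of that pricing shape (tier boundaries, rates, and the unit cost are all invented for the example):

    # Hypothetical volume-discounted pricing curve of the kind described above:
    # the marginal price per GB falls with volume, but stays well above the
    # provider's assumed unit cost, so margins grow with usage.
    TIERS = [            # (volume ceiling in GB, price per GB) - invented numbers
        (1_000, 0.12),
        (10_000, 0.08),
        (float("inf"), 0.06),
    ]
    UNIT_COST = 0.01     # assumed provider cost per GB

    def monthly_charge(gb: float) -> float:
        charge, floor = 0.0, 0.0
        for ceiling, price in TIERS:
            band = min(gb, ceiling) - floor
            if band <= 0:
                break
            charge += band * price
            floor = ceiling
        return charge

    for volume in (500, 5_000, 50_000):
        # effective $/GB falls with volume but never approaches UNIT_COST
        print(volume, round(monthly_charge(volume) / volume, 3), "vs cost", UNIT_COST)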
Traffic is not important for every use case. If you run a store for physical items, how much traffic are you going to use? This is probably going to be less than 5% of your AWS bill, so you don't worry too much about it. If you host heavy images, big JS files (which you shouldn't do anyway) or offer downloads, you should probably use a CDN anyway. For big downloads, latency is not really that important as long as you get proper download speeds, so the CDN is going to be a lot cheaper.
Not everybody wants to run the next Netflix or Dropbox in terms of bandwidth consumption. Even if you did, keep in mind that Netflix does not host its videos in the cloud.
The cloud, especially AWS, is 10+ times more expensive than hosting the same stuff on DO, Vultr, or bare metal. And you still need administrators; EC2 instances are just VPSes like those from any other provider.
Why do you think they want any of that action? I think their pricing conclusively demonstrates that they don't. Some of those customers are waaaay more trouble than they are worth. Also Google and AWS have "premium" bandwidth - massive redundancy and lots of peering relationships.
I run a few websites with video content which leads to 50TB+ per month. The business is profitable, but clearly I would not waste my money on expensive bandwidth.
Which is funny because through YouTube they have to have the cheapest raw bandwidth in the world.
They need two traffic prices: fast, low-latency web traffic at the current 10 cents per GB, and slower, more laggy CDN-type bandwidth for something like 10 cents per TB.
Edit: yes. (https://cloud.google.com/compute/docs/load-balancing/http/cd...) But it's more of a CloudFlare competitor—a distributed caching reverse proxy with a 4MB object cacheability limit. Costs $0.008/GB, which is cheap compared to a real CDN, but expensive compared to CloudFlare's "free."
You're missing something there, I think. The $0.008/GB is for the load balancing. On top of that, you still pay for network egress depending on whether it stays internal to GCE or goes to the internet. Those rates run from $0.20 down to $0.08 depending on location. (EDIT: for traffic to the public internet)
And those rates are still in crazy territory compared to most alternatives other than Azure and AWS which have equally messed up bandwidth pricing last I checked.
I build caching solutions for customers that want to store their data in S3 or Google Cloud Storage, because the bandwidth prices at the big cloud providers are so out of whack that as soon as someone uses lots of egress (a few TB a month or more), you can often cut your bandwidth costs by 80% or more by putting some dedicated cache servers between your users and your cloud storage. That is after the rental and management costs for those cache servers are included.
(The reason for doing this rather than building storage solutions is that if the above fails you don't lose data. If you trust your abilities or your service provider, building a multi-location storage setup with 3+ times redundancy that beats S3 etc. on cost by a large margin is fairly straightforward... but it's often easier to sleep at night if you have other people do the risky stuff.)
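A minimal sketch of the caching idea (the bucket URL and cache path are placeholders, and a real deployment would normally use nginx or Varnish rather than hand-rolled code): repeat downloads are served from the dedicated servers' flat-rate bandwidth, and only cache misses pay cloud egress.

    # Minimal read-through disk cache in front of an object store. ORIGIN and
    # CACHE_DIR are placeholders; cache hits are served from the dedicated
    # server's flat-rate bandwidth, misses pay for one cloud egress each.
    import hashlib
    import pathlib
    import urllib.request

    ORIGIN = "https://storage.googleapis.com/example-bucket"  # hypothetical bucket
    CACHE_DIR = pathlib.Path("/var/cache/objstore")

    def fetch(key: str) -> bytes:
        CACHE_DIR.mkdir(parents=True, exist_ok=True)
        cached = CACHE_DIR / hashlib.sha256(key.encode()).hexdigest()
        if cached.exists():
            return cached.read_bytes()                           # hit: no egress paid
        data = urllib.request.urlopen(f"{ORIGIN}/{key}").read()  # miss: paid egress
        cached.write_bytes(data)
        return data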
If you have time, I'd recommend purchasing dedicated servers in multiple geographic areas and setting up a custom CDN. It's much cheaper, but much less reliable and much more time-intensive to manage and diagnose.
You can try finding good deals in the areas you're interested in (be certain to ask for "test" IPs, then look up their connectivity via _multiple_ looking glasses) on www.oneprovider.com, and then pair that with a robust DNS provider such as NSOne and you've got yourself a pretty decent bespoke CDN, provided you already know how to do reverse caching proxies and all the other "magic" a CDN needs to work.
What kind of site would serve that volume of traffic and not have $56k for operating expenses? I mean, I can think of a few examples like Wikipedia maybe, since they are non-commercial and such, but for a commercial business? Maybe 4chan moves that much without a lot of revenue, or maybe... imgur? But I'm not really sure; it would seem like they could earn that amount easily via ads alone.
What was the use case here?
Also, I think that $56k for traffic alone kind of depends on context. I mean, how much does Netflix pay for serving their volume of traffic?
What I'm saying is, isn't 700 TB a month something that would probably be very expensive no matter the context? Just storing 700TB would cost a lot, no?
Image hosting community site - notably without shady popup/layer/scam ads, which probably was the reason for the relatively small income. For a two person team that only worked part time on it, it still made good money.
The total dataset was just about 3TB, so storing it was not an issue.
700TB/mo is about 2Gbps - on the open market that should be under $1000/mo. Netflix's total cost is probably below $0.25/Mbps. $56,000/mo would get you over 100Gbps of committed capacity from any major provider (or a mix).
> 700TB/mo is about 2Gbps - on the open market that should be under $1000/mo
Is that a fixed-cost sustained pipe though? I was under the impression that (at least at the backbone level) those contracts got more costly the closer to full that your pipe was.
Yes, $1000/mo would get you a 2Gbps commit on a 10Gbps pipe. If you used over 2Gbps (95th percentile) for the month, you would pay probably $0.60/Mbps for that excess ($0.10 over the commit price). Some providers don't charge more than the commit price for overage traffic.
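For anyone unfamiliar with 95th-percentile ("burstable") billing, here is a rough sketch of the mechanics using the rates quoted above ($1000 for a 2 Gbps commit ≈ $0.50/Mbps, $0.60/Mbps overage); the sample data is made up:

    # Illustrative 95th-percentile transit billing: sample utilisation every
    # 5 minutes, discard the top 5% of samples, and bill the highest remaining
    # sample. You always pay for the commit; only the excess above it is billed
    # at the overage rate.
    def monthly_bill(samples_mbps, commit_mbps=2000,
                     commit_rate=0.50, overage_rate=0.60):
        ordered = sorted(samples_mbps)
        p95 = ordered[int(len(ordered) * 0.95) - 1]   # 95th-percentile sample
        overage = max(p95 - commit_mbps, 0)
        return commit_mbps * commit_rate + overage * overage_rate

    # ~8640 five-minute samples in a month: mostly ~1.8 Gbps, with 4 Gbps bursts
    # in fewer than 5% of samples, so the bursts fall outside the 95th percentile
    # and only the commit is paid.
    samples = [1800] * 8300 + [4000] * 340
    print(monthly_bill(samples))   # -> 1000.0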
Why is it "not have 56k for operating expenses"? Something that can be had for $2k is not something a healthy business spends $56k on. You should be able to find a better use for those $650k that year.
Depends on what they're using AWS for. OP seems to be running a simple and straightforward setup that just happens to use a lot of bandwidth. It doesn't take five full-time engineers to maintain a handful of LEMP servers.
Nothing prevents you from mixing and matching; there are actually AWS services you can't find cheaper equivalents to elsewhere. My experience is that those of my consulting clients that want to migrate off AWS rarely have any problems replacing it. The cost savings usually pay back any development costs and the overall migration effort within 2-3 months at most.
If there were two such companies, then for that $108k I could write a services backend compatible with AWS, so after a year you could transparently switch to that system on your own bare metal and sell it so your own services end up being free.
I still haven't made my own "clone", because I can't afford the machines to start selling it.
It's where they make their money. Like when a restaurant pushes the desserts on you - the desserts have the highest markup by far on the menu.
AWS has a lot of 'free' services, which still have to be paid for. Some of those free services are things that benefit both the client and AWS, but would be avoided by many if folks had to pay for them (like IAM credentialling)
One popular high-traffic site I know built their own CDN to serve the large majority of their data by renting dedicated machines at OVH, Hetzner, etc. I can't remember which datacenters they actually use for their CDN, but it wasn't CloudFront or Google Cloud Platform.
Supposedly this has saved them immense amounts of money.
If your servers are efficient enough (and this is not hard to do these days), it's easy to get bandwidth-limited on a per server basis, i.e. your server could handle more traffic, but you've maxed out the bandwidth available to that particular server.
If you can load balance at the client, then you can "talk" to any server at the edge and don't need a router or proxy, so the net result is that you are only paying for whatever bandwidth comes with your OVH (or whatever) boxes. Effectively, you're buying bandwidth and the computer/storage/power/rackspace/etc. that comes with that bandwidth is free.
And yeah, it's ridiculously cheaper than AWS or Google's Cloud Platform to do things this way.
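A minimal sketch of the client-side balancing described above (server names and the retry policy are illustrative):

    # Minimal client-side load balancing: the client knows the edge-server list,
    # picks servers at random, and fails over on its own, so no central proxy or
    # managed load balancer sits in the traffic path. Hostnames are placeholders.
    import random
    import urllib.request

    EDGE_SERVERS = [
        "https://edge1.example.com",
        "https://edge2.example.com",
        "https://edge3.example.com",
    ]

    def fetch(path: str, attempts: int = 3) -> bytes:
        candidates = random.sample(EDGE_SERVERS, k=min(attempts, len(EDGE_SERVERS)))
        last_error = None
        for base in candidates:
            try:
                return urllib.request.urlopen(f"{base}{path}", timeout=5).read()
            except OSError as err:          # dead or overloaded box: try the next
                last_error = err
        raise last_error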
> Can someone explain to me why traffic is still so damn expensive with every cloud provider?
Because The Cloud(tm) IS cheaper--when you start and don't have any real bandwidth or CPU usage.
Meanwhile, every colocation facility I've gotten quotes from wants you to commit to a minimum of $500 for a partial cabinet. So The Cloud(tm) wins the contract and gets to bill in increasing amounts when usage finally goes up.
Finally, how many real system administrators still exist who can provision your systems, configure the network, and understand how to connect everything to the network without getting p0wn3d? If you don't have that person, you can't escape The Cloud(tm) even if you wanted to.
Well, considering how many small/startup shops expect the developers to also do IT chores, "the cloud" makes the most sense... spending time learning the insides of systems they don't care to truly maintain comes at a cost... time to do other things, or cost to pay someone else to do it.
In the end, the cloud makes sense in a lot of scenarios.
"The cloud" does not mean you don't need real system administrators. I see time and time again companies get bitten by this. Overall devops efforts to run this well on AWS or GCE in my experience tends to be higher than provisioning dedicated systems because you have so many artificial limits imposed on you by the providers that makes things harder.
E.g. your example: understanding how to connect everything to the network without getting hacked is far easier when your private network is physically wired to a separate switch and your public network is physically behind a firewall, and there's no configuration mistake in the world you could make that would change that, so the problem space for getting basic levels of security is reduced to configuring the firewalls correctly.
Still plenty of room to shoot yourself in the foot, but in my experience far less so than having people configure their own networking on AWS.
As for pricing, yes, if you want to do colo, the initial costs are higher. But dedicated rented servers with monthly contracts are also typically far cheaper than AWS for anything that stays up more than ~1/3 or so of the time (obviously it depends on the hosting provider). If you regularly spin up lots of instances for a short period of time, you should use AWS. But the moment you stop spinning them down again, it's time to rent capacity somewhere else.
Perhaps it is like the gas stations that sell gas for $4.99/gal when others sell it for much less. It's only worth their while to sell it if they make a healthy margin so they only sell to people willing to pay that much.
Storage is also a lot more expensive from 'cloud' providers; people often forget to look at the performance and redundancy and simply compare 'per GB' costs.
To clarify, we don't do that on Compute Engine. The number of IOPS you get is tied to the volume size for Persistent Disk. You choose between the two flavors (SSD and regular) and then size your disk. That does mean you have to buy more GiB than you "need" if you want to go faster, but PD is much cheaper than "bigger VM" in most cases.
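A rough sketch of what that sizing model implies; the IOPS-per-GB ratios below are illustrative placeholders rather than quoted Persistent Disk figures, so check the current docs for the real numbers:

    # "IOPS scales with volume size" sizing for Persistent Disk. The per-GB
    # ratios are illustrative placeholders, not quoted figures; the point is
    # simply that you buy capacity to buy performance.
    IOPS_PER_GB = {"pd-ssd": 30, "pd-standard": 0.75}   # assumed ratios

    def min_size_gb(target_iops: float, disk_type: str) -> float:
        return target_iops / IOPS_PER_GB[disk_type]

    print(min_size_gb(15_000, "pd-ssd"))       # -> 500.0 GB to reach 15k IOPS
    print(min_size_gb(3_000, "pd-standard"))   # -> 4000.0 GB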
So let's say... how much would a 2 TB volume cost that provides a consistent minimum of 100,000 random 4k write IOPS, is available to multiple VMs at once, and must be highly available at, say, 99.9%?
* Note: I went to use Compute Engine's cost calculator but it appeared the site was down / under heavy load?
> As a business, I wouldn't do it until the cost of the bandwidth+hosting exceeded the cost of an extra, dedicated employee to manage the VPS server(s).
Why do you think you don't need that extra person to manage the instances in a cloud setup?
My experience is the reverse: It tends to take more man-hours per instance to manage a large cloud setup, because there are many more spinning wheels. The overall complexity is often vastly larger. In fact, I have clients I manage physical servers for where the time taken per server is on average still far lower than for cloud instances even including the 2+ hours lost on travel per visit to one of the data centres if someone has to physically go in (rather than rely on "remote hands").
This is before factoring in typically higher utilization rates for the dedicated hardware, because it's easier to customize it to get the right balance of RAM, CPU and IO for your workload. The result is usually fewer dedicated servers than you would have cloud instances.
If I'm using RDS or Azure-SQL, I'm not managing a database server... The list goes on, but when starting, you may only have one person or two working on actual development... features are important... actual customers and actual revenue may well be more important than scaling to millions of users.
Once you need to scale, you need that expertise... if I can use RDS, Azure SQL, or a number of other options to manage database services, or other systems without dedicated staff, that buys time to keep the lights on while actual solutions and features are created... an MVP needs to work... And "wasting" a few hundred a month on hosted services while trying to get something working is better than having to spend that time becoming experts on infrastructure, databases, or any number of other systems.
I'm not saying don't optimize, but I am saying that you shouldn't switch infrastructures unless you are saving enough to cover additional talent.
That is a key question I have been pondering myself.
One theory of mine (perhaps uninformed; I'm not really a networking expert) is that because of the dynamically configurable nature of their systems, they need to use routers rather than relatively dumb and cheap switches at almost every level - in order to have flexible networking and still maintain isolation between customers.
This could get quite expensive if you have to pay Cisco/Juniper for this. If this is true Google will have quite an edge with their software defined networking here, I would guess.
That's Google. They have put their cost levels somewhat below Amazon's. Maybe they don't see the need to be 5-10x cheaper than the market leader in traffic costs even if they could...
SDN is changing the model here, and Google is way ahead. In an enterprise, you can use VMware to do a lot of the stuff you're blowing big bucks on Cisco/Juniper for, and use higher-density switches.
SDN is going to turn the cost structure on its head -- I wouldn't want to be a network guy now, easily 60% of tasks are getting vaporized in the datacenter.
As a network guy, it's a _great_ time to be an experienced network person. The only mature aspect of the ill-defined SDN sphere is OpenFlow and that will only get you so far. Try as they might, controllers like OpenDayLight and the various things that plug into Neutron/OpenStack aren't plug and play for those w/o significant network knowledge.
From my vantage point, it's going to be at least another five years before the cost structure really does turn on its head for folks below the hyperscale level.
Google's really ahead on the networking front, and other cloud providers are following suit. Networking hardware is super cheap now. When you couple that hardware with open-source software, networking gets cheap.
Indeed - Internet Transit at scale (10 Gigabit+ ports) goes for around $0.63/Mbps at 95th percentile. [1] - for the above quoted 700 Terabytes/month, that works out to $1341/month, if it's evenly spread out on the lower 95th percent of the circuit at around 2.129 Gigabits/second.
Hosts play a big part in SDN in that they run the distributed vswitches alongside the guest VMs. Not everything is a Cisco/Juniper box. Dedicated switching hardware is still common at the ToR and egress layers.
VLANs and virtual appliances in the same environment as the guest machines to facilitate routing should allow for scale without costing these virtualization providers too much.
That really depends on volume, location, and provider. For large volumes and cheaper providers (Cogent, HE.net, etc.) it's been that way for 2-4 years or more. HE.net will now sell a full 10GbE port for $2600/mo, and Cogent isn't too far behind. Sub-$0.40/Mbit at >25Gbps volumes in major locations is doable.
Pretty sure AWS (at least) builds their own network hardware. I remember reading something a while back that said they found it orders of magnitude less expensive than buying enterprise hardware, with better performance, as they went about the affair as scientifically as you'd expect them to.
Old CoyotePoint routers were just a commodity x86 motherboard with an ancient SSD instead of spinning rust. Junipers use a duo of x86 (routing engine) and ASIC (packet forwarding engine). Cisco has supposedly moved from that architecture to an ARM and ASIC pairing.
The ASIC is just a hardware offload for known routes. Unknown routes, admin work, and Ping packets are handled by the x86/ARM CPU. It's not too different from offloading graphics work to the ASIC on your graphics card, or your mining to your Bitcoin ASIC.
We are consolidating all of our cloud services onto Google Cloud and couldn't be happier. We've had north of a thousand virtual machines scattered across ~6 second- and third-tier providers, and switching to gcloud has been a game changer for us.
> We've had north of a thousand virtual machines scattered across ~6 2nd and 3rd tier providers and switching to gcloud has been a game changer for us.
All of the success stories I've heard about Google Cloud are from companies using significant resources. Why hasn't Google gone after startups? Perhaps I'm missing something, but a turnkey package of computing, analytics, and advertising seems like a no-brainer.
I can't speak for the OP... but from what I've seen, it's extremely good. Consistent, fast performance, and their proprietary "live migration" really stands out. Besides really good raw machine speed, the inter-networking is also far superior.
They've been a heavy Azure user too. Probably more than AWS.
I'm glad there are now at least 2 and probably 3 competitors for public cloud infrastructure. So many things were at risk, including adoption of public cloud in general, when it was effectively sole-sourced from Amazon (OpenStack/Rackspace/etc. was basically stillborn, VPSes aren't the same thing, and VMware was never really credible for public cloud).
Neither GC nor Azure are as comprehensive as AWS, but together at least one of them is usually a viable alternative for any given deal.
Google has some really interesting features closer to Docker, so there are better mobility options from private/VPS to Google and back. They seem to have some of the best compute options out there, and tend to perform above the others in a lot of ways.
Azure's services are imho a bit easier to use, at least in my limited experience, mostly VMs, queues, tables and hosted SQL.
AWS has so many options and services it's hard to keep some of them straight... Lambda is really interesting imho, and some of their options for data storage are compelling to say the least.
Joyent's Triton/Docker option is really interesting, but their pricing model just seems too much for what they're offering. I do hope that they have success in terms of selling/setting up private clouds though... there's a lot of big companies that would be much better off with their solutions.
Can someone provide a little context on this exodus from AWS to Google Cloud? I understand in Dropbox's case that they (questionably) need their own infrastructure for cost savings. But then there's Apple and Spotify suddenly changing over. What's the advantage?
I have a fear that this trend among large companies is going to trickle down to smaller ones and independent devs. Considering these "Cloud Wars", I can see stories like this continuing with different providers. Ultimately, a scenario could occur where one year one provider is king, then the next, everyone decides they need to migrate to the next big thing. That would be irritating for us contractors. We would have to learn new interfaces and APIs at the same rate as JS frameworks.
There is no exodus. There are a lot of companies moving to multi-cloud, which makes sense from a disaster recovery perspective, a negotiating perspective, and possibly from cherry picking the best parts of each platform.
This is what Apple is doing. They already use AWS and Azure in large volume. This move adds the #3 vendor in cloud to the mix and isn't really a surprise.
Thanks for the answer. That makes a lot of sense. I guess to some degree I did know this, but the media has been portraying these moves as complete migrations, hence the whole "exodus" hype. It bothers me still, because this rhetoric may lead to the scenario I described above for smaller companies.
The media is sensationalist as ever — I would worry about any CTO or Engineering Lead who based such a huge important decision on a Business Insider article.
It's more catch-up than an exodus, but also overtaking in some ways. Short version: I'd say pricing and data processing (Dataflow, Dataproc, and especially BigQuery).
Their core network infrastructure is more advanced, and Live Migration is pretty nice too.
Long version: the recent posts about Spotify's and Quizlet's moves to GCP dive deep into their reasons why.
I'm so sick of EC2's rogue 'underlying hardware issues' and EBS volumes dropping dead... the AWS console status will say everything is 'Okay' even when there are major problems - it's a joke... I wonder to myself, is it because I recently migrated over (December 15) and they are starting to buckle? Really a bad experience. At this rate I'll be looking at Google next month, or going back to colo (25 servers, 100TB), so not much, but still worth doing right.
I've had ~25-30 instances running for the past 3 years and only had 1 or 2 instances have hardware issues, never had issues with EBS. Running on us-west-2 but it seems like more issues happen in us-east-1.
I'm on Virginia zone D... the other day a 15TB EBS went down, even with status as good. Their explanation, which took a lot of time/energy to get, was that the 2nd replicated copy had a failure, and when rebuilding from the 1st good replicated copy (primary) it suffered an unknown error taking down that copy as well... I was upset to say the least.
Challenge accepted. Been building carrier grade equipment with significantly lower failure rates than that for >15 years. Gear I designed in my first year out of grad school is still in field use today.
New challenge: build your machines for low cost from commodity hardware, rent your machines out to millions of customers and never have a single customer have > 1% of the hardware they land on fail.
Agreed. Similar results here over the years. I shouldn't bash EC2 so hard, but they also shouldn't keep the same uptime estimates when they have degraded over time. My developer made it all pretty clear to me with the statement "if you tell me the strengths and weaknesses of the system, I'll code accordingly." Great developer; he can work around EBS failing at three nines, but only if they state that and not five nines!
I just find it weird that every time "the cloud" comes up on HN, people defend it as hard as they can, like running servers yourself is some voodoo magic to be shunned. Usually with examples of "well, X is only saving $56,000/month with this switch away from the cloud! surely they're making a terrible tradeoff in an increase in employees!".
The answer is no. People do these calculations before moving stacks. The cloud is where VC money goes to pad Amazon's bottom line. AWS is insanely overpriced if you actually sit down and do the numbers. I'm our company's part time sysadmin on a bunch of bare metal servers, I spend maybe 1-2 hours total per month kicking things/filing hardware replacement tickets/etc.
I don't understand this mindset against learning the entire stack. You should understand hardware, network and OS. Maybe I'm too old.
Would be interesting to know what kind of discounts Apple got on this. It's a massive PR win for Google, the kind I expect they could give $100m for. Apple is also notorious for getting a very sharp price from their suppliers, so the combination suggests there were some steep discounts.
The public cloud prices bear no relation whatsoever to what large customers pay.
I know people spending less than $1m/month that are paying ~25% of the public prices on one of the top three cloud providers. Frankly, I'd be surprised if Apple is paying more than 10%-15% of the public pricing.
The reason is that anything above that, and you can save massively by going to more traditional dedicated hosting.
My guess is that it's pretty much just BigQuery. No one else seems to be able to compete, and that's a big deal. The companies moving their analytics stacks to BQ and thus GCP probably make up the majority (in terms of revenue) of customers for GCP
Given how cheap BigQuery is, there would have to be a lot more BQ-only customers than customers using other services for that to hold. And given how seamlessly the different products work with each other, any beachhead product like BQ will quickly garner more product usage.
I doubt it. Not only does Apple (maybe?) run one of the largest Cassandra clusters in the world, but surely they wouldn't leverage cloud provider features over open source alternatives for fear of vendor lock-in.
Apple does operate its own infrastructure. It has numerous, exceptionally large data centers in the USA, Europe and China. Most notable is the $1 billion, 500,000 square foot facility in Maiden, North Carolina.
Apple probably augments their own infrastructure with cloud providers for various reasons, e.g. increasing geographic diversity, allowing for progressive growth, and to handle comparatively small jobs (e.g. merely a few hundred VMs).
I imagine it would also be a waste of Apple's time to tool up their own data centers to offer general purpose cloud computing services.
Apple doesn't have "numerous" facilities. Compared to Google or Amazon they have very few. They really only have two worth mentioning, and having only two is the most expensive thing you can do, with 50% natural overhead.
Apple has at least 5 data centers in the US alone.
And it makes sense that the overall size of their facilities would be much smaller than Amazon, Google, or Microsoft... They're not running a major search engine or offering anything like AWS, GC, or Azure.
I think Apple does a combination. Both (from the article) of hosting on AWS, Google, & Microsoft, but also on its own data centers.
I suppose it also depends on what is being hosted. If you look at Netflix & Dropbox, they both took control of their core piece (CDN & Storage) - not the entire end to end platform. I'd imagine Apple does something similar.
In Netflix's case, I believe owning specialized, custom-built systems to handle CDN and storage is essential. I think this CDN is the content/media CDN, not the web tier, which I believe is still on Amazon. But feel free to correct me.
I'd venture to guess it's about the same -- CDN (for their App Stores, OS updates, etc) and storage (iCloud backups, etc). My guess is their cloud compute needs are relatively low compared to storage and content delivery.
edit: I should note that yes, as the other poster said their core business is hardware, but their core cloud needs are what I posted.
They do operate data centers already and are building more[1] but the lead times on such large infrastructure investments are not insignificant. By using cloud providers they can meet the current demand while still investing for the future and later repatriating those workloads back to their own infrastructure when it is ready.
IBM may have the best coverage, but I believe their resiliency leaves something to be desired.
That said, if you're Apple, you could probably get IBM to do whatever you want.
Anecdotal, but lead infra guy for a global top 20 bank told me that IBM installed their choice of routers, and ran custom fiber into SoftLayer for them, to fix some of the more pressing SPOF issues.
They've been using Google Cloud Storage for blob storage of iMessage attachments for a little while now. They seem to use a combination of Amazon S3 and GCS (just watching connections coming out of the app on OS X).
I guess the article does say it was attributed to her, but whenever I read an executive-focused press article, I just think of the team that worked hard for months to get to this point, and suddenly the newly-hired senior executive marches in, attends a few meetings and reviews, makes a few phone calls, and then winds up getting all the credit. Seen it so many times at big companies.
Especially irksome is whenever a product launches or a deal is signed, the exec replies-all to the mass internal celebration email with a "So proud of this team!" message. Ok, thanks for smiling upon us peons with your lordly approval, after the 4 hours total you personally put into the effort.
Consider the possibility that the team doesn't mind the executive getting the credit, or perhaps enjoys doing great work regardless. I also used to view myself as a lowly peon, but that overshadowed the satisfaction of a job well done.
Also, consider Greene's (no relation) Law #1: Never outshine the master.
From my understanding, and I could be wrong, Apple does more on Azure than they do AWS. Also they aren't leaving AWS or Azure, but are diversifying to other cloud providers for scalability and uptime.
If you run Little Snitch on your Mac and have your photos sync with Apple, you'll notice the Photos agent has been going to Google for quite a while now. Maybe it was a trial?
I'd say this is why iCloud is about 2x the price of other cloud providers: because they don't run it themselves and want a profit margin.
iCloud Drive pricing is equal to that of Google Drive: $3.99 for 200GB (Google doesn't offer 200 but 100GB at $1.99). At 1TB, both iCloud and Google prices are $9.99.
I don't think someone at Apple looked at Amazon's pricing table and Google's pricing table and decided to move to Google.
Very likely the sales teams of Azure, Amazon, and Google did the mating dance for a few months, sharing their future plans etc. Quite possibly the government's stance on encryption was one of the things discussed.
Some people must have played golf together and eventually made some decision. Also, very likely Apple will be well invested in all these three players and will remain so for a long time.
I'd be super interested to know what their backend looks like (at least the new stuff, not WebObjects), I wish they were as open as Facebook with regard to tech.
Unfortunately that's probably a wish that will forever be unfulfilled.
Does anyone enjoy working at AWS? Maybe the Zon will have to up its game to compete, but they're so mired in employee-thrashing it seems unlikely. Is it getting better there or worse? This seems to question that.
Good for Amazon too: it'll make them compete better on innovation and price. They have been quick to introduce products, but their technical infrastructure and abstractions thereof seem to lag Azure and GCP, and investment in those take a long time to pay off.
Whoever wins... we lose. But really, I'm glad that Google has stepped up with their cloud services (they will be revealing more awesome stuff at the GCP Next 2016). And looks like they have the best "cloud core": https://quizlet.com/blog/whats-the-best-cloud-probably-gcp
"It's been only four months since Google convinced enterprise queen Diane Greene to lead its fledgling cloud-computing business, but she's already scored a second huge coup for Google"
I can clearly see Google Cloud winning the cloud industry. It's a matter of when, not if. Cases like this and Spotify will make the shift happen sooner rather than later.
There are quite a few very powerful players in this segment and I don't see anybody 'winning' to the point where they will exclude the others. Just a lot of secret sauce and attempts at locking in the customers.
What you will see is a shift from dedicated hosting providers to cloud providers, which is one of the reasons why almost every large dedicated hosting provider now has their own cloud offering.
And that is borne out by the evidence. In fact, if Google 'won' the cloud battle and, let's say, Amazon ended up as a Google customer, we'd all lose. I don't think that's even a remote possibility at this point.
Yes, Google will not "win" at the total expense of Amazon & Microsoft, but I would bet a good deal of money that they'll become the market leader within the next five years, and likely sooner. The rate at which Google has been open-sourcing things, too, will further expedite this, and the fact that they just joined OCP will give them better industry credibility on the data center / computing side.
"Each file is broken into chunks and encrypted by iCloud using AES-128 and a key derived from each chunk’s contents that utilizes SHA-256. The keys, and the file’s metadata, are stored by Apple in the user’s iCloud account. The encrypted chunks of the file are stored, without any user-identifying information, using third-party storage services, such as Amazon S3 and Windows Azure." (https://www.apple.com/business/docs/iOS_Security_Guide.pdf)
Although your IP address and some other connection metadata will be known to Google.
Ever seen an analysis of the traffic and breakdown of the metadata you speak of? If an account or device or advertising or other unique ID is sent to Google, it could help Google to track the user's IP Address changes and locations.