Unfortunately these changes don't really resolve that problem. "Standard" pricing is a paltry 20% less. That 1TB video egress still costs $80 and for that price I can rent a beefy server with a dedicated gigabit pipe for a month.
Why is "Cloud" bandwidth so damned expensive?
I'd love a "best effort" or "off peak" tier. I imagine Google's pipes are pretty empty when NA is asleep and my batch jobs aren't really going to care.
I like that I don't have to do calculus to understand how I'll be billed, or write my own retrieval software.
If you want your data back in < 5 minutes it's $0.03/GB; in 3-5 hours, $0.01/GB; in 5-12 hours, $0.0025/GB.
AWS Glacier is cheaper per month than Google ColdStorage, but Google doesn't charge more for faster access to your data. So maybe you still need that calculus to figure out which is the better deal for your data access pattern.
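A back-of-the-envelope way to do that calculus, as a sketch. The Glacier retrieval tiers come from the comment above; the storage rates and both Coldline figures are assumptions of mine, so check the current price sheets before relying on any of this:

```python
# Rough monthly cost for archiving `stored` GB and restoring `retrieved` GB.
# All prices are illustrative 2017-era $/GB figures, not authoritative.
GLACIER_STORAGE = 0.004          # $/GB/month (assumed)
GLACIER_RETRIEVAL = {            # $/GB, from the tiers quoted above
    "expedited (<5 min)": 0.03,
    "standard (3-5 h)": 0.01,
    "bulk (5-12 h)": 0.0025,
}
COLDLINE_STORAGE = 0.007         # $/GB/month (assumed)
COLDLINE_RETRIEVAL = 0.05        # $/GB, one flat rate for any speed (assumed)

def monthly_cost(stored, retrieved, storage_rate, retrieval_rate):
    return stored * storage_rate + retrieved * retrieval_rate

stored, retrieved = 4096, 100    # e.g. 4 TB archived, 100 GB restored a month
for tier, rate in GLACIER_RETRIEVAL.items():
    print(f"Glacier {tier}: ${monthly_cost(stored, retrieved, GLACIER_STORAGE, rate):.2f}")
print(f"Coldline: ${monthly_cost(stored, retrieved, COLDLINE_STORAGE, COLDLINE_RETRIEVAL):.2f}")
```

Which option wins depends almost entirely on how much you restore per month and how fast you need it.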
I know it may not matter that much for backup; I'm just wondering why they would set the bar so low - i.e., how do they organize the storage such that they had to set it so low? I'm guessing MAIDs, but I still don't get it.
One advantage of Amazon versus many other cloud backup providers is that (for a fee), they'll ship you your data on a hard disk if you need it in a hurry. Having cheap access to your 4TB of data may not be such a great deal if it takes 3 months to download it.
The calculator shows the same setup (albeit with some of the rate information hidden) as costing $3,934.91
Backblaze B2 has no retrieval cost, so the same setup (storing 10TB for a month, then downloading) would cost only $250 (less than 1/3 of the cheapest Glacier option). Glacier is really only a viable option if you're sticking within the same region (where transfer is free) or you plan to literally never retrieve your backups.
Otherwise, B2 and other alternatives are much cheaper.
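For what it's worth, the $250 figure above checks out under B2's prices at the time - $0.005/GB/month storage and $0.02/GB download, both of which are assumptions here:

```python
# Store 10 TB on B2 for a month, then download all of it.
gb = 10 * 1000                  # 10 TB in decimal GB, as billing usually counts
storage = gb * 0.005            # $/GB/month storage (assumed)
download = gb * 0.02            # $/GB download; B2 has no separate retrieval fee
print(round(storage + download, 2))   # 250.0
```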
Glacier has 3 charges: $/GB to store the data, $/GB to retrieve the data from "cold" storage, and $/GB to download it.
Retrieval !== downloading ("transfer"). Retrieval is an additional cost on top of the download ("transfer") charge.
Interestingly they say you can't use the storage for commercial purposes (whatever that means).
Say I'm a smallish ISP on the east coast and want to send 1 TB to another smallish ISP on the west coast. I probably have a contract with, I don't know, AT&T or whoever owns the cables (I'm from Europe so I have no clue, in Germany it would be Telekom). I'm guessing it would probably cost cents (and that would bring my data across the continent, not just out Google's door)? Definitely nowhere near $80.
There are also IXs, where you can peer with a lot of different networks in one place (DE-CIX is the biggest for example, located in Frankfurt).
Transit is billed at the 95th percentile: if you commit to 1 GbE on a 10 GbE uplink, you pay for the 1 GbE outright and can burst up to the 10 GbE the uplink provides. If your 95th-percentile usage at the end of the month is over 1 GbE, the excess Mbps are billed at a per-Mbps rate.
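A minimal sketch of how that 95th-percentile bill comes together (the prices here are invented for illustration):

```python
# Usage is sampled (typically every 5 minutes); the top 5% of samples
# are thrown away, and you pay for max(commit, 95th percentile) Mbps.
def transit_bill(samples_mbps, commit_mbps, commit_price, price_per_mbps):
    ranked = sorted(samples_mbps)
    keep = len(ranked) * 95 // 100          # discard the top 5% of samples
    p95 = ranked[keep - 1]                  # highest remaining sample
    overage = max(0, p95 - commit_mbps)
    return commit_price + overage * price_per_mbps

# 1 GbE commit on a 10 GbE port: bursts to line rate are free as long
# as they stay within the top 5% of the month's samples.
samples = [800] * 95 + [9000] * 5     # roughly 36 hours of hard bursting
print(transit_bill(samples, 1000, 550, 0.50))   # 550.0 - the burst is ignored
```

This is why short bursts are effectively free on transit, while sustained traffic is what you actually pay for.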
Now, to answer your question, it basically works like that:
You are present in NYC and you want to reach a customer on the west coast who uses Comcast. You would buy transit in your NYC facility from Comcast to get a direct path to them; they take care of everything else. You don't have to lease/rent/buy the dark fibre (cables) from the east coast to the west coast - the provider (in this case Comcast) is doing that already. If you don't buy transit from Comcast but instead from some cheap carrier (say, Cogent), they will do the same BUT not guarantee a specific bandwidth to Comcast: you only get 10 GbE to Cogent, and you don't know if you can push the full 10 GbE through to Comcast.
Transit providers also vary vastly on price. DTAG (Telekom) for example is ridiculously expensive (5x the prices of regular transit providers).
Let's say you want a decent pipe (transit) with a 1 GbE commit on 10 GbE: you would pay something like $500-$600 for the 1 GbE commit and around $0.50 for every additional Mbps after that. Additionally you need a router, preferably two (redundancy), plus fiber cables, transceivers, etc. All of those things are very expensive.
What you also notice is that you can't buy transit with per-TB billing, yet most hosting providers offer you per-GB/TB billing. They need to commit to a certain amount of transit and hope they calculated their commitments against actual usage well. Anyway, the $80-120 per TB the cloud providers are charging is a bad joke.
You can get a gigabit pipe from Hurricane Electric or FDCServers for $400/month. In a month it can take 327TB of data, so the cost of 1TB is ~$1.20.
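The arithmetic behind that, for anyone who wants to check it (assuming a fully saturated pipe and a 30-day month; a slightly longer month gets you close to the 327 TB quoted above):

```python
seconds = 30 * 24 * 3600        # one 30-day month
bits = 1e9 * seconds            # 1 Gbit/s, fully saturated the whole time
tb = bits / 8 / 1e12            # decimal terabytes
print(round(tb))                # 324
print(round(400 / tb, 2))       # 1.23 -> dollars per TB at $400/month
```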
Other datacenters can go cheaper if you need less rackspace and power, bandwidth is cheap!
This is one of the easiest ways for cloud providers to make money. I very often see people doing lots of advanced calculations for compute and choosing providers based on that without really thinking about bandwidth usage.
Famous last words, "Oh, it's a web service so bandwidth shouldn't be a big deal."
Because compute is the loss leader to get you in the door. The bandwidth is where they make their money.
It's most likely vendor lock-in and people are dumb enough to pay for it.
There's no other rational explanation why Azure or Google aren't hammering AWS on bandwidth cost as a competitive angle. They're basically colluding, no different from airlines that somehow magically agree to simultaneously raise prices, or department stores that for decades all agreed not to discount the perfume section of their stores, etc.
But it's not fair to say that Google's price is high just because some random hosting company gives you unmetered 500 Mbps for $99/month. As we have seen with many "unlimited" cloud storage offerings, some companies make unsustainable offers to attract customers. There are different strategies behind this: some just count on the majority of customers not using everything they could; others know a price hike is inevitable, but believe customers will stick around anyway.
No, you can't. You don't get a dedicated gigabit pipe for that price.
With 2 minutes of looking on webhostingtalk I found:
Sure it's a no name provider, but you can get an Intel E3 for $80 with a "Guaranteed 700 Mb/s" pipe.
Edit: Clearly I misread the original quote given the OP response
You are aware that Google is a Global Company, with users all over the 'globe' yes? That not all "people" (or "users") are in: "NA"... right?
Disclosure: I work on Google Cloud.
Remote desktop with super accurate color instead of lots of dithering and compression is tricky. Are you on Windows or Linux?
The software I need (mostly Davinci Resolve) runs on all major platforms, including Linux, so I can be on whatever I need to be.
Cache egress is about the same price as the "Standard" tier but you have to pay the $0.04/GB cache fill price as well.
The only real potential discount is Direct Peering or Interconnect, which can get it down to $0.04/GB. That's still $40/TB though.
Egress and ingress are outgoing and incoming data, respectively.
Cloud providers are expensive because they want vendor lock in. They want to make it artificially cheaper to use their own services (elasticsearch, etc) over something else. And they want a high switchover cost when you want to transfer all of your data to another provider.
- An unmentioned alternative to this pricing is that GCP has a deal with Cloudflare that gives you a 50% discount to what is now called Premium pricing for traffic that egresses GCP through Cloudflare. This is cheaper for Google because GCP and Cloudflare have a peering arrangement. Of course, you also have to pay Cloudflare for bandwidth.
- This announcement is actually a small price cut compared to existing network egress prices for the 1-10 TiB/month and 150+ TiB/month buckets.
- The biggest advantage of using private networks is often client latency, since packets avoid points of congestion on the open internet. They don't really highlight this, instead showing a chart of throughput to a single client, which only matters for a subset of GCP customers. The throughput chart is also a little bit deceptive because of the y-axis they've chosen.
- Other important things to consider if you're optimizing a website for latency are CDN and where SSL negotiation takes place. For a single small HTTPS request doing SSL negotiation on the network edge can make a pretty big latency difference.
- Interesting number: Google capex (excluding other Alphabet capex) in both 2015 and 2016 was around $10B, at least part of that going to the networking tech discussed in the post. I expect they're continuing to invest in this space.
- A common trend with GCP products is moving away from flat-rate pricing models to models which incentivize users in ways that reflect underlying costs. For example, BigQuery users are priced per-query, which is uncommon for analytical databases. It's possible that network pricing could reflect that in the future. For example, there is probably more slack network capacity at 3am than 8am.
I personally still hold that pay-per-use pricing is the cloud-native approach, the most cost-efficient, and the most customer-friendly. However, it's unfamiliar and hard to predict, so starting out with Flat Rate pricing as a first step makes sense.
(I work at Google and was part of the team that introduced BQ Flat Rate)
Contrived example of this: since most HDD workloads are IOPS-bound, you decide to sell IOPS bundles and give the space away for free. Before long, all your customers are backup companies with low IOPS and high space usage. Your service runs at a loss, and customers are doing nice price arbitrage on top of it.
Same goes for all aspects of computing platforms for sale: CPUs, RAM, Networking, HDDs, SSDs, GPUs.
Two additional problems are bin packing and provisioning: you need to sell things in such quantities and ratios that you can actually effectively utilize your hardware configurations. You need to order and design your hardware in a flexible manner to be able to adapt for changing ratios of component needs due to changing customer demand.
So it's easier to run "pay for what you use plus profit" pricing, but some customers don't like it due to perceived complexity and potential unpredictability.
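The contrived IOPS example above, as a toy calculation (every number here is invented):

```python
# A provider sells IOPS bundles and gives disk space away for free,
# but its real costs are driven by physical disks.
DISK_COST = 10.0    # $/month per 4 TB HDD, amortized (invented)
DISK_IOPS = 150     # sustained IOPS of one spinning disk (invented)
DISK_TB = 4

def provider_margin(customer_tb, customer_iops, price_per_100_iops):
    revenue = customer_iops / 100 * price_per_100_iops
    # enough disks must be bought to cover BOTH the IOPS and the space
    disks = max(customer_iops / DISK_IOPS, customer_tb / DISK_TB)
    return revenue - disks * DISK_COST

print(provider_margin(1, 300, 8))     # IOPS-heavy web app: positive margin
print(provider_margin(400, 10, 8))    # backup customer: a deep loss
```

Any dimension you give away for free ends up dominated by the customers whose workload is bottlenecked on exactly that dimension.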
Compare with the difficulties of https://cloud.google.com/images/locations/edgepoint.png
Elegant and subtle work. Just like the networking.
What is the ping time from India to Iran? That is a long trip for a packet.
The route was Chennai-Amsterdam-Jordan-Iran
The packets in the trace you posted were presumably routed by someone besides Google. Or did you run that traceroute from inside GCP?
Or I could be misunderstanding the diagram and maybe Google has some connections that aren't shown. I don't really get what the ">100" in "edge points of presence >100" means in the legend.
You know when I saw that map it made me wonder which nation (or nations) those two large point of presences represent in the Gulf. It's Iran, isn't it?
I think Google missed an opportunity here. They should have cut prices more significantly for the standard tier (sacrificing performance) to make it more competitive.
Right now Linode's and DO's smallest $5 plan offers 1TB of transfer, which would cost $85.00 on Google's new standard plan.
Compared to a VPS or renting dedicated servers, the egress costs can be enormous if you come even close to using the traffic quota you get with many VPS or dedicated hosts.
Just as a comparison, a dedicated server with Hetzner for ~50-70 EUR per month includes 30 TB of traffic, which would be at least 2,400 EUR on the Google Cloud.
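That EUR 2,400 figure is roughly 30 TB at Google's cheapest standard-tier rate - about $0.08/GB at high volume, which is an assumption here, treating USD and EUR as roughly equal for the comparison:

```python
tb = 30
rate_per_gb = 0.08                   # assumed cheapest volume-tier $/GB
gcp_egress = tb * 1000 * rate_per_gb
hetzner = 70                         # EUR/month, 30 TB of traffic included
print(round(gcp_egress))             # 2400
print(round(gcp_egress / hetzner))   # ~34x the dedicated-server price
```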
On another note, Softlayer used to have generous multiple TB allocations for their dedicated servers but they took them away. It's likely that anyone needing high bandwidth, especially for static content, will have other edge networks and CDNs in place for that use case.
But most of their money comes from the Netflixes and the Adobes of the world, not from mom-and-pop stores :p
I suspect that the traffic quota on the smaller hosts' plans is sold below their own cost - assuming (correctly) that the vast majority of customers won't use anywhere close to the limit.
The traffic quota also scales much slower than the price as you move to bigger, more expensive plans.
I guess it's fine if you reach the limit on a few servers, but if you rented 1000 x $5 droplets from Linode/DigitalOcean and maxed out all of their traffic quota, you would get kicked out. Has anyone tried to use these hosts just as cheap file servers?
It might happen but it doesn't refute the parent's point:
"Right now Linode's and DO's smallest $5 plan offers 1TB of transfer, which would cost $85.00 on Google's new standard plan."
* It's hard to push a lot of useful data with a tiny VPS.
So Google is offering a new "standard tier" equivalent to AWS and Azure, and undercutting both of them on network egress costs by a small amount.
Network egress costs are still astronomical.
I’m surprised it took two decades to see this network policy choice productized for the mass tech market.
If you're interested in the history of earth-scale networks I recommend this free documentary on Cyrus Field and the heroic struggle to lay the first transatlantic cable: https://www.youtube.com/watch?v=cFKONUBBHQw
Love the parts in Cryptonomicon where they delve into the sea cables stuff.
Edit: there seems to be a bit of confusion about what I'm referring to. I'm referring to the Open Internet Order of 2015, which states:
> 18. No Paid Prioritization. Paid prioritization occurs when a broadband provider accepts payment (monetary or otherwise) to manage its network in a way that benefits particular content, applications, services, or devices. To protect against “fast lanes,” this Order adopts a rule that establishes:
>
> A person engaged in the provision of broadband Internet access service, insofar as such person is so engaged, shall not engage in paid prioritization.
This is Google acting as your network provider. You first choose Google, and then take your pick amongst their offerings. It's pretty much the same situation as it has always been for residential ISPs, where you have always had providers offering tiers at various speed/price points.
Net neutrality becomes relevant when you don't have a say in the choice of network. Practically, this will usually mean your audience's ISPs (called a "broadband provider" in that order). You get your servers set up at AWS, or GCE, or Linode or wherever you like the terms. But then, no matter what, you get an e-mail from some residential ISP in Florida, telling you that it'll cost you XX$ if you want to stream to your customers on their network at 1080p.
The problem is pretty obvious: the consumer choosing the ISP is not going to take your welfare into consideration, especially if your company doesn't even exist yet. The ISP won't be able to squeeze google, facebook, or amazon. But anything not used by 50%+ of the population simply has no power in these negotiations.
The result will be ISPs capturing almost every last cent you can earn amongst their customer base. If your service competes with any existing offering by either the ISP or a company with deeper pockets than yours, you will never have a chance to compete on quality.
Oh, and you won't just get an email from that ISP in Florida. You'll get an email from every single ISP in the world. Part of the startup decision making process will be like MadTV (if anybody remembers): should we invest these $2,000,000 into our product, or is it better spent on getting access to the markets in Iowa and eastern Michigan? Should we continue haemorrhaging money in southern Europe, or is time to cut our losses and go dark on that continent?
This is simply reflecting the reality, that different network connections have different costs and different performance. It costs Google more to transfer it over their own network than to deliver it to another network near your server, and the same in reverse, accepting traffic for your server only from nearby networks saves costs, and they pass the cost savings to you.
The problem with "network neutrality" is that the network was never neutral. Using a different connection that is shorter, or otherwise has less latency, and is less congested provides better performance, but that's rarely been exposed to end users. In Europe, it used to be common for customers to pay a different rate for EU traffic and out of region traffic, due to cost of transatlantic bandwidth, but I think that's mostly fallen out of favor as global bandwidth increased.
Using that same argument, Google should be obligated to provide maximum interface speed from any of their compute instances to any point on the internet, and should not be allowed to sell a tier that gets closer to that ideal.
Really? Even a good ten years ago, when I was looking for apartments with good DSL speeds, it was always well explained that the listed bandwidth was an ideal number that didn't apply to things outside the ISP's direct control.
To quote a nice layman summary from the EFF/Save The Internet:
> Net Neutrality is the internet’s guiding principle: It preserves our right to communicate freely online.
>
> Net Neutrality means an internet that enables and protects free speech. It means that ISPs should provide us with open networks — and shouldn’t block or discriminate against any applications or content that ride over those networks. Just as your phone company shouldn’t decide who you call and what you say on that call, your ISP shouldn’t interfere with the content you view or post online.
>
> Without Net Neutrality, cable and phone companies could carve the internet into fast and slow lanes. An ISP could slow down its competitors’ content or block political opinions it disagreed with. ISPs could charge extra fees to the few content companies that could afford to pay for preferential treatment — relegating everyone else to a slower tier of service. This would destroy the open internet.
No real mention of advertised speeds being adhered to. "Speed" in the context in which you were using it also would not be correct: speed in the context of Net Neutrality means no prioritized slow-downs or speed-ups in exchange for cash, essentially. Again, I think that had more to do with access and not actual megabit speeds.
As a typical consumer, this negotiation is out of your hands with ISPs balancing performance and profit (although tilted towards the latter) but as a Google Cloud customer, you now have some control over how much of their network you use - all the way to the edge or just regional - without any discrimination of the traffic itself.
IMO net neutrality was never about equality of outcomes (there's probably not really any way to equalize performance short of socializing the Internet), but that seems to be a common misconception.
Disclaimer: I work for Google but not in networking. This comment is just my best guess.
I doubt that anyone would keep ALL of them for themselves.
"The new cable will become the sixth submarine cable that Google has a stake in (the others are Unity, SJC, FASTER, MONET and Tannat)."
So it looks like Google isn't the sole owner, but they do partner with others to do these.
If you pay this "public internet" rate, you're paying essentially 2007 transit prices. I hope you don't need to ship a lot of traffic. I hope you don't need to compete with someone that's paying market rate.
I would love to use GCS for our infrastructure, but with rates like this, it's hard to imagine us ever switching.
What does this mean? N+2 redundancy should mean that even if both go down, the service will not be affected at all, no?
With N+1 redundancy, if there's a fiber cut, I won't call the remaining connection "unaffected", since it's no longer fail-safe.
Edit: but you're right, this might be what they mean.
I guess transit is still cheaper than maintaining one's own lines...
But in this context, it is about the providers of the internet's "backbone" providing the same service to everybody. Usually, different companies and organisations have their own public networks and allow each other's data to flow through them. They make so-called "peering" agreements, which are a bit opaque; ISPs have to pay certain amounts depending on the traffic they cause. Then there are private connections that institutions use for themselves.
This is Google renting out their "private" net. What is different is that before, not many organisations had such massive private nets, and while you could buy traffic on them, this was something typically negotiated by large organisations and companies (I know certain research institutes pay to have their data routed over a "direct cable" instead of over the regular network). Now everybody can do so, and choose to pay for the faster route.
What they could have done instead would have been to sign peering agreements and add their connections to the "public" net.
Now, is this illegal or immoral? Well, they certainly have the right to rent out their private net. People have been doing that before. But I think net neutrality is not a binary question, but a continuum. If being able to afford a faster route means your site is faster, that violates net neutrality IMO.
If Google limits, deprioritizes or drops traffic from ComFlix, that's Google committing a NN violation.
If Google limits ComFlix's 1 Gb/s pipe to 1 Gb/s, that's not a NN problem.
If Google limits your 10Gb/s pipe to 10Gb/s, that's not a NN problem.
If Google offers to replace your 10Gb/s pipe with a 20Gb/s pipe at the same price on the condition that video streams will be intercepted and limited so that they cannot support more than 480P, and you accept, both you and Google are violating NN.
If you ask Google for a 10Gb/s pipe but they refuse to sell it to you solely because ComFlix is a competitor for YouTube, you have an interesting court case.
Basically, NN violations occur when someone drops packets that they otherwise would have carried, based on the content, source or destination of those packets. But paying more for a bigger or more direct pipe by itself is not an NN problem.
It's the same as installing a direct line from your house to Google's DC. It's a private network that you pay more for, or you can just use the internet.
I would be extremely surprised to find that the new pricing is 1/1000th what the old price was. It's not really $0.105/TB is it?
I think the key confusing factor on the linked page is there is no "per" anything detailed. To answer your request for guiding your eye: http://imgur.com/a/9je4T
Thanks for taking the time. I figured someone at Google would read my comment and review the pricing table; I did not expect to engage with someone directly involved in the product release.
https://cloud.google.com/terms/ has explicit language around termination, disagreements around billing, breaking the law, etc. Sections 4 and 9 are probably the most relevant. Again, I'm not a lawyer, but read the indemnification and so on. Unlike other services at Google, we have real, paid, 24x7 support. I understand the general concern, but I think it's misplaced in this case.
[Edit: whoops, missed your comment downthread. I'll circulate internally with legal]
This assumes all traffic is of the same kind, such as web traffic. For other protocols or use-cases, things like latency have a big impact on the content.
So do you think that QoS to prioritize voip traffic violates network neutrality?
> Standard tier for Google Cloud Storage can be configured by adding the bucket as the backend of cloud load balancer and then setting the network tier on the forwarding rule. For this release, you cannot directly configure the tier on the Google Cloud Storage bucket.
Disclosure: I work on Google Cloud (but not on this)
Imagine the day when everyone has to use private routing and the public internet barely even gets maintained anymore.
Of course, public internet also suffers tragedy of the commons and not much is happening on that front. Like how most people are still behind ISPs that allow their customers to spoof IP addresses. And nobody has reason to give a shit. We're getting pinned between worst of both worlds. It's a shame.
When 90% of traffic goes through CDNs, it will be easier for ISPs to "accidentally" deliberately congest the remaining 10% without too many people noticing. https://ag.ny.gov/sites/default/files/summons_and_complaint....
Net neutrality is violated, IMO, when only those who can afford it get faster routing. The only form of price discrimination allowed is at the ends - you pay a certain amount for bandwidth and volume to your ISP or hoster. In between, if you are an end customer, there is just this amorphous peering-net internet blob, where you pay market price to have stuff routed. You shouldn't care what route it takes, and the net shouldn't care what data you are sending. There are no pricing tiers.
Sure, Google is allowed to route stuff on their private net, or on the peered net. They can let other people use their private network, it happens all the time. They are not violating the letter of anything. But it is a slippery slope, and it can lead to a two-class internet, where some people can afford the "good" internet and some people can't.
Edit: and I realize that in parts this is already the way it is. If you are doing high-frequency trading, or sending huge amounts of research/big-data traffic, you can pay somebody beyond paying your ISP for volume and throughput, and they will give you a custom connection. Net neutrality is not a black-and-white question.
But I'm just responding with another armchair argument. It would be good to see some actual numbers about Internet health.
It used to be 10x dedicated server traffic pricing ... and it still pretty much is. Let's say it's now 8x.
This won't make a difference in any practical comparison.
I think that's way more than enough already, thank you.
But fundamentally they just massively underestimated costs and need to find a way to adjust pricing. With app engine it was very conveniently beta, so they used the end of beta for the price hike. For this, they're having to invent a "Premium" and a "Standard" Tier, and hey guess what, everyone has been using "Premium".
My experience so far with Google has been "Use this now, and we'll have a massive price hike later, if we keep it around at all."
If they'd wanted to increase prices for the premium tier, this would've been the time to do so. Instead, they kept the prices roughly the same, with small savings for some scenarios and a tiny increase for edge cases, while introducing a new tier that has feature parity with the competition and is quite a bit cheaper. Whatever your past issues with Google, I think you're way off on this.