Holy crap! They're actually doubling the pricing (for some important products)!
I actually followed the links and found this:
> Coldline Storage Class B operations pricing will increase from $0.05 per 10,000 operations to $0.10 per 10,000 operations.
> Coldline Storage Class A operations pricing in regions will increase from $0.10 per 10,000 operations to $0.20 per 10,000 operations.
> Coldline Storage Class A operations pricing in multi-regions and dual-regions will increase from $0.10 per 10,000 operations to $0.40 per 10,000 operations.
> For all other storage classes, Class A operations pricing in multi-regions and dual-regions will increase to be double the Class A operations pricing in regions. For example, Standard Storage Class A operations in multi-regions and dual-regions will increase from $0.05 per 10,000 operations to $0.10 per 10,000 operations.
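To make the Coldline change concrete, here's a quick back-of-the-envelope (the per-10,000-operation prices are straight from the quote above; the 10 million ops/month workload is just a made-up example):

```python
# Hypothetical workload: 10 million Class A ops per month against a
# multi-region Coldline bucket, priced per the quote above.
ops_per_month = 10_000_000
old_rate = 0.10 / 10_000   # $ per op, old multi-region Coldline Class A price
new_rate = 0.40 / 10_000   # $ per op, new price

print(f"old: ${ops_per_month * old_rate:,.0f}/month")  # old: $100/month
print(f"new: ${ops_per_month * new_rate:,.0f}/month")  # new: $400/month
```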
This announcement is just eyewash to hide the fact that they're doubling prices for some products. And they claim most customers will see a cost decrease.
Sigh, I was just thinking of moving all my stuff, projects and even websites from cloud hosted solutions to my own home server and slapping a cache like CloudFlare on top of it and calling it a day. This is only pushing me in that direction, haha.
I’m not defending them, but that’s really an unfair assessment. They are tweaking the model for how the cold storage tier is priced, with some minor reductions on the storage component and price increases on the “operational”/tasking side.
I haven't managed a hyperscale cloud service, but I have managed 7-8 figure enterprise services. Sometimes, as a service and its ecosystem evolve, you need to tweak the business model. For a service like this, I would guess a set of customers stumbled into or found some loopholes that affected the economics of the service.
It is still a simpler model than Glacier, which is the AWS service closest to this.
As a customer, supplier risk is always something to factor. You can’t be religious about tech stacks for this reason and always need to chase dollars. If you have the market power, sometimes you can delay these sorts of actions with termed price contracts. If you don’t have lots of compliance requirements, paying for them baked into GCP may not be a good idea!
If your business (or bonus) is dependent on the beneficence of AWS, Azure, GCP, etc, you need to make sure that you understand that you are rolling the dice and someday the happy times will end.
As an ex-AWS senior leader: many products in AWS operate at a loss when the underlying infrastructure becomes more expensive for some reason. Glacier is a great example of this. It ran at a steep loss after they mis-bet that hard drive prices would drop faster than tape storage, and they had to retire the original architecture and redevelop a tape infrastructure. It would have been totally reasonable to increase prices and then decrease them once the new platform was available, but they chose instead to operate at a loss and not adjust prices. I believe they actually decreased prices once the tape infra was up.
That said, margin is why they could do this. AWS as an overall portfolio of services is profitable enough that they can carry many products that operate at a loss, for many reasons.
Finally, pricing is set with enough margin to later lower prices, mostly to ensure they don't end up in a permanent loss-leader situation by misjudging margin or some other aspect prior to launch. But at least while Andy Jassy was in charge, there never would have been a price increase - ever. We all knew down was the only direction he would consider, and lowering prices was a good way to get promoted.
That's fine, because as a business I can decide whether or not that price is acceptable to me.
On the other hand, if they are known to occasionally significantly increase the price of their offering, then I have to factor that risk into my budget (let's face it: most businesses are locked in to their current cloud provider to a significant extent) so the price on the label is misleading.
I much prefer to pay a bit more up front than to have to constantly guesstimate how much financial risk is introduced by my cloud provider's erratic business decisions.
Thinking more, I guess I'm comparing to more creative / niche competitors or different approaches, rather than just doing the same thing on azure / gcp.
They also don't discontinue any service as long as at least 1 customer* uses it. Which means: you will have the old (in your opinion probably lower) price forever, as long as you don't upgrade.
That's a very important distinction: increasing prices for users who can't go away versus increasing prices for users who migrate on their own to the new pricing structure. As far as I know, Google does the former, which always leaves a stale aftertaste ("fader Beigeschmack", as we say in German) IMHO.
"We are reaching out to inform you that we will be retiring EC2-Classic on August 15, 2022. This message contains important information about the retirement and steps to take before the retirement date
How does this impact you?
Your AWS account currently has EC2-Classic enabled for the EU-WEST-1 Region."
To be fair, "EC2-Classic is a flat network that we launched with EC2 in the summer of 2006", so I'm not complaining, but thought it was an interesting counterpoint.
As anecdata, I was an early user of a product called SimpleDB. Long after the product disappeared from their website, my application still worked. I didn't like the early version of Dynamo enough to switch. I don't remember what happened; this was 12+ years ago now.
That's a very important distinction: increasing prices for users who can't go away [as an example of something Amazon doesn't do]
That's an important note for Glacier, where a significant price increase could lead to a situation of "You can pay punitive rates for retrieval of all the data to migrate it or you can pay us a higher price every month going forward."
That's like saying Apple raises the price of the iPhone through each new generation of iPhone models, which is not true at all. If the same service gets a higher price, then it's a price increase. If a new service gets a higher price, it's just a new service.
> That's like saying Apple raises the price of the iPhone through each new generation of iPhone models, which is not true at all. If the same service gets a higher price, then it's a price increase. If a new service gets a higher price, it's just a new service.
I don't think the distinction is that clear: you could just rebrand an existing service and raise the price. "Try our new v2 APIs, guaranteed compatibility with our v1 API and only 10% more expensive!"
I think the reality is somewhere in between, where companies will use new product launches to add stuff for customers and raise prices to protect their margin.
Assuming that prices do go up between generations: in general, compute only gets cheaper over time, so escalating prices imply increasing margin.
OP has a fair assessment. It's an unfair defense. Whenever I've relied on Google, I eventually got !@#$%.
I've never had that problem with Amazon. Microsoft also doesn't do it much these days. This is really specific to Google (and Oracle; but Oracle only !@#$% you in the wallet, and at least realizes driving customers out of business is bad for business).
Not all GCP customers will be !@#$% here, but many will. People who rely on Google inevitably regret it at some point.
They will all let you down. If you think you aren’t getting screwed by Microsoft, you either don’t do a lot of business with them or aren’t paying attention.
Oracle gets the reputation, but Microsoft probably liberates more bullshit dollars from companies than anyone else. They are like taxes, minus deductions.
>> Coldline Storage Class A operations pricing in multi-regions and dual-regions will increase from $0.10 per 10,000 operations to $0.40 per 10,000 operations.
>>Default replication pricing in the us, nam4, eu, and eur4 locations will increase from $0.00 per GB to $0.02 per GB.
>>Default replication pricing in the asia, and asia1 locations will increase from $0.00 per GB to $0.08 per GB.
Quadrupling pricing. And a couple of bumps up from "free". Wow.
Do it! I just moved my stuff into my own home and it's been great. Cloudflare's tunnel thing (Argo?) works a treat, but if you don't mind a slightly more complicated setup, you can use something like Rathole (which is amazing, btw) to tunnel out to the cheapest EC2/Droplet/etc. you can buy.
Given that these price increases are for things like long term storage I wouldn’t start thinking about home hosting as an alternative. The whole point is having a secure backup and your home isn’t going to cut it.
I wouldn't discourage anyone from home hosting really. It's getting to be pretty clear that it's cheaper, far better for privacy concerns, and gives you much more control. It seems that companies are beginning to ramp up the prices now - since once you've put your data with them, they can increase charges whenever they like. They know they have a large portion of the population using these data storage solutions, and they're likely going to start abusing that power.
If you know what a desktop is, I would suggest trying to self-host - even if it's just dead simple RAID 10 with Syncthing over OpenVPN. Heck, put one at another family member's place and you probably have a more geographically diverse setup for your data than Amazon does.
My home _absolutely_ cuts it. All my data is stored locally in RAID 1 and backed up once a day, immutably, to a remote location. I trust my setup far more than I'll ever trust Google Cloud, or whatever else.
Yeah, I did the maths on that (and similar from GCP and Azure) and deep archive could be less expensive, but the cost of not only restore but also egress made me quite apprehensive.
The current monthly cost is totally affordable and in the case I need to restore I’m not facing any additional charge… Now, I’m backing up a bit over 1TB - depending on the amount of data you might come to a different conclusion.
Good luck actually provisioning such an instance though. At least for the past month I have been unable to actually provision one of the free-tier instances due to no available capacity in the regions I tried.
Be careful with that!
You can launch free tier instances anywhere you like in Oracle cloud, but unless they are in your home region, Oracle will charge you full price.
The signal from HNers searching for it and clicking on it may have bumped up its ranking. It's a long-tail (hah!) search term, so it likely doesn't take much to push it up the page.
Not actually answering your question, but reverse SSH is also an option: a port on a remote host (i.e. a cheap VPS) forwards connections to your local machine.
Yes, that's an option. I should also add a comparison between rathole and `ssh -R`. One reason people keep inventing this kind of software is that `ssh -R` is slow and lacks customization. (I'm the author of rathole)
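For anyone following along, the plain-`ssh -R` baseline being compared against looks roughly like this (host and ports are made up for illustration; in practice you'd run it under autossh or a systemd unit rather than a Python wrapper):

```python
# Minimal sketch of a reverse SSH tunnel: the VPS listens on port 8080 and
# forwards connections back to a service on your home machine's port 3000.
# Assumes key-based auth to the VPS is already configured.
import subprocess

VPS = "user@cheap-vps.example.com"   # hypothetical remote host
REMOTE_PORT = 8080                   # port opened on the VPS
LOCAL_PORT = 3000                    # your service at home

subprocess.run([
    "ssh", "-N",                                     # forward only, no shell
    "-o", "ServerAliveInterval=30",                  # keep the tunnel alive
    "-R", f"{REMOTE_PORT}:localhost:{LOCAL_PORT}",   # VPS:8080 -> home:3000
    VPS,
])
```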
For anything that's not a hobby or personal website, moving it to your home isn't really an option. For most businesses the pricing change is probably not going to make a big dent if you think about how high salaries are compared to cloud hosting costs.
This is quite a myth. Maybe these prices are low compared to US salaries, but in India or Romania or other big outsourcing places, salaries in the 4-5k EUR/month (48-60k EUR/year) are the norm, and it's easy to get even higher bills from cloud services if you're not very careful, even from testing and dev activities.
Actually, I haven't seen any clear sign of age-based discrimination here in Romania. There are two other factors that can give that impression, though.
For one, the programming demographics are extremely skewed towards younger people. There was almost no programming being taught or practiced in 1980s or 1990s Romania (and the same is true for all of the former Eastern bloc basically), and then there was a huge burst in the 2000s as the country decided to push this particular segment (even today, you don't pay the 16% income tax on your salary as a programmer with a relevant college degree). Overall this means that there are far, far more 30-ish programmers working in Romania today than 50-ish ones. I believe the same is true to one extent or another in most of the formerly Eastern bloc countries.
The second reason is less rosy. Big outsourcing firms, which make up at least a plurality of the programming market, have little need for experts. Their hiring and retention practices greatly emphasize cheap junior hires, whom they don't want to retain past some point. They will usually have a handful of seniors around for the more advanced contracts, but they don't have any reason to keep around a workforce that gains in experience. This is mostly greedy but also partly rational - unlike a traditional business where you get to accumulate valuable context as you gain experience, in outsourcing you will rarely spend more than a few years on the same project, so the advantage of having been around the company for years and years is much less.
Basically everyone who studied something like computer science in the 90s was really an electrical engineer, and the computer side was barebones; it was mostly hardware. Software development as we know it wasn't a thing back then, so people started late compared to the west.
I have a dev in his 40s and a sysadmin in his 50s as co-workers but that's rare because of the things you mentioned. With that much experience, high demand and low competition at that level they're going to have cushy jobs that they want to have and won't be sitting at crappy outsource shops.
It's plausible. But anyone who has used cloud hosting for a personal website knows how ridiculously expensive it is - and it scales linearly! Mostly egress costs - they want to hold your data hostage.
As a Coldline user myself I'm not exactly happy about this, but
Coldline is also the cheapest class of archival storage that Google offers. This means the increased costs will not kick in unless you actually need to un-archive data, which for typical archival cases like old logs happens quite rarely.
And there's no gotcha there; Google has always been open that retrieval fees are the tradeoff for Coldline being otherwise so cheap. Nearline is the better option for data you'll access more than once a quarter.
I guess the only time you would use Coldline is if you rarely access the data, but when you do access it, a retrieval delay is unacceptable. If you access it frequently, use a cheaper retrieval tier; if you can tolerate a delay, use Glacier or GCP's Archive tier.
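To put rough numbers on that tradeoff, here's a sketch of a per-GB monthly cost comparison. The prices are approximate regional list prices (check the current price sheet), and it deliberately ignores per-operation fees and minimum storage durations, so treat it as illustrative only:

```python
# Approximate $/GB/month for each GCS class as a function of how many times
# per month you read the data back (storage price + reads * retrieval fee).
PRICES = {
    #            ($/GB/month storage, $/GB retrieval fee) -- approximate
    "standard": (0.020, 0.00),
    "nearline": (0.010, 0.01),
    "coldline": (0.004, 0.02),
    "archive":  (0.0012, 0.05),
}

def monthly_cost_per_gb(storage_class: str, reads_per_month: float) -> float:
    storage, retrieval = PRICES[storage_class]
    return storage + reads_per_month * retrieval

for reads in (0.01, 0.1, 0.5, 2.0):   # full read-backs of the data per month
    best = min(PRICES, key=lambda c: monthly_cost_per_gb(c, reads))
    print(f"{reads:>4} reads/month -> cheapest class: {best}")
```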
S3 recently added a Coldline-like "Glacier Instant Retrieval" class, FYI. Their "Deep Archive" class (the cheapest) still does require restore operations that take hours to complete, though.
AFAIK, the GCS Archive tier has the same availability characteristics as Coldline and the same latency as all GCS classes (10s of milliseconds). It seems like the primary factor for how you'd choose a GCS storage class would be your cost projections based on how long you store objects for and how frequently you access them.
> AFAIK, the GCS Archive tier has the same availability characteristics as Coldline and the same latency as all GCS classes (10s of milliseconds).
That's true as far as I know. And the combination of "the data is accessible instantly" with "there is no low priority retrieval, there is only the ultra expensive kind" implies that their architecture is extremely weird and/or some very manipulative pricing is happening.
Neither. It only means that hot and cold storage live right next to each other and pricing is used as a carrot/stick, as well as a signal, so that the two classes tend to stay the same (mainly to prevent cold from getting hot, of course) and a desirable balance is preserved. You could call it manipulative, if you want, but it's more like incentives.
Amazon can apparently get data cheaply out of cold storage with a day or so of warning.
Can google not do that? And if they did, they could still maintain the same kind of tiering.
If they can't do it, that's a very strange system.
If they refuse to do it, that's manipulative.
For archival storage Google's charging three and a half years of storage costs to retrieve data. Even in the most flattering scenario where roughly all of the cost is in doing the I/O, and disk space is "free", that implies that retrieval is at least 75% profit. If I/O is half the cost of storage and disk space is half the cost of storage, then retrieval approaches 90% profit.
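For anyone wondering where the "three and a half years" comes from, here's the arithmetic with approximate list prices (the exact numbers vary by region, so treat this as a sketch):

```python
# Retrieval fee expressed in months of storage for the colder GCS classes.
tiers = {
    #             ($/GB/month storage, $/GB retrieval) -- approximate
    "nearline": (0.010, 0.01),
    "coldline": (0.004, 0.02),
    "archive":  (0.0012, 0.05),
}
for name, (storage, retrieval) in tiers.items():
    print(f"{name}: retrieving 1 GB ~= {retrieval / storage:.1f} months of storing it")
# archive: 0.05 / 0.0012 ~= 41.7 months, i.e. roughly three and a half years
```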
If you're doing disaster recovery, do you really want to wait a day for your data? I don't understand why you want Google to slow things down. Why can't Amazon get cold data faster? It's fairly simple: all Google storage is online. There is public material on how it all works. Maybe you can ask them to add an artificial delay. There is offline storage, yes, but that's tape, which has bigger issues...
Your pricing model is flawed, because you're not taking into account other factors such as rebalancing. Years ago, I would have loved for retrieval to be that cheap to perform behind the scenes.
> If you're doing disaster recovery, do you really want to wait a day for your data?
I'd like to have the option.
The point isn't that I want to wait, it's that I want it to be super low priority to make it cheaper.
For a super low priority job, why should reading be multiple times as expensive as writing?
> Your pricing model is flawed, because you're not taking into account other factors such as rebalancing. Years ago, I would have loved for retrieval to be that cheap to perform behind the scenes.
I don't understand. If rebalancing happens behind the scenes, then that has to get paid for as part of storage.
Which means the cost of 1 single I/O is a smaller fraction of the storage cost.
Which means the profit margin for retrieval is significantly higher than my estimate.
My cost estimate uses the most flattering possible case for the retrieval pricing. Any storage costs I didn't account for, deliberately or accidentally, make my argument stronger.
> For a super low priority job, why should reading be multiple times as expensive as writing?
Because the particular aggregate mix of storage helps drive down overall costs. Changing that mix affects the entire stack, as well as capacity planning. Ok, make cold storage very cheap to retrieve. What happens now? Everybody will buy that and abuse it for more demanding applications, with quality of service for latency sensitive traffic going down the toilet. So you end up throwing more resources at the problem and/or charging more across the board. Pricing is one of the few factors that users really pay attention to in the real world, not best practices. Unfortunately.
Furthermore, to implement what you want, you can keep a request open for hours, which causes issues all over the stack (where do you keep that state? How does that interact with load balancers?) or you mark the cold object and return temporary failures until it's finally retrievable. That's extra state and extra complexity that doesn't exist right now. Those extra costs would have to be recouped somewhere.
> I don't understand. If rebalancing happens behind the scenes, then that has to get paid for as part of storage.
Why? Rebalancing doesn't happen in a vacuum. It's linked to the traffic mix. You can't look at just the total bytes used in a cluster and figure how many HDs, SSDs, CPUs, RAM and NICs you need to serve that data while still meeting your SLOs. Unless it's a W/O cluster, you need more signals. Amount and behavior of cold vs hot storage are two of those.
Anyway, cold storage that warms up most likely requires extra rebalancing that wouldn't have happened otherwise. How would you price that? Who would you charge?
Again, your cost estimate for retrieval does not take into account how things actually work. Rebalancing is not purely a storage cost. Yes, your argument is strong, but only if you start from flawed assumptions.
> Because the particular aggregate mix of storage helps drive down overall costs. Changing that mix affects the entire stack, as well as capacity planning. Ok, make cold storage very cheap to retrieve. What happens now?
It doesn't have to be very cheap. Let's start with just trying to match the price of writes. That shouldn't really affect the total amount of I/O, and there's no reason reads should be harder on the system than writes.
> Furthermore, to implement what you want, you can keep a request open for hours, which causes issues all over the stack (where do you keep that state? How does that interact with load balancers?) or you mark the cold object and return temporary failures until it's finally retrievable. That's extra state and extra complexity that doesn't exist right now. Those extra costs would have to be recouped somewhere.
I suppose. But the cost of keeping a request open should be much much less than the current cost of having everything fully accessible in milliseconds.
> Anyway, cold storage that warms up most likely requires extra rebalancing that wouldn't have happened otherwise. How would you price that? Who would you charge?
Reads that cost a significant amount of dollars each don't require rebalancing. I'm not suggesting they go so cheap that rebalancing is required. You'd still do only one read to a completely separate hot storage system, like it currently works.
My complaint is that the competition is awful here.
Capitalism is supposed to pit companies against each other and drive profit margins below 50%. And it's failing to do that here.
And moreso, it's a very cruel pricing system because it lures you in with low numbers, then overcharges to get your data back when you need it. Being antagonistic to your customers does have negative effects in the long run. And I think it's worth pointing out situations like that when people are shopping around.
I think you might be missing that this is an operation count price tweak. How many ops could you possibly need for cold storage? Zip up your files and you end up with one op.
40 cents per 10k ops is still so cheap. They probably tweaked this to take advantage of their lazy enterprise customers that don’t care how much they’re paying for op counts, so they use a bajillion ops.
To my knowledge they have not, and this is the third time (at least) that google has done this. Managed Kubernetes and Google Maps API are the other 2 that I know of.
I only ever remember seeing AWS lowering prices but I am curious if there are instances I am unaware of.
This keeps me wondering how anyone can think going with Google Cloud is a good idea.
Maps also has one of the most restrictive licenses I've ever seen in the industry. If you stop using Maps you're required to delete all data and all derived data. The only time I've ever seen a more restrictive license was when using Bloomberg. At least in that case it made some modicum of sense given that there was a lot of manual data entry going on in the background.
The larger issue is that even though I would like to use Google in some cases, I know that I can't trust them. As a company they need to seriously rethink their approach to fostering customer trust.
Hold on, they said in the announcement that some customers could see a price decrease. Do you mean to imply that Google used vague and imprecise language to hide substantial price increases? If so, color me shocked!
Let’s be real here: nobody pays sticker prices for GCP. The only reason big customers use it is the deep discount Google gave.
Now these price hikes signal one of two things: Alphabet is tired of losing money on GCP, or they are looking to drive customers away so they can shut it down (it’s not like a free chat app they can just stop supporting; sunsetting GCP will have to take a little longer).
GCP has had a ton of price hikes and price reductions.
This change tells us one of two things:
1) they want to use price as a way to influence customer behavior
2) A PM wants to get promoted and this is a way to hit whatever arbitrary metrics they need.
Or a combo.
There is zero chance Alphabet, as the parent company, cares about the pricing of a super specific SKU. And there is a zero percent chance they will shut down the fastest growing non-ads business they have...
GCP is extremely profitable; they are just reinvesting in more growth.
(I'm an ex-GCP employee)
Edit: I want to make it clear I'm not supporting this decision. Arbitrary (or what seem arbitrary from a customer viewpoint) price hikes are one of the reasons I left.
I feel like the interface is more intuitive, has sane defaults and has really good setup wizards when you are provisioning things. Also, my understanding is that GCP is pretty much cheaper across the board compared to AWS, but I could be wrong about that. To be fair, the only real major headache we've had with AWS in recent history was EKS stuff, and GKE really shines on GCP.
> For all other storage classes, Class A operations pricing in multi-regions and dual-regions will increase to be double the Class A operations pricing in regions.
This is the real impactful change. This is for ALL STORAGE CLASSES, not just Coldline. As someone who manages billions of objects in GCP, I know very well that each of those object writes is a Class A operation, so 2x in price.
Also keep in mind that anyone storing objects durably (DR, anyone?), e.g. your customer data, is using multi-region, so this is a notable cost impact.
Note that there is a curious inflection point where the Class A/B operations on an object cost _more_ than the (bytes) storage cost, depending on how long you retain the data. That's where this is very impactful.
Annual multi-regional Standard _storage_ cost for a 19KiB object:
19KiB * $0.026/GB/month * 12 months ≈ 5.7E-06 $/yr/object
Annual multi-regional Standard _request_ cost for an object that you 1x write (A) and 2x read (B):
e.g. 0.05/1e4 $/A req + 2 * 0.004/1e4 $/B req = 5.80E-06 $/yr/object
Here we see that the old annual "storage" carrying cost for this 19KiB object was already less than the "request" cost. Now double the cost of the operations and you should consider storing fewer, bigger objects.
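A quick sketch of where that inflection point sits, under the old and new Class A prices (same assumptions as the figures above: multi-regional Standard, one write and two reads per object per year, and treating the $/GB storage price as per GiB for simplicity):

```python
# Below what object size do the per-object request charges exceed a year of
# multi-regional Standard storage?
storage_per_gb_month = 0.026
class_b = 0.004 / 1e4                       # $ per Class B op
for label, class_a in (("old", 0.05 / 1e4), ("new", 0.10 / 1e4)):
    requests = class_a + 2 * class_b        # 1 write + 2 reads, $/object/year
    crossover_gb = requests / (storage_per_gb_month * 12)
    print(f"{label}: requests dominate below ~{crossover_gb * 1024**2:.0f} KiB per object")
# old: ~19 KiB, new: ~36 KiB -- hence "store fewer, bigger objects"
```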
All data on GCS is stored durably (11 9s or something ridiculous like that). The only reason to use dual/multi-region is lower latency; you're basically paying for geo-replication.
Of course, I'm sure many users are not aware of this, and it's not helped by the GCP console defaulting to multi when creating new buckets.
Durability is great, but availability is also crucial, unless your business can tolerate hours or more of availability outages.
Let's say us-central1 is down for some reason (has happened a number of times), but your customer workloads need to keep running uninterrupted, so you fail over to us-west2. Where should your customer files, that need to be read/written for the customer workloads, be written then? If you kept them all in the region that's on the fritz, you've got upset customers, broken SLAs, etc.
Also for availability, if you need extremely high uptime - single-region outages are not unheard of, and even if your data is still there when they come back, you might have lost a significant amount of money and/or trust in the meantime.
You should for personal stuff. There are nicer VPS options too. Cloud usually makes sense when considering onprem costs to do the same thing which can add up for companies.
Google cloud has to be the most confusing product suite known to mankind. What an unbelievable mess.
After merging two companies I had to move a bunch of stuff over to a new bank account. Three weeks later and I'm still not 100% sure that I got it all; the interfaces are so opaque, and the different ways in which you can get billed so confusing (never mind the bills themselves), that it is nearly impossible to get a clear picture.
This does not feel like an accident, and this message is very much in line with that.
I always wonder how such systems come about. The number of confusing error messages you have to deal with for pretty basic stuff is off the scale. You can name anything, except of course when it actually matters, and then only some cryptic UID is shown. Don't get me started on users and permission management, or how it is perfectly possible to orphan an entire project[1] if a person leaves your org. (Gsuite and GCP may superficially appear to share a bunch of stuff, but that just sets you up for some very cute surprises, from which it can be extremely difficult to recover.)
My intuition, and some experience sitting in meetings with GCP folks, is that their engineering teams don't dogfood their own products end-to-end sufficiently (e.g. including billing) on a daily basis, like their customers have to.
The amount of blank stares and "Oh..."s that happened when asked about relatively simple, everyone-would-need-it use cases for management, visibility, etc was mind boggling.
GCP feels like Google rediscovering being Microsoft of the 1990s. If you have strong product teams, but no strong overarching experience teams, your resulting system is going to be a hash of well-polished but distinct products, with an extremely ugly unification layer.
Hardly a surprise. From what I have heard, it sounds like Google's internal use of GCP is mostly automated, using internal APIs. I.e. it sounds like they are not clicking through the screens, or even using the external versions of the APIs. I question whether the externally available screens are even capable of handling some of the special permissions that Google internal workloads can be given.
This is in contrast to say Azure, where plenty of Microsoft employees are using the same resource manager APIs and even using the portal. I think even the billing related features get used as part of internal budgeting (they want teams to try to keep resource utilization reasonable). While teams developing parts of azure itself may be utilizing internal APIs (for example Microsoft Graph is basically just a giant wrapper around a variety of internal APIs), most of the rest of the company sees and interacts with azure in the same way we do. (Except that they also have access to dogfood/PPE environments that we don't, such that endpoints for say integration tests don't need to run on production azure).
It's true enough, but it never really felt like teams solicited feedback on their APIs or portal UX. Azure really only made this jump to using all public products internally recently; even the CI systems used by internal teams were proprietary until recent pushes to move to an ADO-centric model.
I'm also not certain that Azure really has the right internal pressures to produce great UX results. In my experience Azure's culture internally is very lackadaisical with only a few teams really pushing the platform forward.
As a power GCP user, I see them running UX studies all the time. Having been using GCP for the last decade, the UX updates feel like night and day. Frankly, I find the AWS console a lot creakier.
> This is in contrast to say Azure, where plenty of Microsoft employees are using the same resource manager APIs and even using the portal. [...]
And still they don't have this incredibly complicated and incomprehensible IAM. Most users I know just give everyone root because it's not possible to just allow some specific API operations for a specific set of credentials. Or maybe I'm just too used to AWS.
I really do think this dogfooding is why AWS is successful. Amazon's businesses run a lot of their workloads on AWS. Amazon is AWS's largest customer. So AWS has the benefit of having thousands of heavy use customers internally to discover bugs and edge cases and provide feedback.
For example, I contributed a fix to AWS documentation as an SDE in the Kindle org. This is the kind of improvement you get with dogfooding.
Billing is terrible, although it has gotten a bit better. Cognito is probably one of the worst services on AWS, and it's only getting worse (there are now two SDKs with different APIs for no reason at all), while things like EC2 and Lambda work pretty well.
Same for Google. Everything that's in use by Googlers is pretty dope - calendar, video conferencing, docs, search obviously, maps, etc. Everything that's not - less so.
You can tell which parts Amazon dogfoods! Billing is terrible, although it has gotten a bit better.
I don't have much detail on Amazon's policy, but there was an AWS devrel on Twitter a while back saying they had to run and pay for their own AWS account as if they were any regular user for their own playing around/research/etc.
I was reminded of IBM trying to get them to show up and actually sell things. It is _bizarre_ that you have to hound sales people to actually make an effort — it really seemed like they assumed the Google brand was enough to guarantee buyers and were surprised that anyone would question whether their products were the best.
(This was also the first time I heard Reader mentioned at the C level as in “what will we do when you cancel it?”)
Well, not everyone but it tended towards influential groups — they burned so many tech journalists that it really seemed to usher in an era where goodwill was no longer assumed.
It reflects the company culture. Unlike Amazon, where customer experience is their top value, at BigG they seem to build stuff for the sole gratification, and ultimately promotion, of engineers and managers. They don't seem to care much for their customers.
It's easy to dogpile on BigCo, but this is selling the employees and teams short.
They definitely care about their customers, and many things were made better through subsequent fixes.
But the larger point is that the processes and mid-high+ level management structure at Google don't seem to prioritize cohesive, customer-centric experience. Which means teams will always miss things... because the process doesn't ensure they're caught.
A fair amount of my own confusion with GCP's offerings comes from their decision not to use proper names for their services.
AWS may have arbitrary names that don't follow any pattern, and Azure may have names that are grandiose, but at least you know with both of those clouds that they will always capitalize the names of their services/products in documentation. There's no confusion about whether they are talking about a load balancer in the abstract or their specific managed offering.
Much the opposite. GCP's names are painfully transparent and banal. No need to look them up, you can tell what they do from their names (which are proper nouns, so it's clear they're a service name)
- Google Kubernetes Engine - runs Kubernetes clusters
- BigQuery - runs multi-billion row queries with ease
- Cloud Functions - FaaS run time
Now sure, Cloud Spanner is a DB, without DB in the name, so you'll have to read the docs for a few seconds, but let's compare that to Cognito (brains?), X-ray (medical imaging?), Kinesis (something to do with mitosis?).
My own frustration is not with the overall product line names. My frustration is when specific options of each product line are discussed in relation to each other in documentation.
Take the example of "GKE Ingress for HTTP(S) Load Balancing." It's overly generic and doesn't tell you at a quick glance whether you are buying a specific managed load balancer or running something yourself, and whether you are using basic exits to the public internet or buying into a global network accelerator.
Is there a single thing called Google Cloud Load Balancer? My understanding is that there are about half a dozen different managed load balancing services that all have very different operating characteristics. They all have very generic names in the docs like "global external HTTP(S) load balancer."
Same here. GCP is far more logical and consistent compared to the others. There's just a level of complexity at some point that you have to deal with or use a simpler provider.
I ran into this in Chrome. The org entity had billing, and I was project admin but not billing admin, yet I could give myself billing permission?
Shutting down the project also needed some billing permissions, maybe? Cryptic messages.
I actually like the "project" permissions model, but I don't like how often I have to figure out why clicking around in the admin page as the "admin" causes problems.
On AWS side the admin policy is pretty broad, and the root account seems to really work as root.
> how it is perfectly possible to orphan an entire project[1] if a person leaves your org
Maybe not the best example since this (unlike other IAM oddities) actually makes sense - it can only happen when you don't have a top-level org tied to a project, like when you do something like using a gmail.com account to spin up GCP resources. Inside a GSuite org, this is not the default and I can't imagine how it'd happen by accident.
If your project is not attached to an org, and all the accounts tied to it are gone, then what else do you expect?
> Gsuite and GCP may superficially appear to share a bunch of stuff but that just sets you up for some very cute surprises, from which it can be extremely difficult to recover
The way it's implemented is actually quite nice for complex scenarios/defense in depth - for instance, you can set it up such that whoever owns the GSuite org does not automatically get access to all GCP resources. Of course, any security measures good enough to restrict an org admin's privileges also have the potential of locking yourself out in a way that's semi-irrecoverable.
I can't believe how confusing all these cloud products are to do the most basic things. It really makes me appreciate Cloudflare, they seem to do a really great job with their UIs.
I'd say Azure is the worst. It's like Windows control panel with 10 times more items except you don't have the muscle memory from older Windows to navigate it through.
And then there was some edge case where I was charged on an account I closed, because some subscription was still left open while I could no longer log in...
It would be nice if most of Google ran off GCP, but alas, few marquee Google properties run on GCP. Ever notice how Google search, GMail, etc are usually just fine during a GCP outage?
Well, this is just untrue. Read the docs. GCP uses lots of the same infra as other Google services. The whole global load balancer infra is one and the same. Spanner is used by Maps, Gmail, etc. In fact, that's where the service spawned from.
> Will customers’ bills increase? Decrease? The impact of the pricing changes depends on customers’ use cases and usage. While some customers may see an increase in their bills, we’re also introducing new options for some services to better align with usage, which could lower some customers’ bills. In fact, many customers will be able to adapt their portfolios and usage to decrease costs. We’re working directly with customers to help them understand which changes may impact them.
There is a zero percent chance they haven't run the analysis and concluded what % of customers would see a bill increase. It's high. If it's low to zero, cloud companies are clear about how the prices are changing, and usually outline how many customers would be negatively impacted. If it's high, they're ambiguous about what is changing and shift the blame onto customers: if it's still expensive for you, you're just not using it right.
More specifically: if the majority of customers were going to see lower bills from Google, even if the top N% would see higher bills, you can bet that the headline would be "New pricing structure reduces bills for most customers".
The "Always Free" egress change is probably going to mostly affect the large pool of free and near-free cloud users. So a very large number of people who "have Google Cloud accounts" may see costs go down.
But the costs will go up for all the customers heavily investing in Google Cloud and using Google Cloud for storage of a lot of data. So the overall outcome will be more money for Google, in an update that claims a cost reduction for a large number of users.
> There is a zero percent chance they haven't run the analysis and concluded what % of customers would see a bill increase.
Zero percent is correct. I'm a GCP customer, and today I received an email from Google with a table explaining precisely how my bill would have changed, with columns labeled, e.g., "List Price $ increase in monthly bill due to data replication", and a corresponding dollar amount. My bill will increase by 5% overall if I don't make any changes.
> we’re also introducing new options for some services to better align with usage, which could lower some customers’ bills. In fact, many customers will be able to adapt their portfolios and usage to decrease costs.
The increases hit you hardest if you are:
- Using multi-regional storage (used to be a 30% premium, now much more)
- Making lots of object writes (Class A) vs lots of reads (Class B)
So, if you can:
- Move to regional or dual-region (NAM4) storage
- Snowball your writes into bigger overall objects, <3 immutability (see the sketch below)
- Keep your data in the same region as access
Then you can reduce the impact here.
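On the "snowball your writes" point, a minimal sketch of the idea: bundle many small files into one archive locally, then upload it as a single object, so you pay one Class A operation instead of thousands. Bucket and path names are placeholders, and it assumes the google-cloud-storage Python client:

```python
# Bundle a directory of small files into one tarball and upload it as a single
# GCS object: one Class A operation instead of one per file.
import tarfile
from google.cloud import storage  # pip install google-cloud-storage

archive_path = "/tmp/batch-2022-03-14.tar.gz"             # hypothetical paths
with tarfile.open(archive_path, "w:gz") as tar:
    tar.add("/data/small-files", arcname="small-files")   # many files, one object

client = storage.Client()
bucket = client.bucket("my-archive-bucket")               # placeholder bucket
bucket.blob("batches/batch-2022-03-14.tar.gz").upload_from_filename(archive_path)
```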
They are also closing the Coldline/Nearline loophole, where you could use a bucket lifecycle policy to keep your objects in Standard (cheap access) for a few weeks/months and then move them to a cheap long-term storage tier (Nearline/Coldline), because that move is itself a Class A operation, which just got a lot more expensive. This is in line with two years ago, when they quietly moved lifecycle operation pricing from the origin tier (e.g. cheap Standard storage) to the destination tier (e.g. much more expensive Coldline), cutting down on the savings of tier jockeying.
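For reference, the kind of lifecycle rule being described (write to Standard, demote to Coldline after N days) looks something like this with the google-cloud-storage client; the helper names are from memory, so double-check against the docs, and note that under the new pricing each transition is billed as a (now pricier) Class A operation:

```python
# Sketch: demote objects from Standard to Coldline once they are 30 days old.
from google.cloud import storage

client = storage.Client()
bucket = client.get_bucket("my-archive-bucket")   # placeholder bucket name

# Add a SetStorageClass lifecycle rule and save it on the bucket.
bucket.add_lifecycle_set_storage_class_rule(
    "COLDLINE",
    age=30,
    matches_storage_class=["STANDARD"],
)
bucket.patch()
```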
This. It's using pricing as a user behaviour mechanism.
I don't like the price increases either, but it's kind of clear they're trying to make sure multi-region buckets aren't what users pick by default just because the option is there. It costs GCP more to run, and for most users multi-regional doesn't make sense.
Lots of enterprise orgs don't even use multi-region, because of either GDPR or regional laws. The data has to be classified as public/not sensitive for multi-region to even be a real use case.
They are expecting behavior to change based on the new prices, so that’s why they have to be vague and can’t precisely predict what the final cost will be to customers.
That sounds like a PR statement: there’s no way they don’t know what the impact would be now and could make that clear by adding “at current usage” to any estimates.
Put another way, if the cost was going down do you really think they’d avoid saying that because people might start to use more?
This seems to be the biggest deal, a few links away.
> Reading data in a Cloud Storage bucket located in a multi-region from a Google Cloud service located in a region on the same continent will no longer be free; instead, such moves will be priced the same as general data moves between different locations on the same continent.
If I understand correctly (do I?), this means that storing frequently used data in a multi-region bucket is suddenly very expensive — we go from paying $0 to $0.02/GB. Reading 10TB / hour goes from $0/year to $1.75M/year.
We can switch to single-region buckets, but it's quite an effort to move all the data.
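The arithmetic, for anyone checking (using the $0.02/GB rate from the quote and 1 TB = 1000 GB):

```python
gb_per_hour = 10 * 1000            # 10 TB/hour
price_per_gb = 0.02                # $/GB, new same-continent rate quoted above
annual = gb_per_hour * 24 * 365 * price_per_gb
print(f"${annual:,.0f}/year")      # $1,752,000/year, i.e. ~ $1.75M
```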
I'm no GCP user but if you've planned for "schema on read" and throw a bunch of poorly indexed/partitioned/compressed files in there you could probably get to it pretty quick...
Who cares about DR, and having 3x copies of your data 100mi apart from each other? Small startups, or Enterprises? Enterprises can just push those costs to their DR budget.
The fire last year at OVH was an impressive demonstration that it is not a good idea to have your data in only one region. So don't do it, and stick to multi-region.
Well, that depends. IIRC, OVH's fire hit multiple floors, and each floor was a "zone" - the power, networking, etc were all independent, but they still had a single core dependency - the building itself.
I actually got excited, thinking that this would be another drop to egress in order to compete with Cloudflare and AWS. AWS just significantly improved pricing on egress to compete with Cloudflare, so it seemed like an obvious next step for other clouds to do so.
Instead, huge price increases? That's... confusing. I honestly wonder if Google wants to kill off Cloud, given how much money they lose on it every year.
Having worked on both AWS and GCP, my experience was that AWS had a much better organizational grasp on how to price services. They track predicted revenue/costs against observed, and expect each team to have roadmap projects to improve that ratio over time (or at least to keep the ratio the same as they drive down prices). When I was there, Google had no such process for tracking their costs, and engineering teams had much less understanding of their costs as well. I never worked on Azure, but I heard stories similar to my experience at GCP: there was no institutional process for reducing costs.
Building top down process to improve costs to enable price drops is one of Amazon's core strengths. It is core to how they run all their businesses.
I don't know but for the past 2-3 years Google has been on some crusade to reduce its storage usage/customers.
For me it started in 2020 when Google announced my Firebase storage usage would go from maybe $20 per year to something like $800 per year, for a single app. Apparently they had forgotten to charge Firebase users for egress, for years.
But then also Gmail stopped adding more storage at some point so I was forced to get a Google One subscription or migrate 15 years of emails to some other service.
Etc.
I suspect Google has realized it's better to reduce its storage customers and just keep the ones that are ready to pay more, instead of expanding their storage capabilities ad infinitum.
According to Google's own calculations (in the email they sent about the price changes), this will increase our GCS bill by about 400% (and our entire Google Cloud bill by about 60%).
It would seem that we have until October to move elsewhere... :(
> It would seem that we have until October to move elsewhere
the biggest fear especially with this class of infrastructure (long term cold storage) is that they can make it too expensive to leave at any time by upping the retrieval / egress costs. How expensive is that move going to be?
Not sure about your usage model, but if you have a defined retention for customer data (e.g. 1 year), then you can start pushing new data to a different tier/cloud provider, as the old data drops off, and a year later, you're completely "migrated", without having to stage a stop the world migration between services.
In our case, it's effectively "forever" storage. Which is to say, we are obliged to retain data for at least 10 years but targeting 20 years.
At this point we are doing this with on-premise tape backup but that is in part because I'm yet to be convinced that we can trust cloud providers with this, especially since our future retrieval needs are unclear (some outlier scenarios could see us needing to retrieve substantial fractions of the data). Not to mention that even the coldest cloud storage seems to still be substantially more expensive than DIY tape archival (admittedly, not taking into account things like internal IT staff costs etc).
Google as an organization seems hellbent on teaching their users not to rely on them. On the consumer side it's by rapidly abandoning products, on the cloud side it's by dramatic price increases.
I think this is the third time we've been slapped with a new charge for something that used to be free. (In this case, egress from multi-region storage to a local region.) That's not going to burn us super hard, but maybe it's only a matter of time before they add a new charge that hikes our bill by 50%.
we moved off Google Cloud Functions after they became 10x more expensive for us
they first introduced Container Registry, which made us pay for the storage (before, you only paid for invocation and egress)
> If your functions are stored in Container Registry, you'll see small charges after you deploy because Container Registry has no free tier. Container Registry's regional storage costs are currently about $0.026 per GB per month.
recently they sent an email telling us new functions are going to use “Artifact Registry” and prompting us to migrate our old functions
luckily we started migrating before the announcement
i'd recommend checking serverless framework (serverless.com) or openfaas (openfaas.com)
best thing you can do is not get involved with provider-specific APIs: use Docker/Kubernetes for building and executing your code, a Postgres-compatible database (Hasura if you want the Firebase experience) and S3 for object storage, and send e-mails using SMTP
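as one concrete example of keeping the object-storage side portable: talk the S3 API with a configurable endpoint, so the same code works against AWS S3, MinIO, Cloudflare R2, or any other S3-compatible store. this is just a sketch; endpoint, bucket, and env var names are placeholders:

```python
import os
import boto3  # pip install boto3; speaks the S3 API, not tied to AWS

s3 = boto3.client(
    "s3",
    endpoint_url=os.environ.get("S3_ENDPOINT"),      # unset -> real AWS S3
    aws_access_key_id=os.environ["S3_ACCESS_KEY"],
    aws_secret_access_key=os.environ["S3_SECRET_KEY"],
)
s3.upload_file("backup.tar.gz", "my-bucket", "backups/backup.tar.gz")
```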
Wanted to add another option to ushakov's comment: KNative (which is actually what CloudRun is built on).
If you run k8s clusters anywhere, OpenFaaS and KNative are both solid options. OpenFaaS seems better suited for short-running, less compute-intensive things, whereas KNative is a great fit for APIs... it just removes a bunch of the complexity around deployment (like writing a Helm chart, configuring an HPA, etc).
Actually lol’d at “unlock more choice,” - if it’s truly a commodity product we’d expect basically zero margin. Clearly Azure, AWS and GCP are not zero margin, which implies oligopolistic (does Oracle even count?) price coordination for enterprise cloud. (Edited, forgot Azure)
Cloud is not a commodity product. Commodities are easily interchangable. For the most part, a banana is a banana, a pound of corn is a pound of corn, a ton of steel is a ton of steel. There can be quality variations of course, but at any given level of quality there are still multiple suppliers, and the costs of switching between them are fairly low.
That is not true of the cloud. Every cloud is unique in their own special snowflake ways, the APIs are often fairly different, the switching costs are high and there is a small number of suppliers.
They are mostly interchangeable. They all store data; they all run Linux VMs. Switching costs are high though.
It's surprising that vendors make their custom cloud features (e.g. SQS) more expensive than running the same thing yourself - because those have the most vendor lock-in.
While I agree that there's a lot of marketing speak here, I have to note that:
1) You wouldn't expect zero margin, you would expect normal margin, that is, these companies should have around the same margin as the average of the rest of the economy.
2) Commodity markets don't have to be low margin, because a commodity market with high market concentration will be a high margin market.
I'm pretty sure it refers to how market share is spread between the competitors in a market. In a market with low concentration, you have say 20 competitors and no one has more than 10% of the market. With high market concentration, 3 of those 20 competitors might have 80% of the market.
A little surprise hidden away in here: it is currently possible to exfiltrate data from a Cloud Storage bucket at standard tier ($0.085+/GB) instead of premium tier network rates ($0.12+/GB). This is achieved by making the bucket a backend for an external HTTP(S) load balancer [1] ($18/month).
This announcement adds an additional $0.008+/GB for the cost of outbound data moving through the load balancer, so effectively that's a 9% increase on the standard tier bandwidth pricing.
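In other words (rates as quoted in this comment; real bills depend on volume tiers and destination):

```python
premium  = 0.12    # $/GB, premium-tier egress
standard = 0.085   # $/GB, standard-tier egress via the LB-backed bucket
lb_fee   = 0.008   # $/GB, new charge for outbound data through the LB

print(f"via LB: {standard:.3f} -> {standard + lb_fee:.3f} $/GB "
      f"(+{lb_fee / standard:.0%}), vs {premium:.3f} $/GB premium tier")
```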
Once again this proves Google is NOT a customer focused company. These price increases are driven by accounting and are short term calculations.
As someone who's used AWS for most of my professional career, I've only ever seen prices being reduced to be more competitive rather than increased to 'align' with offerings of other vendors.
Newer generations of compute and storage are often cheaper and faster than previous generations which shows they are able to invest in technology to make things cheaper for the customer and lower cost for them to maintain which is impressive.
I do expect AWS to capitalize on this and persuade GCP customers to switch. I have no idea why GCP thinks that their customers are sticky enough to stay with them through the price increase.
I do. There is a substantial cost to switching stuff like this. We use Gsuite and a very minimal GC setup so from a financial perspective it doesn't really matter all that much. But clearly GC is set up for huge enterprises and for an SME customer all that flexibility translates into considerable overhead. Having a 'single supplier' is a risk because it puts all of your eggs in one basket, at the same time it should normally simplify things. But in the case of GC it probably doesn't.
That said: neither AWS nor MS are particularly attractive either, none of these companies really have my sympathy, it is choosing the least bad rather than choosing the best. Technical merits, pricing, cost to switch, company image, it all factors into decisions like these.
I agree the number of customers outright switching cloud platforms will be low. But some of them might start small explorations of multi-cloud, even if it's just at the level of "my team wants to use an AWS product for this internal project" isn't auto-denied. Long-term, that chips away at GCP's leverage on their existing customers.
My corporation is on Google Cloud, and it's taken 3 years, trained thousands of engineers, and jumped through hundreds of FTE-years of bureaucracy to get a few applications set up. It's very hard to use cloud, and switching to save a bit of money isn't going to happen.
The difference is that GCP is a distant #3 (going on 4). It’s much easier to find engineers and tools for AWS and a fair amount of the cost historically was working around gaps. That doesn’t mean there are no reasons to use it, of course, but it undercuts the amount of pressure they can apply. Given the well-known internal deadline for profitability, I’d be surprised if didn’t give some current or potential customers pause.
It's not hard to imagine some customers jumping to AWS or Azure, which most large organizations are likely to already be familiar with[1], with moves like this and that looming mandate causing C-level concerns, since everyone knows GCP is not profitable at the current pricing but is expected to become so soon. A lot of big organizations prefer to pay a predictable amount of money rather than run the risk of price increases blowing their budget.
The other possibility I was thinking about is consolidation in the less popular providers — e.g. what happens if IBM sells their cloud service to Oracle or Salesforce, or a major customer switches a lot of volume to them. Oracle comes to mind since their bandwidth pricing is like an order of magnitude lower than AWS — I'm sure Zoom negotiated hefty discounts but they still picked Oracle for a reason and I'd bet it's the fact that their business is largely network egress.
1. e.g. if you use Office 365 and GCP, there is a valid argument for consolidating on a single vendor and I'd be surprised if that didn't work on a few customers since there's approximately a 0% chance that the Microsoft reps aren't going to toss some discounts around to encourage it.
Yes, I always tell people to use AWS because of the nature of Google. What can you expect from a company that makes money by spying on people and forcing people to see ads?
Looking at just the storage pricing, it looks like GCP was already priced lower than AWS and Azure; this increase brings them either on par with, or just slightly below, AWS and Azure.
GCP was trying to "loss lead" its way into dominance. It does not look like that was working out, since even while being more expensive, AWS and Azure were still killing them.
Of course if you only choose GCP because of cost you have little reason to stay so...
Many enterprises care more about the risk of prices changing than about the absolute prices. The latter you can account for in budgets more easily than the former. Especially if the price increase is one that goes from $0 to non-zero, since that could be a massive increase in absolute dollars.
AWS has never afaik increased prices which is a pretty strong selling point even if specific services likely are a loss for them perpetually as a result of being mis-priced initially.
>AWS has never afaik increased prices which is a pretty strong selling point even if specific services
Technically true, but they do it a little differently, whereby they add different SKUs with higher prices and discontinue the old SKUs, forcing you to move to a "new product" instead of just increasing the prices.
Not all services are like that, but they just did that with compute instances. I believe this is the second time they have killed off a "generation" of compute.
Do you have any examples of this? It makes sense that old hardware be replaced but it's usually over a LONG lifecycle and the new instance type pricing is often lower than the previous generation.
Has any other large cloud provider increased prices like this? I remember using Google App Engine a while ago and switched to AWS when they increased prices, and I don't understand why you wouldn't just start with higher prices and eventually lower them once you get more customers. Other than BigQuery and TPUs I'm not sure of the advantages of Google Cloud…
Not yet, but with Amazon increasing everyone's salaries and the price of everything in general going up, you're likely to see every large provider raise their prices.
It seems like only the new players will lower prices.
How much revenue does Amazon generate per employee? This comes up a lot in arguments about the minimum wage where people talk like the price of a Big Mac will double because they aren't accurately accounting for the percentage of cost which isn't human time. Amazon, Google, Microsoft, etc. pay their people a lot more but they also typically can amortize a developer's cost over many thousands of customers so I'd be surprised if this drove a big increase — especially compared to the stress we're likely to see if China has an extended Omicron lockdown.
22% increase in some per-GB costs and 50% increase in some per-request costs of the most fungible, commodity service any cloud offers. Really no idea what to make of this. At least it seems reasonable to expect further pricing changes from other clouds in the coming weeks (and knowing AWS, maybe even an announcement in the coming day or two)
I’d be surprised if AWS announced increases: they LOVE to note that they’ve never done that in their sales pitches and enterprise customers value predictability more than the absolute lowest cost. I’d guess that their margins on things like network egress would cover most fluctuation but otherwise I’d expect at most to see something like the EU data centers getting a temporary Russian war energy surcharge while they figure out how to buy a ton of green power contracts.
If AWS wants to be cut-throat, they'd cut the margin on NAT Gateways down to, say, 20% and run a press release calling attention to the increases on other providers.
AFAICT they were a loss-leader with some of the cheapest cloud storage prior to this update, and this brings them closer in price to AWS/Azure (although still slightly cheaper). I wouldn't expect price changes from other platforms in response to this.
> The impact of the pricing changes depends on customers’ use cases and usage. While some customers may see an increase in their bills, we’re also introducing new options for some services to better align with usage, which could lower some customers’ bills. In fact, many customers will be able to adapt their portfolios and usage to decrease costs. We’re working directly with customers to help them understand which changes may impact them.
If they were decreasing prices, it would be in the opening paragraph, like in essentially every other cloud provider's price-decrease announcement.
Whenever reading these announcements, if a price decrease isn't mentioned within the first few paragraphs (the earlier the better), it's basically them trying to explain away price increases for the vast majority.
The fact that they even have to argue about or explain whether prices are decreasing or increasing is an even worse sign.
> Cloud Storage, multi-region replication and inter-region access are all changing in price.
> A lower-cost archive snapshot option is being introduced for Persistent Disk.
> New pricing for Load Balancing (to bring it in line with other providers. Read: very likely AWS pricing).
> A new price for Network Topology, which now includes Performance Dashboard and Network Intelligence Center.
All without saying what the new prices will be, so given that several services with usage-dependent pricing are affected, it could be a substantial change or not much at all.
Quite a vague and unhelpful post by Google, other than giving you a heads-up not to be surprised by your bill in October.
FWIW, one cost decrease actually also uses the word "increase": The amount of Always Free Internet egress will increase from 1 GB per month to 100 GB per month to each qualifying egress destination.
But I don't know if 100xing the free egress offsets all the doubling storage costs...
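Just as a back-of-the-envelope check, here's a rough Python sketch of when the extra free egress stops covering the doubled Coldline Class B operation charge. The egress rate and the usage levels are my own assumptions for illustration, not figures from the price list:

```python
# Back-of-the-envelope: does 100x the free egress offset doubled Coldline
# Class B operation charges? Rates below are illustrative assumptions;
# check the current GCP price list before relying on them.

EGRESS_RATE = 0.12           # assumed $/GB for standard internet egress
FREE_EGRESS_GAIN_GB = 99     # free tier grows from 1 GB to 100 GB per destination

OLD_CLASS_B_RATE = 0.05 / 10_000   # old Coldline Class B $/operation
NEW_CLASS_B_RATE = 0.10 / 10_000   # new Coldline Class B $/operation

def monthly_delta(class_b_ops: int) -> float:
    """Net change in the monthly bill; positive means you pay more."""
    extra_ops_cost = class_b_ops * (NEW_CLASS_B_RATE - OLD_CLASS_B_RATE)
    egress_saving = FREE_EGRESS_GAIN_GB * EGRESS_RATE
    return extra_ops_cost - egress_saving

for ops in (100_000, 1_000_000, 10_000_000):
    print(f"{ops:>10,} Class B ops/month -> net change ${monthly_delta(ops):+.2f}")
```

Under those assumptions the break-even sits around 2.4M Class B operations a month; below that the bigger free tier wins, above it the doubled operation price dominates, and none of this touches the storage-rate changes at all.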
This seems like GCP is shifting its strategy away from trying to win more market share and catch up with Azure, AWS and Tencent. Perhaps they realised that this is futile and are now focusing on revenue, milking their existing customer base.
> In fact, many customers will be able to adapt their portfolios and usage to decrease costs.
Sounds like a sneaky way to win some quick revenue because they know a huge number of customers are not going to be able to go back and re-engineer their storage use in time, so will end up out-of-pocket even while Google gets to pretend they are saving everyone money.
It seems particularly problematic to increase prices on anything to do with long-term cold storage. That is where customers place the most trust in their vendor, since much of this data is held for mandatory compliance reasons and retrieval costs are high enough that migrating out is completely infeasible.
> Storage Transfer Service will be available free-of-cost for transfers within Cloud Storage, starting April 2 until the end of the year
Is this a loophole to get free retrieval of data from cold storage that we could exploit for other reasons?
I want to store a few dozen TB of genomics data in a publicly accessible way. Are there any better alternatives to S3 or Google Cloud Storage? I've been waiting for Cloudflare's R2, but the beta isn't open yet and I'm not even sure it would work well for me.
Better in terms of what? There are many variables to consider, and some might be more important than others in your case. If cost is the top priority, get a dedicated instance with an unmetered connection and a 1 or 10 Gbps port; then you'll have a static price per month that won't surprise you. If latency/bandwidth is more important, throw a CDN in front of that instance, but the price will then vary more as you'll pay per data served (in most cases).
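To make that concrete, here's a very rough Python sketch of where a flat-priced box overtakes per-GB object-storage egress. The server price and the blended egress rate are assumptions for illustration only, not any provider's actual numbers:

```python
# Rough sketch: flat-priced dedicated server vs. per-GB object-storage egress.
# Both figures below are illustrative assumptions, not quotes from any provider.

SERVER_FLAT_MONTHLY = 80.0     # assumed dedicated box with an unmetered 1 Gbps port
OBJECT_EGRESS_PER_GB = 0.09    # assumed blended $/GB egress from object storage

for egress_gb in (200, 500, 1_000, 5_000):
    object_cost = egress_gb * OBJECT_EGRESS_PER_GB
    winner = "flat server" if SERVER_FLAT_MONTHLY < object_cost else "object storage"
    print(f"{egress_gb:>6,} GB/month: object egress ${object_cost:>7,.2f} "
          f"vs flat server ${SERVER_FLAT_MONTHLY:.2f} -> {winner}")
```

Under those made-up numbers the flat box wins somewhere around 900 GB of egress a month; plug in your own quotes to see where your crossover actually sits.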
Wasabi [1] is advertised as up to 80% cheaper than S3, but it requires you to keep your files for at least 90 days. I've seen it recommended several times as a cheaper alternative to S3-backed storage for OwnCloud.
How about B2 combined with Cloudflare? Egress from B2 to Cloudflare is free, and egress from Cloudflare to the internet is free on their free plan, and perhaps competitive on their paid plans?
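For a ballpark on the storage component alone, something like the sketch below. The per-GB rates are rough list-price assumptions at the time of writing, so double-check each provider's current pricing, egress and minimum-retention terms before deciding:

```python
# Back-of-the-envelope storage cost for ~30 TB of public genomics data.
# All per-GB rates are rough list-price assumptions; verify current pricing,
# egress charges and minimum-retention terms before relying on them.

DATASET_GB = 30_000

ASSUMED_RATES = {            # $/GB-month, storage only (egress not included)
    "S3 Standard":  0.023,
    "GCS Standard": 0.020,
    "Backblaze B2": 0.005,   # egress to Cloudflare is free via the Bandwidth Alliance
    "Wasabi":       0.0059,  # flat rate, but a 90-day minimum retention applies
}

for provider, rate in ASSUMED_RATES.items():
    print(f"{provider:<14} ~${DATASET_GB * rate:>6,.0f} / month")
```

The storage line item is only half the story for a public dataset, though; on S3/GCS the egress bill can easily dwarf it, which is exactly what the B2 + Cloudflare combination is trying to avoid.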
It'd be great if, rather than just sending an email pointing at a somewhat confusing calculator or pricing sheet, they'd show the potential cost increases alongside your actual bill, so that you have 6 months to tweak, negotiate or move off the service if you really can't afford the price increases.
I don't get why it's not just "easier" to make the effects super obvious. If people are going to leave, they're going to leave.
While everyone else is able to discuss the announcement, the contents aren't even loading for me on FF98. The OP URL redirects me to https://cloud.google.com/blog/, and I just see the header/footer real estate and a forever progress bar right at the top; the middle is just blank. Cache cleared, private window. https://i.imgur.com/WpyxwqB.png
Setting object lifecycle policies is now more important than ever because of the pricing changes in Google Cloud Storage. If anyone's interested in optimizing their Cloud Storage costs, check out https://www.economize.cloud. We are launching support for Cloud Storage this week.
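For anyone who hasn't set one up before, a minimal lifecycle config looks something like the sketch below. The age thresholds are placeholders, and the generated JSON can be applied with `gsutil lifecycle set lifecycle.json gs://your-bucket`:

```python
# Minimal sketch of a GCS lifecycle config: move objects to Archive after
# 365 days and delete them after ~7 years. Thresholds are placeholders;
# pick values that match your own access patterns and retention needs.

import json

lifecycle_config = {
    "rule": [
        {
            "action": {"type": "SetStorageClass", "storageClass": "ARCHIVE"},
            "condition": {"age": 365},
        },
        {
            "action": {"type": "Delete"},
            "condition": {"age": 2555},  # optional: drop objects after ~7 years
        },
    ]
}

with open("lifecycle.json", "w") as f:
    json.dump(lifecycle_config, f, indent=2)
```

The Delete rule is optional, obviously; the main point is making sure data that's rarely touched doesn't keep paying hot-tier storage rates.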
Even the automatic transition between GCS storage tiers incurs billable read/write operations, so if you have a lot of data in Coldline and now want it in Archive, do it now at the lower prices.
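If you want to do that in bulk, something along these lines should work; it's a sketch assuming the google-cloud-storage Python client's `update_storage_class` helper, with a placeholder bucket name. Keep in mind each rewrite is itself a billable operation and Coldline early-deletion charges may apply to young objects:

```python
# Hedged sketch: rewrite existing Coldline objects to Archive in bulk.
# Each rewrite is a billable operation; early-deletion charges may apply
# to objects younger than Coldline's minimum storage duration.

from google.cloud import storage

client = storage.Client()
bucket_name = "your-coldline-bucket"  # placeholder

for blob in client.list_blobs(bucket_name):
    if blob.storage_class == "COLDLINE":
        blob.update_storage_class("ARCHIVE")
        print(f"moved {blob.name} to ARCHIVE")
```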
I think it's a mistake to call Google the most customer-hostile company to have ever existed. That would require them to even grasp the concept of what a customer is; I don't think anybody working there has ever seen or met a customer in their life.
They are man-children who, during a lockdown, can't even make a pot of coffee on their own, yet work on "cool stuff".
I’m a software engineer at Google, working on Cloud Bigtable, which I personally think is pretty “cool stuff”.
I speak to different prospective and existing customers multiple times a week, sometimes multiple times a day, over chat, email, video conference and occasionally in-person (pre-pandemic).
I worked from home and made two pots of coffee today - one pour-over and one drip.
I don't think that's the takeaway here. Viewed across their entire offering, I suspect this is a relatively minor pricing change and won't have a big effect on the bottom line. Pure speculation, but I suspect that since Thomas Kurian took over there's been an increased focus on becoming profitable, so there's been more attention on tying up the little areas that were leaking money (e.g. legacy G Suite free users). My guess is this is another change along those lines.
Except that HDD and SSD storage is getting cheaper quite fast, while CPUs and RAM haven't got more expensive and keep drawing fewer watts.
So I think it's more like: sorry guys, we ran an analysis and found that when we raise the price, most people won't migrate away and we make more money :]
I'm not seeing that. In the last 2-3 years prices haven't changed much when comparing drives of the same performance and warranty. And the best price per TB is still 1 TB SSDs; large SSDs are still very expensive.
Exactly. Prices aren't determined by costs. They're determined by supply and demand. This might indicate demand continues to increase relative to supply.
Global chip manufacturing isn't a single product. Most of the headlines focus on the automakers because they slashed orders at the start of the pandemic, disrupting all of their vendors, and then twisted arms to get capacity back when they saw business didn't evaporate. If you were competing with that, it sounds like you have had a miserable time.
If you’re not, however, things haven’t been so bad - Apple, AMD, Intel, etc. haven’t had the equivalent of those Teslas shipping with missing parts. There has been the pox of cryptocurrency’s ever higher demands for waste affecting GPU buyers but that looks like it’s far more an issue of demand than supply.
When there is a wafer shortage, flash devices can go to more layers rather than more chips. Less cost effective when wafers are cheap, but acts as a buffer as wafers get expensive.
I don't know details - I don't even know that storage is getting cheaper, haven't been paying attention - but flash chips are, IIRC, way easier to manufacture than current-gen processors; it's plausible that SSDs are escaping being affected the way CPUs/GPUs are.
I'm really surprised they don't just cut to the chase...
"Existing customers will see prices rise by 10% per year, because we know leaving is hard, and new customers will get a massive discount and loads of free credits. If you migrate in from Amazon we'll pay your final AWS bill for all the data transfer.".
Reference: https://cloud.google.com/storage/pricing-announce