We just told two big vendors where to go, and not over mere 20% rises: one was an eye-gouging 600% increase.
We spent two manic weeks scrambling to implement alternatives, but we have now done so, saved ourselves several hundred thousand bucks in the process, and arguably ended up with better systems.
They did this to me and I was shocked and disappointed, especially since their entire business model is to become the plumbing between your different systems. Seems predatory.
It would be interesting to see what the corresponding figures are like for on-prem, although they would be much more difficult to calculate and would vary wildly between companies.
The only thing that really stood out to me was the Google storage price increase, which seems rather large, as in way out of line with our 2023 spend and 2024 budgeting.
It would also be nice to see what exactly is meant by SaaS vs (presumably) IaaS. For example, would Amazon Glacier (a random selection, since we've been comparing its pricing with tape recently) fall under their definition of SaaS?
Storage prices depend on the reliability you want.
Someone like Google can never afford to lose any customer's data. So if you pay for 1GB of storage, they probably actually store 5GB of data or more for you. It will be redundantly stored within the datacenter across different racks, but also stored (also redundantly) in different datacenters in case of flood or fire. There's probably a copy on tape in case of a catastrophic software bug that wipes all the drives. Or two copies on tape, because if there were a software bug that wiped all the drives, the chance that every single tape would be readable for a restore is low, so more redundancy is needed.
However, if you go with a smaller player, they probably still keep multiple copies of your data, but it might be a RAID-5-like setup requiring only 1.3GB of storage for each GB you store with them. That can survive a single drive failure, but two drive failures, a datacenter fire, or an engineer fat-fingering an erase-all command, and your data is all gone.
That's (part of) why the big players charge so much for storage. I actually wish I could choose a less reliable yet far cheaper storage option with a big player, but they don't want to offer that because of the PR hit when they do lose customer data.
That makes sense and is a factor in our calculations. We store everything at least twice (RAID plus actual copies, not including off-site tape, works out to ~2GB stored for every GB), with at least a third copy replicated to an off-site DR backup. That explains the on-paper per-GB price difference (which we would expect to be more expensive in the cloud, the main advantage being that we don't have to coordinate all of that ourselves, so there are areas where we would save; it's just very difficult to do a price comparison given we don't know the details of their system).
It doesn't explain a huge percentage increase though. Presumably they (Google) were already doing due diligence there with respect to reliability.
Essentially agreeing with you on all those points though. Especially the bit about the PR hit. It's a constant factor in our budgeting with the understanding that if you "lose the backups", you are probably out of a job.
> So if you pay for 1GB of storage, they probably actually store 5GB of data or more for you
The actual factor is most likely around 1.4-1.5x, and for sure can't be any more than 2.2x in this day and age. Even the dumbest possible implementation would be "only" 3x, so no, it's nowhere close to 5GB.
Edit: looks like it's public, so I can actually tell you that Google uses RS(3,2), which gives a 1.5x replication factor. When I was there a few years ago, the storage folks told me they had never lost a single stripe of data.
And mirrors to at least one extra datacenter, as any one of them can lose bandwidth to a cut fiber, become unreachable due to a networking snafu, or even burn down entirely.
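For anyone who wants to sanity-check those numbers: for erasure coding the replication factor is just (data + parity) / data, and for plain replication it's the copy count. A rough sketch in Python (the scheme parameters are illustrative, not anyone's actual production configuration):

```python
# Rough sketch of physical bytes stored per logical byte under different
# redundancy schemes. Parameters are illustrative, not any vendor's config.

def replication_factor(data_shards: int, parity_shards: int) -> float:
    """Bytes physically stored per logical byte."""
    return (data_shards + parity_shards) / data_shards

schemes = {
    "plain 3x replication (1 data + 2 copies)": replication_factor(1, 2),  # 3.0x
    "RAID-5-like (3 data + 1 parity)": replication_factor(3, 1),           # ~1.33x
    "Reed-Solomon, 2 data + 1 parity": replication_factor(2, 1),           # 1.5x
    "Reed-Solomon, 6 data + 3 parity": replication_factor(6, 3),           # 1.5x
}

for name, factor in schemes.items():
    print(f"{name}: {factor:.2f}x")
```

Mirror that to a second datacenter and the effective factor roughly doubles, which is probably where intuitions like 5x come from.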
A common problem would be throughput, though. Storage capacity scales much faster than access speed. If you are storing an item only 3 times and, let's say, each storage location gives you 50,000 IOPS max, then you can only ever service 150,000 IOPS for this item, which might not be enough.
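As a back-of-the-envelope sketch with those same illustrative numbers (the demand figure is made up):

```python
import math

# Illustrative numbers from the comment above, not vendor specs.
iops_per_replica = 50_000   # what one storage location can serve
durability_copies = 3       # copies kept for durability
target_iops = 400_000       # hypothetical demand for one hot object

print(durability_copies * iops_per_replica)        # 150,000: not enough
print(math.ceil(target_iops / iops_per_replica))   # 8 copies needed just to serve reads
```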
I was thinking of GCP regions, in which case you do have to pay for it. For Colossus cells within a single region you obviously don't, but I don't know enough about how it maps out down there and whether it just moves data around in the event of PCR.
I did a detailed cost comparison of on-prem vs AWS when I worked at an MSP. Our cost of compute and storage, including DC construction, over 10 years was about half the cost of AWS.
We also used cheap Supermicro gear and had no service contracts or warranties; we had on-site staff instead, and their salaries were included in the calculation.
I think cloud has never really been a question of cost. It's generally about not having to become operational experts in house and maintain that expertise when it's not a core business function (hard for SMEs, for example, to retain talent outside of overhaul and project cycles), senior management's desire to single-source vendor management, and a "throat to choke" that keeps you from losing your job if problems arise.
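For anyone who wants to rough out the same comparison, the model is basically amortised capex plus opex over the horizon vs the cumulative cloud bill. A minimal sketch with placeholder numbers (these are not our actual figures):

```python
# Minimal 10-year TCO comparison sketch. All figures are placeholders;
# plug in your own quotes, salaries, and cloud bills.

YEARS = 10

# On-prem: one-off build-out, refresh cycles, and annual opex.
dc_construction   = 2_000_000
hardware_refresh  = 500_000 * 2       # two refresh cycles over the decade
staff_salaries    = 300_000 * YEARS   # on-site staff instead of vendor support
power_and_network = 100_000 * YEARS

onprem_total = dc_construction + hardware_refresh + staff_salaries + power_and_network

# Cloud: roughly flat annual spend for an equivalent footprint.
aws_annual_bill = 1_400_000
cloud_total = aws_annual_bill * YEARS

print(f"on-prem 10y total: ${onprem_total:,}")   # $7,000,000
print(f"cloud   10y total: ${cloud_total:,}")    # $14,000,000
print(f"ratio: {cloud_total / onprem_total:.1f}x")
```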
These are the drivers I’ve seen, with cost a distant third, fourth or fifth. Can’t wait to see what happens this economic cycle.
The infrastructure behind even the simplest little offering in a modern data center is so immense that honestly it's often a good deal. If you're a small-to-midsize company, what'll you do instead: put a bunch of computers in the office closet and get what uptime and latency?
Pretty good uptime and exactly the same latency (CDNs != cloud, and that's where most latency is handled). Not 99.9%, which is the big selling point of clouds... as long as you forget about all the times that their complex infrastructure collapses on itself and loses you that third 9.
Thanks for this insight. It’s a perspective I don’t have. Did you build the site? Is it performing as expected?
I know “nobody got fired for choosing AWS” but the real value seems to be in burst loads. If you have predictable, stable workloads I can see on prem or hybrid making more sense.
So long as stable also means "unchanging" (in the sense of no new machines are being deployed). Our biggest win from AWS was never having to think about DDoS; our second biggest win was never having to wait for our Ops team (who was very good overall) to have the discussions about how to deploy new hardware, new storage, etc.
I get annoyed when a new EC2 instance takes 2 minutes to launch now. That used to take so long it wasn't even productive to measure it in hours.
We also need to bear in mind that SaaS vs on-prem also often implies 'subscription' licensing vs perpetual software licensing (a one-off payment plus support payments).
If a SaaS product is $100k per year (subscription), the equivalent perpetual license cost is probably $200k (one-off) + $40k annual maintenance. That's my very rough rule of thumb after doing lots of tenders: a perpetual license is usually two years of the subscription fee, plus 20% of the perpetual license cost as annual support fees.
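Worked through, the crossover comes quickly; a quick sketch assuming maintenance is paid every year from year one:

```python
# Cumulative cost of subscription vs perpetual licensing using the rough
# rule of thumb above. Assumes maintenance is paid from year one.

subscription_per_year = 100_000
perpetual_one_off     = 200_000                   # ~2 years of subscription
maintenance_per_year  = 0.20 * perpetual_one_off  # 20% of license = $40k

for year in range(1, 7):
    sub  = subscription_per_year * year
    perp = perpetual_one_off + maintenance_per_year * year
    note = "  <- perpetual now cheaper" if perp < sub else ""
    print(f"year {year}: subscription ${sub:,} vs perpetual ${perp:,.0f}{note}")
```

On those numbers the perpetual route pulls ahead around year four.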
If you can afford to build a data center and know you have that fixed amount of capacity for 10y then you’re not talking about 99.99% of businesses. For everyone else the cloud is cheaper. In fact, to your customers, you are the cloud, so I’m not sure what you are arguing.
> It would be interesting to see what the corresponding figures are like for on-prem
They replied. Not everything is an argument.
Also, they didn't say anything about fixed capacity; they're likely just talking about 10-year depreciation, which is a very common accounting convention. And "for everyone else the cloud is cheaper" is definitely not true: it depends on workload and architecture. If you're in the business of selling infrastructure, for example, using the cloud will eat most of your margins (this is not to say it isn't done, but usually there are caveats, e.g. maybe you get some cheap VMs provisioned and do your own fleet management on top instead of building everything on, say, Firebase or DynamoDB).
> It would be interesting to see what the corresponding figures are like for on-prem […]
You won't find any, not easily anyway. The world of enterprise SaaS follows a different business model that I refer to as «hiding the dead horse in the cloud» and holding the customer to ransom.
The premise: the customer is already locked into the wares the vendor supplies, and the customer can't easily migrate away from the product. Oftentimes the product is also riddled with technical debt, but it either has a feature critical to the business or there are multiple business (and technical) processes with a deeply ingrained reliance on the product. It is either the data or the integration with the product. Regularly both.
The vendor repackages the product («the dead horse») as a SaaS, rolls it out into the cloud (the act of hiding said dead horse), bumps prices up and slips the bill under the customer's front door (the ransom). Cloud costs are passed on to the customer at a markup. The product (and sometimes the customer) might get minor tangible benefits from the repackaged version, e.g. improved availability and reliability, although even that is not always the case. SaaS products typically do not use native cloud services; they run on EC2 instances (or their equivalent in Azure, less often GCP) and are cobbled together just enough not to fall apart.
SaaS, as a business model, is not about engineering excellence most of the time. It is about squeezing out the last drop of blood left in a legacy product, now offered as a shiny-shiny SaaS version («hey, lookie, we are also in the cloud!»). This is the reality.
The theory is somewhat different. Between the 2000s and 2020 (approximately), vendors used to tailor their products to the specific needs of each customer, which became increasingly difficult to maintain, update and upgrade: there would be no single product titled «ABC», there would be an «ABC customised/hand-rolled for customer 123», an «ABC customised/hand-rolled for customer 456» and so forth. So the original premise of SaaS was to have a single version of the product for ALL customers, exposing simple data-centric and whatever other technical interfaces the customer would hook into. The enterprise world does not work that way, though.
There are positive exceptions in the world of SaaS, and almost all of them are in the startup universe and outside the enterprise.
GCS launched as an S3 competitor (with an S3 compatible API). So their pricing was basically copy pasted from S3.
From the start, though, they offered features that were more expensive to run (a consistent list API, 1 Gbit/s transfer per file vs 100 Mbit/s for S3 at the time, Glacier-like storage with instant retrieval).
I think this pricing jump mostly brings it to where it always should have been. Plus a bit of them now being so focused on enterprises that the list price means less and it's all about "call sales for more information".
S3 has built most of those features since then though, without a price increase.
That doesn’t include R&D & marketing costs which make them unprofitable overall.
I guess they are betting that R&D costs are fixed(ish) or at least will grow slower than revenue. Which of course doesn’t always work, especially over the last few years where free money resulted in massive bloat.
I propose a new term "TDaaS": Tech Debt as a Service.
(Feel free to modify the name to something less clunky.)
A SaaS can provide a useful tradeoff. Just don't get scared by FUD marketing tactics, like "Why you shouldn't build your own X" etc. It's a tradeoff: you introduce an external dependency and give up control.
> It's a tradeoff: you introduce an external dependency and give up control.
I think this is why self-hostable solutions are becoming more common.
When a solution is self-hostable, both sides win.
The SaaS provider can operate the solution, which offers revenue. Customers like it because they can get going quickly.
Or the customer can host it. This allows them to control where the data goes and minimize costs. I like the way this tweet puts it: "The real reason to buy SAAS is for someone else to do the ops work"[0].
In both cases the customer benefits from the continued development of the software (similar to how a library improving benefits all applications which depend on the library).
And the ability to self-host removes a business risk. If the SaaS vendor fails, well, we have to support it ourselves. If it is OSS or we have the code in escrow, all the better.
> Price should be the maximum the market can afford while still beating ones competition.
This is such a frustrating perspective. I have so much respect for people who build projects and companies with a profit margin that lets them earn a comfortable living, without trying to extract as much as they possibly can from everyone around them.
This approach has an actual name in my industry. “The Nonprofit Starvation Cycle.”
The basic premise is that in a very resource constrained environment, there’s never enough money to invest in infrastructure and the continuous improvement of process and product. This affects most vendors that serve the nonprofit market exclusively. The price you can charge your customers lets you earn a modest profit and pay your employees, but eventually your chronic lack of resources to invest in your people and product kills your business when a fresh product with fresh funding comes onto the market. The cycle then repeats as that new vendor is unable to make the investments needed to keep up with the broader industry state of the art.
So file this under one of those things that sounds nice in theory but kills your business as competitors eat your lunch in practice.
The problem with this approach is that your competitors who price higher will end up with a bigger pile of money, which they can use to outcompete you for land, people, equipment, and other resources.
For example, if you are bidding for a business, and buyer A models max profit/lowest costs while buyer B models less profit/higher costs (such as paying employees more), buyer A is going to be able to offer more money and secure the asset.
Why does a bottle of water at a ballgame cost $5, at a food truck $1, and at a supermarket $0.15?
Personally, I find the ballgame price exploitive, but the food truck has added a bunch of convenience (and a few pennies of refrigeration cost) and that's worth paying $0.85 extra to a lot of people.
Difficult to calculate that given lock-in. If you've outsourced everything infrastructure related, you aren't in the same market without substantial upfront investments being made.
Amazon/Google/Microsoft/IBM et al. all know this, it's why there are significant incentives if they know you can walk.
Maybe it's a long term strategy - get everyone on your SaaS platforms and then, once everyone is dependent on you, start picking winners.
Or just keep raising the bar and watch people go crazy because you already have something like 1% of all the money ever printed. Why? For the sheer cruelty of it, seems to be a good answer.
A lot of decisions being made at the moment seem to be 'we could be empathetic and kind, but where's the fun in that, let's rinse them out like rags' type decisions.
They focused on growth before. Growth phase and then greed phase is very common in this industry, you start out offering everything for free to grow, then you remove the free offering and force people to pay to stay with you, that is the greed phase.
With diminishing access to venture capital we see companies go to the greed phase faster than usual.
If they had given you a seat that gives you power, money and influence, and you had the knowledge to expand it both for yourself and them, what would you do?
Nothing? Well that seat would vanish pretty quickly.
This is complete nonsense, right? SaaS prices have nothing to do with input costs --- prices are plucked out the air (or from somewhere else...) based on some guess about what the market might bear vs aspirations for volume. Then, once the smoke clears, you jack the price up a bit in order to buy a new boat.
As TFA says, these companies know that their "all in one" cloud offerings are sticky. My employer just spent the last few years going all in on Microsoft's cloud. We use it for email, productivity suite, Teams, storage.
This was after several years of using Google's tools (though not with the same level of commitment -- we still used Office, and had Exchange and AD on premises). Google jacked up their prices for Workspace, so that was the motivation to move to Microsoft. I fully expect Microsoft to do the same in another year or two, but this time around I don't think we'll change -- this time we're too invested; it would be too big a project and have too many ancillary costs.
If we had been more "all in" on Google, we probably would have just paid the increase.
It seems a little dangerous. Maybe many companies will be like: you know, we panic-bought all these online things during COVID, and do we really use them all that much?
Long term rug pull on CTOs who thought the cloud would somehow be better, easier & cheaper.
CTOs get lots of free credits, downsize their infra staffing.
Start using the basic building blocks, but the basic building blocks (servers and storage) are marked up and expensive.
Next, to find cost savings, their orgs need to move deeper and deeper into the alphabet soup of the cloud vendor's proprietary stack, at which point they are locked in.
They’re even more locked into cloud now than it seems because not only have they shut down their in house ops but they’ve lost the skills and the work force.
Now the SaaS and cloud vendors are free to squeeze as hard as they want. The lock-in is strong.
Saw this coming miles away. Nobody listened of course. This industry worships fads. When the buzz becomes “everyone is doing X” it becomes truly hard not to do it too. All your bosses, employees, investors, etc push for it.
Right, and a lot of the cloud-naive make lofty positive assumptions that aren't true.
For example, that you can't lose data in the cloud.
There are very real operational mistakes you can make with combinations of S3 settings around versioning and deletes that result in permanent, irretrievable data loss.
My last shop managed to do this and then was implementing some sort of cloud data backup scheme.. in the cloud, lol.
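As a purely illustrative sketch (not what that shop actually ran), this is the kind of audit that catches the versioning foot-guns; the bucket names are placeholders:

```python
# Sketch: flag buckets where deletes/overwrites would be unrecoverable.
# Bucket names are hypothetical; error handling trimmed for brevity.
import boto3

s3 = boto3.client("s3")

for bucket in ["example-prod-data", "example-analytics"]:
    versioning = s3.get_bucket_versioning(Bucket=bucket)
    status = versioning.get("Status", "Disabled")       # key is absent if never enabled
    mfa_delete = versioning.get("MFADelete", "Disabled")
    if status != "Enabled":
        print(f"{bucket}: versioning {status} -> overwrites and deletes are final")
    if mfa_delete != "Enabled":
        print(f"{bucket}: MFA delete off -> a bad script can purge old versions too")
```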
The other assumption is that, sure, compute is costly, but since you can spin it up and down you'll save so much. As it turns out, most apps, most of the time, do not have bursty use cases that merit paying 2-3x for compute in the hope of spinning it down when idle. The funny thing is that the same people selling this line are also the ones telling you to negotiate savings via annual agreements that require a minimum amount of compute/spend.
The last one is the assumption of hockey-stick compute growth, and that of course choosing AWS will make that easier than having to constantly procure servers. Maybe! But few have hockey-stick compute growth needs for long. And it's not that hard to swap your servers out every 18 months to get your compute density growth. And you aren't always guaranteed AWS compute at the prices you want, as we've seen shortages of certain classes of compute, the need to reserve up front, etc.
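The "spin it down" argument only pays off below a certain utilisation. A quick sketch, assuming an on-demand premium of roughly 2.5x over owned or reserved capacity (an assumed figure, not a quoted price):

```python
# When does pay-per-hour compute beat an always-on box you own?
premium = 2.5                    # assumed on-demand cost per hour vs owned-equivalent
breakeven_utilisation = 1 / premium

print(f"break-even utilisation: {breakeven_utilisation:.0%}")
# -> 40%: if the workload is busy more than ~40% of the time, the always-on
#    option is already cheaper, before counting lock-in and egress.
```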
> They’re even more locked into cloud now than it seems because not only have they shut down their in house ops but they’ve lost the skills and the work force.
On the other hand now they are free to concentrate their labor on their core business and competencies. Why on earth would they want to tie up payroll and benefits on people running the company email?
Large software companies still hire their own accountants and lawyers and HR correct? And this is largely expected and understood to be a requirement. Why should IT be any different?
Once you hit a certain size, it doesn't make a whole lot of sense to say something like "let's outsource our IT, they are distracting me from focusing on our core competencies". You should have a CIO/CTO and a whole bunch of other people to be distracted for you. In fact, that's kind of the purpose of having an IT department, so that you can focus on your core competencies rather than keeping up to speed on the latest changes to AWS services or whatever.
This is weighted, of course. The more your industry requires competent IT, the larger the share your company should probably handle in house, because then it becomes part of what should be considered your core competencies if you want to stay around for very long.
And relatedly: the further you let things that "aren't your core competency" be outsourced, the less competence you retain to judge which vendor is good to outsource to and to manage their performance.
Companies are going through the "reduce our cloud bill NOW" exercise. They have to cut the unnecessary, wasteful infrastructure and raise prices.
So, after all these years of hearing "who cares about performance and cost - just buy more engineers, CPU, and memory", gravity is once again being the bitch that it is.
I can relate to that. I was working at a company where, after some layoffs and other cost reduction measures, the COO (ex-Amazon) came in and gave the instruction to cut infrastructure costs (in our case GCP) as much as possible, and after 3 months we had saved more than USD 15 million. Most of it was GCS and BigQuery (in some cases people were running USD 5K queries).
Turns out building massive workforces on top of software businesses that don't require them isn't good business.
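To put a USD 5K query in perspective: BigQuery's on-demand model charges per byte scanned, so assuming list pricing in the ballpark of $5 per TB scanned at the time (an assumption, not a quoted figure), that is a single query chewing through roughly a petabyte:

```python
# Back-of-the-envelope for a USD 5K BigQuery query under on-demand pricing.
price_per_tb_scanned = 5.0    # USD, assumed on-demand list price at the time
query_cost = 5_000            # USD

tb_scanned = query_cost / price_per_tb_scanned
print(f"~{tb_scanned:,.0f} TB scanned, i.e. roughly {tb_scanned / 1024:.1f} PB")
# Partitioning, clustering, or moving to flat-rate slots are the usual fixes.
```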
The big mistake most companies made is that they applied the old-school "Fortune 500" way of company building to software businesses (i.e., bigger is better). Humorous, because the promise of software was that it reduced the need for that behavior.
The end result is that they've built companies which ironically have great margins (or at least, should), but they burn cash like they're still in their seed round. Even worse, this behavior pushes out most of the early talent which means you need ever-more people to fill in the skill gap created by hiring lower quality talent. This also creates the problem of a once-great product deteriorating into mediocrity over time which also threatens grip on market share.
As a co-founder of a SaaS company, I look at my (much, much) larger competitors as a competitive advantage rather than a disadvantage. Most of our competitors in our space have much higher pricing because they have 1000+ person headcounts to maintain. We will certainly make less money than our competitors, but the general idea is that there is a sweet spot where we can charge less for our service and still walk away rich because there are fewer things to burn cash on.
Zero interest rates. You raise big rounds, so now you have to spend them. You raised at crazy valuations, so now you need massive ARR and have to enshittify and raise prices.
The whole phenomenon of huge SaaS companies with huge prices and huge workforces seems like a zero interest rate phenomenon.
That is definitely the case, but a lot of those SaaS companies don't even have great margins and were pure ZIRP. That's because they also keep building their stack like they're in seed stage.
> Software price hikes are driven in part by inflation. The cost of living has surged post-pandemic in most economies. Higher electricity costs, chip shortages, and rising wages all increase the cost of doing business.
Not that there hasn't been inflation, but prices have gone up much more in the last year than inflation itself has. Inflation is 3.2%, lower than the 100-year average in the U.S. It was higher a year ago, but still only about 9%. All of the price increases in that article are significantly higher than that; most are multiples of it. Even if they hadn't raised their prices for a few years prior, that still doesn't add up. I don't buy inflation as a valid excuse, though I would buy it as an invalid excuse they're still using because people perceive it as true.
It’s not caused by general inflation, it’s a response to the attempts to mitigate inflation.
Increased interest rates have meant an end to cheap credit and put a damper on stock prices.
Companies are trying to shore up their valuations by signalling to investors that they are driving towards profitability rather than debt-driven growth. This means raising the price of their product and cutting costs (i.e. layoffs).
If the published inflation rate is significantly less than the increase in the price of a wide variety of goods, doesn't that indicate that the inflation rate isn't being calculated correctly? My understanding is that the inflation rate should generally reflect how much more expensive things are getting each year.
The ever-more-tiresome issue is that the textbook, accepted-by-rote relationship of interest rates to inflation hasn't held up. The Fed has raised, raised, raised rates for what, a couple of years now? To negligible effect.
That's because nobody is taking the administration to task for allowing the REAL cause of recent spiraling costs: monopoly and oligopoly. This is straight-up corporate profiteering and price-gouging. It's infuriating to see the press's dereliction of its duty, both in its failure to demand answers and in its regurgitation of embarrassing circular "logic" like "higher prices are driving inflation!"
In other news: The heat is driving up temperatures around the world.
When the four or so meat processors whine about a "labor shortage" and then report gargantuan profit increases, I'm not really keen on waiting any more. And meat is just one example.
It's typical that prices change by different amounts in different areas. For example, if you go to https://www.bls.gov/cpi/ you'll see that the overall +3.2% for June is broken up into +4.9% for food, -12.5% for energy, and +4.7% for everything else. The more finely you slice, the more often you'll see something moving very differently from the rest of the economy.
The profit-price spiral of the greater economy is why. Everyone raises their prices, so everyone raises their prices. 60% of inflation in this post-pandemic echo was measured to be due to corporate profiteering, i.e., greed.
SaaS companies are owned by billionaire-investors.
Or actually investor companies, the ones at the top of the world.
The same ones who get the newly printed dollars in their hands first, to spend before the common folks wake up to the fact that the money was printed and starts losing value.
I read somewhere that 80% of dollars were printed (i.e. created digitally) in the last 3 years, so the inflation is now coming fast, $1 becoming worth $0.20... that's 5x the prices.
And most common things have only gone up 50 to 150%... more to come.
> We spent two manic weeks scrambling to implement alternatives, but we have now done so, saved ourselves several hundred thousand bucks in the process, and arguably ended up with better systems.
Lean doesn’t have to mean poor.