Do you need a tool to tell you that the cloud is more expensive?
Do you need a tool to tell you that your company underpays or is bad at recruiting? Or maybe that you do something boring that people don't want to work on?
IMO KEDA is the more important product in this space, because it translates business requirements, like max queue wait time, into compute resources.
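For anyone who hasn't used it: KEDA's triggers target per-replica metrics like queue length, so the wait-time-to-queue-length translation is something you do yourself (roughly, per-worker throughput times the max acceptable wait). A minimal sketch, assuming a hypothetical SQS-backed worker Deployment (the names, queue URL, and throughput numbers are all made up):

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: jobs-worker-scaler        # hypothetical name
spec:
  scaleTargetRef:
    name: jobs-worker             # hypothetical Deployment running the consumers
  minReplicaCount: 1
  maxReplicaCount: 50
  triggers:
    - type: aws-sqs-queue
      metadata:
        queueURL: https://sqs.us-east-1.amazonaws.com/123456789012/jobs  # made-up queue
        awsRegion: us-east-1
        # assume ~5 msgs/s per worker and a 4 s max-wait budget
        # => target ~20 messages per replica
        queueLength: "20"
```

The nice part is that the knob you tune ("20 messages per replica") falls straight out of the business requirement, and the replica count, and therefore the spend, follows from load.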
If you are operating in a way where small cost differences decide whether you break even, you have already failed; no amount of "FinOps" will stop your trend from going to zero. It is delaying the inevitable.
Price is supposed to cut in half every 3-4 years; if it isn't, it is largely because cloud vendors can't bear to take less and hope there really is that much more volume of half-as-valuable computing to do.
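To put a number on that halving claim (illustrative figures, not a real price sheet): with a halving period $T = 3.5$ years,

$P(t) = P_0 \cdot 2^{-t/T}$, so a vCPU-hour at $P_0 = \$0.10$ a decade ago "should" now cost $0.10 \times 2^{-10/3.5} \approx \$0.014$.

Compare that against any current on-demand price list and you can see how much of the old curve the vendors are keeping for themselves.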
The figures they sell as "average CPU performance" etc. are basically the average from when their cloud began.
You've mistaken my point about "total cost of customer" for "raw cost of materials."
The "top brands" probably do marketing. That costs. They probably have better supply chain availability. That costs. They probably have on site engineering support they can offer you. That all costs.
What you're highlighting is that the cost of CPUs has little to do with raw materials and everything to do with all this other attendant process. That has been true since a few short years after the product started existing.
No, Moore's law is fine. If the CPU takes half the real estate and uses half the power but costs the same, it is the same perversion of the market expectations of Moore's law as if you don't deliver twice as much at the same price.
From Wikipedia:
> Some forecasters, including Gordon Moore,[122] predict that Moore's law will end by around 2025.[123][120][124] Although Moore's Law will reach a physical limit, some forecasters are optimistic about the continuation of technological progress in a variety of other areas, including new chip architectures, quantum computing, and AI and machine learning.[125][126] Nvidia CEO Jensen Huang declared Moore's law dead in 2022;[2] several days later, Intel CEO Pat Gelsinger countered with the opposite claim.[3]
I think it's just that, if you believe in economics, AWS must have lowered the price per vCPU, since CPUs have become significantly more efficient in the last decade. Obviously that didn't happen...
The single most important factor in pricing is accurately predicting the future; as we haven't invented time travel yet, we're gonna call that a wash. Here's a tool that measures everything else.
Yep! You are agreeing with me. The tool measures a bunch of noise, essentially, since it cannot predict Amazon's pricing roadmap. Whereas the Kubernetes ecosystem has plenty of valid forecasting tools that are valuable, such as ones that forecast compute usage.
To that point, I believe it's possible to buy spend futures on cloud costs now, and if you're doing that, then technically it is possible to predict Amazon's pricing roadmap. I know for sure that if Amazon themselves didn't offer this (they do), third parties would (they do; DoIT is an example of one I know that does this, but there are quite a few of these compute resellers out there).
But sometimes people are not at the spend or product support level where it makes sense to do that. I would wager, though, that if you're running cloud cost tooling, you probably are one of those people.
You moved the goalposts from "cloud is definitely more expensive" to "well, it isn't, but maybe it will be for you someday," while alluding to vendor lock-in.
Vendor lock-in is a concern with cloud platforms but doesn't relate to the app in the submission.
I challenge you: give me a single product that cannot be moved away from, given some cost or time.
At the same time, I again challenge you: give me a single product that you can move between providers with no cost or time.
To that last one, you can actually find stuff. For instance, you could set up your nginx on plain instances, without any kind of interaction with the hosting provider. In which case, you limit yourself to what every provider can do, leaving all other opportunities aside and using only the bare minimum.
But wait... are you not locked in with nginx (or, to use a better example, with Redis)?
Vendor lock-in is a scam to increase your bills. Because, in itself, everything is vendor lock-in. And it does not matter. Think through your architecture, design your code, and act when necessary.
I don't think you've ever been through the pain of moving a big, fully integrated data warehouse with multiple ETL pipelines built on something like Redshift. Vendor lock-in is more about inertia and less about there not being an alternative. Some things are just really, really difficult to move and ridiculously impactful to change.
You seem to be trying to reduce the vendor lock-in issue to a binary "locked" or "not locked," but I don't think that's a productive or realistic way to think about it. It's really more of a spectrum, and I prefer to think about it as "options" rather than "are we locked." You definitely have a lot more options if you go with Nginx and the project turns evil than you would if you went with some proprietary offering.
Yes, one can migrate their entire CI/CD ecosystem, countless IAM policies across multiple AWS accounts in an organization, their lambdas, databases and S3 buckets over to Azure given enough time and money.
The thing is that almost everything in the cloud is billed as a subscription, and many core components of networked services need not be (except support). Open file servers, self-hosted IAM, and standards-based execution stacks. Don't like hosting your nginx on Vendor-A? Move it to Vendor-B.
Yes, the tradeoffs are costs. No, vendor "lock-in" is not the only factor one should weigh when deciding on an architecture. Yes, "lock-in" can occur with on-prem software, and hardware, as well.
Yes, it's real and worthy of concern when a vendor knows they can squeeze you because you have no other choice and the level of effort to migrate is too high.
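To make the "standards-based execution stack" point concrete: a stock nginx Deployment like the sketch below (names are made up) runs unchanged on any conformant Kubernetes cluster, whichever vendor hosts it. The portability cost is exactly what was said above: you get none of the provider's value-add.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                      # hypothetical name
spec:
  replicas: 2
  selector:
    matchLabels: {app: web}
  template:
    metadata:
      labels: {app: web}
    spec:
      containers:
        - name: nginx
          image: nginx:1.27      # stock public image, no provider-specific bits
          ports:
            - containerPort: 80
```

Moving this from Vendor-A to Vendor-B is a `kubectl apply` plus DNS; moving a warehouse full of Redshift-specific SQL is not.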
It is always more expensive. I haven't moved any goalposts. What I am trying to show is that you don't need a tool - indeed you don't need to look at a single price of anything - to know why. It can be both more expensive and a better value though! If you can't recruit people, or what you do is boring, or you want to operate a business that loses money in the long run: those are some common reasons the cloud can look cheaper when it's not.
RDS is an example of something that is more expensive but can be better value, because a lot of the managed-service work saves the time of those looking after it.
15 years ago I worked in an infrastructure department with 50+ employees - these days a lot of the work we used to do back then is taken care of by AWS.
Scaling the infra to meet peak capacity (which might be needed two days a year) because you can only run on hardware you own, and having data centres and data centre engineers - these are all costs that go away with cloud, even though you are paying more for the compute you do use.
It's usually the wrong question. The right question is: how many more features and how much revenue growth would you get by using cloud? How many fewer people would you need to employ? Would you need to manage facilities - worry about hardware, power, diesel generators, etc.?
Cloud is more expensive. But you’re not stuck with capacity and sysadmin tickets.
> Do you need a tool to tell you that the cloud is more expensive?
More expensive than what? Doing your own?
That depends on what you are doing. Often, the cloud is cheaper if you factor in the human cost. I've seen many spreadsheets that just quote the hardware costs.
Not only is it potentially cheaper, it can be better than rolling your own stuff in-house.
Kadokawa can tell you all about how their in-house shit (literally, Japanese suck at computers) got pwned while the stuff they had in the cloud (I hate the term, but it is what it is) was unscathed.
Keeping in-house has its benefits, but the costs are also steep.
This tool makes it relatively easy to find cost savings, and also to do chargebacks. Most Kubernetes clusters waste tons of compute for no reason at all.
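The waste usually hides in the gap between requests and real usage: the scheduler reserves whatever you request, whether or not the pod uses it. A hypothetical but very typical example (names and image made up):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: api                      # hypothetical workload
spec:
  containers:
    - name: api
      image: example.com/api:1.0 # hypothetical image
      resources:
        requests:
          cpu: "2"               # two full cores reserved on the node...
          memory: 4Gi
        limits:
          cpu: "2"
          memory: 4Gi
```

...while `kubectl top pod api` might show ~100m CPU and ~300Mi actually in use. Comparing requests against observed usage across a whole cluster is the crude manual version of what this kind of tooling automates.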
Unless you are currently in possession of a bunch of latest-generation NVIDIA DGX boxes, which many people are, 99% of your workloads are OpEx.
They might be CapEx in some imaginary, nonactionable accounting sense. Like maybe you are mislabeling exclusive rights to dig holes in some neighborhoods as telecommunications equipment capital expenditures. Just pay an accountant to make up stories for you. You don't need the tool.
Meanwhile, we’re transitioning to on-premises infrastructure due to the increasing complexity of cloud services. Kubernetes and Docker are powerful platforms—we love them—but they were never meant to be cost-driven. Kubernetes is already incredibly efficient. However, surviving cloud costs and avoiding its traps requires granular cost control—far beyond just monitoring RAM, network, or CPU usage. It’s become overwhelming.
In a nutshell, any K8s deployment on-premises tends to be inherently optimized, saving a significant amount of time and resources. I have new servers and old servers in the same cluster; that is epic.
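For what it's worth, mixing hardware generations is mostly a labeling exercise: tag each node with its generation and steer the latency-sensitive workloads with node affinity. A sketch (the `hw-generation` label key and the names are my own convention, not anything standard):

```yaml
# First: kubectl label node node-07 hw-generation=2024
#        (older boxes get e.g. hw-generation=2017)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: checkout               # hypothetical latency-sensitive service
spec:
  replicas: 3
  selector:
    matchLabels: {app: checkout}
  template:
    metadata:
      labels: {app: checkout}
    spec:
      affinity:
        nodeAffinity:
          # prefer (not require) the new machines; batch jobs soak up the old ones
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 100
              preference:
                matchExpressions:
                  - {key: hw-generation, operator: In, values: ["2024"]}
      containers:
        - name: app
          image: example.com/checkout:1.0   # hypothetical image
```

Using a soft preference instead of a hard requirement is what lets the old servers keep earning their keep instead of sitting idle.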
Modern FinOps often feels like a frustrating exercise: Should I choose 2x 2XXL instances or 4x 8XL instances? The conversation rarely focuses on optimizing software performance or database efficiency. Instead, the cloud has turned into a maze of cost centers, where it’s easy to get lost in ‘managing’ the cloud rather than building valuable products for end users.
sorry optimizing sql queries isn't a priority this quarter, could you write a business justification for it so we can ask in the next sprint planning if it can be scored against our business needs?
thanks!
p.s. we got some complaints about slowness on a few pages. can you schedule some time to sync up and take a look? we need to get this solved!
That's exactly my point. Instead of optimizing software, FinOps nowadays focuses on optimizing costs. While they might seem similar, they couldn't be more different. We spend 100% of our time optimizing our software, databases, etc. If we need more capacity, we just add a new server to the cluster. But this is hard in the cloud, easy on-prem.
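"Just add a server" really is about this much work on a kubeadm-built cluster (the address and hashes below are placeholders, and this assumes kubeadm rather than some other distro):

```
# On an existing control-plane node: mint a token and print the matching join command
kubeadm token create --print-join-command

# On the new server (old or new hardware alike), run what it printed, roughly:
kubeadm join 10.0.0.10:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>
```

The scheduler starts using the new capacity immediately; there's no instance-type matrix or node-group API to negotiate with.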
IBM thrives on complexity—that’s the core of many of their products and business model. The fact that even they are getting into ‘cloud cost optimization’ should be a signal for everyone to rethink public cloud strategies.
But is that what's going to happen? IBM has acquired a lot of clever things, only to not really utilize them and just let them wither away.
IBM bought Red Hat 6 years ago; can we truly claim that they've done something useful with that purchase? I get that they haven't managed to mess it up like they did with SoftLayer, but they also haven't done much Red Hat couldn't have done on its own.
Anonymous for obvious reasons: speaking as a current Red Hat employee I disagree with the statement “they haven’t managed to mess it up like they did with SoftLayer”.
IBM policy has definitely infected the company and while outwardly the host still resembles its old self, the infection is spreading and the host is as good as dead.
At some point I hoped that IBM and Red Hat would evolve into a "reverse takeover," where Red Hat's culture would eventually take precedence over IBM's. According to many friends, that outcome is still far from happening.
As someone who used SoftLayer shortly after it was acquired (and it was still pretty much untouched by IBM) - SoftLayer was pretty bad to begin with. And I'm not only comparing it to AWS. It was bad even when compared to RackSpace.
Standardized/commoditized cost data has been a big win and makes public cloud feel a little less dirty. I give these guys credit for OpenCost + the FinOps focus.