Terraform Cloud Pricing Changes Sticker Shock (shavingtheyak.com)
227 points by exponentialgenx on Nov 19, 2023 | 135 comments



Who is upvoting this blog spam from an account created 4hrs ago?


While it's a bit unusual for a new account submitting only one site to have two posts on the front page, it seems to be a well-written article. I certainly appreciate fresh takes over someone submitting the same Paul Graham or Kalzumeus page that has been submitted dozens of times, or over users who submit several times a day, every day (tosh, tomte, etc.).


> it seems to be a well-written article

It's LLM-assisted, bloated fluff, which it pretty much admits (with some more LLM-y fluff) right at the start:

https://shavingtheyak.com/2023/10/29/seo-generative-ai-and-t...


The article you linked doesn't admit that at all.

> I will use generative AI for ‘color’ content like pictures and icons due to factors like cost and time, but I don’t intend to use AI for anything else at all other than possibly helping me sort through ideas or summarizing pertinent information from overly long articles.

TFA doesn't read like ChatGPT nonsense and it provides some interesting discussion. It looks like it's just a new technical blog someone created and is earnestly sharing with HN.


Read the linked post; it's completely generic and repetitive, and 'possibly helping me sort through ideas' is doing a lot of lifting that the author obviously isn't doing.

> If you look around right now, it seems generative AI is all the rage. There is a growing pool of people now out there who:

> [bulleted list]

> Lets think about what generative AI can do right now and what this could mean for web content and organically generated search traffic going forward:

> Right now, generative AI can:

> [bulleted list]

> Lets take a look at the web for a second. Websites are what? Text. Images. Sometimes videos or music and sounds. Sometimes websites are dynamic and interactive, like web applications. All of these things can and will be created using generative AI. The use of generative AI to save time creating these things will increase exponentially over time.

This is original writing the way trying to meet the word or page count for your middle school homework is original writing.


Is it a community-determined "best practice" to prioritise reviewing a contributor's history over reading their contribution and upvoting/flagging according to its content?


Hashicorp is one of the more extreme examples. Terraform Cloud is an okay product at best, and it went from expensive to very expensive. Moreover, they aggressively changed the Terraform license from open source to BSL.

So after the whole community contributed a lot of providers, they want to profit from that.

You should use OpenTofu and buy IAC from one of the companies sponsoring full-time engineers on that project: Spacelift, env0, Harness or Scalr.


In all fairness, the Terraform provider registry has always been clearly proprietary, prohibiting commercial use by default:

> You may download or copy the Content (and other items displayed on the Services for download) for personal non-commercial use only

https://web.archive.org/web/20201106225027/https://registry....


I listened to a changelog.com podcast the other day on the subject of OpenTofu (it was a few months old, so they were still calling it OpenTF).

Apparently a lot of the actual images are hosted on the GitHub container registry (ghcr.io), and in many cases, the Terraform registry is just passing the request through to download an image from a repo that may be owned by someone else.

So in effect, they're putting scary licensing text on a lot of content they have no control over.


It is, btw, why it was so easy for us to spin up a quick-and-dirty registry for our alpha release[0] that basically contains all providers and modules - almost everything is hosted on GitHub.

Right now we're working on the stable registry[1,2], and that's basically the main blocker left to a full stable release. You can track the progress in our weekly updates[3].

Note: Interim Tech Lead of the OpenTofu project

[0]: https://github.com/opentofu/registry

[1]: https://github.com/opentofu/opentofu/issues/741

[2]: https://github.com/opentofu/registry-stable

[3]: https://github.com/opentofu/opentofu/blob/main/WEEKLY_UPDATE...


I would use OpenTofu, but I'm still waiting on CDKTF: https://github.com/opentofu/manifesto/issues/58


Yet, HashiCorp does not support CDKTF natively in their commercial product! It puts very little effort into CDKTF as it doesn't make any extra revenue. The only reason they did it was to offer an alternative to Pulumi.


Why not use the HashiCorp one? It's still MPL and should remain compatible with OpenTofu...


At HashiConf last month they started talking about adding Terraform features that would only work on Terraform Cloud.

It was very sad to see.


Sure, I could, it's just that I'd rather ensure that compatibility remains than retool on OpenTofu (admittedly that's not a big deal currently while they're at parity) and risk some compatibility drift in the future. I'd rather wait and see if there's a more unified solution.


https://www.nullstone.io/ (YC 23) is also a great option


I am a Terraform user but haven't had the courage to migrate, as IaC is trickier than it looks. How compatible is OpenTofu with Terraform? Is there a page that tracks this or lists the things that might break on migration?


OpenTofu is a superset of Terraform. They maintain complete compatibility with Terraform while adding additional features on top of it. They have more dedicated developers on the project than Hashicorp does, and they are building out a fairly robust RFC process for the additional features.

I would not recommend using OpenTofu in production until they've had their first official release though. They currently have some alpha releases out that you can experiment with.


A superset of Terraform 1.5. Terraform 1.6 has features that OpenTofu does not have.


No, it's a superset of Terraform. The developers of OpenTofu have been adding every new feature in, and are committing to version parity. They've already got the new testing framework built out for example, as well as the changes to the S3 backend.


Is there any source for this? I have heard a lot of conflicting things, just like in this thread.



All these companies saw Snowflake's success and thought, "we want to use some kind of usage-based pricing scheme to capture the value that we provide". Implicit in this thought process is that the price should scale in an unbounded way - if you're providing X value for a company with 10M revenue, you should be providing roughly 1000X value to a company with 10B revenue.

And it may be true that you are providing 1000X the value! But that doesn't mean that you are going to be able to get away with charging 1000X the price when the cost of hiring a team to fully replace your product with something built in house is only 10X or 100X.

Snowflake doesn't have this problem because very few companies could recreate Snowflake, period, much less recreate it for less than their Snowflake spend. But all these "hosted open source product" offerings should have realized by now that they need a ceiling on their pricing structure and/or they need to stop open sourcing their code.


"Value-based pricing" is a code word for "reach into your customers' pockets and steal the spare change". It's the end game of monopolists, actual or wanna-be, and it's a good way to burn literally all of your goodwill as quickly as possible.


I completely agree, and I've been trying to shout from the rooftops about how this kind of greed inherently prevents banking value in your "brand", which means you'll only ever be one mistake, incident, or bad news story away from a bunch of angry customers. This puts exactly the same pressure on a company and product as only shipping "barely good enough" functionality (which makes sense, since you're ratcheting up the cost with this exact balance in mind). My take is that this is a really bad trade for a SaaS company/team trying to build for anything except the short term.


That's the biggest challenge - you can't capture all the value that you're providing your clients, otherwise there's 0 reason to use your product over either

1. Building it inhouse

2. Using someone who is capturing <100% of value provided.

Then the only "moat" you have is lock-in, but clients tend to not like that, and you're squandering your reputational capital.


The problem is that the value Terraform brings and the value Terraform cloud brings do not scale the same way. Hashicorp prices as if they did.


This is a great point.

I get that companies want to price their products in a way that they don't leave surplus value on the table (i.e. the more value a customer gets out of it, the more expensive the product gets), but the fundamental problem is that it's... so hard to actually "judge" the value that's being created by using the product in many cases.

In such cases, probably for the best that companies err on the side of caution with some formula they're sure is below the actual value but isn't too far away.


Based on recent experience with large deployments at S&P 500 types, IMHO the value Snowflake provides to most companies can easily be deployed in house with either old school Hadoop/Spark/Hive, old school Impala/HBase, or more recently Trino.

At the clients where we installed Snowflake, everyone is super worried about any additional copy operation because everyone is keenly aware of the unlimited cost. This makes getting shit done harder.

With clients that have a decently sized k8s cluster running Trino, or an old school Spark/Hadoop cluster on fat VMs, you know what you've got and what you're paying for. You estimate how much RAM and how many cores you need for certain workloads, and once they are purchased your engineers get really good at squeezing as much work out of the given resources as possible. And there's no constant complaining in meetings about the potential extra cost of an additional computation.

Also, if people working in other parts of your org don't run Snowflake themselves, you either pay for their Snowflake usage on your bill or you pay for copying data back out to S3/ADLS/SFTP so that other departments can get to the results of your computations. And Snowflake really doesn't like it when you do this; they even gave exporting data a new name, "unloading", making you feel like you're undoing something you probably should not undo... On that note, Snowflake's data export options are significantly underdeveloped compared to Databricks, Cloudera, and the original open versions: Spark, Trino, and Impala.

</rant> :)


Curious, how does Snowflake compare against Redshift, Trino, etc.?


I’ve loved using Hashicorp products for many years but I have (despite often being in a position to do so) failed miserably to engage in any kind of commercial relationship with them.

Their pricing has always been opaque. Despite trying my hardest, even multiple phone calls with their sales teams, I never got even a sniff of how much it would cost for enterprise deployments of Hashicorp products at any of my clients.

I’m a shareholder in HCP and lost money on them after the IPO, so maybe I’m a bit sore, but I’m dumping my holdings soon. I’ve got no confidence this company actually knows how to sell into the bulk of the addressable market that they outlined in their S1.

I think the growth they are currently generating will top out pretty soon when the shock of paying about the same for the management tools as the actual cloud services they are running hits home with their customers.


I can give an insight! We tried buying into Vault Enterprise and let me tell you, it was incredibly expensive. We're talking Splunk levels of prohibitive cost here, maybe even worse.

Their pricing models are actually insane. For Vault Enterprise, you buy into a fixed, limited number of clients with a pricing ladder that would make Apple blush. You start at 100 "service tokens", with the next levels being 1000, 2500, and 50000 tokens. Any "service" that needs to connect to Vault is a client, and the definition of a service is pretty loose. For Kubernetes / compose stacks, any one pod / running container is a service.

It gets worse: once a token has been claimed, it can no longer be used by a different client for the entire billing period (meaning: a year). This means you can run out of valid client tokens even if you're only actively using half of them, because you spent the other half on testing or no longer run the architecture that used them up. Oh, and users are clients, too.

All in all, the ballpark moved somewhere in the low six figures for their 100-token agreement, if I remember correctly. We had to decline because Vault alone would've cost a large part of our infrastructure budget.


So the pricing in essence defeats the whole fine-grained permission model you should be using with Vault.

Also, I wonder if the same user having multiple different tokens would count as different tokens... Probably, just to inflate the number...


Did you end up running the free version, or some alternative?


From what I've seen, they tried to sell 100k/yr minimum licenses, so they probably just disqualified you early in their sales process.


* Gross margin: 80% (hey, this one is pretty good)

* Operating margin: -62%

* Net margin: -57%

* Return on equity: -22%

* Return on invested capital: -21%

Their sales and marketing is pretty bloated and is destroying all of their gross profit alone.

Their stock seems to be priced on the hopes and dreams that they'll grow revenue out of their current problems before they run out of cash. But the headline and the reactions here show how they're trying to do that.


That is not too unusual for software companies on IPO, lots of them paying $5 to acquire $1 in revenue. I think the common justification is that new customers will stick around for years.


I'll give you a clue: it's fucking expensive and their stuff creeps in everywhere.


Kind of a shameless plug but it's hard not to with such a title.

Make sure to check out Spacelift[0]. It's a CI/CD platform specialized in Infrastructure as Code. Terraform/OpenTofu are first-class citizens, and it brings advanced customizability with cross-statefile dependencies, OPA policies (not just for access control, but e.g. for customizing your GitOps flow), and more.

The pricing is reasonable, too, and not per-resource. Generally based around concurrency.

Disclaimer: I work at Spacelift, but I do legitimately think it's a great product and recommend it.

[0]: https://spacelift.io


I second this shameless plug, and I'm about to be a two-time Spacelift purchaser. I did the same dance with Hashicorp a few years ago, which led to a product shoot-out that Spacelift won easily. That was before it also handled Kubernetes and Ansible.


That's great to hear and thanks for the kind words!


Is concurrency in this sense mostly just 'how many terraform plan/apply runs you have going at once?'

Also, is the Enterprise plan significantly more expensive than the cloud plan?


> Is concurrency in this sense mostly just 'how many terraform plan/apply runs you have going at once?'

Yes, though not by the minute, but by "max concurrency" over a month, with some room for bursts.

> Also, is the Enterprise plan significantly more expensive than the cloud plan?

It's quite a bit more expensive. I do recommend contacting our sales team[0] and presenting your use-case, though, to get more details. You can definitely work something out with them.

[0]: https://spacelift.io/contact


Thanks! Feels like most of the time I fall right in the horrible spot of basically 'very small enterprise'. I need some enterprise features, but will never be able to get funding for the enterprise version.

Thanks!


Same games as with Terraform Cloud - no published Enterprise pricing!


Terraform Cloud costs are an absolute joke. I was actually the decision maker a couple of years ago when we chose our IAC stack and decided not to go with Terraform Cloud despite thinking the product is strong, and it was entirely due to the business model and cost.

For us (and I'm guessing for many VC-funded companies), SaaS-managed IaC is a nice-to-have, but certainly not a MUST-have, and certainly not something in the same "willing to spend money" range as your monitoring or your main cloud hosting costs.

I see these kinds of tools as a tier below your monitoring tools like DataDog/Splunk, and those monitoring tools are a tier below your AWS/GCP costs. If your IaC or CI costs are approaching or overtaking your monitoring costs, something weird is afoot. Likewise, if your DataDog costs are approaching your AWS bill, something is obviously wrong.

Hashicorp, in my opinion, thinks their tool is more mission-critical and more of a value-add than it actually is, and I think they also don't understand that in the current high-interest-rate environment, companies are FAR more willing to put in engineering time on migrations or money-saving projects. My own company put in hundreds of hours of engineering time to reduce the DataDog bill by roughly 40-50%.


Honest question: if you need to pump data into Datadog from your cloud, why not just stop doing that and use the cloud tools? Datadog isn't creating metrics. The metrics are already on your cloud provider (GCP, AWS, etc.) -- AND all these providers have ways to view metrics. Why do people pay 20k per month for Datadog? They don't want to use GCP Metrics Explorer? Same with TF Cloud. You can just set up a Jenkins job and do everything TF Cloud does in a few hours. Why pay for TF Cloud? Are people confused? I must be missing something. I worked at startups that thought all metrics needed to be in Datadog, but they didn't understand they could just use Metrics Explorer. You can't even use SQL to create graphs in Datadog; it's awful.

IMHO, if you are serious, dump the metrics into BigQuery/Redshift and start doing SQL like an adult.


If you start pulling all the metrics we currently use in DataDog out of something like BQ/RedShift you're going to spend WAY more in terms of engineering hours and infrastructure than we're currently paying DataDog.

I keep hearing something about a company that switched their monitoring from DataDog to DataBricks and I can see that yes you probably could go build a monitoring solution on top of a datastore like that. But I certainly wouldn't want to.


For viewing, CloudWatch is simply nowhere near as convenient: writing whole custom queries takes a lot of time, and setting up alerting for just the right thing sometimes requires you to copy the complex result as another metric. Basically, I can create a time-shifted difference of two metrics reporting at different intervals, smooth it, and make it alert the right teams through Slack and PagerDuty in a minute or so (while getting visual feedback on the query the whole time) - this would take a significant amount of time to do through the plain AWS process.

It's a bit like "you can write everything you want in assembler without the overhead of extra layers of X". Yeah, sure, I can. But I appreciate the extra layers of X. I'll take the right cost/convenience balance, like an adult, thank you.


I would not consider metrics explorer to be a particularly good product. For queries where the query builder isn't good enough and you write PromQL instead, they don't even let you alias the legend; instead you must see the entire query as the label for that line.

A pretty minor nitpick, but indicative of the level of attention they give the product.


At a prior company, we had a terraform slack channel, where people would post lock and unlock emojis and apply terraform manually.

Obviously, it's not a perfect system, and it doesn't indefinitely scale, but it worked well enough.


You can set up remote backends which support locking (e.g. Azure Storage, Amazon S3, etc.); that way it's automatic.

You can see a list on the left-hand side here: https://developer.hashicorp.com/terraform/language/settings/...
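
For example, a minimal sketch of an S3 backend with DynamoDB locking (the bucket and table names here are placeholders you'd create yourself):

    terraform {
      backend "s3" {
        bucket         = "my-terraform-state"     # placeholder S3 bucket holding the state
        key            = "prod/terraform.tfstate" # path to this configuration's state file
        region         = "us-east-1"
        dynamodb_table = "terraform-locks"        # enables locking; table needs a "LockID" string hash key
        encrypt        = true
      }
    }

With that in place, terraform plan/apply take and release the state lock automatically, so the Slack emoji dance isn't needed.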


We use Atlantis [0] for CI/CD automation of Terraform pull requests to a centralized repository. It's pretty good too, especially for a self-hosted solution. I can't see how Terraform Cloud's costs would be justifiable for us without a custom contract.

[0] https://www.runatlantis.io/


At my last gig we built a bespoke version of this on top of our CI provider. Now I'm a technical co-founder in a startup, this is something I haven't quite solved yet, and this looks like exactly what we're after. Thanks!


How does this work? I thought the Terraform state file was the single source of truth - if people are applying Terraform "manually", I assume that means on their local device? Are people sharing around the state file without a central location for a lock file? Apologies if this seems obvious...


My assumption is that they're still using a remote backend for state, but they haven't set it up to use the locking features of the backend.

For example, I've used S3 as my Terraform backend for years, but I've never bothered to set up the locking feature, which uses DynamoDB.

In a small team that deploys Terraform changes rarely, you may never encounter the problems solved by using locking. Maybe good communication and a Slack channel works well enough for you.
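
If you ever do want to turn it on, the lock table itself is tiny - roughly something like this (the table name is a placeholder; the hash key must be called LockID):

    # Minimal DynamoDB table for S3 backend state locking.
    resource "aws_dynamodb_table" "terraform_locks" {
      name         = "terraform-locks"
      billing_mode = "PAY_PER_REQUEST" # no capacity planning needed for a lock table
      hash_key     = "LockID"

      attribute {
        name = "LockID"
        type = "S"
      }
    }

Then you point the S3 backend's dynamodb_table argument at it.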


As others said, we checked the state file into git. It was up to you to ensure you pulled after taking a lock.


One can just commit the lock file to Git or use any backend (like S3, HTTP, ...) to share the lock file.


StackOverflow famously manages database migrations this way; you post a message in a shared channel and get the next revision number.


I think Terraform Cloud (and most of Hashi's enterprise offerings) aims at absolute behemoths of deployments with many, many infra teams, where the complexity comes from scale and the companies are not Google or Facebook, and therefore the problem can't be solved through talent. For many such enterprises it's easier to throw money at the problem.


Except those same companies then run into hidden limits that exist on Terraform Cloud and hadn't been previously mentioned. Honestly, even bigger companies are better off going with Spacelift as it can scale extremely well, has reasonable support, and is much more feature rich.

As a disclaimer, I'm currently writing a book on Terraform, and I've been interacting with a lot of the people in the space. Up until a few years ago I was a huge fan of Hashicorp, but their price changes and lack of support were what made it so I couldn't recommend Terraform Cloud anymore. The license changes they made were the icing on that cake.


You would think that, but having just gone through trying to implement Terraform Cloud at work, its tools for doing anything that approaches behemoth deployment are abysmal. Permissions management is a nightmare, and any sort of orchestration between different layers is barely there.


Then they should price them with smaller margins and make it up with volume.


Although it's certainly not always the right strategy, it's certainly a valid strategy to serve a smaller number of relatively price-insensitive customers (and provide them with ancillary services/high-touch sales/etc. that they want). The fact that you may not personally be on the market for that sort of thing doesn't mean there isn't a market.


In particular think major regional non-technical companies, like grocery store chains or logistics companies or hospital networks. Beyond basic help desk support none of their IT is in-house and they don't want it to be in-house any more than an IT company wants to run an MRI machine for its employees, so they have dedicated a block of money to making that problem go away and don't really care how much it all costs as long as it comes in under that line.


I personally like what render.com is doing with IaC through their blueprints format. It's definitely a kid's toy version of IaC, but it's a really nice step up from Heroku.


It appears to be proprietary.


Yes. Still a great tool for small dev shops


Why would you pick a proprietary solution for a small dev shop?


Less people available to own self-hosted solutions.

Proprietary also can mean a well-funded product with money allocated to DevEx and ease-of-use.


> Less people available to own self-hosted solutions.

There are other ways that are not proprietary and don't require self-hosting.

> Proprietary also can mean a well-funded product with money allocated to DevEx and ease-of-use.

How did that work out for Heroku?


I wouldn’t lay blame for Heroku’s financial problems at the feet of architecture choices it made customers adopt.

Also, Heroku’s buildpacks were anything but proprietary. They’ve been adopted by a bunch of other products.


What financial problems? They were bought out by another company and tanked. This is the exact reason why you don't want to tie yourself to proprietary solutions.

Sure, the concept of buildpacks was copied to other platforms (I personally worked on building one of those platforms, Cloud Foundry), but they were a proprietary solution that others adopted for ease of transition off of Heroku, not because it was some great solution to IaC.


This is a bizarrely combative way to discuss this. Can you please instead tell me what you think I should be doing?


Why would I care about proprietary/opensource if it's affordable and saves time as a small dev shop? I just wanna ship and the company might be dead before any of this matters.


It is mission critical, just not the SaaS version.


ZIRP is over and SaaS is entrenched. Prepare for a LOT more rug pulling and price gouging.

The labor and capital cost savings for moving to the cloud and SaaS were to get you there and get you dependent. Now that you don't have in-house IT anymore, it's time to turn the screws. You will soon be paying 2-3X what self-hosted internal IT cost.

Then the pendulum will start to swing the other way. This is one of those endless cycles in software and IT. Get ready for Harvard Business Review articles about how much someone saved by exiting the cloud.


What is ZIRP?


Probably it is a "Zero Interest Rate Policy"

https://en.m.wikipedia.org/wiki/Zero_interest-rate_policy


Zero interest rate policies - cheap debt


Interest rates being near zero


It never really swings back the other way. It slows down adoption but we’ll never see large back-in-house IT projects. Because in-house offerings (and open-source) aren’t up to the task anymore.


It's entirely possible that in-house offerings aren't up to the task largely because the last 15 years have seen tens if not hundreds of billions of dollars invested in the idea that SaaS is cheaper and better.

Now that the VC subsidy is over, and we're into the "gouge all those billions back from the people you've hooked onto your drug" phase of the adoption cycle, it's entirely possible some investors and technologists will see an opportunity in helping companies break out of that situation.

So we may actually see some investment and effort in bringing in house tech up to par.


GPU and data storage costs are driving folks back on premise, maybe not to the same scale as before yet, but when you already need a colo for these services... Might as well bring back compute too.


Until very recently we were exclusively deploying on-prem and had been for over 20 years. The amount of tooling we had to write ourselves to get even a semblance of modern devops practices is insane. Absolutely no one supports the deployment methods and requirements we have, so it was either hacking existing things to do what we want, or writing our own stuff.

My team maintains our own Terraform providers, Ansible playbooks, bare metal management infra, CLI tooling, network automation, ITSM integration, on-prem Kubernetes running on metal in our own DCs, on-prem CI/CD, on-prem Gitlab, evolving IDP, etc.

SaaS, what SaaS?


I disagree, there are great offerings out there. OpenStack has stagnated a bit as things have radically shifted to the cloud, but it's still a good, scalable, and supported option. VMware can also get you pretty far. You can also go hybrid and have the elastic scalability of cloud with the low cost of on-prem. Products like OpenShift can make this seamless to developers, and easy to manage and maintain.


In-house IT isn't up to the task of building systems that are infinitely scalable but as the price goes up lots and lots of shops are going to realize they don't need infinitely scalable, they need the scale they're at and have no reason to want to grow it. In-house IT is great at that.


I feel like people also haven’t updated their mental models from 20 years ago about what a small server can do.

A Raspberry Pi 5 has more power than a huge NT back office server in the late 1990s or early 2000s.

The cloud hid 20 years of progress more or less. Things got more powerful but cloud prices didn’t come down at the same rate. Cloud providers pocketed the difference.

The void for on prem is mostly in the software. There just aren’t good management solutions or modern devops stacks for it. The hardware is way more than adequate. It’s also a lot more reliable than it used to be. Spinning disk is dead unless you are warehousing massive amounts of data.


On a lark I made a Ceph cluster a couple of years ago with a dozen RPis, a dozen commodity 1TB USB drives, and a SOHO switch and... it worked. Like, it was a joke project one afternoon, but suddenly we had a 3X-redundant 4TB S3 pool that was faster than our WAN connection and just took switching out an HD or RPi once a quarter or so, whenever Nagios lit up that one had failed.


Yep, but companies need to pay tens or hundreds of thousands a month for AWS to run it for them.

I feel like we are ripe for another turn of the on-prem/off-prem wheel.

Then, of course, once it's all back on-prem we will once again have price gouging monopolies (probably in software) and someone will have the bright idea of moving everything to... what will they call it next time? Maybe "the grid" instead of "the cloud?"


And what is the ratio of services that need to scale infinitely to those that do not? At least if you are for example in more traditional internal software development?


Almost none do. Internal software never does. External-facing software only does if the intended user base is "the world".


The "task" changes from time to time as well. Creating crazy new things, then figuring out they're too complex and then reinventing them simpler is another endless cycle, much older than software development itself.


Ahh but the big boy bare metal companies are catching up. Nutanix and the like can simplify on prem deployments and provide cloud-like experiences, though I admit nowhere nearly as polished yet.


Just use Atlantis, it’s really great. My company switched from Terraform Cloud to maintaining an Atlantis instance and it made things so much smoother, and it’s OSS.

https://www.runatlantis.io/


Maybe I'm missing something, but although Atlantis seems great, you have to expose a webhook to the open internet that points to a service with full admin access to your infra. If an attacker finds a security issue with Atlantis and decides to abuse it, you've basically given them admin access. For that exact reason, Atlantis is a prime target for vulnerability exploitation.


You can put it behind something like Cloudflare and make the URL something that can't be guessed, but yeah, it is not the best. I really wish GitHub would publish a list of IPs it calls from.


Now we have to pay for every single GitHub issue label, for every single Terraform workspace tag, etc., which we manage with Terraform. Basically, their pricing forces people to do more stuff outside of Terraform and incentivizes ClickOps. I'm no longer going to use TFC as there are plenty of open-source alternatives! We have no choice but to pay this year, although we're looking to get rid of TFC and evaluating competitors. I don't like their TFC Stacks feature, which is also patent-pending. They limit Terraform so that you must pay to get your problems solved with TFC. It's a nasty scheme that will cost them dearly! They are like Docker as a company. Vault now has a bunch of competition as well.
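
For a sense of scale under resource-based (RUM) pricing: every one of those labels is its own resource in state, and therefore its own billable unit. A hypothetical sketch using the GitHub provider (the repository and label values are made up):

    # One managed issue label = one "resource under management" on the bill.
    resource "github_issue_label" "needs_triage" {
      repository = "example-repo"  # hypothetical repo; the owner comes from the provider config
      name       = "needs-triage"
      color      = "d93f0b"        # hex color without the leading "#"
    }

Multiply that by every label, tag, and other small object you manage and the count climbs fast.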


The era of startups charging below a fair price (to deliver the service) in pursuit of "growth" is coming to an end. This was the end game for a lot of these companies operating in the ZIRP environment from 2009-21: underprice your service, lock customers into your service, hope it will be too difficult to migrate away, and when the dust settles, jack up prices to reach fair value; of course, if everything fails, sell to a large company who will have no problem enforcing that. We will begin to see waves of these re-pricings as the VC-funded subsidies come to a close.


Maybe. Or maybe the practice of passing the cost of internal bloat on to the customer is coming to an end?

Look at their gross vs. net margin. Where is the money spent? How many engineers vs. non-engineers do they employ? Did they acquire naming rights to a stadium?

My company was just hit by an egregious 6.5x price increase from one of those managed open source companies. A cool $7m per year delta. We will feel sore for a year, since we were told two months in advance of the new pricing, but we will migrate off it easily given the demonstrable ROI. I didn't even need to ask the C-suite for the migration money, they were so pissed.


Can confirm. Had a good experience using Terraform Cloud on personal projects. Tried to bring it to my Big Co team but leadership / accounting couldn't get over the sticker shock from their resource based pricing.

Azure DevOps does a good enough job for us.



Fun read, but sad in a way.

The infrastructure and people that 'back end' your organization have value. This is intuitively true, but for the folks who are customer facing (and often senior management) there is often a minimization of that value.

Open source really cut the cost of that back end, and reinforced the uninformed opinion that this stuff wasn't worth all that much.

Once you're tied to a platform because it's "cost effective", that platform can then say, "Okay, now it's time to start charging you for the real value we provide." That leaves the operations folks going to management saying "Hey, our costs are going way up, kthxbye", and usually starts an uncomfortable education cycle where the real cost of running the business gets penciled out. Sometimes it kills companies.

The <thing><as a service> provider gets to set pretty much an arbitrary price on their value.

If you have followed the saga of the music industry, you know that the "labels" who hold all the copyrights (generally) always win at keeping all the money for themselves, and thus kill business after business that would help the artists. These as-a-service (AAS) companies can do the same thing. It's one of the interesting things for me about 0xide and making 'on prem' a thing again.

It feels like another shift is coming and this is a preview of what the motivations for the shift will be.

And lastly I'm not a fan of AI generated illustrations in a blog post.


> I'm not a fan of AI generated illustrations

Why not? I thought that the illustrations fit the article better than the bland and irrelevant stock photos people usually use.

As an aside: I figure that in about 3-4 years, no one will be able to assert that they do or don’t like AI generated art because there won’t be any way to know if it’s man made, machine made, or reality.


I'm going to disagree, it will be trivial in 3-4 years to recognize that no one blogging is going to make enough money from blogging to pay a human artist to do illustrations. :-)


Shameless plug: Co-founder of Terrateam[0] here. We have flat rate pricing with unlimited operations and users. Terraform Cloud pricing is absolutely crazy.

[0]: https://terrateam.io


You don't seem to support monorepos, similarly to Atlantis and others, but a Terraform monorepo is a common practice, i.e. one PR could modify multiple workspaces.


When they changed pricing from seats to resources, we migrated to self-managed.


> I wouldn’t at all be surprised if someone creates a self-hosted clone of TFC

This already exists[0]. Still in development but it looks promising.

[0] https://github.com/leg100/otf


They are copying the Pulumi pricing, which is insanely expensive as well. I don't blame these companies for trying to make money, but it's one thing to codify the infrastructure; it's another thing to pay for infrastructure as a service (on top of your IaaS cloud).

I think there are only two paths forward: host your own (short term), and cloud providers offering this for free / lower cost (long term).

I think Google and AWS and the others will be strong at this in 5 years and will direct customers to IaC more than clicking through a website, or simply offer better tools. The IaC solutions today are just bandaids on poor user experiences in bigger products.


Just as a quick fact check there: Pulumi may have a similar pricing model, but if you're claiming that means HashiCorp copied the Pulumi pricing model, then you must also conclude that Pulumi copied the original HashiCorp pricing model.

Resource-based pricing was the original pricing model for what is now Terraform Cloud (Atlas as-was).


Shameless OSS plug time. I made a small utility many years ago to solve the Terraform state problem in CI by using a file in S3 with an expiration timestamp as a locking mechanism. Obviously this only scratches the surface of what Terraform Cloud does, but it might be enough for many teams. Feel free to fork it for your own purposes: https://github.com/evantbyrne/terrarium


I managed my Cloudflare "zonefile" (mix of manual records and flags, but ends up with 50+ resources per domain) with Terraform Cloud when this change was announced.

Very shitty that a managed Kubernetes cluster and a domain record can supposedly be boiled down to the same value now. My state is in S3 + DynamoDB now.
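
For anyone curious how a single domain turns into 50+ billable resources, it's roughly this pattern (a sketch only - the records and the zone_id variable here are made up, using the Cloudflare provider's cloudflare_record resource):

    locals {
      # Hypothetical records; a real zone easily has 50+ entries like these.
      dns_records = {
        "www"   = { type = "CNAME", value = "example.pages.dev" }
        "api"   = { type = "A",     value = "203.0.113.10" }
        "stage" = { type = "A",     value = "203.0.113.11" }
      }
    }

    # Each entry becomes its own resource in state - and its own billable
    # "resource" under per-resource pricing - even though it's just a DNS record.
    resource "cloudflare_record" "this" {
      for_each = local.dns_records

      zone_id = var.zone_id  # hypothetical variable holding the Cloudflare zone ID
      name    = each.key
      type    = each.value.type
      value   = each.value.value
      ttl     = 1            # 1 = "automatic" TTL in the Cloudflare provider
    }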


There are a few fast-growing and very profitable companies building products that do nothing but “abstract” TFC.

Inertia is stronger than we think. Large corps would rather pay more than move to another (cheaper) solution. The downside is asymmetrical (and catastrophically so), so no CTO or CEO would take such risks.


In my experience in big-co, the decision makers on something like TF are going to be constrained by cost. They'll be picking between many different tools and they are incentivized to provide as much value as possible.


Was there actually a price increase recently and not 6 months ago?

If people just want to vent about Hashicorp/Terraform, it seems like a text-post would be sufficient for that.


It was announced a while back but the deadline to migrate to the new billing model is EOY.


Ok, that makes more sense, but if people haven't migrated away by now... the odds seem increasingly likely that they won't migrate in time to avoid that deadline.


Meanwhile my dumb corporate employer is still moving perfectly good internal applications from our data center to AWS. Productivity is tanking.


Can someone say what's happening with the other tools in their stack? Consul, Nomad?


Don't read - there is AI art there.


What if I'm not reading it for the pictures?


Are you boycotting articles with AI art? For any particular reason?


The dude probably ChatGPT-ed the entire thing. I might as well ask those tools directly.


Every word came from my own keyboard. It takes an hour or more to write a post, and I don't have the time to hunt down the artwork - I lose enough sleep over my tech job, and figured adding some fun art stuff would only be worth it if it didn't add to my stress. I do apologize if some folks are offended by the use of AI-generated art, though - I've created art myself in the past and the whole idea bugs me on a deep level, so I'm a bit conflicted on that.


Two options are acceptable for me personally:

1) "Crediting" the source models explicitly.

2) Generating away or cleaning up afterwards: incorrect spelling, hands with unnatural fingers, nonsensical devices/symbols, etc.


Thanks for this - I'm pretty new at blogging and it never occurred to me to credit the model. Next time I get on there I'll fix that.


That's quite the take. I suck at drawing but write words goodly, and can imagine using AI to make pictures to pair with the words I've hand-written.


Usually people "credit" the source model in a label underneath each image or in front of a text.


oh no not AI art


It's good art though?


Been using Terraform with Atlantis for years and years at this point. Highly recommend it.


Just use CloudFormation.


Please don't take this advice as gospel. CloudFormation has a few problems:

- it only lets you manage AWS resources. And even if you’re not running multi-cloud, you’re bound to have a bunch of non-AWS resources like a DB vendor somewhere that you want to put on IaC as well

- it’s slow as molasses

- even if you feel HCL is bad, it will probably not be as bad as the huge swathes of YAML/JSON CloudFormation has you write

- despite what the console makes you believe, drift detection is practically nonexistent


These are all symptoms, but not the cause. The causes, IMHO, are the following: an industry-wide culture (of waste, and of pride in that waste) and practices that are hailed and seen as the only, or at least the best and recommended, way to do things:

- Investor-driven companies whose investors expect a 10x return on their money in 10 years at most, or less. Or else.

- Microservices, where everyone, instead of collaborating on a single monolithic app, is needlessly invested in microservice systems that result in software components that are "loosely coupled", have no transactional guarantees, and can't go more than one caller down in depth, and thus can't be stacked on top of each other.

- The DevOps movement: developers are also operators. So now you need controls and products (a kindergarten, if you will) where developers can provision infrastructure at will for the microservices they came up with (along with a cool name - could be some Star Wars, star, galaxy, or Greek mythology reference, anything goes; the point is that a peasant shouldn't be able to decipher the reference, if there is one).

For the first one, you definitely can build certain types of tools without tons of venture capital, or at least command-line tools can go that way even if they have a web interface.

For the second, it's highly disputed and controversial - the jury is out. But I think most (and please pay attention to "most") companies and business domains can do without microservices.



