I'm thrilled to see that OpenTF appears to be aiming to win on positives, not negatives. They could be focusing on the licence and their perception of Hashicorp, but instead there's a lot of emphasis on the positive: getting up and running quickly, upcoming releases, public roadmap, and pledges of engineering work.
In the long run few will care about this licencing change (as much as they should!): business users will accept it, individuals won't take any notice, and competitors would get mired in legal issues, as expected. Basically a win for Hashicorp.
But winning on positive changes beats licencing issues. Business users will go where the centre of gravity in the ecosystem is, individuals will too, and will prefer free (in both senses) solutions, and competitors will push this hard to all their customers. It'll be interesting to see Hashicorp needing to build OpenTF support into their products to remain compatible.
> So far, four companies pledged the equivalent of 14 full-time engineers (FTEs) to the OpenTF initiative. We expect this number to at least double in the following few weeks. To give you some perspective, Terraform was effectively maintained by about 5 FTEs from HashiCorp in the last 2 years. If you don’t believe us, look at their repository.
Wow. Even if most of this doesn't actually play out in the long run, that's some good support.
> They could be focusing on the licence and their perception of Hashicorp
They literally do:
"HashiCorp even had all contributors sign a CLA which explicitly said (link to the CLA in the Internet Archive as HashiCorp has of course removed this wording): [...]
The move to BUSL—which is not a free and open source license—broke the implicit contract. That was the brash action!
Terraform would've never gotten the adoption it did, or all the contributions from the community had it not been open source. Most of us would've never agreed to the CLA to contribute to the project if it was BUSL licensed. Taking all those contributions and all that community trust, and then changing to the BUSL license is a bait and switch." [1]
I agree with the overall sentiment, but they could've left out all the judging side comments.
In the announcement literally the only mention of this is:
> The manifesto outlined the intent of the OpenTF initiative in two steps — the first was to appeal to HashiCorp to return Terraform to the community and revert the license change they were making for this project. The second, in case the license was not reverted, was to fork the Terraform project as OpenTF.
> Since no reversal has been done, and no intent to do one has been communicated, we’re proud to announce that we have created a fork of Terraform called OpenTF.
What you’ve quoted and linked to is the manifesto before this announcement.
They could have played the announcement very differently, but they chose not to, and chose to focus on the positives instead.
It’s not even in the announcement FAQ.
> they could've left out all the judging side comments.
That is literally what they did.
I imagine the manifesto will disappear now that it is, obviously, superseded.
Just give it a tiny bit of time, yeah? Come on, what they’re doing seems like good work, well played and without nastiness.
OpenTF core member here. To build on this comment, we've tried to intentionally focus on positively meeting our needs, not on negatively vilifying another company for trying to meet their needs. You can see a good example of this in https://github.com/opentffoundation/manifesto/issues/165#iss....
We do not fault HashiCorp for trying to build a thriving business and we respect their amazing achievements and contributions. However, on this one issue of the Terraform license, we do not agree with their position and believe strongly in the need for a truly open source, community-driven Terraform.
Of course, this is untrue. They may have had all contributors after some specific date sign one, but it is simply not true that all contributors have signed one.
Remove the code from the repo, put it in another repo, probably. I don’t remember signing a CLA 7 years ago, but somehow they relicensed without me needing to agree. All my code was pulled out into a separate package/plugin that is still MIT.
All HashiCorp have done is made their own future contributions subject to BSL, not relicensed any previous contribution, nor do they have the ability to do that without copyright assignment of each and every contribution, which they do not have (specifically I have never signed a copyright assignment, and nor has at least one other of the top ten contributors of all time).
Nothing about MPLv2 prevents inclusion in a commercial product provided the attribution and file-based copyleft provisions are followed.
> All HashiCorp have done is made their own future contributions subject to BSL, not relicensed any previous contribution,
There are only 2 .go files left in the hashicorp/terraform repo on GitHub which still have MPL-2.0 in their license headers, so either there are virtually no such contributions left in Terraform or they have in fact attempted to relicense code.
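For readers following along at home, per-file licensing in a Go codebase like Terraform's is tracked with header comments; the sketch below (the package name is a placeholder, not real Terraform code) shows roughly what the MPL-2.0 vs. BUSL-1.1 distinction looks like in practice:

```go
// Copyright (c) HashiCorp, Inc.
// SPDX-License-Identifier: MPL-2.0
//
// A file still carrying this header remains under MPL-2.0 regardless of what
// the repository-level LICENSE file now says; files that were switched over
// instead carry "SPDX-License-Identifier: BUSL-1.1".

package placeholder // hypothetical package name, for illustration only
```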
Replacing code whose license you don't want is definitely a fair-and-square way to manage a project and change the overall terms under which you can distribute it. (As it happens, that tactic is often essential to efforts to open-source formerly proprietary projects, too.)
*Provided that the replacement is not simply a rewritten version of the same code that achieves the same result through overwhelmingly similar methodology.
Is that something you can do? I wouldn't have thought their previous license would be compatible with the BSL, and so distributing the software under the BSL wouldn't be possible?
MPLv2 is file-based copyleft, and there are no restrictions on including it in a commercial work provided that changes to MPLv2-licensed files are accessible per the terms.
Indeed it’s something of a spiritual successor to CDDL which was designed precisely for that purpose (although reliable sources also say it was designed to be GPL-incompatible, so that is literally a he-said/she-said case).
Changing the license in the repo does not change the license of files you have already distributed under the MPL. That would require them to revoke all those licenses, which they can't do under the terms of that license that they've already agreed to. This entire thread and OpenTF, and maybe all of OSS itself would probably not exist if that were not the case.
In the official repo I can still check out an older commit with the MPL on it, which means they are still distributing code under that license, but it wouldn't matter if that were the case or not under the law.
edit: In fact, software as a business (and many other professions) would not exist if license terms could be changed or revoked after distribution. License terms for software yet to be distributed can change, sure. But short of a breach of contract as outlined in the license itself, you cannot change a license you have granted.
That's not what they mean. They went through every file in the repository and updated the license to BUSL from MPL.
Unless they have approval from each contributor for a given file (or the contributor signed a CTA or CLA), they cannot re-license that particular file.
It's still MPL and only once all contributors approve or the MPL code is removed can it be re-licensed to an incompatible license.
Of course, if they had wanted to move to a compatible license, that is an option for a lot of FOSS licenses. But regardless of whether that is an option for the MPL, they are moving to an incompatible license, so they could not do this.
----
So their only options for files that they can't convert to the BUSL from the MPL are:
1. Keep contributions to those files under the MPL but require contributors to sign a CTA or CLA guaranteeing those contributions can later be relicensed.
2. Move BUSL contributions out of the file and only keep the MPL contributions in that file until they've all been replaced.
Strictly speaking it is on Hashicorp to prove that they had the right to re-license the code, which would require listing all contributors and whether each gave written agreement to re-license the code or had signed a CTA or CLA.
To my knowledge they have not done that so anybody whose code is being used without permission could fight them on copyright infringement.
And worth noting is that they don't have a particularly small number of contributors. Terraform has over 1700 contributors and many of the other repos have over 1000 contributors or close to 1000 contributors (unsure of the overlap).
If they only started requiring a CLA or CTA later into their development, they probably haven't gotten permission from all the rights holders.
Can anyone fork and change the license? I assume yes if originally MIT or similar.
I wish that were not the case. Ideally, in a situation like this, contributors would get paid retroactively like employees, but I have no idea how something like that could be structured.
Strictly speaking, no. MIT does not give you permissions to change the license. But you can incorporate MIT licensed code into a project that contains code that is under a different license of your choice, open or closed source for a nearly identical effect.
So you can just license all future contributions under AGPL and use the MIT licensed code, effectively making the combined work AGPL. But the MIT part remains MIT, anyone could fork your project, remove all AGPL code and return to MIT. It’s wise to clearly mark which code is which, especially since MIT requires that the copyright notice must be retained:
> The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
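As a rough sketch of that marking (the file, package, and author below are all made up), an MIT-licensed file kept inside an otherwise AGPL-licensed Go project might carry a header like this:

```go
// This file comes from the hypothetical upstream "examplelib" project and
// remains under the MIT license; the rest of this repository is licensed
// AGPL-3.0-only, as stated in the top-level LICENSE file.
//
// SPDX-License-Identifier: MIT
// Copyright (c) 2016 Example Author
//
// The full MIT permission notice is retained alongside this notice in the
// accompanying LICENSE.examplelib file, as the MIT license requires.

package examplelib
```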
It's support from companies who make money from Terraform being free, and at least one looked like a direct Terraform Cloud competitor, whose competition probably hurt Hashicorp's chances of making Terraform profitable.
It makes sense they'd do this, but we need to stop loving people who appear to give us things. They're just making reasonable business decisions.
> This competition didn’t hurt Hashicorps chances of profitability, they were factored in from the beginning.
They weren't, actually.
From a Hashicorp FAQ article (which is a transcript of a video interview which has since been delisted from YouTube) titled 'Why is HashiCorp committed to open source?':
> Mitchell: [I]t's always sort of been a default for me. [...] When we were starting the first projects, we both didn't intend to ever start a company around them. There was no monetization goal at all, and so I think open source was an obvious default then.
That feels very icky to delete that at this very point in time. It’s PR, it’s spin, very corporate, very untrustworthy. They could have left it up and responded to it, but deleting is manipulation in this case.
Agreed, although there's no good way for them to leave that up.
No matter what addendum they add, or how they mark it as archival, it reveals that HashiCorp was not actually committed to open-source like they claimed to be. Their word was 'committed', but in actuality open-source was just a feature of the license they happened to be using at the time. So that page will always invite questions about what else they say they're 'committed to'.
Leaving it up but changing the title would seem underhanded in the same way. Leaving it up and changing nothing would be confusing and look like an obvious oversight.
To complement sibling post, some other stuff that I think makes it clear that open-source was not one of HashiCorp's values even before this change:
DM@07:30: So how we think about it is, we're kind of obsessive about journeys. [...] And we decompose the personas into two: there's the practitioner, and then there's the decision-maker. The practitioner, you're trying to progress through a journey of discover and learn, try and trial, use and advocate— and you do not care if they buy anything from you. The decision-maker you're trying to progress through the why/try/buy journey. [...] It is a pre-dredged river, it is not fair; they do not know what is happening to them. They are not going where they think they are going— they are going where I want them to go. [...] And it turns out it works.
GS: Is there anything specific, do you think, about open-source that you layer in?
DM: For us, open-source is really just a distribution channel. (Also it's a development channel, but predominantly it's a distribution channel.) [...] So I don't think it matters whether it's open-source or not, as much as just being really, really obsessive about the digital videogame that you play.
--
On the one hand, it's nice that Dave McJannet is so up-front about these strategies, how manipulation is a part of them, and how much market positioning is about pushing people around to retain control of an ecosystem.
On the other hand, he sounds like a pushy, arrogant asshole tbf.
Sure, some of it. The most important strategy messages are in uppercase.
DM: Dave McJannet, CEO at HashiCorp
GS: Glenn Solomon, GGV Capital Managing Partner
15:22 GS: "...reminds me a little bit of you know when you when you listen to Mitchell and Armand talk about the early days with the open source and how they were kind of you know hand to hand go really going going for meetup to meet up and trying to get feedback from users..."
15:43 DM "yeah I think open source lends us up really well to the rapid open feedback loops which is one of the reasons why I think the best products are built in open source YOU HAVE TO BE REALLY CLEAR ON WHAT THE MONETIZATION PATH IS but but the feedback groups are hard to beat um you know I think the the neat thing about that we've done is we actually uh we did it on six different products and now eight so we've kind of run the same..."
16:18 DM "how do you get that minimum viable audience right that there are different ways you can do that in the early days we did that through social media um I have to talk about that one um and you know content and entertaining people but I think this notion of also building Network effects into the products is the other reason so yes we went knocking door to door but it was deeply considered in terms of how you integrate..."
16:43 DM "think about how terraform works as an example there's terraform core and then there's a plug-in for every piece of infrastructure in the world THAT'S ON PURPOSE RIGHT AND THE PROJECT'S ACTUALLY CONSTRUCTED THAT WAY SO THAT OTHER PEOPLE CONTRIBUTE AND COLLABORATE so there are lots of different ways to do that but ultimately it's about Network effects and how you drive it and then..."
17:11 GS "Dave start with you maybe uh talk a little bit more about like uh what what that TERRAFORM PROVIDER COMMUNITY is built um uh how you get third parties involved and like what what's been some of the secret sauce that's made it so Central"
18:08 DM "... so I went to Microsoft and I went to Oracle and I said hey I think we can help you I went to Google we can really help you because you're way behind and and and we actually did BD deals with them to to get into their communities to make it relevant to their communities and IT WAS DELIBERATE there are only four communities that mattered at that time and we kind of played them off against each other then we invested deeply in this digital distribution channel OF A MALICIOUS USER JOURNEY to success for people it was a combination of that kind of the anchor tenants plus the big investment in the digital experience plus you know we you know we to this day we probably have 60 people..."
18:54 DM "today yeah so today there are about two and a half thousand terraform providers out there in the world we develop about five of them um because what we were able to do is WE'RE ABLE TO FLIP THE MARKET AGAIN THIS IS A LOT IT'S A LONGER LONGER DESCRIPTION IT'S LIKE IT'S IT'S VERY MALICIOUS IT'S THE RIGHT WORD so what we did is we said hey we decomposed the project into core Plus providers because that's how you do it in the open source Community we control core outright and play every committer but anybody can contribute to a provider but number one we control the certification process right nobody else can certify it number two we then when we got to you know we knew if we got to 200 The Fortune 500 using terraform to interface TO AMAZON IT WAS OVER BECAUSE THEY'RE IN CHARGE not Amazon and so you were then able to force every isv in the world to say if you want to be part of you know JP Morgan's Cloud program you have to build a terraform provider so it was a combination of a few things like that but IT'S VERY VERY DELIBERATE SORT OF THE ARCHITECTURE of the projects owning the certification process and then then owning the key communities that you care about..."
They should have been. Them being irresponsible early on is negative for the community.
Cynically, I believe they were intentionally stupid, knowing that their tool would only become widely used if open source (who wants infrastructure as code locked into some proprietary thing?).
One of the things that comes through for me in that FAQ, in Mitchell's story about learning to code by reading and playing with open-source projects, and in Armon's assertion that this license change is continuous with the company's original 'open-source ethos', is that this has always really been more about 'source availability' for them. I think they're credible, in a way.
I still think of open-source in a traditional, historically informed way: 'open-source' is about software freedom at bottom, even though its name and common arguments in its favor are more practical and self-interested.
I get the impression that Mitchell Hashimoto never saw it that way, and certainly didn't buy into the actual 'open-source ethos' in the sense of the values of the people who decided the term and founded OSI, etc.
A source-available license like BUSL probably would have been a better fit for HashiCorp from the start, or as near the start as possible. But I don't know that 'source-available' as a label could have driven adoption the same way in the early days, and it seems plausible to me that the founders might've realized that too.
> A source-available license like BUSL probably would have been a better fit for HashiCorp from the start
My problem with source-available licenses is that it makes the software miss out on the network effects of contributions.
I’m not going to contribute to source-available licensed software just like I won’t contribute to Windows (source code is available to see but they don’t even accept contribs).
I like contributing my time to communities and to building things together. I make pretty minimal contributions because I’m paid to write software for an organization and don’t have time to contribute meaningfully.
Even if hashi worked out some way to compensate me for my contrib, I don’t think I would bother because the amount would be negligible.
So if the idea is to have high quality software due to network effects, source available doesn’t seem like a good fit for this.
It also seems moot to me because decompilers have been able to get me the source of things that aren’t open source. Being able to view the source code isn’t as important to me as being able to work on things as a community.
Having a single corporation reap all the benefits of a community is not something I want to work on.
Totally agreed. But we've also gotten clear signals from HashiCorp that they aren't very interested in community contributions for a long time now, like removing commit access from community maintainers years ago and no longer allocating company time for reviewing community contributions.
They liked the idea that customers could try before they buy, and that customers could inspect the source code for debugging or other purposes, but they clearly didn't buy into the whole catb thing.
From where you and I sit, it's a missed opportunity. Perhaps Hashi had good reasons to believe that they had to be the ones to drive Terraform forward themselves, and that community contributions would at best play a negligible role, whether that's because core Terraform code is hard to work on, because the contributions they saw tended to be minor, or because they wanted all the core contributions to be their own for the sake of retaining control. But whatever their reasons, they weren't on the same page as you and me about this.
I mean, businesses change? The founders didn't have a crystal ball to see how every decision would play out.
Reading through these comments, I'm reminded of a pop psychology book that I read that essentially said "never try to take something away, no matter how small, it is perceived as a much larger loss than it is".
If Hashicorp had started with this new license in the first place, do we really think they wouldn't have had the business success that they've had? We'll never know, but my guess is that a license that says "competitors can't copy us" would seem totally reasonable to folks contributing, and irrelevant to customers that have a problem to solve. Someone correct me if I'm wrong here, I want to understand this obviously passionate response.
(Since I've commented a couple times on this let me also say I'm not a hashicorp employee nor know anyone there)
> I'm reminded of a pop psychology book that I read that essentially said "never try to take something away, no matter how small, it is perceived as a much larger loss than it is".
That's not just pop psychology. That's straight from "The Prince" by Machiavelli.
“Injuries, therefore, should be inflicted all at once, that their ill savour being less lasting may the less offend; whereas, benefits should be conferred little by little, that so they may be more fully relished.”
I don't know that it's at all clear how successful Terraform would be if it started with a more restrictive license. I mean there is a reason why a lot of projects start out with a different license and only change to BSL once a certain level of popularity is achieved, right?
As a possible contributor, and with all things being equal (which, really, never happens), I'd prefer a GPL-based product to have my time over a BSD/MIT license, because I really want my work to remain free. A BSL-based product would have to be very important to me to make it worth for me to dedicate time to it.
In reality, the usefulness of the product and friendliness towards developers is much more important.
Do you think crack dealers change their mind and suddenly discover they can’t make money off free crack? Or they give it away knowing that’s how you develop a customer base.
Bait and switch isn’t about the literal means of production. It’s about corporation and organizational dishonesty.
It’s lame that Hashicorp (and others) are fair-weather open-sourcers who start as open source to gain critical mass and then jettison their purported ideals when it benefits them.
Dishonest, because they trick contributors into working toward something that they may not have chosen to work on (and I think probably wouldn’t have) had Hashicorp been honest from the beginning.
I don't know what "literal means of production" is referring to here. Crack dealers don't generally control anything one might refer to as a "means of production"; they are more like resellers. Marxist terminology is a bit too much of a Duplo-brick description of reality to apply usefully here, or indeed in most places.
All the work done in this case can be forked into a different product, as is being done here. The reason crack dealers do what they do is they control the supply, not a "means of production". Hashicorp explicitly licenced away their control of the supply ten years ago.
> It makes sense they'd do this, but we need to stop loving people who appear to give us things. They're just making reasonable business decisions.
I don't see OP talking about 'loving people'. Just stating that a very good approach was taken. Wrt 'give us things', it is not about giving either (that's like free beer), but about freedom. If OpenTF indeed ends up under Linux Foundation / CNCF it doesn't matter how many competitors are involved, but that freedoms are assured.
Well, OpenTF hasn't given anyone anything there - the original open source licence Hashicorp put in place is why CNCF/LF can adopt the project. OpenTF is just an administrative middleman in that process.
My comment was about a previous commenter thinking it was amazing that other companies put FTEs in place for a few years to work on OpenTF. It's not amazing; their businesses were built on Hashicorp licencing Terraform as OSS. They've been given far, far more than a few FTEs' worth of effort, and their continued existence depends on OpenTF being actively developed. It's not a noble thing (unlike the original open sourcing); it's just business as usual.
env0 founder here; env0 is a direct competitor of Terraform Cloud, and I'm a core member in the OpenTF initiative. Thank you for your note. I wanted to mention that indeed env0 enjoyed Terraform being free, but also contributed back to the Terraform ecosystem, with github.com/env0/terratag OSS and TheIaCPodcast.com for education.
Also important to mention another and probably a more important key member in the OpenTF initiative - Gruntwork, creators of Terragrunt and Terratest. I believe we all contributed nicely to the community. Just my 2 cents, in order to add a bit more context to "companies who make money from Terraform being free".
Understood - but that was completely for your own gain. There's nothing wrong with that, but I don't like the mischaracterisation of HashiCorp as the baddies and this new entity as the goodies. Hashicorp just were too open and giving, and didn't have a way to profit from all the money they invested, the way you and others have profited from their investment.
I could not agree more. Hashicorp are not the baddies. They did what they decided was right for them. They have every right to do so. Also, what Hashi did for OSS in the last decade is amazing: they made OSS better and built many communities. Now it is time for something/somebody else to keep Terraform OSS. env0 is proud to take a significant part in this initiative (together with our friends in other companies and the support we got from the community so far), forking Terraform into opentf and donating opentf to CNCF/LF.
Yeah. I don’t really love that companies exploited something hashicorp gave away for free, then play good when they finally have to commit to the ecosystem when hashicorp takes it away.
Where was this support when they built a business on this free work?
Hello! I think your comment is missing some facts that might change your opinion:
1. Open-source software is a gift to the world. You make it, release it, and people can do whatever they want with it, including not contribute back. There is no exploitation here. You can build a trillion dollar business on top of Linux, without paying Linux anything. This is how open source is meant to work. This is the known contract when you release something as open source. To imply that by following the spirit of open source, somehow HashiCorp, a 5 or 6 billion dollar company is being exploited doesn't quite jive, IMO.
2. HCP has been clear that they will not put forth resources to review pull requests. They let many good pull requests languish until they die. If HCP were better stewards of the Terraform community and prioritized contributions, would things be different? I don't know. But I think if one is going to say the competitors should have contributed to Terraform, one also has to acknowledge that HCP has explicitly stated they are not going to prioritize reviewing your contribution. If you're looking for the best way to spend your engineering time, the best business decision for you is probably not to spend a lot of time on a piece of work that may die on the vine.
3. But it's not even accurate to say that none of these folks have contributed back. They may not have many commits in the repository, but Gruntwork has created Terragrunt, which is free and open source. This has impacted the community considerably. The founders have written a book on Terraform.
4. Where is HCP's acknowledgment of all of the people that saw Terraform as a stable open source foundation to build providers for, to build tooling for, to contribute pull requests to? Terraform itself is pretty simple, the hard work is in the providers. HCP makes some of the important providers, yes, but so many providers are made because Terraform is popular. HCP wants to make this all about them. They want you to think they put all the work into making Terraform what it is today. They did put a lot of work in, but the community did too!
> Terraform itself is pretty simple, the hard work is in the providers. HCP makes some of the important providers, yes, but so many providers are made because Terraform is popular. HCP wants to make this all about them. They want you to think they put all the work into making Terraform what it is today. They did put a lot of work in, but the community did too!
Could not agree more. Terraform, the tool, is quite simple, engineering-wise. It's the providers that interact with all the different APIs; that's where the complexity lies, where a lot of community (+ corporate, through official providers) effort has gone in, and what has provided the most value.
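To make the core/provider split concrete: a provider is a separate plugin binary that Terraform core launches and talks to, and all of the vendor-API logic sits behind a handful of CRUD callbacks. A minimal sketch using HashiCorp's terraform-plugin-sdk/v2 might look roughly like this (the example_widget resource, its field, and the trivial callback bodies are purely illustrative, not any real provider):

```go
package main

import (
	"context"

	"github.com/hashicorp/terraform-plugin-sdk/v2/diag"
	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
	"github.com/hashicorp/terraform-plugin-sdk/v2/plugin"
)

// Provider maps resource type names to the CRUD callbacks that actually talk
// to the remote API; Terraform core only orchestrates plan/apply and state.
func Provider() *schema.Provider {
	return &schema.Provider{
		ResourcesMap: map[string]*schema.Resource{
			"example_widget": {
				CreateContext: resourceWidgetCreate,
				ReadContext:   resourceWidgetRead,
				DeleteContext: resourceWidgetDelete,
				Schema: map[string]*schema.Schema{
					"name": {Type: schema.TypeString, Required: true, ForceNew: true},
				},
			},
		},
	}
}

func resourceWidgetCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
	// A real provider would call the vendor's API here; this sketch just
	// derives the resource ID from the configured name.
	d.SetId(d.Get("name").(string))
	return nil
}

func resourceWidgetRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
	// A real provider would refresh local state from the remote API here.
	return nil
}

func resourceWidgetDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
	// A real provider would delete the remote object before clearing the ID.
	d.SetId("")
	return nil
}

func main() {
	// Providers run as separate plugin processes that core launches over RPC.
	plugin.Serve(&plugin.ServeOpts{ProviderFunc: Provider})
}
```

The point of the sketch is simply that everything cloud-specific lives in callbacks like resourceWidgetCreate, which is why the provider ecosystem is where most of the community effort has gone.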
Hashicorp builds its Terraform business on the free work of thousands of contributors to various Terraform providers. It’s also true that Hashicorp never really accepted PRs to Terraform itself, so there wasn’t a way for these companies to contribute directly anyway until now. They are seizing the opportunity to do so now that it’s feasible.
I am sceptical that 14-28 FTEs will be dedicated to OpenTF. If I were to guess, it is 14-28 people working on their companies' offerings and, when needed, upstreaming bug fixes and features and reviewing PRs.
env0, a small company, is providing 5 FTEs; the web claims their revenue is <5m (1.4m).
We actually specifically don't want these to be our employees. Rather, we want the foundation to employ the maintainers (whom we'll pay for), partially because of the bad incentives the former approach creates.
Disclaimer: Work at Spacelift, and currently temporary Technical Lead of the OpenTF Project, until it's committee-steered.
env0 founder here. env0 raised $42M (from investors such as Microsoft) and is growing fast. We are honored to support OpenTF and have already assigned 5 engineers (you will see the usernames in the opentf GitHub repo once it becomes public).
> We completed all documents required for OpenTF to become part of the Linux Foundation with the end goal of having OpenTF as part of Cloud Native Computing Foundation
Imho the best possible choice, and one that was easy to see coming when they announced they were joining "an existing foundation".
I agree. I have been outspoken in my criticism of the LF -- but I would like to think that I have been fair as well, calling out cases where they are the right fit for a project. And in this case, for myriad reasons, they are indeed the right fit. Kudos to the OpenTF crew -- we loved speaking with some of them on Monday[0], and left enthusiastic for the future of OpenTF!
Personally speaking, I wish it had been OpenInfra.
I think they have a stronger sense of community building, and more intentionally make space for individuals to take leadership roles that are truly independent from their employer (member company) affiliations.
As someone who has been involved in many open source foundations over the years, they all have their pros/cons. If you are looking for the most customized approach, the Linux Foundation (LF) is probably the most customizable as they have built hundreds of entities that many people may not think of being part of the LF... CNCF... GraphQL Foundation... R Consortium... OpenJS Foundation... Overture Maps Foundation... LF is really "foundation as a service" and they are best at ecosystem building from my biased perspective.
There are other foundations out there with their own advantages... ASF is very lightweight and pretty much accepts anything open source as long as you adhere to their fairly simple rules... EF is great if you have a need for a European base etc
Seeing OpenTF on the CNCF website would be glorious. TF has become too ingrained in the cloud operating model, CNCF is where it belongs. Maybe a Vault fork will join it someday.
Will HashiCorp remain relevant throughout this? Seeing a lot of parallels with Red Hat's recent mistake...
There are parallels but it’s not the same. Red Hat’s projects are still all fully open source. And it’s not clear it’s a mistake yet. (I work for Red Hat but this is my personal opinion.)
Also FWIW Red Hat license policy (as implemented publicly through Fedora) disallows software under the Business Source License:
https://gitlab.com/fedora/legal/fedora-license-data/-/blob/m...
Red Hat has previously worked to eliminate product dependencies on 'source available' licenses and we're currently having to do this wrt Hashicorp stuff.
Not sure how much detail you can provide, but I know RH products use Terraform under the covers for a few things (like in OpenShift). Are you removing this functionality because it's no longer FOSS, or because of fears around the BSL verbiage?
Since Red Hat is at the earliest stages of grappling with this issue and I can't speak for the teams involved I don't think there's anything I can say, other than that our corporate policies on product licensing by default do not allow stuff under licenses like BUSL-1.1. The only case I am aware of offhand where Red Hat briefly had a 'source available' license in a product concerned some code that was transferred from IBM to Red Hat (the source available component was third party with respect to both IBM and Red Hat; IBM does not or at least did not have the same restrictions on use of such licenses that Red Hat has).
Just speaking personally, I'm happy to see this fork occurring and hope they succeed in joining CNCF.
For sure it will not update to BUSL-licensed versions of Terraform as mentioned above, but I can't say if it will stay on an older version, use OpenTF, use Ansible or something else.
Well, they clearly alienated themselves from the community, or a significant part of it. I'm not sure if it's a mistake from a business perspective but early leaders of Red Hat were very careful to collaborate with the community.
I can say that the scientific computing community has been affected deeply by this move. They wanted to eliminate "The Freeloaders", but the collateral damage was enormous, and they either didn't see or don't want to see what they have done.
The thing is, the big majority of these systems won't flock to RedHat, and won't continue to use CentOS.
Yeah a large portion of research computing standardized on Red Hat (see e.g. Scientific Linux). The stability is quite important when trying to run ancient/niche scientific code that sometimes would only support (or even build on) Red Hat, and when you have a large number of nodes that need to be essentially bug-for-bug identical you want the package churn and update cycle to be kept to an absolute minimum.
The licensing of real RHEL never could have made sense in the HPC space, and I'd be shocked if a meaningful number of deployments would be moved to purchase RHEL now.
When I was a "sysadmin" in this space I always kind of personally preferred Debian, which has similar longevity to its release cycle, but it could never gain much traction.
I hope at this current juncture the HPC community might rethink its investment in the RHAT ecosystem and give Debian a chance.
> a large portion of research computing standardized on Red Hat (see e.g. Scientific Linux). The stability is quite important when trying to run ancient/niche scientific code that sometimes would only support (or even build on) Red Hat
> I hope at this current juncture the HPC community might rethink its investment in the RHAT ecosystem and give Debian a chance.
Nowadays Nix and Guix are available, and they're fundamentally better fits for this problem. Traditional Linux package managers aren't really suited to scientific reproducibility or archival work. It would be much better to turn more towards functional package management than just swap in another distro.
> Nowadays Nix and Guix are available, and they're fundamentally better fits for this problem.
No, they're not. First of all, reproducible installations for HPC nodes are a solved problem (we have xCAT to boot, for example). However the biggest factor is the hardware we use on these nodes.
An HPC node contains an Infiniband interface, at least, and you generally use manufacturer provided OFED for optimal performance, which requires a supported kernel.
I wasn't talking about NixOS or GuixSD, or OS installation.
I was talking about tools that let you run ancient software on modern distros. With Nix and Guix you don't have to hold your whole OS back just to run 'ancient software'.
Well, if my memory serves right, this investment started with RedHat's support for CERN's ScientificLinux and snowballed from there.
Then this snowball is solidified by the hardware we use, namely InfiniBand and GPUs, plus the filesystems we use (LustreFS or IBM's GPFS) which requires specific operating systems and kernels to work the way it should.
It's not as simple as "Mneh, I like Debian more, let's replace".
While I strictly use Debian on my personal systems, we can't do that on our clusters.
Red Hat is also strictly against copyright assignment agreements in general, and keeps many under the GPL, so few Red Hat projects could realistically be relicensed like this to begin with.
IBM probably disagrees and as much as people expected RH to show IBM how to work, I think history is repeating itself and things are happening as they always did.
I understand why it's tempting to buy into this narrative but it is just not the case.
Aside from the fact that IBM had no involvement in the recent decision relating to git.centos.org (if I remember correctly, IBM found out about it when it was publicly announced), IBM has had basically zero influence on any aspect of Red Hat open source development or project or product licensing policy.
On the other hand, Red Hat has had some limited influence on IBM's open source practices. For example, IBM has moved away from using CLAs for its open source projects, I believe mainly out of a desire to follow Red Hat's example. I'm not aware of any use of copyright assignment by IBM projects.
Your comment dances around the point so avidly that it’s un-understandable to me. How have things been happening, and why would they happen, now, at Red Hat?
Allow me to spell it out: if IBM could guarantee themselves maintenance or growth of market share in the short term while simultaneously clamping down on licenses that are anything but closed-source, they would. IBM didn't buy Red Hat because they think it's doing things the "right way". They bought Red Hat because they thought they could make money with it.
Fully open source in the strictest possible sense, but with the added caveat that if you choose to exercise your rights under the GPL you'll no longer be able to do business with Red Hat [0]. I personally wouldn't categorize Red Hat's current position as compatible with the ethos of FOSS.
As with Sun in the old days, good luck actually collaborating as in a healthy open source project but the license doesn't specify there should be a community around anything so it's all good.
For them to get accepted into the CNCF would require relicensing a large amount of MPL work. What's always been confusing to me about Hashicorp's change and any subsequent relicense of OpenTF is that I know for a fact not everyone who contributed code to Terraform signed the CLA and allowed permission to relicense.
I suspect if OpenTF tries to relicense to a more permissive license like Apache 2 (rather than less in the case of BSL) license we might see some fireworks.
The CNCF has made exceptions on their license policy before, specifically for MPL based software. It'll probably be easier for OpenTF to go through that process than to relicense (which is likely not even possible for anyone other than Hashicorp).
Disclosure: I'm on the CNCF legal committee, which mainly makes recommendations to the CNCF board on things like exceptions to CNCF's fairly strict licensing policy.
This is correct, but I believe MPL has never been approved as a main project license for a CNCF project before, as opposed to a license of a third party dependency (the default rule is that such projects must be under the Apache License 2.0). FWIW I would not hesitate to support such a request for a policy exception.
That's great to hear. We at Oxide are huge fans of the MPLv2 and it's our default license for everything; I think it's reasonable that the default expectation for CNCF is Apache 2.0, but would love for MPLv2 to also be considered a first-class license!
In my personal opinion, there's no good reason to have a license policy at the CNCF, or any Linux Foundation directed fund, that makes using copyleft licenses so burdensome, especially when they are as "weak" as MPL 2.0 is.
I know that there are Reasons. I just don't think they are good ones.
> Community-driven - so that the community governs the project for the community, where pull requests are regularly reviewed and accepted on their merit and changes are proposed through a public RFC process
IMO if OpenTF wants to quickly pull ahead, they'd do well to quickly revive popular but neglected PRs from Terraform, inviting those authors to rebase on top of OpenTF and quickly getting them merged.
The RFC process will slow them down here, at least for features, but presumably there are still old bug fixes that died on the vine, too.
Recently I (new to it myself) introduced Terraform on my team at work (my department historically mostly uses CloudFormation), and this whole license rug pull left me wondering if I'd made a mistake.
But if OpenTF really flourishes, this could end up being better for us than if the license change had never happened. I wish you guys the best in making OpenTF a clear winner! The sooner that happens, the less of a shakeup this will be for the whole userbase and the more teams will stay invested and stick with terraform-the-technology (if not the trademark). I'm optimistic. :)
Technically we mostly use an open-source library for generating CloudFormation, but yeah it still ends up being a bit more deeply vendor-specific than TF.
> Recently I (new to it myself) introduced Terraform on my team at work (my department historically mostly uses CloudFormation), and this whole license rug pull left me wondering if I'd made a mistake.
Personally I wouldn't sweat it.
TF as a tool (not original vs fork) has hit what I think is critical mass, it can't fail. If folks agree the fork is better from a licensing / management level then everyone will use the fork in the end. If not, the original will win. If both tools remain backwards compatible then most end users won't notice the difference.
I think this is a really interesting time. We're getting to witness in real-time how much individuals and businesses value licenses, or specifically what happens when a mature project changes its license.
NOTE: I don't have any tools or services that compete with TF. I'm coming at this as an end user. I contributed once but it was mainly around documentation.
No you’re fine, timing just happened to be at an odd moment, but TF generally has become the new standard and IMO has greatly improved our ability to deliver. I also worked with cloud formation in the past using a custom Python framework. Making changes was MUCH more involved and keeping track of breaking changes was very difficult. TF solves a lot of problems with tracking state and adding new components.
One thing that is always unclear with company-owned projects is what exactly goes against their business plans, so you don't waste time preparing a PR only to get half-baked excuses about why it can't be accepted.
Case in point: the numerous PRs and issues asking to disable warning coalescing in Terraform that are always met with a dismissal, but nobody understands why.
+1. Stale PRs, also communicate and welcome feature requests or bug fixes that may exist across SO and other channels -- TF-scoped, but perhaps even HCL (if possible) or HCL-adjacent libraries.
When Oracle bought Sun, there were a number of open-source projects they started fencing off and developing in a less community-oriented way. Usually, I think they didn't even actually change the licenses.
In every case I can remember, the community-centric forks outpaced or altogether outlived them. Hudson is dead. OpenOffice is basically irrelevant. Oracle ZFS sees little interest and is not what anyone thinks of when they hear 'ZFS'. But Jenkins, LibreOffice, and OpenZFS are all going strong many years later.
This could end up being a really good thing for Terraform users, would-be contributors, and Terraform itself as a technology. I wonder if any Terraform maintainers from HashiCorp itself will jump ship to work on it full time as OpenTF. A similar phenomenon ended up being a decisive factor with the post-Oracle forks of Sun software IIRC.
They definitely changed one of the licenses, and in fact in the most brazen way possible -- they relicensed OpenSolaris to proprietary![0][1] Your point stands, though: illumos very much outpaced and outlived it.[2]
I'm sure that was an especially painful change, being such a gratuitously stingy and shortsighted treatment of a dear and painstakingly refined core technology project.
My point with that observation about licensing was just that in many cases, it didn't even take something as drastic as a license change to get a project superseded by a community fork. Just changing the release process or removing some community maintainers' commit bits or whatever could be enough.
Maybe some of that was because seeing what was done to OpenSolaris made everyone else ready to jump right away as soon as they saw any shift at all in other projects, though.
I prefer to focus my imagination on picturing how angry Larry must have felt after becoming the owner of MySQL and learning that, even after spending a couple billion on it, he still couldn't kill the thing.
And I always forget Hudson's name, but not the product.
This mention of Hudson was a blast from the past! I honestly couldn't remember whether Hudson sprang from Jenkins and failed to gain mindshare or whether Jenkins sprang from Hudson and took all the mindshare. We used both of these on a project I was working on from '08 to '13, and I guess we must have started out on Hudson and then migrated to Jenkins. So you're right to be concerned about peoples' memories fading!
(I honestly thought both projects worked very poorly, but also better than anything else we could find at the time.)
> (I honestly thought both projects worked very poorly, but also better than anything else we could find at the time.)
Each of us must first earn the right to forget Hudson by finally decommissioning our respective companies' remaining, bespoke, janky Jenkins server. ;)
hahaha I took the coward's way out and left that company a decade ago. When I met with an old colleague last year, they were still on that bespoke, janky Jenkins server.
You have a blog too! Hurrah! I celebrate your entire catalog. If you start another podcast, please be sure to mention it on oxide and friends. It was crickets over there in the "On the Metal" podcast, but when you guys made that last episode I had 2 years of oxide & friends, so perhaps that was ok.
Definitely no third podcast in our future! Oxide and Friends itself wasn't even all that deliberate -- it was very much an artifact of the pandemic[0]. Glad you found Oxide and Friends, and sorry it took us so long to record an On the Metal episode to point to it...
Sun is a great example of a company that had excellent engineers but a terrible business model. Instead of fixing the business model, they tried to extract as much value as they could out of what they had, and it ultimately led to the death of the company.
Terraform is the gateway to Hashicorp and it feels like they're going to lose it.
I was just a kid when Sun died, reading about all the drama of the buyout and the series of forks on Slashdot whenever I had already finished my work in my high school computer science class every morning.
I didn't really know what or how to think about the mismanagement aspect of it, but I was already a F/OSS enthusiast by that time. I ran Linux at home and used free software everywhere I could. I even eschewed Times New Roman in favor of Linux Libertine (then the font of the W in the Wikipedia logo!) for my school essays because I didn't want to support the hegemony of a proprietary typeface.
What I did understand about Sun was that they had been the caretakers (and sometimes creators) of some of the most important and beloved technology I'd ever used. I didn't know anything about Solaris, Hudson, ZFS or SPARC, but I knew Sun for VirtualBox, OpenOffice, the MySQL database system that seemed to be everywhere in the open-source world, and the Java programming language I was writing in for school.
I remember a sense that something precious had died with that buyout, even though I didn't understand what led up to it. I worried a lot about OpenOffice, which I relied on and advocated to friends, and I practically cheered when I read about the formation of The Document Foundation and LibreOffice.
Yes. Sun became all about extracting rents from vendor lock-in (SPARC, J2ME, SUN DS, etc.). But Sun couldn't maintain that vendor lock-in, so Sun went under. Management just did not or would not see this, much less act on it. OpenSolaris was a good initiative for rebuilding mind-share, but it was a bit too little too late.
But the biggest mistakes Sun made were made in 2002-2004, and they never recovered:
- the Solaris on x86 "cancellation" (it was more putting it on hold, maybe, but the market took it as a cancellation)
- not making a deal with Google for Google to use Solaris in their data centers
- killing off Sun PS (professional services)
The Solaris on x86 "cancellation" killed a lot of mind-share. No one wanted to get locked-in to SPARC.
Making a deal with Google would have reinvigorated Solaris' mind-share at a critical time.
IBM did the opposite of killing off its professional services division. IBM killed off their x86 systems division and beefed up its PS, and IBM went on to make a killing on PS.
The vendor lock-in issues totally compounded these terrible decisions.
> not making a deal with Google for Google to use Solaris in their data centers
Wow i had no idea that had almost happened!
Anyone care to elaborate further on that?
I wonder what could have been if Solaris had later been open sourced and Google could have pushed their improvements… maybe we'd all be running containers based on Solaris zones, and would have Solaris on our smartphones? Who knows!
The story as I understand it is that Sun wanted to license Solaris on a number of hosts basis while Google considered the number of servers they had to be a critical secret.
> maybe we’d all be running containers based on solaris zones
There were a lot of nice things about Solaris. The Solaris engineering org was amazing, full of visionary giants. ZFS, Zones, DTrace, mdb, SMF (systemd, but done better and ten years earlier), the fault tolerance stuff -- all created by engineers as skunkworks.
On the other hand, for many years they were maintaining most of everyone's plugins on their own repos. It's not like Hashicorp hasn't been an honest servant of the community.
I understand people's dislike of the license change, but Hashicorp isn't Oracle.
Not really. I was a frequent contributor to plugins and it was actually incredibly frustrating getting changes merged for many of them. For many, you couldn't test your changes without licenses that cost hundreds of dollars. They controlled the repos, but they didn't maintain them.
Not really. Oracle provides software that does not require any activation code but will phone home and snitch on you so that Oracle can send you threats if you don't pay license fees.
I’m thinking of some additional virtualbox guest additions.
I think the only thing that wasn't forked off from the Sun deal was VirtualBox, mainly because Oracle never cared much about it and it's still open source, and partly because the container revolution was already brewing.
We are beyond excited to be part of this great initiative. We did of course expect a fork to be of significant interest to people; what we did not expect is this crazy level of support for it. 2k+ stars, 100+ companies and 400+ individuals pledged, and there are already more full-time engineering positions committed to it by pledging companies than the whole Terraform Core team at Hashicorp (source: terraform commit history)
I'm concerned that people who do not have a provable track record for contributing to terraform believe they can fork and be good stewards of the project. Looks like the top contributors to the foundation are:
Gruntwork and Digger do some decent opensource but the others haven't been great stewards of opensource. Looking at their githubs they don't seem to give much of anything back. So why should we trust them over hashicorp?
All those companies have very decent experience with Terraform. Even if some didn't contribute to the core, they all built decent software on top of or around Terraform.
In addition, we at Terramate also plan to contribute as much as possible. We have worked exhaustively with Terraform and related libraries such as HCL and are very well aware of the limitations and shortcomings we need to resolve with OpenTF.
I'd love to emphasize that many of us tried to contribute to Terraform in the past, but HashiCorp became somewhat hesitant to review and accept PRs which massively slowed down innovation for Terraform.
While I see the reasoning behind HashiCorp's decision to switch licenses, I strongly believe that closing up the ecosystem further won't do any good for Terraform and HashiCorp's other products, hence our strong buy-in for OpenTF.
Being good at using terraform is very different than being an opensource maintainer and community builder.
> I'd love to emphasize that many of us tried to contribute to Terraform in the past, but HashiCorp became somewhat hesitant to review and accept PRs which massively slowed down innovation for Terraform.
Do you have links to PRs people from your organization pushed that weren't reviewed/accepted by HashiCorp?
I also encourage you to take a look at the Sponsor badge on our (Spacelift's) profile. Whenever a possibility exists, we give back to the projects we use. We tried the same with Hashi, too.
Re: trusting us over Hashi. DON'T. If there are any lessons learned from the great Hashi bait-and-switch trick, it's that for-profit companies should not be trusted as the guardians of open source.
Trust the foundation that takes over the project. Trust the cash we'll endow it with.
Also worth mentioning that we support our employees working on open source during their Friday projects where they're free to do anything to grow as engineers. Many choose to do open source and we don't require them to do it under our umbrella. In fact the guy from my team who is the acting TL of the OpenTF project is the creator and maintainer of https://github.com/cube2222/octosql
> we support our employees working on open source during their Friday projects
This is very different from Hashicorp's model of paying people to work on the opensource project during their working hours, making it a core part of their job.
If we are to accept that the OpenTF foundation is going to be a better maintainer of terraform we need something better than "Hack on opensource if you want to!"
Spacelift doesn't have a good track record of contributing to opensource so the current model isn't working.
Also, Jacob started OctoSQL before ever joining Spacelift, so it's pretty odd to use that as an example of Spacelift doing OSS well. Especially since his activity on it has tanked since joining you.
> If we are to accept that the OpenTF foundation is going to be a better maintainer of terraform we need something better than "Hack on opensource if you want to!"
The plan is for the foundation to employ dedicated engineers that we can sponsor.
Regarding open source vs closed source, I have nothing against closed source software. I think the outrage about Terraform specifically is caused by people seeing it more as an ecosystem, not a product (like Vault or Consul), and the whole thing looks like a bait-and-switch of reaping the benefits of open source (and others building a provider ecosystem) and then closing that down, once you start getting the cons of open source (healthy competition building inside of that ecosystem).
Honestly, I'm grateful for Hashicorp for the contributions they've made over all these years, and the libraries they've built. And as is the beauty of open source, in a situation like this, we can just fork, which we're doing.
In a dream scenario, HashiCorp would eventually join us working on OpenTF in the open, with the load much better distributed across companies, and the roadmap better reflecting the community's needs. I don't think, however, that it's fair to call these companies freeloaders who just now decided to contribute, as HashiCorp quite openly didn't encourage community involvement in the development process of their core open source projects.
All in all, I hope we'll do a much better job of involving the community in the core development and decision-making process (via public RFCs), while the foundation part means you don't have to trust any of the companies specifically. A healthy open-source ecosystem here is good for the companies using it (partly due to no vendor lock-in, better competition, more innovation, lower prices), for companies building products that extend tools in that ecosystem, and for any individuals involved. It's a win-win-win situation.
Disclaimer: Work at Spacelift, and currently temporary Technical Lead of the OpenTF Project, until it's committee-steered.
Whenever we're extending an external library, we're submitting the change upstream. Whenever there's an opportunity to sponsor a project on which we heavily rely, we do that. But yes, we don't maintain any major open source projects as a company. And neither will we with OpenTF, because our active involvement with it is only temporary - we are just helping it get off the ground. Long term we will be primarily a sponsor of a dedicated team (see our pledge), not a core maintainer.
spacectl, vcs-agent and terraform-provider-spacelift are only useful with your proprietary product. prometheus-exporter is a very small exporter to monitor your proprietary product and has no utility beyond that.
I'm not sure what spcontext does because it has no documentation, but I'm sure I could read the 10 commits to find out. The only non-trivial project in the list is celplate, which actually looks nice, but even there most of its complexity is in cel-spec.
> Whenever there's an opportunity to sponsor a project on which we heavily rely, we do that.
Come on, on Spacelift's GitHub profile there is one project sponsored, and only since August 24!
> we don't maintain any major open source projects as a company
So you've never had to handle project management of a large open-source project, community support, doing code reviews every day, and handling feature requests from the community?
Yes, HashiCorp has been slow to respond to PRs and did not always communicate clearly, but so do many large open-source projects like PostgreSQL, Python, Linux, etc. Managing a large open-source project takes more time than clicking the merge button on GitHub. Users don't magically come fix all the bugs in the software and develop complex new features.
HashiCorp did contribute a lot, and there is still a lot to learn from their projects: there is Raft, the autopilot, various libraries that are used a lot in the Go ecosystem, including go-plugin, a programming language with its own specific type system, and plenty more. After all, the change of license is making noise because their tools define part of the DevOps world and we all use them a lot.
I wish HashiCorp had kept using the MIT license, but using the BUSL is still miles ahead of companies that are completely closed source.
From the outside perspective it looks like you support Free Software as in Free Beer more than as in Free Speech. You are supportive when others are actually maintaining the projects for you, and you just want them to merge your contributions quickly. It is a very efficient way to externalize some of your costs, but you only step forward to actually help manage them when your bottom line is threatened.
That's ok, it looks like you love money more than open-source ideas. That's perfectly fine but don't pretend otherwise.
Some others companies that signed the pledge have actually been maintaining open-source projects and have a leg to stand on. Spacelift loves having the high moral ground and the extra publicity.
Release the core of your product as open-source, like HashiCorp did for 9 years, and it will change my mind.
Nobody is pretending anything, and I don't care about changing your mind. We are donating engineers and funds to an independent project, whose value is entirely independent of what you think about Spacelift as a product or as a company, or any other argumentum ad personam you can come up with.
"Nobody is pretending anything" Wrong, wrong, wrong. The whole thing has been PR to cast hashicorp as some evil company who withdraws from Open Source. It's quite hypocritical to pretend to support Open Source from companies which don't even have source available!
Cut the bs please, this is just a self-serving fork. You all don't give a shit about open source, you just want to keep using TF in competing services for free.
I don't agree with your sentiment one bit but I will not try to change your mind. At the end of the day what matters is the outcome of the project for the community. So let's all take a deep breath and revisit this in a few months time.
But part of the value the foundation is pitching is that it has companies donating engineering time to keep the project well managed.
My concern is spacelift and some of the other companies have no track record of being GOOD at opensource. Not as part of the terraform community or opensourcing things they've built in-house.
I think your pushback here is appropriate and valuable. But I also really like their response to it, essentially: Don't trust us, watch us.
This group of people has taken a leap of faith. They aren't asking us to leap with them, they're asking us to pay attention to where they land, and come along if it looks like the water is fine.
env0 founder here. Thank you for your note. Important to mention Gruntwork, creators of Terragrunt and Terratest, together with us here. Also, the CNCF/LF will assign some external members to the TSC (technical steering committee). I honestly believe such a balance (external, Gruntwork, and directly competing vendors), all under a well-experienced foundation, is the ideal situation.
Yes. Worse than that, they changed to a license that prevents companies from using their product freely - if they had chosen some "cloud protection license" that simply handicaps possible competitors to their commercial offerings, this fork would probably not have happened, or at least it wouldn't have such momentum.
They really didn't. They changed it to prevent companies from building commercial products around terraform, which is what you've suggested as a cloud protection license.
Companies that use terraform to manage their infrastructure are not practically impacted in any way, except by this OpenTF effort (which I don't personally oppose either!) which will create a schism and leave us with competing tools that are not quite interoperable over time (thinking about ZFS/OpenZFS, MySQL/MariaDB, etc.).
It isn't the AGPL, but I am just sort of stunned at the uproar around this. Is Hashicorp supposed to just shrug and clap while a competitor takes (primarily) their work and competes with them using it? That's what the MPL allows, and they don't want to do that anymore, so they... changed the license to protect their interests. What do you expect them to do?
That’s a hot take. Considering their repos have thousands more contributors than they have had employees, ever. Giving the middle finger to literally thousands of people who have contributed to the Hashi core projects, not including the tens of thousands that have contributed to the plugin ecosystem. Many doing it on company time. Many more doing it for free in their spare time. Millions of dollars worth of contributions in developer time over the last 7+ years. That Hashicorp didn’t have to pay a penny for.
Also, I haven't read a lot about this, but I would be very surprised if the Spacelifts of the world could not work out a licensing arrangement.
The actual license at https://www.hashicorp.com/bsl says "provided such use does not include offering the Licensed Work to third parties on a hosted or embedded basis which is competitive with HashiCorp's products." To me this sounds like a self-hosted version of something could still work with terraform, and you just have to provide the binary yourself vs. it being pre-packaged. IANAL; it would be pretty shitty if they started going after products that support terraform as a tool that way.
Well that does suck. I would also wonder if that's a legal battle they would win.
I've never used Spacelift, etc. so I may be off base with the comparison. But I think about them like specialized CD tools that do nice things with/for terraform. Their value is that you don't have to implement these nice integrations yourself in e.g. Jenkins.
So replace Spacelift with Jenkins. There are some community plugins that idk, facilitate reporting plan impact from code changes. Is Cloudbees now in violation of Hashicorp's license?
It would kind of make sense though? When part of the product you are selling is made and supported by someone else, don't they deserve a part of your income?
I know that FOSS works differently, but that's also the reason why a lot of open source software is of questionable quality. When the development becomes a burden (is not fun anymore) and nobody is compensated, why would someone waste their time on it? Good will only goes that far.
Not suggesting that proprietary software is without faults, but maybe such licenses are a good compromise?
You certainly have to appreciate the irony of Hashi calling others freeloaders, having integrated Open Policy Agent into TFC/TFE and contributing nothing in exchange.
It's also ironic that most of the companies supporting OpenTF have closed-source products, yet they demand that HashiCorp keep their products open source.
env0 founder here, and core member in the OpenTF initiative. Thank you for your note. I wanted to mention that indeed env0 enjoyed Terraform being free, but also contributed back to the Terraform ecosystem, with github.com/env0/terratag OSS and TheIaCPodcast.com for education. Also important to mention another and probably a more important key member in the OpenTF initiative - Gruntwork, creators of Terragrunt and Terratest. I believe we all contributed nicely to the community, especially compared to our size / being small compared to Hashi. Just my 2 cents, in order to add a bit more context to "companies supporting OpenTF have closed-source products".
The core of every HashiCorp product is/was OSS. None of Spacelift is OSS, for example.
I’m not claiming it’s not the same monetization model, but with endless talk from these companies about the commitment to OSS and the virtues of OSS and the benefits HashiCorp has and would continue to receive by keeping their code OSS - it’s just ironic to see most of these companies have no open source code and aren’t actually willing to commit to an OSS model.
I can't say I'm familiar with the other companies/their tools but I assume they're all somewhat nebulous to terraform - did they not try to contribute back to hashicorp terraform?
In the TF scenario specifically it seems like it would have been smarter for hashicorp to open the core oss project to some outside contributors more directly (potentially moving to a different "ownership" on GH).
Maybe they'll relent and throw support behind the new project. Who knows.
Two sources (mariadb and fossa.com) claim that by BSL any production use requires a different (commercial) licence, while HashiCorp's explanation [0] indeed tells that there is no change except for those providing competitive offerings (I'll take their word for it). Which seems... more than fair? Not sure what the uproar is about either, if anything, I understand (and support) HashiCorp. Too bad about the split though.
The uproar is that people and companies contributed to the project without compensation, and are just now being told Hashicorp has altered the deal... unilaterally.
I for one would not have built my infra on non-free software, and I will certainly avoid it now.
Posting again because this is also misleading: if you sign a DCO (Developer Certificate of Origin) and a CLA (contributor license agreement), you are very often (if not always) signing away the copyright of your work.
In doing so, the receiving party is legitimised to do anything, including changing the license of all sources (including your contribution).
If that’s not okay with you, then you should not sign those CLAs. If you signed stuff without reading it… it’s your fault.
I’ve said this in the past and I’ll say it again: this whole scenario could have been prevented by using a free software license like the AGPL. Which is what Grafana Labs did, and last time I checked Grafana (the company) is doing just fine.
This all reeks of “Embrace, Extend, Extinguish.” Encourage the community contribute and use your project, help them setup and become integrated with custom extensions and plugins, then rip the rug out from under them and make them pay or else destroy their business.
I don’t remember ever being asked to sign a CLA back in 2016 when I contributed. But they moved my code out to a plugin which was kept MIT. That code was there in the core product for 5+ years while they were building their business. My contributions helped them build their business, and in turn, I used their contributions to help the companies I was working for.
They broke the covenant of OSS: You make your source open and MIT license it, you are giving it to the community to let them do what they will. That’s what the license says. But, in turn, you get hundreds of thousands of people contributing back, for free. Hashi puts in to it, the community puts in to it, and we all make a great tool. We send back bug fixes and write training blogs, etc., and they don’t tell us what to do with the project because they’re getting a lot out of the community anyways.
Hundreds of millions of people every day depend on OpenSSL, but how many people have contributed to maintaining it? How much of the web we use every day depends on ffmpeg, yet I don’t know anyone who has contributed to that project. Many tens of thousands have blogged about and promoted Terraform (et al.) for free. Many thousands more gave talks and training, without any compensation from Hashicorp. The naysayers act like Hashicorp has provided everything to the OSS community and gotten nothing back.
- Terraform is written in Golang and utilizes gRPC to communicate between plugins and core. What if Google decided to re-license Go and gRPC and say that Terraform couldn’t use it because it was a competitor to Cloud Deployment Manager or that Nomad and Consul are competitors to GKE? It’s all up to the license holders to decide who’s a competitor and tell them they can’t do that anymore.
- Hashicorp uses Let's Encrypt as the certificate authority for their website. Have they contributed back to that project, either monetarily or in dev time? Or do they just get free certificates for all their websites, automatically provisioned from a public certificate authority supported and managed by other companies?
AGPL has nothing to do with it. Hashicorp wants all the contributions and bug fixes and blog posts and talks and marketing and promotion and support and training, for free, and also wants to be the only one to benefit. They should never have MIT licensed the code 8 years ago.
What are you talking about? The code is still there, the same version, under the same license - your contributions, if any, included. They just refuse to develop under the same license going forward, as is their right. And competitors are free to fork, as they did, as is their right. So what exactly is the problem? Do you feel entitled for them to keep developing under MIT license? Sorry, but you have no say in that, nor should you.
Legalized, yes; legitimized, certainly not. This is not a copyright issue, this is a loyalty issue: betraying the people who helped you get where you are is the kind of move a company makes when they no longer care to be perceived as ethical. This is not an important factor for everyone, but it's usually a pretty big deal in the open source world.
The FSF explicitly says so on https://www.gnu.org/licenses/license-list.html and the Mozilla project developed the license with the intent of it being used in other free software projects. The important difference is in its limited grant on patents.
The reason Debian avoids distributing Firefox is not because of copyright licenses but because Mozilla vigorously protects their trademarks, including "Firefox" and the various logotypes. You are not allowed to distribute them without permission, which Debian largely wants to avoid to have in order to not set a precedent which would impact further distribution of Debian and its derivatives.
Mozilla does this to avoid the risk of third parties offering Firefox with spyware-like modifications. One might ask why Debian itself does not seem to suffer the same problems. It seems like a problem mostly on proprietary software distribution platforms in practice, but it's certainly a possibility.
I just went through about 20-30 SRE interviews while hiring an SRE II for my team. Every single one of them that had state management at all used terraform cloud. I found that really interesting because I've never heard positives about it vs the others (spacelift, env0, terrateam, brainboard etc). Not a single one of them had anything other than tfc. Not even atlantis.
Never used terraform cloud and probably never will. It’s too expensive and doesn’t really provide all that much benefit over using terraform with e.g. atlantis.
env0 founder here. What were the main reasons that they used TFC? Was it the ability for Hashi to fix things in Terraform CLI/providers? Was it their size / "nobody gets fired for buying IBM"? Something better in the product? Something else?
would love your insights here
I've only used Atlantis as well! We actually need to decide on a service next month. I haven't demoed it yet but I'm really aiming to use brainboard.co if it actually does what it says. It's priced per user, not some weird deployments a month price and it honestly looks amazing. Gives you a gui to move resources around, imports your current state, etc.
I have a dumb BUSL question- if you don't compete with Terraform, but you do with something else, like Boundary, can you still use TF? If Hashi releases a new product that competes with you do you have to stop/license TF?
> 11. What are the usage limitations for HashiCorp’s products under BSL?
> All non-production uses are permitted. All production uses are allowed other than hosting or embedding the software in an offering competitive with HashiCorp commercial products, hosted or self-managed.
I'm amazed and a bit dismayed by the general vibe in the comments.
I'll preface this with I don't know anything about Terraform, OpenTF, HashiCorp, etc. I couldn't even guess what Terraform is. I'm in mobile dev. However, I work on open source a lot and think about sustainability and revenue streams quite a bit.
I read the manifesto. I saw the "revert the license or we'll fork". What I didn't see is any form of trying to work with HashiCorp on their goals. It seems like very considerable resources have been pulled together to fork, but I didn't see the part where anything remotely like that level of effort and resources was on offer to HashiCorp to rethink the plan and come up with a better answer.
As I understand it (which is based off of some comments. See above about not knowing anything about this), a good chunk of the resources are actually from competitors. If true, it takes a lot of the sting out of the "HashiCorp are jerks" argument. I mean, I'm not saying they're not, but it's more like, "HashiCorp changed the license so they could push back on competition, so the competition forked the code". I don't really expect "right and wrong" from companies, or open source for that matter. But the spin and vibe feel a little misdirected.
I mean, don't get me wrong. Building up a community who contributes, then doing a rug pull, sucks. However, the "company does a risky thing and builds this awesome tool, then a bunch of others fast follow and exploit it" has become very common, and it is going to be a bad thing in the long run. You can say "We believe that the essential building blocks of the modern Internet, such as Linux, Kubernetes, and Terraform need to be truly open source", but to be fair, Terraform was not an essential building block until somebody built it.
As much as license rug-pulls damage user/community investment, fast-follow competition and the threat of forking will ensure far less investment in the very kind of open source everybody wants.
There is a financial sustainability problem involved in "big open source", and we are seeing the changes. In many ways, it simply has to happen. Going forward, I do hope new products like this start with a license that works rather than changing, as that is obviously not appreciated, but many devs reflexively avoid that kind of arrangement, even if it costs nothing to use.
Anyway, just thinking out loud. Hashicorp might be run by psychopaths. I have no idea. In a general sense, though, the whole industry is going to need some new models. If it's just "fully open source or nothing!", there's a whole class of tools that won't exist. Building things is risky and expensive. I don't want to go back to when everything was closed source and needed a license, but open source without a reasonably protectable revenue model will definitely limit what gets built and why. And as we like to say, "if you're not the customer, maybe you're the product", or something like that :)
Not sure about others, but at Spacelift we tried to partner with Hashi, especially since ours is a higher-level platform that connects various tools (eg. Ansible, Kubernetes, CloudFormation etc.), policies and processes, and it would not be hard to imagine how it could work with TFC/TFE's remote execution. The answer was a very loud and clear "NO".
Fair. Like I said, I don't know the context. I would include rebuffed attempts to work with Hashi as this kind of changes the situation. I run a company that does publish several libraries, and we are trying to figure out revenue models and things going forward. We don't really have anything in this category, but the general problem is a problem. A lot of companies tried to monetize open source, then the obvious risk happened, which is a lot of competition came in and just tried to monetize the same thing. Now some orgs are changing licenses, and people are upset. I can see both sides, and the industry does need to find some kind of middle path for reasons I mentioned in the post. The degree to which Hashi is a bad steward impacts the perception and response. If the license change didn't impact users and only competition, and Hashi had been trying to work with everybody to figure it out, then it would very much change how this looks. If they're a bunch of aholes, well, same but in the opposite direction.
Regardless of how they behaved towards us in the more distant and very recent past, I still hope there's a way out of that, and I will not be the one to start the war. It's not my desire to portray HashiCorp in a bad light, and I much prefer the perception to be shaped by what we can and will accomplish as OpenTF.
> to be fair, Terraform was not an essential building block until somebody built it.
If terraform wasn’t available and wasn’t oss, it’s very likely that a competitor would have enjoyed the success and network benefits that were essential to its success and ubiquity.
Perhaps people don’t remember but not long ago there were many IaC tools to choose from, and it was a matter of taste as to which tool was adopted by a company. Chef, Ansible, Salt and a few others, all had pretty decent support as building blocks for infrastructure. Then terraform came along and was widely adopted, not just because it was better but also because it was oss (like its competitors).
Now that it’s won, Hashi feels comfortable pulling the rug and changing the license.
Regardless of what the competitors think or do, this is a very unethical move from Hashicorp. I really want openTF and other clones to succeed and for Hashi to die. At the very least they should never again be trusted as good OSS stewards and any new product they come up with should be treated with scorn.
Which reminds me… when was the last time they built anything? Seems like all their effort is focused on commercialization. Which is…fine, they are a public company after all. They’re just not the same institution that they used to be. Just the name is same, and that is really fucked up.
> If terraform wasn’t available and wasn’t oss, its very likely that a competitor would have enjoyed the success and network benefits that were essential to its success and ubiquity.
Of course, but then it would also have "enjoyed" the competition that didn't need to invest resources to build the thing.
> Now that its won, Hashi feels comfortable to pull the rug and change the license.
Not saying you're wrong. I'm saying the industry needs a better model for open source investment. The "tough shit, making money is your problem" view is not great, but the "open source until we're essential, then rug pull" is also terrible.
There is no such thing as an open source license that prevents others from doing something specific with the software, that's basically the point of open source.
Awesome to see how enthusiastic this community is! We at the LF are excited to work with the wider community to bring them under neutral governance like our many other foundations/projects. On the CNCF side, we welcome an application through the official processes when they are a bit further along with establishing their initial governance here: https://github.com/cncf/sandbox
If the maintainers can keep a good throughput of pull request approvals, I think there’s a good likelihood that OpenTF could outpace Terraform in capability. Terraform, IMO, has benefitted significantly from having some very smart people at the helm. Most recently I’ve been impressed with the declarative mapping of imperative actions, including import, state moves, and more. These things required very careful coordination, especially in large active configurations, and are now quite simple and safe.
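For anyone who hasn't followed those recent additions, that refers to things like the `moved` and `import` blocks, where what used to be one-off CLI surgery is expressed as reviewable configuration. A rough sketch (the resource names and bucket ID here are made up):

```hcl
# Declarative import (Terraform 1.5+): the import shows up in the plan instead
# of being an imperative `terraform import` command run against state.
import {
  to = aws_s3_bucket.logs
  id = "my-team-logs-bucket"
}

# Declarative state move (Terraform 1.1+): records a rename so plan/apply
# treats it as the same object rather than a destroy-and-recreate.
moved {
  from = aws_s3_bucket.old_logs
  to   = aws_s3_bucket.logs
}
```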
One has to wonder how much collaboration was prevented by having Hashicorp refuse to hear their users or accept PRs. At some point, opensource or not, people just stop bothering.
A year ago most people probably wouldn't have known or cared about a small new fork and I don't think it would have gotten the community support OpenTF has now.
Was thinking the same. Would be great if they provided more "experimental" features behind a flag or ENV. Then the default CLI could be the "compatibility mode". Cheers and good luck to the project!
I'll join the other members of the OpenTF Initiative who've already commented - I'm honoured to be a part of this effort and can't wait for us to push this forward in the following days, weeks, months, and years! It's been a pleasure working with everybody from all the companies involved.
Happy to answer any questions you might have!
Disclaimer: Work at Spacelift, and currently temporary Technical Lead of the OpenTF Project, until it's committee-steered.
I'm hopeful on this news. Terraform has become the standard in the industry, but incentives weren't quite aligned. My attempts to submit Pull Requests for much-wanted features (fully working documentation with new integration tests all passing) often took months to a year to be accepted and merged.
Hashicorp engineers were incentivized to work on features related to Hashicorp paid services, and Hashicorp competitors who would similarly benefit from a stronger Terraform codebase were disincentivized from potentially helping Hashicorp.
> My attempts to submit Pull Requests for much-wanted features (fully working documentation with new integration tests all passing) often took months to a year to be accepted and merged.
Isn't this always going to be the case?
Honestly, are you signing up to maintain the feature long-term? Fix bugs? Update documentation? Review PRs with small changes to the feature?
The cost of accepting a PR can be much higher than the cost of creating the PR.
Even if you promise to maintain it! What if you don't? What recourse do the maintainers have?
They can't just remove your feature, since customers started using it and you don't want to announce breaking changes and scare everybody.
If you're building an open source product with lots of users, where compatibility is important, then accepting a PR is a huge liability; you'll always want to be extremely conservative about it.
A PR is not always a gift, regardless of how innocent it looks.
First of all, congratulations, I hope OpenTF will live and prosper, I really liked Terraform when I was working with it.
I'm wondering how OpenTF will handle the plugins. Will it pull from HashiCorp's servers, or will it spin up its own plugin registry? If the latter, will the existing plugins be mirrored?
No immediate change with regards to how the plugins are handled. Longer term I think the entire plugin ecosystem could benefit from less centralized control, in the interest of all other projects that depend on it (eg. Pulumi, Crossplane etc). As OpenTF we are already working with other parties interested in keeping the ecosystem in good shape. But for now there's no change necessary, since the TF registry seems itself to just be a lightweight proxy to GitHub Packages, they don't seem to host anything themselves.
(Marcin, co-founder of Spacelift, and one of the members of the OpenTF initiative)
I think this should be addressed sooner rather than later. A lot of core plugins (e.g. AWS) are written by HashiCorp themselves and the license could be changed at any time, putting OpenTF users in danger of violating the license. Furthermore, to publish a provider one needs a HashiCorp account and agree to their Terms of Service.
I was under the impression that a lot of the cloud provider plugins were developed in close collaboration with, or even by, the cloud providers themselves.
At a glance, three of the top four contributors to the AWS provider are from outside of Hashicorp, and two of those three are AWS employees.
Thanks for the insight. I think it wouldn't hurt to have an open registry at some point, but we will need to figure out the governance model for that as part of the foundation.
Good luck! Governance is always one of the harder problems when setting up a project. Maybe the CNCF could actually help already, given the high profile nature of Terraform? I also have a few projects we set up governance for, but I don't think those would fit OpenTF as they were made for smaller projects.
The registry protocol is documented and pretty straightforward to implement (source: I implemented a private TF provider registry as static files in S3 at a past gig), and Terraform has config settings to easily point to alternative sources for plugins, so mirroring the plugin distribution should be pretty simple.
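For illustration, a CLI configuration along these lines (the mirror URL is just a placeholder) is enough to redirect provider installation to an alternative source:

```hcl
# ~/.terraformrc (or terraform.rc on Windows)
provider_installation {
  # Serve these providers from a mirror that speaks the documented
  # network mirror protocol.
  network_mirror {
    url     = "https://providers.example.com/"
    include = ["registry.terraform.io/hashicorp/*"]
  }
  # Everything else keeps installing directly from its origin registry.
  direct {
    exclude = ["registry.terraform.io/hashicorp/*"]
  }
}
```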
While learning Pulumi AWSx and TypeScript I created a proxy server for AWS Lambda [0]. As part of Spacelift we also have a private provider registry and I'm happy to turn some of that code into an equivalent open source proxy in Go.
Encryption of secrets stored in state, and/or support for a secrets backend, would be an excellent first fork feature to pull ahead. Maybe something that supports backends like AWS KMS (for encryption) or AWS Secrets Manager (for storing/retrieving secrets).
Stringing together resources to pull from a secrets manager and still having the secrets stored plainly in state is enormously frustrating. We aren’t all living in a nirvana of fast-rotating, out-of-band secrets.
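To make the complaint concrete, here's a minimal sketch of the pattern being described (the resource names and secret path are invented): even though the value is fetched from Secrets Manager at plan/apply time and never appears in the .tf files, it still lands in the state file in plaintext.

```hcl
data "aws_secretsmanager_secret_version" "db_password" {
  secret_id = "prod/app/db-password"
}

resource "aws_db_instance" "app" {
  identifier        = "app-db"
  engine            = "postgres"
  instance_class    = "db.t3.micro"
  allocated_storage = 20
  username          = "app"

  # Both the data source result and this attribute are written to the state
  # file unencrypted, no matter where the value originally came from.
  password = data.aws_secretsmanager_secret_version.db_password.secret_string
}
```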
This is actually a feature I'd love OpenTF to have and am quite passionate about, personally. My plan is to submit it once we have the RFC process in place.
Disclaimer: Work at Spacelift, and currently temporary Technical Lead of the OpenTF Project, until it's committee-steered.
> > Maybe something that supports backends like AWS KMS (for encryption) or AWS Secrets Manager (for storing/retrieving secrets).
> This is actually a feature I'd love OpenTF to have and am quite passionate about, personally
You're probably already aware, but SOPS¹ kinda fits the bill for integration here perfectly.
It supports local secrets as well as encryption via keys stored with all the big cloud providers, and it's already battle-tested as it is used heavily at Mozilla (it comes from there).
Additionally, like OpenTF, SOPS is maintained independently of any single corporation, written in Go, and distributed under the MPL-2.0 license. On its face, it seems like a match made in heaven.
SOPS is a great tool and could be a pretty killer starting point for this stuff!
Pure personal speculation here, even though I'm one of the OpenTF folks. If one wanted to make secret encryption ultimately flexible, it could become a type of extension like provider, which would work as a sidecar and simply encrypt and decrypt each secret thrown at it. It should be possible to wrap SOPS into one such plugin.
I was kind of thinking the same thing, and that SOPS would be a good fit here because (as hacky as that mechanism is!) SOPS could be downloaded as a little static Go executable during `terraform init` just like providers are. `age`, too.
And yeah, a plugin interface would be great for lower coupling, and the provider interface seems like a model that could basically be copied here. :)
Yes, and perhaps backends could work in a similar way, too?
One other thing I was thinking about was whether these plugins really need to be local. A remote gRPC server could possibly work as well, I guess? Again, pure personal speculation.
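Sketching that out, purely as an illustration: none of the syntax below exists in Terraform or OpenTF, it's just one way a provider-style encryption hook could look if it were fetched during init like any other plugin.

```hcl
terraform {
  # Hypothetical block, not real syntax. The idea is that "sops" would resolve
  # to an encryption plugin downloaded at init time, and every sensitive value
  # would be passed through it before being written to state.
  state_encryption {
    method = "sops"

    config = {
      kms_key_arn = "arn:aws:kms:us-east-1:111111111111:key/example"
    }
  }
}
```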
In general, I think supporting local workflows is important for providing a good developer experience as well as maintaining a single source of truth (although I know being a purist about this is impossible when what we're doing is managing cloud environments!). So I think that it's an important option. Additionally, when you perform the encryption locally, you don't have to think so much about transmitting the cleartext secret to whatever server/program does the encryption, so that's nice.
But cloud-stored secrets are often an exception to single-source-of-truth and the preference for local workflows anyway, and some teams reasonably prefer other workflows. And a network boundary might be a natural place to put some secret sauce for a company like yours, or even just to give users the option of plugging into a centrally managed, shared environment— even if what is running on the remote end is self-hosted and source-available or even open-source.
I am seeing parallels to the Amazon vs. ElasticSearch fiasco.
Also interesting to mention that they haven't said a single word about CLAs. If this project required a CLA (and copyright assignment) to contribute, then it wouldn't matter that it's a foundation; they could also re-licence it easily, which is exactly what they are fighting against.
CLAs are very misunderstood in the software industry. Every open source project should make its contributors sign CLAs, and you should be wary of using one that doesn't. Why? Because any single contributor can claim at any point in the future that the project is using their code illegally, and that you/your company should pay them for using their proprietary IP for all of these years. Can you prove that you audited every single line in the project you used and were sure of its copyright status? A CLA ensures exactly that. There's a reason why Linux Foundation, Apache, CNCF, Mozilla, heck even FSF all have strict CLA requirements.
Relicensing is a completely separate conversation, and there are plenty of open source CLAs which don't have that provision.
> The Developer Certificate of Origin (DCO) is a statement that a software developer agrees to, saying that the contributor is allowed to make the contribution and that the project has the right to distribute it under its license.
CLAs required by some projects effectively require the contributor to give away all rights to the receiving organization. There are other forms of CLA that do not transfer these rights, but are still agreements (which is what the A stands for) between the contributor and the receiving organization. These might allow the recipient to change the license, or they might not. The DCO is just a statement and does not permit the receiving organization to distribute under incompatible licensing terms.
I'm sure some organization can write a "DCO" which says the exact same thing. Ultimately what matters isn't the title of the document but the content. There are plenty of CLAs that just ask for the bare minimum for the project to be able to function (see ones from any of the organizations I mentioned).
Unfortunately, many CLAs give the company that controls the code the power to take the project proprietary (and they will call this the bare minimum of what they need). It's repeatedly happened, so contributors need to be careful with what they sign. Perhaps for a small bug fix they won't care, but for anything larger, beware.
You are technically correct but I do see a difference between signing a CLA with The Linux Foundation, Eclipse, ASF etc. and a purely commercial corporate entity.
You are entirely right; it's nowhere near comparable. I'm just talking about the theoretical impacts. (After all, the re-licencing was also a theoretical problem, i.e. Hashicorp might abuse the non-compete clause so Terraform is not safe to use.) I just find it interesting to mention - after all, the important details are the ones which are not talked about.
For at least a few years now, companies relicensing their open source software has not been just a theoretical possibility, but a plausible thing that could happen any time.
I don’t know what happens to the Linux Foundation when we’re all in the nursing home, but them relicensing everything is not currently plausible. It is a merely theoretical possibility.
Yeah this makes sense. "CLAs (or DCOs) assigning copyright to foundations are good, CLAs assigning copyright to corporate entities are questionable" does seem like a pretty good rule of thumb for contributors who care about this kind of thing (which I agree most people should).
For what it's worth, the CNCF is not big on CLAs in general although, as far as I know, they don't specifically prohibit them. In general, they prefer to use a developer certificate of origin.
That's a bit surprising to read because here's what @cra of the Linux Foundation said a couple years ago:
> It’s [a Developer Certificate of Origin] how the Linux kernel works, where basically it takes all the basic things that most CLAs do, which would be like, ‘Did I write this code? Did I not copy it elsewhere? Do I have the rights to give this to you, and you sign off on?’ It’s been a very successful model played out in the kernel and many other ecosystems. I’m generally not really supportive of having CLAs unless there’s a real strict business need.
FYI 99% of CNCF projects use DCO; only a handful still use CLA, but we give projects a CHOICE in the matter. I prefer DCO personally because it's a lower barrier to entry for contributors (e.g., no formal legal agreement to sign and have reviewed by lawyers).
Yes, but it's not always malicious. One example is to dual license the code. For example, Qt is available under GPL (LGPL?) and has a CLA that allows them to commercially license your contributions to businesses.
A good CLA in this instance would guarantee you that your code cannot be relicensed exclusively away from GPL. The software is still free as in freedom, but the organization can fund itself by courting businesses that would not touch GPL software.
Another example is OpenStreetMap which changed its license because we learned that the old one wasn't adequate for the purpose intended.
We have no idea how today's licenses will be challenged in the future and I believe it's a good practice to keep a door open to being able to change the license going forward.
Ensuring that it's not done maliciously is one purpose of foundations, which serve as trust anchors.
I'm still skeptical of a CLA for such a project since a ton (currently all) of the copyright is held by Hashicorp meaning they would have to consent to the future license change as well, regardless of the CLA or foundation membership. In general I think that makes a point for a CLA, but in this case I don't think it does.
> Qt is available under GPL (LGPL?) and has a CLA that allows them to commercially license your contributions to businesses.
but that's exactly the case which happened in case of Hashi, so CLA allowed them to continue working on commercial product only, with all 3p contributions being under their proprietary license.
Qt's situation is different, since they have contractual obligations to continue releasing under (L)GPL - failure to do so allows the KDE Project to release Qt under MIT.
A CLA is not required for that. A copyright assignment is. A CLA is not necessarily a copyright assignment, and a copyright assignment is not necessarily a CLA.
Ideally we would want to let people contribute freely without the need to sign anything extra. However, for the exact process we will defer to the Linux Foundation / CNCF. The ultimate goal is to ensure that a rug pull like this is never allowed to happen again.
(Marcin here, co-founder of Spacelift, one of the members of the OpenTF initiative)
If they aim for making it a CNCF project, it may very well be a simple inbound=outbound licensing (everyone keeps their copyrights). The CNCF gives projects two options: DCO or CLA, where the CLA is a very simple formality. It being a foundation and not a for-profit company already makes licensing shenanigans unlikely.
Companies saying "we will let some FTEs contribute to this project along with their regular work" is very different from Amazon paying a dedicated team to work on it full time.
Marcin here, co-founder of Spacelift, one of the members of the OpenTF initiative
We provided a dedicated team on a temporary basis. Once the project is in the foundation, we will make a financial contribution, with which dedicated developers will be funded. At that point our devs will gradually hand over to the new, fully independent team. The other members of the initiative so far follow the same pattern but I can't speak on their behalf re: exact commitments.
When someone gets a well-deserved black eye, most folks would rather put on concealer makeup than be forced explain how they got their shiner. I doubt they want to draw attention to how badly they just pissed off the open source community.
Here's hoping other HashiCorp products go the same way, creating an excellent new suite of truly OSS cloud tools. HashiCorp shot themselves in the foot, but they may have accidentally benefitted the FOSS community greatly.
This repo [0] seems to still be licensed under MPL, so there is no need for an immediate action, but if there is a willingness in the community to take it over and improve, I see no reason why we wouldn't do it.
I can't imagine how much you all have on your plates right now, so this can definitely wait.
Tools, frameworks, etc all have to be written by people, and I'm sure you're already keenly aware that poor programmer tooling can be the death of a project.
This is definitely selfish on my part since I write a lot of Terraform and lean on the language server a lot, but I hope that terraform-ls can have a couple of people dedicated to it eventually.
Unfortunately (of course!!) my own pet issues are pretty niche, since I expect there aren't many people using a lot of private registry modules like the client I work for. Hence my desire to fix it myself :)
> but I hope that terraform-ls can have a couple of people dedicated to it eventually.
I 100% agree with your perspective on tooling, and hope the OpenTF team will take terraform-ls (and hcl-lang) under their org, too, because I will never contribute one more character to any repo in the hashicorp org.
Adopting hcl-lang is very low risk, but not zero, and it would be a spiteful move for Hashicorp to introduce some BUSL-only change to hcl for the purpose of hard-forking the language.
I’d call that a bit of a stretch. Python can have full blown OOP, even multiple inheritance. It can be very functional (decorators are used a lot). It’s interpreted, not compiled. It has tons of syntax sugar. Properties, context managers are very Pythonic. Async also works entirely differently. Multi threading with channels is not really a thing in Python. Neither are interfaces (abstract base classes come close though). Python can have impressively expressive type systems nowadays, with mypy and typing. Generics, paramspecs and whatnot. Go is much more rigid in that regard.
That's very much it. At the risk of too much information I'm partially disabled, looking after my disabled wife, and working full time. I have a very limited "bucket" of stamina and strength that I can dedicate to anything else.
On the weekend I try to get through a couple of Exercism items, and rest. I really wish I had 10x the energy to contribute to so many worthy open source projects :(
In my book, mirroring what a friend recently told me about instruments: Learn Python. Ignore everything else for 1-3 years.
The main drawback is: You won't be able to write 10^3 - 10^5 events per second throughput services. That's where you need Go, (weird) Java and C++. But I don't think these are your current main problem.
However, Python exposes you to a lot of very valuable concepts in the programming language space. If you know Python well, you easily know 80% of the language fundamentals of Java, 70% of Go or Ruby and like half of Haskell or C++. And the other half of those languages is mystical wonderous ponderous voodoo magic.
TBH IDK about Haskell. I personally love it but it requires me to take a different view on problems than most other languages I've worked with. And I did all the things you've mentioned professionally at one point or another. That said, I'd highly recommend learning Haskell to anyone who's spent most of the career using "boring" languages like Go, C++ or Java. It's eye-opening.
How deep did you get into Haskell? Did you end up writing anything non-trivial in it? Do you feel that learning Haskell made you a better programmer in other languages?
I bought a paper copy of Learn You a Haskell many years ago and got about half way through it and got pulled away and haven't gone back to it, but every so often I get this longing. I love FP (currently maining in Elixir) so I wonder if I shouldn't just try to make it happen (even though I'm strapped for time for projects as is).
I never wrote a production system in Haskell, no, and haven't worked as part of a team writing Haskell for money. I wrote some convenience utilities for myself, and did more than a fair bit at codewars.
> Do you feel that learning Haskell made you a better programmer in other languages?
Exactly that. So while Haskell as such was not professionally useful to me, having learned Haskell made me much better in Ruby, for example. And more recently when I did a lot of Rego I still find the Haskell style of thinking very useful.
As fishnchips says, Haskell is more of an introspective journey imo.
Like, Monads are fucking weird.
Until you realize that monads are more about dealing with context and making data-dependencies and side-effects explicit.
And then you start realizing how most application server frameworks are about a simple task: The framework decomposes a possibly multi-threaded server into single-threaded request handling. And then does a shit-ton of other heavy-lifting for you.
Or, on the other hand, currying is a weird thing. Until we had a decorator which takes some initial parameters and transforms every incoming thing based on these parameters. It's a decorator in Java, it's currying in Haskell.
I've learned a lot from Haskell, ML, SML and OCaml, as well as Haskell at a meta-level. We've written a full compiler in OCaml in one course, and then I made it assemble microcode instructions later on as well, because it made exercises easier in that other course.
But I wouldn't do that in production. Other languages are easier to comprehend for other people.
Is it? I think it's actually extremely elegant. A function has exactly one input and returns exactly one output.
If you need two inputs, you have a function that takes one input and returns a function that takes one input and produces the output. Functions that look like they have multiple inputs are just syntactic sugar.
Learn both, you will eventually have to anyway, and it won't hurt your learning curve, in fact will probably help to see similar ideas implemented and expressed very differently.
I'd really love to see this continued for some of the other hashicorp tools that have taken the turn to closed licensing. Vault is at the top of my list, but definitely some others like consul and vagrant seem like they have healthy communities that have been responsible for making hashicorp products successful, and stand to lose out with the whole switch part of this post-facto bait+switch.
But yes, please, to adopting Vault. I don't have a horse in the race about Consul but my suspicion is such an effort would only be worthwhile if trying to adopt Nomad, too, which I gravely doubt
I am very curious about the future of Nomad. It definitely has its place for some orgs and was exciting to see an alternative to k8s gaining traction.
I think this license change will hamper adoption and contribution when teams compare a foundation supported k8s vs a source available Nomad. I haven’t heard of anyone taking this on, but I’d love to keep my eyes on it if/when it happens.
Disclaimer: early supporter of opentf, cofounder of massdriver
When you have 1000 employees and multiple investors, this was bound to happen.
Wondering about Nomad and its future. It didn't have as large a user base though, and if only containers matter, nothing can beat the simplicity of Docker Swarm.
Not sure about Vault. I found it over-engineered and complicated to grasp.
I am hoping someone will take on Vault since it really has a unique perspective on credential leases that I don't believe its competitors currently tackle
I have been trying to help https://github.com/vaagrunt/vagrunt but there are 4400 forks so it's hard to know if one of the other ones is getting "community buy in" - I just saw that one in another HN thread and tried to help out (it's always bugged me that there was no $(make dist) for Vagrant so hopefully here's my ability to fix that bug since Hashicorp was for damn sure never going to accept any such PR)
Well done all! I'm really excited to see the community taking ownership of this tool and moving it to a foundation.
I'll definitely be using OpenTF for any future projects and if I find myself a part of or leading a DevOps team again, will work to migrate that team over.
The provider framework has been kept MPL, as have the providers, so we will keep being compatible.
We will see what the longer-term future brings, and we'll react accordingly. We also have a bunch of ideas for improvements to the provider ecosystem and their capabilities, but those will go through the public RFC process once it's in place.
Disclaimer: Work at Spacelift, and currently temporary Technical Lead of the OpenTF Project, until it's committee-steered.
Do you have an idea of where you stand on incompatible changes that are strict improvements over TF? As a concrete example, https://github.com/hashicorp/terraform/issues/13022 - my only read on this is that Hashicorp aren't doing this because it removes a key selling point of Terraform Cloud.
We're planning to stay 100% backwards compatible, and in the early stages we definitely want for it to be easy for people to migrate back and forth between both tools - even have teams with different engineers using different tools.
DSL changes are honestly the thing we'd like to limit the most, but as long as they're backwards compatible, the RFC process will welcome them, and they will be discussed and considered!
The important thing is for features to be opt-in. You should be able to limit yourself to a feature-set that is Terraform-compatible, if you want. If you want to use additional features on top of that, then that will be an option as well - we definitely want to innovate.
> currently temporary Technical Lead of the OpenTF Project
Assuming you aren't committing under pseudonyms, you've to date made a single contribution to Terraform, which was a fairly trivial change (b49655724d2f96f0e68196fb949a0d625abbd60e).
I pity anyone who underestimates Kuba. OK, most people I've worked with in tech are probably smarter than me, but this guy plays in a different league.
I don’t follow? You don’t think they can set up CI and migrate issues? It seems unlikely that there will be major breaking or architectural changes imminently.
If Hashicorp were to change Terraform protocols without extremely good justification, the project would be dead immediately. The existence of Terraform in its current state is bringing billions in revenue to vendors like AWS and GCP. There's a lot of money at play. Hashicorp wouldn't risk that.
It would be a stupid cat-and-mouse game, because it would make all previous versions of legacy Terraform irrelevant, and is unlikely to cause much harm in the long run because the same protocols can be implemented by other tools.
HashiCorp manages a handful (~4 [0]) of the providers, while over 2,000 are managed by the community. I suspect breaking the interface would be further damaging to their reputation. The clouds will also have some influence around their provider.
Right, but let's be honest, I don't care about 1,995 of those providers. Nobody is using Terraform to manage Spotify playlists [0]. The big ones are what matter (and HashiCorp manages the big ones).
In one swoop, we've gone from 2k to 300. I had a look at a handful of them, and they're ranging from 10-100 downloads a week. Our CI pulls the AWS provider more often than that.
There's probably 20(?) from the 300 list that would actually cause enough outcry to be worth talking about, out of 2000 providers.
If they do, they likely have to give plenty of notice due to third-party providers, and will still have to maintain the old protocol for older versions of all the providers.
I don't think there is a future for TFE in the face of OpenTF. It's a dumb Terraform runner, and the competition has far more features and integrations with other tools. The BSL was HashiCorp's way to curtail competitors, and it has backfired spectacularly.
Ohad co-founder of env0 here, early supporter in the OpenTF initiative.
TFE and TFC have a future. Competition is good for everybody. It forces innovation.
However, only OpenTF will make sure to properly differentiate between the OSS layer and the commercial layer.
The competition is between real platform engineering tools like yours, spacelift, humanitec, morpheus, etc. Real innovation, not just running TF plan/apply.
It’s a traumatic birth for sure, but I think in the long run Hashicorp took the right approach. There are already more FTE commitments from the community than Hashicorp actually had working on it.
Marcin here, co-founder of Spacelift, one of the members of the OpenTF initiative
I'm not sure. They voluntarily exchanged the throne of a benevolent ruler for an audience seat where they're one of many. I would personally think that this privileged position was worth more than the cost of the FTEs devoted to the project. But obviously I don't see the whole picture.
How did Hashicorp "take the right approach"? They wanted $$money$$, they didn't want Terraform to be bigger or better or to cede control of it to anyone else.
Having seen Spacelift, Pulumi, etc. over the last few years, I have little trouble envisioning them jumping to help with some alacrity if a Hashicorp that did want improvements did the work to move it to a software foundation themselves. But again: $$money$$.
100%. At Spacelift we have no beef with HashiCorp, and on a personal level many of us (me being the chief fanboy) admire their earlier work. We reached out to try and work together but the answer was a clear "no".
There is a difference between "making money" and "capturing the entire thing because our growth-addled brains cannot conceive of not allocating all of the money to us".
If you fire on the ecosystem, sometimes (not often enough, but sometimes) the ecosystem fires back.
> There is a difference between "making money" and "capturing the entire thing because our growth-addled brains cannot conceive of not allocating all of the money to us".
Trying to capture the entire thing, in a way that backfires and gets them a smaller piece of the pie. Which is, incidentally, an answer to the question: a business might be obligated to act in its own interest, but this is the kind of thing that can result in worse financial outcomes, not better.
> There is a difference between "making money" and "capturing the entire thing because our growth-addled brains cannot conceive of not allocating all of the money to us".
I feel like this post suggests a misunderstanding about how publicly traded, for profit companies work. Because it is literally their job to capture the entire thing and whatever else they can capture. That's why going public is not awesome for FOSS-oriented companies. We've been here before.
I mean, I understand it quite well. Through this understanding I am able to come to the conclusion that it is bad.
Like, the very concept of a corporation is ordained by a society to provide benefit to the society, not (just) to the shareholders--otherwise there's no reason to enable its creation. Did Hashicorp's action better the society that granted its charter, or attempt to (and hopefully fail to) better its P&L?
I feel it is not unreasonable to simultaneously think that making some money in a sustainable fashion is a good thing--and at the same time also think that there is real value in expecting a functional moralism out of what are legally persons. I do realize that that is out of step with current (American) jurisprudence, but I also don't much care about that.
> it is literally their job to capture the entire thing and whatever else they can capture
Their job is to maximize shareholder value, and if attempting to capture the entire thing reduces shareholder value, which may occur in this case, they're not doing their job well.
When I read '$$money$$' I picture a cartoon character who starts excitedly staring at something with dollar signs in their eyeballs before embarking on some harebrained scheme.
I don't think it's supposed to indicate that profit seeking is somehow bad or unnatural for companies to pursue, but that Hashicorp got caught up in something foolish and shortsighted because of some excessive eagerness and carelessness.
Marcin here, co-founder of Spacelift, one of the members of the OpenTF initiative
As a long-term HashiCorp fanboy, and a true fan of the work of Mitchell this would be my dream come true, on a very personal level. It would also be a great day for the entire community.
Infrastructure state management is a massive opportunity, but I'm not sure that Terraform is the way forward: it should use an existing programming language; state should probably be in git, or even better it should be possible to analyse state in real time; and you should be able to spin up a local environment on your own computer. (A sketch of how state location is configured today follows below.)
Managing a cloud deployment in 2023 feels a lot like managing a Sharepoint server in 2010. You really want it all to be scriptable, but the tooling isn’t quite there yet.
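For reference, where state lives today is a per-configuration backend setting rather than something you can swap arbitrarily; backends are compiled into the Terraform binary, which is part of why ideas like git-backed or real-time state would need changes to core. A minimal sketch of how it's configured now, assuming an S3 backend (bucket, key, and region are placeholders):

```hcl
terraform {
  backend "s3" {
    # Placeholders: point these at a real bucket/key/region.
    bucket = "my-terraform-state"
    key    = "prod/terraform.tfstate"
    region = "eu-west-1"
  }
}
```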
It seems to be a bad sign that the repo for this isn't even open yet...
I understand that on day one the tests may not yet be green or some links in documents might not be updated, but I would feel a lot better if there was a public github repo that had a README.md with a big note at the top saying "Work is underway, don't use this yet".
Without such a repo, it feels very much like open-sourceness and community involvement are an afterthought.
Tests are all green, actually! We believe we've even resurrected a large number of previously broken tests.
Really, there are a few things, like getting rid of trademark infringement and setting up some basic community guardrails, that we need to finish prior to publishing.
We're doing our best to get this out and public as soon as possible.
You can take a look at the new public roadmap repo[0] to track progress towards milestones such as publishing the repo.
Under normal circumstances - yes. But the moment one creates an open fork of the repo, they get a trademark violation letter from Hashi's legal team. So in order to make the repo public, we need to be absolutely sure we're clean.
> Who is maintaining OpenTF? Is there enough firepower behind the project?
> So far, four companies pledged the equivalent of 14 full-time engineers (FTEs) to the OpenTF initiative. We expect this number to at least double in the following few weeks. To give you some perspective, Terraform was effectively maintained by about 5 FTEs from HashiCorp in the last 2 years. If you don’t believe us, look at their repository.
That's great news. I wonder what will happen to the ongoing roadmap and all the bug/feature requests that were pending.
I fear that sooner or later the APIs will become non-compatible, so the only "good move" would be to jump to OpenTF as soon as possible to avoid complications. But for enterprises it will be a tough jump: updating all tooling, scripts, deployment flows, CI, compliance checks, etc.
We have the tooling in place to ensure full equivalence of the most important parts, especially the statefile. We will also take great care to not introduce DSL changes that would make it hard to switch back and forth between OpenTF and the legacy Terraform. Looking at the codebase for the last week, and the list of PRs on the original repo ignored to oblivion I believe there's a lot of space for innovation that does not break the interop guarantee.
[Marcin, co-founder of Spacelift, one of the members of the OpenTF initiative]
I think that's a good goal, but it seems pretty out of your hands. After your first addition to the language or state file (which by the sound of it is something you hope to do soon, and I applaud that), TF could immediately introduce something making compatibility impossible without you reverting and breaking the feature.
They could, I guess, but with that they'd be abandoning support for all their previous versions too. Plus it's software; there are a great many options here. I won't get into the nitty gritty, to spare everyone else the gruesome details, but I'm not going to be losing sleep over this just now.
I don't see any mention of migrating the issues over, is that part of the plan?
I also don't have a good grasp of what the situation is for the in-flight PRs on Hashicorp's repo: are those changes still MPL until they land in "main" and get BUSL-ed? IOW, is the PR author responsible for resubmitting, or could any such patches be pulled in safely (err, DCO and CLA nonsense aside)?
> Even better would be to leave a note rescinding permission for Hashicorp to include that work, though I'm not sure whether that is a viable strategy.
I don't see how that would work. If a valid offer has already been made under a license agreement, it's then entirely up to Hashicorp whether to merge it or not.
I would argue that the license change constitutes a material breach of whatever agreement existed previously, but IANAL so what do I know.
Regardless, the legality of it does not matter. The already significant social consequence would be compounded if they chose to merge something that the contributor no longer wanted to see merged. It might be legal, but it treads even further into an ethical gray area.
Their current CLA (https://www.hashicorp.com/cla) provides for a full copyright license grant. Assuming that's not new (and I have no reason to believe it is), contributors were already signing away their right to be choosy in that regard; Hashicorp can already relicense any previous contribution, which is why this change is even possible in the first place.
I agree that it might cause some bad vibes, but it's no different for these in-flight PRs than it is for all of the historical ones that were already merged.
While I really appreciate the effort and am astounded that they already have a pledge of 10 full-time engineers for 5 years, I am left wondering two things:
A) This could easily be another Reddit moment, where corps and actual bill payers forget about it in two weeks since it simply does not concern them. The supporting list mostly consists of actual competitors.
B) If they can manage all these resources, why can't we simply do something new along the way? Terraform has its flaws, and everyone who has seriously had to work with it can name many.
For instance: did you know that you can't easily shut down a server when deleting a Terraform resource? At least not without hacks or workarounds (the usual one is sketched below).
It's time for a "cloud-init native" solution to all these problems, and while I appreciate the effort, I think this fork will actually hinder future development by having things remain the same.
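For context on that complaint, the usual workaround today is a destroy-time provisioner hanging off a null_resource, which is exactly the kind of hack being referred to. A minimal sketch, assuming the AWS provider and the AWS CLI available on the runner; resource names and the AMI are placeholders:

```hcl
resource "aws_instance" "web" {
  ami           = "ami-0123456789abcdef0" # placeholder
  instance_type = "t3.micro"
}

# Workaround: issue a shutdown when this resource is destroyed.
# Destroy-time provisioners may only reference self, hence the triggers copy.
resource "null_resource" "graceful_shutdown" {
  triggers = {
    instance_id = aws_instance.web.id
  }

  provisioner "local-exec" {
    when    = destroy
    command = "aws ec2 stop-instances --instance-ids ${self.triggers.instance_id}"
  }
}
```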
> why can we just simply not do something new along the way
Marcin here, one of the members of the OpenTF initiative.
99% of the value of Terraform is its ecosystem - providers, modules, tutorials, courses etc., and millions of lines of battle-tested production code already written in that language. 1% of the value is in the tool itself, with the tool serving as the gatekeeper to all these riches. One of the things that I personally want to see is opening up the codebase to allow building new things on top of it, which then don't need to reinvent the wheel.
You dislike HCL? Fine, have something else give the tool an AST and we'll take it from here. You don't need/want to go through the CLI? Not a problem, embed some of these libraries directly in your app.
Not sure what you mean exactly about "shutting down a server when deleting a Terraform resource". But do you think that's something inherent to the design that OpenTF wouldn't be able to address?
Personally I think Terraform hit on a really good pattern for IaC, and while there are lots of rough edges that could be polished, the overall approach is by far the best fit yet invented for the problem it's aiming to solve.
I'm not sure what they mean by that. But one case where Terraform's model doesn't work very well is updating a certificate on a load balancer (to be concrete, say an ACM certificate attached to an NLB in AWS) to a new cert and removing the old one. The proper way to do that without service interruption is the following:
1. Create new certificate
2. Update the certificate attached to the load balancer
3. Delete old certificate
But it isn't actually possible to do that in that order with terraform because of how dependencies work.
By default what terraform will try to do is:
1. Delete the old certificate. This will either fail because the certificate is in use (as is the case in AWS), or destroy a resource that is still in use and cause the load balancer to enter a bad state
2. Create new certificate
3. Update the load balancer
The only ways I have found to work around this are targeted applies (which are discouraged) or splitting the change up into multiple code changes, with separate applies for each change (though see the lifecycle sketch below).
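For what it's worth, the workaround that has usually worked for this particular case is a create_before_destroy lifecycle on the certificate, which inverts the default ordering so the replacement is created and wired in before the old one is destroyed. A minimal sketch, assuming the AWS provider and an existing aws_lb.main / aws_lb_target_group.app; the domain is a placeholder and DNS validation is omitted:

```hcl
resource "aws_acm_certificate" "cert" {
  domain_name       = "example.com" # placeholder
  validation_method = "DNS"

  lifecycle {
    # Create the new certificate, update the listener to reference it,
    # and only then destroy the old (deposed) certificate.
    create_before_destroy = true
  }
}

resource "aws_lb_listener" "tls" {
  load_balancer_arn = aws_lb.main.arn
  port              = 443
  protocol          = "TLS"
  certificate_arn   = aws_acm_certificate.cert.arn

  default_action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.app.arn
  }
}
```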
I don't think OpenTF interop with future TF is important. Particularly if OpenTF just moves at a faster pace, and gets way better in time. I would say leave TF in the dust and just start innovating.
Can we finally fix all the annoyingly missing features, like auto-import of existing resources? And providers that are just executables, so we don't need to write them in Go or against an ABI? I'm sure lots of users have things they'd like to fix.
Only time will tell, but my mental model is that this new stewardship will be a lot more open to community input than the Hashicorp "we're too busy, pound sand" model has been
I dunno if "auto import" will ever be a thing, but I will be first in line to change its model to actually contact the provider during `plan`, which neither TF nor Pulumi do right now and it's a cause of endless wasted hours and borked deploys
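Worth noting as a baseline: Terraform 1.5 did ship config-driven import blocks plus `terraform plan -generate-config-out=...`, which is a partial step toward auto-import, though nothing gets imported that you haven't declared. A minimal sketch, assuming the AWS provider; the bucket name is a placeholder:

```hcl
# Tell the next plan/apply to adopt an existing bucket into state.
import {
  to = aws_s3_bucket.logs
  id = "my-existing-logs-bucket" # placeholder
}

resource "aws_s3_bucket" "logs" {
  bucket = "my-existing-logs-bucket"
}
```

Running `terraform plan -generate-config-out=generated.tf` can even draft the resource block from the real object; what's still missing is any discovery of resources you haven't listed.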
This is a big question for me. The CDK style form of authoring IaC is way better than config files in HCL/YML/JSON. It has some rough edges and I do wish it wasn't so gung-ho on the magical object-oriented constructor side-effects, but it's still a net improvement.
CDKTF looks promising. How will OpenTF interplay with it?
I'm not into the CDK at all to be honest but isn't it a glorified preprocessor that generates the JSON representation of Terraform input, which is then processed as usual by your regular Terraform binary?
To each his own I guess. I personally like the HTML-like mental model that HCL gives me. In a sense not being a programming language is to me a benefit. If anything, I'd love to see an equivalent of CSS to Terraform - an idea someone smarter than me was floating not so long ago. Decoupling structure from specifics - I can see a well thought-out implementation of this concept actually taking off.
Re: programming languages... I love programming. Just not my infra.
> We completed all documents required for OpenTF to become part of the Linux Foundation with the end goal of having OpenTF as part of Cloud Native Computing Foundation.
Anyone know why this was done in this order? Why not apply for CNCF right from the get-go?
The CNCF process takes a bit longer and CNCF is part of the LF anyway. The project governance can benefit from being part of a reputable foundation as soon as practicable.
We need to raise awareness about this fork so people know about, join in, and support the move. Share it with your friends and co-workers. Upvote or star everything you can. Let's push this forward people!
Wow I had no idea so few people worked on TF. Where are the bulk of Hashicorp working?
> So far, four companies pledged the equivalent of 14 full-time engineers (FTEs) to the OpenTF initiative. We expect this number to at least double in the following few weeks. To give you some perspective, Terraform was effectively maintained by about 5 FTEs from HashiCorp in the last 2 years. If you don’t believe us, look at their repository.
Well, five is about the number of salespeople I remember joining an absolutely awful call with them a few years ago, so that's my guess (edit: lol at getting downvoted for relaying an actual experience that happened. I've been using Vagrant since the original hobo logo circa 2012-2013 and have always been a HC fan; get off your high horse).
env0 founder here - we are honored to collaborate on bringing OpenTF to the world, together with our friends and backed by massive support from the community.
Expect the repository to be published very soon, once we’re officially part of a foundation and have some basic community guardrails and processes in place.
Terraform has got to be the only tool I've used a lot and still don't like one bit. The license change was the last push I needed to look seriously into Pulumi. What a wonderful tool in comparison, I'm not touching TF again if I can help it. I also got the feeling this fork will be better than the original eventually.
We are super excited to be supporting this initiative. The community around Terraform has been pushed aside for too long.
Terraform is the most widely adopted tool for managing cloud infrastructure. Putting it in the hands of a foundation that can ensure it remains open-source, community-driven, and impartial is crucial to so many infrastructure and operations teams around the globe.
So I'm pretty new to Terraform, and in my team we were planning to set it up for our infrastructure next month. Does that change something for us? If it stays backward compatible I assume not much would have to be done for switching? I'd definitely prefer an open license.
We're still in the first inning. A little too soon to declare victory IMHO. Their moat hasn't even been tested yet. Adoption is what actually matters. FTE commitments will disappear quickly if the project isn't able to get adoption.
Also I know Hashicorp has a good moat (how good we don't know yet). Some very big companies pay Hashicorp and there's no way they're going to leave for opentf anytime soon. The support alone is plenty valuable enough to keep them there.
From the Hashicorp post: "Vendors who provide competitive services built on our community products will no longer be able to incorporate future releases, bug fixes, or security patches contributed to our products."
Does HashiCorp pose a patent threat to OpenTF? If Terraform implements a feature and OpenTF (cleanroom) copies it, could HashiCorp patent the feature to prevent OpenTF from using it?
You say you have 14 FTEs, but do you have at least one major developer who contributed to Terraform? Could you give me the list of people who will work on this project?
I’m not sure how I feel about them saying they’re forking something, that it will be truly open source, and then saying they’ll publish the repo in 1-2 weeks.
Making sure there aren't any trademark infringements left, that we have some basic community process in place, etc.
There's unfortunately a bunch of these things we have to do before we can publish. We created a public roadmap repo if you'd like to track the progress[0]. We're doing our best to make it public as soon as possible.
I don't mind it. If a business is providing good software, and they have to make a license change to prevent someone from wholesale copying the work and selling it as a direct competitor, I'm for it! I'd rather have the sound business _in business_ and continuing to provide good software.
> Your project gets to have top tier programmers as maintainers, make all the nifty features, fix all bugs etc. for free or at minimum cost
In most cases at these companies it's maintained and created at the company's expense. I would expect that 90-99% of the code, the product development (talking to users, understanding needs, etc.), even the devrel/marketing (being at every conference to convince people to use it, and that codification is good) comes from them; it's a lot of money, resources, and effort.
> New startup, cool idea, not much budget to hire engineers
In this case a few folks (most likely the founders) build it from the ground up initially, I think. Please, let's not forget about this.
The important thing here is to discuss why they did this (I mean the relicensing). Most likely trying to create a moat against a lot of other VC-funded companies that play in this space? Not sure. It would be great to know their exact concern.
I think HashiCorp, as an infrastructure company, made a big mistake by choosing the BSL license (which was primarily designed for databases). Ultimately, this will destroy their community which in the long-term will also mean a lot of lost revenue...
It is also very interesting to see how this will play out specifically for Vault vs Infisical. So far, Infisical has been growing incredibly, especially after the license switch.
This is handled by the providers, not by Terraform core, and those are maintained fully or partially by the cloud vendors. Terraform core is but a small part of a much larger ecosystem.
I wish Hashicorp all the luck in the world with their product, however they choose to license it, and whether they chose to keep it Open Source or not.
It's their product -- they can do as they wish.
Similarly, I wish the OpenTF Foundation all of the luck in the world in their efforts to persuade Hashicorp to keep Terraform Open Source.
I am not affiliated with either group; I wish both groups well in their respective positions; I am not affected in any way by Terraform going closed source, or forking, or whatever.
One of the paradigms I think in is that of a Software Engineer, and that of first principles.
The basic "problem" that TerraForm is attempting to solve is that of "cloud provisioning".
"Cloud Provisioning" is a fancy name for "Infrastructure Provisioning" which is a fancy name for both "Server Provisioning" and "provisioning other services related to servers" remotely.
Which in turn are fancy names for "create and configure a remote computer or VM" (VM's are effectively remote computers) or some piece of software or infrastructure related to one.
Which are in turn fancy names for "spin up a remote server and/or something related to one or more of them..."
Pretty simple, pretty straightforward, once we drill down through all of the layers of linguistic sediment...
Problem is, in general, no two cloud computing providers -- use the same API.
That is, AWS's API is in general, different from Azure, which is different from GCP, which is different from other providers, etc., etc.
Here, have a look at some of them (there are quite a few!):
The net effect is, or should be, that Terraform creates its own abstraction layer, its own API, over all of these other disparate and non-unified APIs, sort of like "The One API to bind them" (compare to how Microsoft Windows in the early days created a unified API across disparate PC hardware components created by multiple vendors).
With this unified API (or secondary abstraction layer if you prefer, an API of APIs), code can now be run to programmatically add/modify/delete cloud infrastructure components (which, remember, is a fancy way to say "servers and things related to remote servers") as needed. A minimal sketch of what that looks like follows below.
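To make the "one workflow over many APIs" point concrete, here is a minimal sketch: two unrelated clouds, two provider schemas, but one plan/apply lifecycle and one state. Provider versions, images, and names are placeholders:

```hcl
terraform {
  required_providers {
    aws    = { source = "hashicorp/aws" }
    google = { source = "hashicorp/google" }
  }
}

# AWS's API, wrapped by the aws provider...
resource "aws_instance" "web" {
  ami           = "ami-0123456789abcdef0" # placeholder
  instance_type = "t3.micro"
}

# ...and GCP's API, wrapped by the google provider, driven by the same
# plan/apply workflow and recorded in the same state.
resource "google_compute_instance" "web" {
  name         = "web"
  machine_type = "e2-micro"
  zone         = "us-central1-a"

  boot_disk {
    initialize_params {
      image = "debian-cloud/debian-12"
    }
  }

  network_interface {
    network = "default"
  }
}
```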
Thinking in this way then, it wouldn't be necessary to fork Terraform and/or keep it Open Source.
If one were a Software Engineer, and one were inclined to, they could start building a rival Open Source product which would match the above functionality, completely from scratch with no dependencies on Hashicorp's code or licensing.
Already there seem to be more than a few Open Source projects in this space; and while many may lack the depth and breadth of Hashicorp's product, it will only be a matter of time before one or more becomes dominant.
The key word to search for in this space, on GitHub, is "cloud management"
What follows is a list of early Open Source contenders in this space:
It would be amazing if these efforts included fully documenting Terraform's Go interface and developing it into a first-class library. Being forced to interface with Terraform through HCL really holds it back.
That's literally one of my personal goals - to open up the libraries so that new and beautiful things can be built on top of it without having to reinvent the wheel. You will see it being part of the manifesto which says:
_Layered and modular - with a programmer-friendly project structure to encourage building on top, enabling a new vibrant ecosystem of tools and integrations_
Looking forward to seeing what you develop or even just expose with documentation! Terraform felt artificially constrained by the UI when I used it to MVP my CD platform. I suspect Hashicorp uses an undocumented Go interface internally for Terraform Cloud.
Sorry I should have been more clear, I meant the Go code that underlies the terminal UI. Maybe I'm totally off-base on my assumption, but my assumption is they aren't booting up individual terraform processes behind the scenes.
I'm really curious what Hashicorp expected here. I suppose their next move is to control access to Terraform Cloud by versioning. But that would chop off a big portion of their userbase. So maybe they grandfather old Terraform versions in and expect new users to use 1.15 or 1.5, I forget.
My guess is they went IPO. There was a massive cash infusion. Good. Eighteen months go by and people will be exiting. That means cash and talent leave.
Now sales are tough. Pricing is terrible. We've been using their services for years while being offered enterprise services without much incentive.
Why the license change? To lock out competitors, or, a more charitable assumption: to secure future service offerings without being exploited by cloud offerings that use their code while competing.
The decision was a business decision. The amount they spend on sales to sell what I think is an amazing product, just overpriced, is their doom. Why lock an enterprise into a three-year contract tied to usage?