> We completed all documents required for OpenTF to become part of the Linux Foundation with the end goal of having OpenTF as part of Cloud Native Computing Foundation
Imho the best possible choice, and one that was easy to see coming when they announced they were joining "an existing foundation".
I agree. I have been outspoken in my criticism of the LF -- but I would like to think that I have been fair as well, calling out cases where they are the right fit for a project. And in this case, for myriad reasons, they are indeed the right fit. Kudos to the OpenTF crew -- we loved speaking with some of them on Monday[0], and left enthusiastic for the future of OpenTF!
Personally speaking, I wish it had been OpenInfra.
I think they have a stronger sense of community building, and more intentionally make space for individuals to take leadership roles that are truly independent of their employer (member company) affiliations.
As someone who has been involved in many open source foundations over the years, they all have their pros/cons. If you are looking for the most customized approach, the Linux Foundation (LF) is probably the most customizable, as they have built hundreds of entities that many people may not think of as being part of the LF... CNCF... GraphQL Foundation... R Consortium... OpenJS Foundation... Overture Maps Foundation... The LF is really "foundation as a service", and they are best at ecosystem building, from my biased perspective.
There are other foundations out there with their own advantages... the ASF (Apache Software Foundation) is very lightweight and pretty much accepts anything open source as long as you adhere to their fairly simple rules... the EF (Eclipse Foundation) is great if you need a European base, etc.
Seeing OpenTF on the CNCF website would be glorious. TF has become too ingrained in the cloud operating model, CNCF is where it belongs. Maybe a Vault fork will join it someday.
Will HashiCorp remain relevant throughout this? Seeing a lot of parallels with Red Hat's recent mistake...
There are parallels, but it's not the same. Red Hat's projects are still all fully open source. And it's not clear it's a mistake yet. (I work for Red Hat, but this is my personal opinion.)
Also FWIW Red Hat license policy (as implemented publicly through Fedora) disallows software under the Business Source License:
https://gitlab.com/fedora/legal/fedora-license-data/-/blob/m...
Red Hat has previously worked to eliminate product dependencies on 'source available' licenses, and we're currently having to do this wrt HashiCorp stuff.
Not sure how many details you can provide, but I know RH products use Terraform under the covers for a few things (like in OpenShift). Are you removing this functionality because it's no longer FOSS, or over fears around the BSL verbiage?
Since Red Hat is at the earliest stages of grappling with this issue and I can't speak for the teams involved, I don't think there's anything I can say, other than that our corporate policies on product licensing by default do not allow stuff under licenses like BUSL-1.1. The only case I am aware of offhand where Red Hat briefly had a 'source available' license in a product concerned some code that was transferred from IBM to Red Hat (the source-available component was third party with respect to both IBM and Red Hat; IBM does not, or at least did not, have the same restrictions on the use of such licenses that Red Hat has).
Just speaking personally, I'm happy to see this fork occurring and hope they succeed in joining CNCF.
For sure it will not update to BUSL-licensed versions of Terraform as mentioned above, but I can't say if it will stay on an older version, use OpenTF, use Ansible or something else.
Well, they clearly alienated themselves from the community, or a significant part of it. I'm not sure if it's a mistake from a business perspective but early leaders of Red Hat were very careful to collaborate with the community.
I can say that the scientific computing community has been deeply affected by this move. They wanted to eliminate "The Freeloaders", but the collateral damage was enormous, and they either didn't see, or don't want to see, what they have done.
The thing is, the vast majority of these systems won't flock to Red Hat, and won't continue to use CentOS either.
Yeah a large portion of research computing standardized on Red Hat (see e.g. Scientific Linux). The stability is quite important when trying to run ancient/niche scientific code that sometimes would only support (or even build on) Red Hat, and when you have a large number of nodes that need to be essentially bug-for-bug identical you want the package churn and update cycle to be kept to an absolute minimum.
The licensing of real RHEL never could have made sense in the HPC space, and I'd be shocked if a meaningful number of deployments would be moved to purchase RHEL now.
When I was a "sysadmin" in this space I always kind of personally preferred Debian, which has similar longevity to its release cycle, but it could never gain much traction.
I hope at this current juncture the HPC community might rethink its investment in the RHAT ecosystem and give Debian a chance.
> a large portion of research computing standardized on Red Hat (see e.g. Scientific Linux). The stability is quite important when trying to run ancient/niche scientific code that sometimes would only support (or even build on) Red Hat
> I hope at this current juncture the HPC community might rethink its investment in the RHAT ecosystem and give Debian a chance.
Nowadays Nix and Guix are available, and they're fundamentally better fits for this problem. Traditional Linux package managers aren't really suited to scientific reproducibility or archival work. It would be much better to turn more towards functional package management than just swap in another distro.
> Nowadays Nix and Guix are available, and they're fundamentally better fits for this problem.
No, they're not. First of all, reproducible installation of HPC nodes is a solved problem (we have xCAT to boot and provision nodes, for example). However, the biggest factor is the hardware we use in these nodes.
An HPC node contains at least an InfiniBand interface, and you generally use the manufacturer-provided OFED for optimal performance, which requires a supported kernel.
I wasn't talking about NixOS or GuixSD, or OS installation.
I was talking about tools that let you run ancient software on modern distros. With Nix and Guix you don't have to hold your whole OS back just to run 'ancient software'.
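To make that concrete, here's a minimal sketch of the workflow I mean (not from any particular deployment; the nixpkgs branch and package names are just illustrative). You pin an old nixpkgs snapshot in a shell.nix and get a legacy toolchain on an otherwise modern host:

    # shell.nix -- illustrative sketch: pin a historical nixpkgs snapshot so a
    # legacy toolchain keeps building identically on a modern host distro.
    let
      # Any fixed nixpkgs revision or branch works; nixos-21.05 is only an example.
      oldPkgs = import (fetchTarball
        "https://github.com/NixOS/nixpkgs/archive/nixos-21.05.tar.gz") { };
    in
    oldPkgs.mkShell {
      # The old compiler and MPI stack come from the pinned snapshot,
      # independent of whatever the host OS ships today.
      buildInputs = [ oldPkgs.gcc oldPkgs.openmpi ];
    }

Running nix-shell in that directory drops you into an environment built from the pinned snapshot, regardless of how new the host distro is.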
Well, if my memory serves right, this investment started with Red Hat's support for CERN's Scientific Linux and snowballed from there.
This snowball was then solidified by the hardware we use, namely InfiniBand and GPUs, plus the filesystems we use (Lustre or IBM's GPFS), which require specific operating systems and kernels to work the way they should.
It's not as simple as "Mneh, I like Debian more, let's replace it".
While I strictly use Debian on my personal systems, we can't do that on our clusters.
Red Hat is also strictly against copyright assignment agreements in general, and keeps many of its projects under the GPL, so few Red Hat projects could realistically be relicensed like this to begin with.
IBM probably disagrees, and as much as people expected RH to show IBM how to work, I think history is repeating itself and things are happening the way they always have.
I understand why it's tempting to buy into this narrative but it is just not the case.
Aside from the fact that IBM had no involvement in the recent decision relating to git.centos.org (if I remember correctly, IBM found out about it when it was publicly announced), IBM has had basically zero influence on any aspect of Red Hat open source development or project or product licensing policy.
On the other hand, Red Hat has had some limited influence on IBM's open source practices. For example, IBM has moved away from using CLAs for its open source projects, I believe mainly out of a desire to follow Red Hat's example. I'm not aware of any use of copyright assignment by IBM projects.
Your comment dances around the point so avidly that I can't follow it. How have things been happening, and why would they happen now, at Red Hat?
Allow me to spell it out: if IBM could guarantee themselves maintained or growing market share in the short term while simultaneously clamping down on anything but closed-source licenses, they would. IBM didn't buy Red Hat because they think it's doing things the "right way". They bought Red Hat because they thought they could make money with it.
Fully open source in the strictest possible sense, but with the added caveat that if you choose to exercise your rights under the GPL you'll no longer be able to do business with Red Hat [0]. I personally wouldn't categorize Red Hat's current position as compatible with the ethos of FOSS.
As with Sun in the old days: good luck actually collaborating as you would in a healthy open source project. But the license doesn't specify that there should be a community around anything, so it's all good.
For them to get accepted into the CNCF would require relicensing a large amount of MPL-licensed work. What's always been confusing to me about HashiCorp's change, and about any subsequent relicensing of OpenTF, is that I know for a fact that not everyone who contributed code to Terraform signed the CLA and granted permission to relicense.
I suspect that if OpenTF tries to relicense to a more permissive license like Apache 2 (rather than a less permissive one, as with the BSL), we might see some fireworks.
The CNCF has made exceptions on their license policy before, specifically for MPL based software. It'll probably be easier for OpenTF to go through that process than to relicense (which is likely not even possible for anyone other than Hashicorp).
Disclosure: I'm on the CNCF legal committee, which mainly makes recommendations to the CNCF board on things like exceptions to CNCF's fairly strict licensing policy.
This is correct, but I believe MPL has never been approved as a main project license for a CNCF project before, as opposed to a license of a third party dependency (the default rule is that such projects must be under the Apache License 2.0). FWIW I would not hesitate to support such a request for a policy exception.
That's great to hear. We at Oxide are huge fans of the MPLv2 and it's our default license for everything; I think it's reasonable that the default expectation for CNCF is Apache 2.0, but would love for MPLv2 to also be considered a first-class license!
In my personal opinion, there's no good reason to have a license policy at the CNCF, or any Linux Foundation directed fund, that makes using copyleft licenses so burdensome, especially when they are as "weak" as MPL 2.0 is.
I know that there are Reasons. I just don't think they are good ones.