
OpenShift is a fork of Kubernetes. Case in point: OpenShift has functionality called a Route, which isn't in Kubernetes. Kubernetes, of course, later went and added something similar called Ingress.
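
To make the difference concrete, here's a minimal sketch of a Route and the roughly equivalent Ingress (names, hostname, and port are made up; the API versions are the era-appropriate ones as far as I know):

    # OpenShift Route: exposes a Service at a hostname via the OpenShift router.
    apiVersion: route.openshift.io/v1
    kind: Route
    metadata:
      name: myapp
    spec:
      host: myapp.example.com
      to:
        kind: Service
        name: myapp

    # Roughly equivalent upstream Kubernetes Ingress (still a beta API here).
    apiVersion: extensions/v1beta1
    kind: Ingress
    metadata:
      name: myapp
    spec:
      rules:
      - host: myapp.example.com
        http:
          paths:
          - backend:
              serviceName: myapp
              servicePort: 8080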

This means that any time Kubernetes does a release, RedHat has to merge all of those changes with their local changes that aren't part of the upstream project.

This is exactly the same sort of thing that RedHat does with their Linux kernels, and it's exactly why RHEL has been such a terrible platform to run Docker on: RedHat only pulls some of the upstream changes into their kernels rather than upgrading everything.

This sort of merging is slow, error-prone, and costly. If you use anything they've added that isn't upstream, it creates vendor lock-in. Sure, OpenShift's Route code is open source, but you can't take it to any other vendor without building everything yourself. Want that new feature in the latest Kubernetes release? You'd better be prepared to wait.

As time goes on it will become harder and harder for them to maintain this fork. If OpenShift doesn't become the dominant Kubernetes distribution, RedHat may lose interest in it, and then you'll lose maintenance of that fork. For that matter, if RedHat loses the maintainers as employees, they may lose their ability to maintain the fork.

It's my opinion that anyone buying OpenShift is playing with fire.

The fact that RedHat has failed to get their modifications integrated upstream and has to maintain them itself is a massive failure on their part. This sort of thing was understandable in the past, but we really should expect more of them now.



> The fact that RedHat has failed to get their modifications integrated upstream and has to maintain them itself is a massive failure on their part.

Not really correct. We treated ingress as routes v2. Some of the design choices for ingress (which is still beta, and may change again before reaching stable) were improvements, but others created more problems.

RBAC, most of the authentication code, a huge amount of performance work, podsecuritypolicy, egress network policy, and many others all originated in OpenShift, and then we moved or helped move them into Kube. When we did that, we worked in the community to improve those features. And then we do all the work in OpenShift to make them transparently (mostly) available to the early adopters. For instance, OpenShift RBAC APIs in 3.7 now sit on top of Kube RBAC, and you can use either API. We'll continue supporting that for a long time so that users can switch at their leisure.
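
To make the dual-API point concrete, a minimal sketch of a Kube RBAC Role and RoleBinding (names and namespace are made up; older clusters expose this as v1beta1):

    # Kube RBAC: a namespaced Role granting read access to pods...
    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      name: pod-reader
      namespace: dev
    rules:
    - apiGroups: [""]          # "" is the core API group
      resources: ["pods"]
      verbs: ["get", "list", "watch"]

    # ...and a RoleBinding granting it to a (hypothetical) user.
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: read-pods
      namespace: dev
    subjects:
    - kind: User
      name: alice
      apiGroup: rbac.authorization.k8s.io
    roleRef:
      kind: Role
      name: pod-reader
      apiGroup: rbac.authorization.k8s.io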

It's just what we do.

Edit: templates are the only thing that didn't get upstreamed, and that was because Helm was good enough at that point that we didn't need them in Kube. We continue to support templates, and they are exposed under the new service catalog work as a broker, so users can consume them without even knowing, which is what we're hoping to do for Helm as well. Everyone wins.

Edit2: deployment configs are another example of something that predates Kube deployments. The fundamental design choice is actually different (a DC can fail and inform you something is wrong; deployments just try forever). We continue to add capability to deployments to make them better than DCs, and then add the same improvements to DCs. If we picked a new name it would be DeploymentJob: it can run hooks, have custom logic, and fail. It's not upstream, but will be an extension API soon.
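
A sketch of what that looks like in practice: a hypothetical DC with a pre-deployment hook whose failure aborts the rollout (names, image, and command are made up):

    apiVersion: apps.openshift.io/v1   # plain "v1" on older OpenShift 3.x
    kind: DeploymentConfig
    metadata:
      name: myapp
    spec:
      replicas: 2
      selector:
        app: myapp
      strategy:
        type: Rolling
        rollingParams:
          pre:                         # hook runs before the rollout;
            failurePolicy: Abort       # a failure marks the deployment failed
            execNewPod:
              containerName: myapp
              command: ["./migrate-db.sh"]
      template:
        metadata:
          labels:
            app: myapp
        spec:
          containers:
          - name: myapp
            image: registry.example.com/myapp:latest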


This is easily confirmed. Just look at the companies upstreaming into k8s, and you'll see Redhat is dominating. They have people in almost all SIGs and are very active in the community. Thanks for all the contributions.


The extension API feature is cool; it will make the k8s ecosystem grow more rapidly.

BTW, will all the extra features in OpenShift be ported as extension APIs?


Kubernetes owes its success largely to Openshift and Redhat's efforts. Without Openshift, Kubernetes would just be an interesting POC. Google doesn't dogfood Kubernetes; Openshift has since the beginning, and it has contributed significantly to K8s as a result of actual production usage. Just take a look at the top contributors and you can see the kind of contributions the Redhat guys have made.

While I sort of agree with you on RHEL, I don't think this is the case with Openshift at all. I wouldn't hesitate to recommend it to people looking for a full solution.


> Kubernetes owes its success largely to Openshift and Redhat's efforts. Without Openshift, Kubernetes would just be an interesting POC.

How did you arrive at this conclusion? I'm curious to know more about Redhat's role in this.


Red Hat have done a lot of the productionising and packaging and got it running at a lot of companies.

I don't fully agree that Red Hat has more ownership of the success of Kubernetes, though. They may have been necessary, but by no means sufficient. The aura of Google has probably had far more importance in the momentum to date.


CNCF owns the Kubernetes project. Google provides a lot of resources, and so does Redhat. Both are heavily involved in k8s, and both contributed significantly to its success.


From my involvement w/ Kubernetes over the last few years, I've always considered k8s co-led by Google and Redhat. It's not hard to find this out for yourself; just take a look at the GitHub repos and the mailing lists, it's all open. K8s changed dramatically with Redhat's involvement.


It is important to note that OpenShift predated Kubernetes. RH reimplemented the underlying platform by adopting Kubernetes only a few years ago.


That is a fair point. I'm only talking about the "Kubernetes" implementation within OpenShift. It's modified from the upstream one and includes things that aren't part of upstream Kubernetes.


Routes were created about a year before ingress.

Part of what we do is give users a path of adoption. For instance, routes will continue to work forever. The openshift routers have almost every bell and whistle possible for routes - and we've recently added ingress support. We also took all of the security features of routes and applied them to ingress - for example, if two different namespaces ask for the same hostname with a route, the oldest one always gets served. We also adapted the security rbac around routes to ingress, so you can set a role that controls whether end users can use custom host names or custom certs. We also are in the process of adding all of our extended cert validation to the router for both routes and ingress, so if a user puts in a bad cert other users aren't impacted.
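
For example, a hypothetical Route claiming a hostname with edge TLS termination might look like this (hostname, names, and cert contents are made up):

    apiVersion: route.openshift.io/v1
    kind: Route
    metadata:
      name: storefront
      namespace: team-a
    spec:
      host: shop.example.com        # the oldest claim on a hostname wins
      to:
        kind: Service
        name: storefront
      tls:
        termination: edge
        # cert and key are carried in the Route itself, not in a shared Secret
        certificate: |
          -----BEGIN CERTIFICATE-----
          ...
          -----END CERTIFICATE-----
        key: |
          -----BEGIN RSA PRIVATE KEY-----
          ...
          -----END RSA PRIVATE KEY-----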

Basically, you can use vanilla Kubernetes if you want. But openshift is a sundae with sprinkles. Come for the ice cream, stay for the toppings?

Edit: also, every add-on in openshift is either something that will eventually be in Kube, or an extension to Kube. Red Hatters are the ones who added extensibility to Kube (with help from others): API extensions, CRDs, initializers, webhook extensions, and binary CLI plugins. We did that so that OpenShift can extend Kube to solve real user problems, and also so that everyone else in the ecosystem can do the same.
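
As a concrete sketch of that extensibility, a minimal CRD as it looked in the v1beta1 API of that era (the group and names are made up):

    apiVersion: apiextensions.k8s.io/v1beta1
    kind: CustomResourceDefinition
    metadata:
      name: backups.example.com     # must be <plural>.<group>
    spec:
      group: example.com
      version: v1
      scope: Namespaced
      names:
        plural: backups
        singular: backup
        kind: Backup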


My understanding is that OpenShift is more of a superset of k8s vs. a fork.

I don't think the situation is going to be as bad as you are implying.


[Full disclosure: I work for Red Hat as an OpenShift Consultant]

The truth is a bit more nuanced, as Kubernetes and OpenShift are actually made up of dozens of projects and integrations. Our company contributes to many upstream projects in support of our OpenShift Enterprise offering: Kubernetes, the Linux kernel, HAProxy, Jenkins, Hawkular, Heapster, Cassandra, Elasticsearch, FluentD, Kibana, JBoss, Tomcat, Apache, Ansible, Go, and probably many more. We do almost all of our work completely in the open (our docs, container images, templates, examples, blogs) via GitHub and Trello. In fact, you can run just about the same OpenShift (officially called OpenShift Container Platform (OCP or OSCP) or OpenShift Enterprise (OSE)) that we sell by using our upstream project for it, OpenShift Origin [0][1]. If that looks complicated, you can start with minishift [2][3], whose upstream Kubernetes counterpart is minikube [4].

In terms of superset vs. fork: it's not quite a superset, because almost everything we commit to OpenShift gets committed to Kubernetes, and/or vice versa. You can almost always say: if it works in OpenShift, it works in Kubernetes; if it works in Kubernetes, it works in OpenShift.

It's not really a fork either (as we often say, "Best idea wins!"), so our people (including management!) try to make sure we are adding value to Kubernetes so that our customers and the community can then extract that value. OpenShift/Kubernetes metrics is one area that affects me and my customers where we're following the community's lead and implementing new developments in OpenShift as Tech Previews when appropriate. Our code is not diverging from Kubernetes as much as you might think in supporting some of the "enterprise" features we've added.

So, I would say OpenShift is a distro of Kubernetes in the same way RHEL, SuSE, et al. are GNU/Linux distros. You might say Kubernetes provides the "kernel" for a modern data center (compute resource scheduling and management, internal/external data structures and interfaces to use such compute resources). OpenShift is intended to provide everything else you expect your data center to do for you and to support your application development in Java, .NET, Node.js, Ruby, Python, PHP, Perl, etc.: UI & CLI management interfaces, simplified build and deployment processes (S2I; sketched below the links), Jenkins integration, external logging integration, external monitoring integration, sample 12-factor applications, and so on. We partner with companies when they want help bringing on their storage systems, frameworks, databases, applications, etc., just like you'd expect when companies provide drivers for their databases, hardware, or storage systems for OS kernels.

[0] https://www.openshift.org/

[1] https://github.com/openshift/origin

[2] https://docs.openshift.org/latest/minishift/getting-started/...

[3] https://github.com/minishift/minishift

[4] https://kubernetes.io/docs/tasks/tools/install-minikube/
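
Since I mentioned S2I above, here's a minimal sketch of a BuildConfig (the repo, builder image, and names are made up; older 3.x releases used plain v1 as the apiVersion):

    apiVersion: build.openshift.io/v1
    kind: BuildConfig
    metadata:
      name: myapp
    spec:
      source:
        type: Git
        git:
          uri: https://github.com/example/myapp.git
      strategy:
        type: Source                 # S2I: build an image directly from source
        sourceStrategy:
          from:
            kind: ImageStreamTag
            name: nodejs:latest      # the S2I builder image
      output:
        to:
          kind: ImageStreamTag
          name: myapp:latest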


A fork can imply many things, but all supersets are forks. I realize that RedHat doesn't like calling OpenShift a fork, but it fundamentally is one. The only people who will maintain their additional functionality are RedHat themselves.

If you manage to avoid all of their added functionality, then you can avoid the vendor lock-in. But you can't avoid the delays in getting upstream features and bug fixes.

If Kubernetes were super stable and you weren't likely to want any of that stuff, I would agree it probably wouldn't be a big deal. But it's not: significant improvements are coming in every single release.

RedHat is managing a decent pace in keeping up with Kubernetes for now. But all it takes to upset that apple cart is for something to get added upstream in a way that breaks their additional functionality.

If RedHat doesn't have the clout to get their improvements in upstream, why should I presume they have the clout to ensure their additions won't be broken by other changes?


The thing is, some people want super stable. Improvements are great, but the ability to plan is also great. That breaking change that prevents Redhat from integrating Kubernetes into Openshift is just as likely to be a breaking change for Company X using Kubernetes.

That's why they pay Redhat: so they can plan.


You're sort of twisting GP's point by saying that because K8s is not super stable, Red Hat will provide stability via OpenShift. Yes, Red Hat will certainly provide a more stable version, but it's a double-edged sword: K8s might leave OpenShift in the dust with some incompatible change, and then a few years down the line you could end up with serious buyer's remorse when K8s is an order of magnitude better and OpenShift is left in some well-maintained purgatory.


I can assure you that openshift will always be Kube++. It's just a Kube distro. The fact that today you need to compile those extensions in is a detail that we and others spend most of our time addressing.

Odds are, most of the things you use in Kubernetes exist because someone working on OpenShift wrote them, tested them, performance tested them, and stress tested them in production.

When LTS OpenShift is a thing, there will still be an OpenShift trucking along right behind the latest Kube. We always try to strike the balance between being on the bleeding edge and making sure end user clusters continue to work. In fact, a lot of the bugs in patch releases are found by the teams working on openshift and opened upstream right away. But an OpenShift user never sees that, because we only ship once it's stable.


You've got a mighty big crystal ball there then.

Sarcasm aside: a major part of the allure of Kubernetes is that it's not a single-vendor project. It's unlikely to die if something happens to RedHat; say someone like Oracle comes along and buys you guys. That's not the case with OpenShift.

Maybe you're right that things will continue as is and OpenShift will always be better and that RedHat will always maintain it.

But it's not a risk that I think is worth taking.


Yup: when I talk about stability, I'm not talking about stability of function, which I presume is what people buying OpenShift want. I'm talking about stability as in few changes. Anyone doing things in the container space shouldn't expect a lack of changes, even if they're using OpenShift.


You could equally end up a few years down the line with Kubernetes being an order of magnitude better, but also having required two orders of magnitude more work (over those years) porting your company's infrastructure to it.

Or a few years down the line the zeitgeist has moved on to Locutus, and no one but Red Hat is driving anymore.


I don't understand why people are so fork-a-phobic, and anti-patching these days. Distributions (like Red Hat and SUSE, but also Debian, Ubuntu, Fedora, openSUSE, etc) have been doing this for decades. Forking a project is something that is unique to the free software community, and we're doing ourselves a disservice by not taking advantage of this freedom. Forking a fast-paced project like k8s is fairly ambitious, as you've said, but that doesn't make it a bad idea from the outset.

> If RedHat doesn't have the clout to get their improvements in upstream, why should I presume they have the clout to ensure their additions won't be broken by other changes?

That's not how free software development or maintenance works. Believe it or not, the engineers at Red Hat (or SUSE, Canonical, etc.) are actually pretty clever. An upstream not accepting a change can be for any number of reasons unrelated to the technical aspects of the patch itself. It could be a conflict with their roadmap or scope, it could break something else they're working on that is of higher priority, it could require more discussion on whether the use-cases can be solved by existing features, it could require further research into whether the proposed feature is the best way of solving the problem, etc. I've seen all of those reasons (and more) for some of my changes not being merged upstream (and I also maintain some upstream projects, so I've used those reasons before too). Not to mention that usually "no" in an upstream review means "not yet, I'm still thinking about it".

If an upstream rejects a patch, but a customer needs the patch in order to be able to effectively use the project, then Red Hat (or SUSE, Canonical, etc.) are entirely within their rights to add that patch to the packages they ship. And that's the correct thing to do. Upstreams generally are not good at release engineering, so in order to ship hotfixes a distribution would have to patch the project anyway. What makes a feature patch any different? Not to mention that Red Hat (or SUSE, Canonical, etc.) also provides documentation on how to migrate to the upstream feature (if the upstream feature ends up being different).

Kernel development has worked this way for more than 25 years, with distributions carrying patches that eventually get pushed upstream asynchronously (usually with some improvements through discussions that make them more generic for all kernel users). While stable kernels have made the massive patchsets much less of a burden to maintain, this model still is in practice today.

[I work for SUSE.]


I'm against forks like OpenShift because, as an upstream maintainer on a major open source project, I saw distribution patches cause us nothing but headaches. Their well-meaning patches almost always caused problems. Users routinely ended up at our doorstep with the issues they caused. We then either told the user to go bug their distribution, spent time digging into the issue, or got lucky and the distribution maintainers actually paid attention to our lists. The latter was actually pretty rare.

You presume I don't have experience with open source projects and don't understand why things might not be accepted, which isn't really true. I used "clout" as shorthand for doing the work to actually get changes into upstream. I'll admit I probably chose a poor word there.

I really can't fathom why Routes weren't just adopted into Kubernetes directly instead of Ingress being written from scratch. I don't know all the details, but when I went digging, what I found was RedHat folks explaining what they did and Google engineers writing Ingress.

Please also understand, I totally agree that RedHat and other packagers are fully within their rights to patch things. I think they really shouldn't. They cause at least as many problems as they solve in my experience. But it's also my right to say that I don't want to use their patched stuff.

I don't see the situation with the Linux kernel as a success story. I see it as a failure. It's downright impossible to tell someone whether something is going to work with their kernel, because the version numbers are utterly meaningless: the distributions patch all sorts of things in and out. I have been running Linus' mainline kernels for the last several years and I've been broken exactly once, and even then only very slightly.

I tend to think that if distributions avoided patching unless absolutely necessary and worked with upstream to get things included first we'd all be a lot better off. Those reasons why the patches weren't accepted quicker would get dealt with before things were in the hands of users.

But frankly, the distributions' incentive is to create value for themselves, not to help the project along. Helping the project is utterly secondary to any value creation they are doing for themselves. In fact, you give an excellent example: you say that upstreams are terrible at release engineering. If that's the problem with staying pure upstream, it's really beyond me why distros don't work with upstreams to improve their release processes rather than just applying patches to a distribution.

That's not to say that distros don't create any value for the overall community. It's just my opinion that they don't create as much value as I think they should. These companies are taking in massive amounts of money off the open source projects. Sure, in some cases they have maintainers/contributors to upstream projects on their staff. But those are usually the cases where what I'm talking about isn't what is happening.

Now, all of that sounds like I don't think distros should ever patch, which is probably an exaggeration of my position. I think there are times when it's needed: security fixes, unresponsive upstreams, etc.

But adding completely distinct functionality? I don't want to touch that with a 10-foot pole.

Edit: forgot to address your point about migration to the upstream feature. I flat out asked RedHat how they planned to get people to migrate to Ingress. Their answer was that they didn't have a plan.


Hrm, we've had a plan for a while, so whoever told you that may have been misinformed (sorry about that, not everyone always catches up).

https://github.com/openshift/origin/blob/master/pkg/cmd/infr...

That's in 3.6, and other improvements will continue to be added. The one downside is that you have to grant the router proxies access to secrets, which means that if someone compromises your edge ingress controller, they can root your cluster unless you are very careful about giving the routers access to exactly the secrets they need and nothing more. That's partially why Routes contain their own secrets: so that you can't accidentally expose yourself to a cluster root.
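
For contrast, a hypothetical Ingress does TLS by pointing at a Secret, which is exactly why the ingress controller needs read access to secrets (names and hostname are made up):

    apiVersion: extensions/v1beta1
    kind: Ingress
    metadata:
      name: storefront
    spec:
      tls:
      - hosts:
        - shop.example.com
        secretName: storefront-tls   # the controller must be able to read this Secret
      rules:
      - host: shop.example.com
        http:
          paths:
          - backend:
              serviceName: storefront
              servicePort: 8080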

This sort of detail is what the OpenShift team spends most of its time on. Kube will eventually get most of this, but most people are running single-tenant Kube clusters, so in Kube we spend more time focusing on making that work just right. It's pretty difficult to build a fully multitenant Kube setup without making choices that we're just not ready to make in Kube yet.


They do have the clout to get their improvements upstream:

http://stackalytics.com/?project_type=kubernetes-group&metri...


You might be interested in CNCF's Kubernetes Software Conformance Working Group, which has been working closely with the Kubernetes architecture and testing SIGs and the providers of most Kubernetes distributions (including Red Hat) to ensure interoperability.

https://www.cncf.io/certification/software-conformance/



