This is awful. It's proof of the downsides to the IBM acquisition, which I think we all knew was coming.
Imagine if you were running a business, and deployed CentOS 8 based on the 10 year lifespan promise. You're totally screwed now, and Red Hat knows it. Why on earth didn't they make this switch starting with CentOS 9???? Let's not sugar coat this. They've betrayed their users.
I personally run a CentOS 7 server (as do members of my family), and was planning on upgrading them all to 8. Luckily, I didn't get round to it yet. I guess I'll have to consider an alternative. For my server I want a boring, stable OS, so I'm definitely not using Streams. This is going to ripple throughout the whole industry, as CentOS is used all over the place, from regular home users to businesses (and things like CloudLinux).
It's very disappointing that Red Hat can't see the damage they'll do not only to the community, but to themselves too. Someone will come along and take the CentOS user base, and it won't be Red Hat :(.
>> Imagine if you were running a business, and deployed CentOS 8 based on the 10 year lifespan promise. You're totally screwed now, and Red Hat knows it.
The hypothetical you posed is, I'm now learning, the actual situation I have apparently forced on my team. We've ramped up labor to 3x revenue preparing for a product launch in 90-180 days. A year ago we created an image containing CentOS 8, Java, Postgres, and Tomcat; that's what is deployed to beta clients and what we've been testing.
What's ironic is that I sort of went out on a limb with my team by forcing us to go with Linux over Windows and the way I allayed concerns was to ask them to just "wait and see" in hopes that the performance differential would make it a moot point.
edit: after a little thought it seems that moving to RHEL might cost us the least amount of money and downtime. Still sucks and not what we need to be working on right now.
You build software, you ship it on an OS so it works, cool, this makes sense (I assume you need hardware and VM support, not just VM).
Why would you accept additional risk on the OS if you can easily reduce the risk, and ultimate cost, by going with an OS that has vendor support written into the actual contract? RHEL is 11-13 years total, Windows is... I'm guessing my grandkids will be using some form of Windows 10. CentOS is, and always was, a community "best effort", with some serious delays occasionally (not often, but it happened).
A RHEL server license starts at $349, I have to assume that's at least an order of magnitude (or two or three) less than the cost of your software based on the technologies involved (sounds enterprise-solutiony). In other words a rounding error overall.
As a sysadmin in academia, this is not so straightforward. Since the number of servers/VMs is in the ballpark of over one hundred, Red Hat license costs would be over $30,000/year. That is not an insignificant amount of money, and not easy to come up with on short notice.
$30k is a significant amount, especially in an academic environment, I don't deny it. But it's not much for > 100 servers. In my previous job, I had licenses which were about €15k for one server (thanks Oracle...) and the Windows Server licenses were also pretty expensive.
$300-500 per server/year is not much more than a Photoshop license for one designer. And if we talk big business (1000's of servers and even more), you will get very special pricing anyway.
Not just price, but the ability to function without constant access to the Satellite portal. It's darn near impossible to upgrade a RHEL box from a work-from-home VPN environment.
> edit: after a little thought it seems that moving to RHEL might cost us the least amount of money and downtime. Still sucks and not what we need to be working on right now.
And that is exactly what IBM is counting on. Vendor lock in.
We are switching to Debian-based distros because, frankly, we don't trust IBM not to knife us in the back even on RHEL for more money. Of course we have the advantage of being able to take the time to convert.
This is not about IBM. It may not be a pleasant change for everyone, but it's a change that has strictly technical motives.
CentOS was acqui-hired because Red Hat's upstream for layered products (at the time mostly RDO/OpenStack and oVirt/RHEV) could not use Fedora because it was too far from RHEL a year or two after RHEL was released, could not use RHEL because upstream contributors would have to pay, and could not use CentOS because its releases were delayed too long. The solution was to make CentOS releases happen timely by paying people to make them.
These days a RHEL downstream is not enough for the layered products. Some of them require the kind of bleeding-edge feature that is backported every six months to the RHEL kernel, and corresponding userspace changes (BPF, virtualization, etc.), and they cannot afford to wait for the CentOS release because development must be done in parallel with RHEL. So the solution was to move CentOS from happening after RHEL to before RHEL, which is what CentOS Stream is.
I can confidently say that the reasons are technical because other CentOS downstreams have the same needs (e.g. Facebook's), and they also want to send patches to CentOS for bug fixes or features themselves, instead of waiting for Red Hat to find out about a bug or decide they need the same feature. Plus, there's no reason for rebuilds to disappear: the SRPMs will still be released by Red Hat.
That in no way explains why they don't continue to have both. They indeed will have both for another year. There was no requirement the Stream product even use the CentOS name.
CentOS was a community project whose leadership and control was taken over (acqui-hired, as you say) by Red Hat, and then its core use case for the majority of people actually using it was discontinued. That is a statement of the facts as I understand them, not some spin on my part.
If Red Hat had not stepped in, perhaps some of CentOS's problems (trouble getting releases out on time) would have been worse, or perhaps some other company would have stepped in. We don't know, but we do know that CentOS has now been changed into something different than it was before. It used to be a free re-spin of RHEL. Going forward it's something entirely different.
Red Hat always had the option to stop funding/providing resources to CentOS and name their new thing something else, but they didn't, and now they've effectively co-opted CentOS to be something different than it was originally intended to be.
> That in no way explains why they don't continue to have both
Because they don't need it anymore. CentOS Linux or other rebuilds can still exist (just not using the name; I disagree with that but I can understand Red Hat doesn't want its name attached to something that might have large delays in security fixes in the future) if somebody funds it or volunteers to do it, just like CentOS still supports Xen but RHEL does not.
Also, for what it's worth, there have been lots of engineering changes to RHEL in the past couple of years that make nightlies (and CentOS Stream) much more stable than they used to be, especially with respect to regressions. Running CentOS Stream is not going to be like Fedora Rawhide or Debian sid.
>>> it's a change that has strictly technical motives.
I understand the business reasons for doing so. I don't agree with anyone branding this as done for purely technical reasons. Having CentOS Stream may be needed for technical reasons. Stopping CentOS 8 is in no way a technical decision. They are unrelated in any technical sense.
If Red Hat just doesn't want to put resources towards CentOS as it traditionally existed anymore, that's their option, but they deserve any flak they get for taking over an open source project just to extinguish it, since CentOS in no way needs to be linked to their Stream product. They could just as easily have called it RHEL Stream and said it's free, and it would be a less confusing and more direct funnel of people who want RHEL stability into RHEL subscriptions. Using the CentOS name is just a mind-share grab that screws over an open source community. They control it, so they can do it, but that doesn't mean I'm not going to call them out for doing so.
The thing is, Red Hat never considered the distro more than a side effect of providing a base for developing "things" that will run on RHEL. It's even written on the centos.org home page, the distro is not why CentOS existed in 2020. So the fact that users (including myself!!) enjoyed a free distro as a result was not a part of Red Hat's RHEL strategy in any way.
That only makes sense if Red Hat had started CentOS. They didn't. The fact that they took control of it, then changed it (even the web page), and then effectively killed the reason people were using it, is the thing I'm upset about.
If I effectively took control of the EFF and then a couple years later changed the website copy to say that the EFF is a vehicle for litigating cases that kbenson thinks are important, and then actually changed its actions to do so, would you argue the same points? How is this any different? Something that was a net good for many people has been taken over and eventually killed. I think we're all worse off for that.
IBM is all about expensive support contracts. We cannot afford RHEL. So we went with CentOS 7 and recently CentOS 8. I migrated some machines from 7 to 8. Turns out I did that for one year. My boss wasted precious money there, and it isn't getting us closer to RHEL. My take is it's getting us closer to Debian Stable instead. Which is Red Hat's loss, because a migration to RHEL is then more out of the picture. They know this. They thought of the above. And they are fine with it. That is not a technical decision, as you said. It is a business decision.
Fire up the wayback machine and find when the centos.org home page changed the mission statement. At this point it started to be about a different "us" than you think, an "us" that doesn't include you and me and presumably doesn't mind CentOS Stream at all.
In particular I started doing that and I got to the Sep 2019-Dec 2019 range, around the time CentOS Stream was launched. At that time this:
> For users, we offer a consistent manageable platform that suits a wide variety of deployments. For open source communities, we offer a solid, predictable base to build upon
was changed to this:
> CentOS Linux is a consistent, manageable platform that suits a wide variety of deployments. For some open source communities, it is a solid, predictable base to build upon.
Oh come now, this is Red Hat, a company based on Open Source software. We're not wrong for wanting to hold it to a higher standard. They have benefited from the OSS community just as they have contributed to it.
Red Hat didn't make CentOS they acquired it. This is very similar to a large corporation acquiring a smaller competitor, promising to continue supporting it, creating a new, different product using the brand, then dropping the original product.
CentOS was always a free alternative to Red Hat. I and many other people used it because it was a way to get the benefits of Red Hat without paying for it.
This is a really smart move on their part to get people to stop doing precisely this, and to get them to pay money for RH. People who are not willing to pay (me included) are now rightly annoyed, but we were never their customers in the first place, so we don't really matter.
I’m in the same situation, and I recently had started to plan on upgrading a small HPC cluster from CentOS 7. Before today, I had planned on CentOS 8.
Now? “¯\_(ツ)_/¯“ Probably Debian.
I’ve always used CentOS for clusters, but part of the reason for that is that there are some research packages that support RPM installation, but not deb. At least this has historically been the case.
If a large number (maybe even a majority) of users have to switch away from CentOS and RPM packaging, I think we’ll see an acceleration away from RPM as a default option.
So, in that way, I think we do matter, but just not on the balance sheet.
I disagree. Having a pool of sysadmins / developers who know how to manage CentOS makes RHEL a more appealing alternative than Debian for many companies. The three companies I've worked at have primarily used CentOS for development boxes. Shifting to an alternative will change who is familiar with CentOS / RHEL, significantly for the worse.
I'm not confused at all about that. I was responding to the point that said "it's a change that has strictly technical motives." It's a business decision that's based on profit and positioning and name mind-share, so let's not hide that.
Software doesn’t run on its own. It relies on a bunch of other software that is typically provided by distributions. So unless there’s some magic piece of technology that can take CentOS RPMs and make them work flawlessly on any Linux distro, the entire software industry is suddenly going to have to spend significant amounts of time on repackaging work.
There is alien[1], but I'm not sure if it is flawless, because of slightly different ways of doing things across distros like /etc/default vs /etc/sysconfig for example.
Depending on how good the source is, its usefulness, and its dependencies, packaging it for Debian is pretty straightforward. The dh_* helpers[2] do the job automatically most of the time. There are also tools for helping with specific languages, like dh-make-golang, dh-python, etc...
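To illustrate how little is needed in the common case, here's the canonical minimal `debian/rules` file that drives those dh_* helpers (the package still needs `debian/control`, `debian/changelog`, and friends alongside it):

```make
#!/usr/bin/make -f
# dh sequences all the standard debhelper build/install/package steps;
# override_dh_* targets exist as escape hatches for special cases.
%:
	dh $@
```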
From the looks of it, alien is just a tool to convert between different package formats and is not a replacement for proper packaging work. Very much like how you can’t take a deb package from different releases of the same distro and expect things to work properly, you can’t just take RPMs from CentOS to a whole different distro and expect things to work.
It’s not only the differences in the packaging format that you have to take care of. There’s also version differences, path differences, dependency handling, and many other things to take care of. These are the kinds of tasks which can’t be automated away and require a non-trivial amount of work.
For organizations that maintain tens, hundreds, or thousands of CentOS packages spanning multiple teams, moving to other distros would be time-consuming and costly. It would certainly pay off if it were driven by technical reasons, but for organizations that are forced to switch by this announcement, this is just pure overhead.
That says a lot about your packaging hygiene. With alien, you're essentially dumping the contents of an RPM on hosts it wasn't meant for. It doesn't matter that you converted it to deb format along the way; the result is the same as installing the RPM directly.
Did you ever read a single line of alien source code? It extracts the metadata and files from a package, and re-archives it. That’s about it. It doesn’t handle package version differences, path differences, distro-specific rules, nothing. If your package has dependencies, forget about it, because alien won’t retain dependency information at all. In other words, it only “works” for the most simple of cases.
I suggest you read the source code too, because otherwise you can cause real damage if you use it without understanding how it works.
But surely if you depend on those packages, you can maybe track (or just collect) their versions, then get the same versions in another package manager (possibly via pinning).
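That collection step is easy to script. A hypothetical sketch (the rpm query format is real, but the package names and versions in the sample are made up for illustration):

```python
# Parse captured output of `rpm -qa --qf '%{NAME} %{VERSION}-%{RELEASE}\n'`
# into a name -> version map that can be diffed against another distro's
# package set before pinning equivalents there.
def parse_package_list(text):
    versions = {}
    for line in text.splitlines():
        if not line.strip():
            continue  # skip blank lines in the capture
        name, _, version = line.partition(" ")
        versions[name] = version
    return versions

# Made-up sample capture for illustration
captured = "postgresql 10.15-1.el8\ntomcat 9.0.30-1.el8\n"
print(parse_package_list(captured))
# → {'postgresql': '10.15-1.el8', 'tomcat': '9.0.30-1.el8'}
```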
Though my experience with package manager is that dependency management is hell (rife with potential conflicts), so I do see the problem a bit better.
Still, it shouldn't take that long to fix? Like a few days of sprint to set up Nix or something like that?
I think it depends on the number of packages that you maintain. If it’s only a handful of packages, the entire process of repackaging, testing, and deployment might be doable in a matter of days. If it’s tens or hundreds of packages that depend on each other, I think that’s a whole different story. If those packages are maintained by different teams, it could take more than a year to complete the transition.
As for pinning packages, that’s only practical if you’re using Nix. As much as I prefer Nix over RPMs, not all of us have the pleasure of using Nix at work. It’s kind of a bummer because Nix packages are so much easier to work with and maintain compared to the competition.
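For completeness, a minimal sketch of what that pinning looks like in Nix (the revision and hash are placeholders, and `postgresql` is just an example attribute):

```nix
# default.nix — every machine importing this resolves the exact same
# package set, because nixpkgs itself is pinned to one revision.
let
  pkgs = import (builtins.fetchTarball {
    url = "https://github.com/NixOS/nixpkgs/archive/<rev>.tar.gz";
    sha256 = "<sha256>";  # placeholder; nix prints the real hash on first use
  }) { };
in
  pkgs.postgresql
```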
If you run a for-profit operation, and downtime is costly, you (or your VP of eng) want a way to pay for immediate assistance from the OS's maker / distributor, when (not if) things go wrong.
> If you run a for-profit operation, and downtime is costly, you (or your VP of eng) want a way to pay for immediate assistance from the OS's maker / distributor, when (not if) things go wrong.
There is a difference between asking "how do I ensure there is one throat to choke when things go wrong?", and "how do I minimize the potential for things to go wrong?"
RHEL is a decent solution if you are trying to answer the former question (or both), but many budget conscious orgs focused on the latter and chose CentOS. Red Hat has now pulled the rug out from under them by trimming eight years off the EOL previously committed to with little notice.
EOL in this case means "no security updates" so even if your org was prepared, technically, to deal with a zero-day for example by rolling out an update in a timely manner without relying on paying a vendor for hand-holding, that option has now been eliminated.
Essentially, you now only get the stability and problem-minimization if you also pay the vendor for support. Otherwise you're stuck with a (relatively) unstable rolling release that will keep your internal teams very busy with a constant stream of minor issues, or potentially trying to roll-your-own updates or backports after the EOL for anything serious.
It's difficult to see this as anything other than a naked, money-grabbing betrayal of users.
Not sure why I got downvoted. I’m trying to offer the poster a viable alternative to CentOS 8 with a minimum of changes needed. I get that a lot of people don’t like Oracle (and for good reasons) but really they are the best alternative (and free) EL8 distro now.
The dates on that page were always dependent on Red Hat. In retrospect, obviously we should have made that clearer, and we will endeavor to do so in the future.
Out of curiosity why did you decide to deploy your application as a VM image? Is there a reason you didn't go with a Helm chart or other container native deployment?
I was curious what drove the decision to deploy that way. I was under the impression most new applications being developed today would choose a more modern deployment method. There’s a lot to maintain in a VM image like that; containers just seem easier to me, Helm chart or otherwise.
The amount of maintenance for virtual machine images and container images is mostly the same nowadays.
I would argue that there is less maintenance when handling virtual machine images, because they use less bandwidth and need less tooling around them compared with container-image-based infrastructure.
But in general both are nothing more than the golden image concept.
It's a bundle of Go-templated YAML files defining Kubernetes API resources. There's an accompanying application, Helm, which applies values to the templates and handles the interaction with the Kubernetes API.
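For instance, a minimal (invented) template from a chart's `templates/` directory, which Helm renders by substituting values before sending the result to the Kubernetes API:

```yaml
# templates/configmap.yaml — the {{ ... }} spans are filled in at install
# time from values.yaml or --set flags; names here are made up.
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-app-config
data:
  logLevel: {{ .Values.logLevel | quote }}
```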
Wonder if the web hosting industry will rebel and build another RHEL clone project that just gets the 10-year supported patches. Red Hat still has to release the patches, right?
A really big chunk of the world's traditionally shared hosted websites run on CentOS, because most commercial control panel packages and hosting automation systems are built for that. A rebadged CentOS is also AWS's default distro.
Wonder if the hosting industry, AWS included, will build a new stable clone of RHEL 8's upstream security patches. There are some big companies in there, like GoDaddy, whose business models are unlikely to accommodate RHEL support subscriptions.
This is truly a bummer, and if someone doesn't pick up the pieces and continue to offer RHEL rebranded, there's no(?) open source operating system with a decade-long support lifecycle. I wonder if this might cause an increase in unpatched servers and appliances when the alternatives offer five years at best.
There's a silent but relatively big user base of CentOS in HPC and scientific computing.
Scientific Linux and CentOS rule all HPC clusters. Clusters are like enterprise servers: big, monolithic, rarely upgraded. They're upgraded in one fell swoop and left to run.
There'll be another clone of RHEL since HPC can't accept CentOS Stream as the alternative. The whole infra is too big to move to Debian too.
So with today's announcement, a new distro is born. Also, Greg's (the CentOS founder's) domain (HPCng) is very telling...
We'll see. We're in for a hell of a ride. If you'll excuse me, I need to dust off my XCAT servers...
Okay, this is a serious question. For me, not an official RH position. In my time in HPC, nodes were baked with a specific image and then that basically never ever got updates. As I came to that as a sysadmin from other areas, I found that somewhat horrifying, but it seemed pretty universal. Have things changed such that applying patches regularly (like, more often than once a month or so except in emergencies) is a thing?
Not much, but in our setup the image is not something which can evolve or change over time. This practice has some very practical reasons though.
Scientific applications can be very picky about the libraries they use or need, down to the minor version, since the results they produce are very, very precise. Even if they're not very accurate, you need to know the inaccuracy. An optimization in a math library can change this, and it's not something we want. Also, program verification and certification generally include the versions of the libraries used.
Piecewise upgrades are a no-go too. Your cluster generally can't work well in heterogeneous configurations (due to library mismatches), and draining a node is not a straightforward task (due to the length of the jobs). If your cluster has a steady stream of incoming jobs, reducing resources also means queue bloat, and recovering from it is not easy sometimes. If you want to drain the whole cluster, it takes almost 2-3 weeks, so you lose ~1 month of productivity. When you start an empty cluster to churn through its queues, saturation takes time, so it doesn't go to 11 directly.
Also, worker nodes are highly isolated from the user's point of view. No users can log in, only known people submit jobs, etc. Unless there's a rogue academic trying to do nefarious things, the place is pretty safe and worry-free. In the past 15 years, we got two rootkit infections, due to a server which is world-accessible by design. Other than that, nothing ever got infected.
At the end of the day, this approach has some valid reasons to be alive. It's not that we're a bunch of lazy academics who refrain from applying good system administration practices. :D
Addendum: The images generally get updated when new hardware is added, since new processors tend to work better with newer kernels. Also sometimes we bit the bullet and update all the cluster at once. XCAT helps a lot in this space. If your image is sane, you can install batches of 150+ servers in 15 minutes while sipping your coffee.
We will certainly try. Need to mirror a repo, freeze it and update our installation infra so it looks to the local repo rather than the national mirror.
All repo settings will point to the local repo, so we'd have no dependency problems or version creep if we need to install an additional package.
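Concretely (paths, hostnames, and repo ids below are assumptions for illustration, not our real setup), that usually means a reposync + createrepo_c mirror plus a repo file like this on every node:

```ini
# /etc/yum.repos.d/local-frozen.repo — all hosts resolve packages only
# from the frozen local mirror, so versions never creep.
[local-baseos]
name=Frozen local BaseOS mirror
baseurl=http://mirror.example.internal/centos8/BaseOS/os/
enabled=1
gpgcheck=1

[local-appstream]
name=Frozen local AppStream mirror
baseurl=http://mirror.example.internal/centos8/AppStream/os/
enabled=1
gpgcheck=1
```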
I haven't completely thought through how to handle the occasional emergency update, though.
Also, we need to compile some packages ourselves. Hope they won't break. High-performance stuff needs optimized/customized builds.
I just want to add: I hope the packages in CentOS Stream won't end up too cutting edge for the scientific software community. These communities move slowly due to stability requirements. We'll certainly see, but it might be another potential problem.
I can totally reassure you on your last concern: everything that goes into Stream is approved for a minor release in RHEL. That's not changing at all. Cutting edge is still Fedora's turf. :)
To be clear, I'm RHEL and CentOS _adjacent_, rather than actively _in_ them. But I think that, rough launch and more than a few communication issues aside, this is generally gonna be positive.
I think that's because HPC users are largely non-technical developers. We changed a DHCP schema at one point and had a bunch of angry academics in the IT office because their Matlab scripts were broken. Many of them had been hard coding IP addresses into the code itself.
When your company software stack is turned inside-out by Oracle reps to look for unlicensed JVM's on penalty of really big fines (sorry, opportunities to buy more Oracle software) all those nice-to-have features don't seem to matter that much.
> I hope they squash Android Java, Google had the opportunity to buy Sun after screwing them up.
Do you really think that would be a reasonable thing to happen, and good for technology and the world in general? It seems disproportionately punitive, and the "right" thing to happen only if all you care about is watching things burn.
And you haven't addressed the precedent that's been set that APIs are now copyrightable. Do you like that precedent? Do you ever use anyone else's APIs in your daily development, and do you like how that now opens you to huge potential liability? Is all of this worth it just because Google didn't acquire Sun??
They've done all of them because they think it'll allow them to earn more money with less effort in the long run, not out of sheer love of computer science and research.
Oracle creates cool tech in the legacy of Sun because it impresses the right people who can influence the decision makers.
To recap:
"Hey, these new Oracle toys are capable and fun to use. We can do much more with them. Can you buy these for us engineers, so we can be happy like children again?"
I don't think that's really true. My understanding from talking to people there is that Java funding was increased for so many years despite losing money because Larry Ellison just thought it was cool tech and they use it a lot. Likewise, GraalVM is so well funded largely because it's cool and Oracle doesn't have many cool R&D projects. It's not clear it's all that commercially driven when you observe that so much of it is open source.
That said, their supported versions of Java and Graal are expensive. Some things never change.
Do we need to have free beer JIT, AOT and GC implementations for every language?
If I understood it correctly, a programming language has some foundational design decisions (including its memory and execution model) to attack a particular set of problems?
> What we need are top level JIT, AOT and GC implementations, anything else is just going backwards.
Not always. As I mentioned in another thread, we also need C/C++, Python, Perl, etc. as they are, since they fill different roles and attack different problems.
I've written Java, C, C++, Python, Perl, PHP. I've had to abuse some of them to fit roles they weren't designed for. At the end of the day, these languages satisfy different needs and solve different problems in different scenarios. Java wouldn't be able to do all of them. Neither would C++, nor Python.
As I said, you may like Java, but it's not the king of every programming language. No programming language is the king of everything, BTW.
C and C++ development is sponsored by the corporations of Apple, Microsoft, IBM, Oracle, Google....
PHP was mostly driven by Facebook needs.
None of them is any different from Oracle.
And apparently you fail to understand who has contributed state-of-the-art implementations of AOT compilation to toolchains like LLVM. Hint: the companies that HN loves to hate. It wasn't weekend and late-night coders.
> And apparently you fail to understand who has contributed state-of-the-art implementations of AOT compilation to toolchains like LLVM. Hint: the companies that HN loves to hate. It wasn't weekend and late-night coders.
I'm pretty aware that nearly all clang/LLVM development is driven by Apple.
On the other hand, you apparently fail to understand my point of view about Oracle and the Java ecosystem. I'm neither against Oracle, nor Oracle's development of Java, nor Java's development mainly in the interest of Oracle.
I'm only against Oracle's moves toward making Java a walled garden, and its use of the language to extort license money from others.
On the other hand, I personally use the OpenJDK runtime countless times every day, knowingly or unknowingly. I've written Java in the past and have no reservations or bad things to say about it. Contrary to your view of other programming languages, I'm pretty neutral toward every other programming language.
> C and C++ development is sponsored by the corporations of Apple, Microsoft, IBM, Oracle, Google.... PHP was mostly driven by Facebook needs.
There is no news for me here either. Development of a programming language or any tool with input from its users is a non-issue. Also, every user has needs from the products they use, so they will provide feedback and communicate their needs.
The difference, which I want to highlight again and again, is that none of these corporations can use C++ or PHP or Python to extort license money from their customers. PHP is owned by Zend, so they may try. C++ is almost public domain now. LLVM is under the Apache license. Either way, I use GCC, which is GPL. Python is 20+ years old and is also almost public domain.
Contributing to a tool to get what you want is one thing; owning a tool and using it to extort licensing money is another.
Either way, as mentioned before, I have nothing against Java, contrary to your views against other programming languages.
> I don't think that's really true. My understanding from talking to people there is that Java funding was increased for so many years despite losing money because Larry Ellison just thought it was cool tech and they use it a lot. Likewise, GraalVM is so well funded largely because it's cool and Oracle doesn't have many cool R&D projects. It's not clear it's all that commercially driven when you observe that so much of it is open source.
Development of programming languages by corporations is not something I object to, but all of the stuff said about Oracle here is correct.
They're not a nice entity unless you pay money to them, and they're greedy. They always want more. Also, their hardware can fail in strange ways, and they'll shrug it off.
I've met some nice people who migrated from Sun, but they all say that the terms they work under are draconian.
I like Java too, but developing a nice language doesn't make Oracle good. Don't get distracted [0].
> I guess the "community" would love to keep using Java frozen in version 6.
Python doesn't stop. C++ doesn't stop. Even Brainfuck doesn't stop. It would have prevailed. OpenJDK is one fruit of the project. After removing the patent-encumbered image processing stuff, OpenJDK just took off. Yes, it's still part of Oracle in a sense, but OracleJDK is compiled from OpenJDK, not vice versa. Again, don't get distracted [0].
It is naïve to think that the employees of these companies aren't driving their employers' agendas, regardless of how "independent" those governing bodies are.
Of course, but there are sub-committees which melt all the agendas into a single pot and create solutions which make everyone happy. Also, some of these languages have or had BDFLs.
Oracle's governance is different from this. C++ is an ISO committee. Python has a lot of working groups, etc.
Java is much more centralized when you compare with others.
Nope, IBM, Azul, Amazon, Red Hat, Alibaba, Twitter, Microsoft also sit at the Java table.
Should I also start listing the dark sides of each company that seats at ISO C and ISO C++ table?
Python working groups also need money from those corporations, and Python is yet to provide the performance levels of Java, so much for free beer development.
> Nope, IBM, Azul, Amazon, Red-Hat, Alibaba, Twitter, Microsoft also seat at the Java table.
I know Java has stakeholders, but what I'm trying to say is that the table sits at Oracle's HQ, not somewhere else.
> Should I also start listing the dark sides of each company that seats at ISO C and ISO C++ table?
A primer would be nice, actually.
> Python working groups also need money from those corporations, and Python is yet to provide the performance levels of Java, so much for free beer development.
I never alleged that Python takes no money from corporations, and Python doesn't aim for Java's performance. Its bytecode doesn't even get optimized. Instead, Python prefers native libraries for performance: SciPy, NumPy, PyTorch and others get native performance on any system they run on, and that's enough for Python.
No need to move the goalposts and compare apples to oranges. Python was never meant to replace Java, and Java isn't meant to replace systems programming languages like C/C++. You may like Java, and it may pay your bills, but pushing other languages around just because they don't meet your needs, from your point of view, is not the correct stance.
Microsoft, the evil company around here that keeps being compared to Oracle: several C++20 features, like modules and coroutines, were driven by their VC++ implementations.
Apple, hated around here for bringing about the end of open platforms: without it, LLVM and Clang would never have existed.
Google, the spying company that forked Linux with Android: the second-largest Clang and LLVM contributor.
IBM and Red Hat, with their own Linux agenda, pushing stuff like systemd that's hated around here: major GCC contributors.
You are missing the whole point with Java. It isn't about Java; all mainstream languages, just like Java, only move forward with dirty money (from HN's point of view). But hey, it's cool to hate Oracle.
Hint: they were one of the first enterprise contributors to the Linux kernel, and have been ever since.
Do you also feel like removing Oracle contributions from the Linux kernel?
And none of the companies you're listing here have been at all litigious about those programming stack contributions like Oracle has been. You're disproving the point that you're trying to make here -- Oracle is uniquely bad about this.
All big companies have a number of dirty deeds in their history, that's right. But I'm not one to generalize that to the companies as a whole, including Oracle.
I personally don't use Microsoft OSes, though I have several licenses since my family uses them. I also keep a personal license (booted maybe once a year) for some odd application I might need if the stars align on Friday the 13th. On the other hand, I've always praised them for their ergonomics research and the resulting hardware, and for their choice to keep Kinect open back in the day. I won't ever trust them, but I'm not delusional.
I don't use Android devices or Chrome, only some Google services. Day by day I'm using those services less, and I'm contemplating switching to something like Proton. I also loathe them for making pseudo-open stuff and closing it later. On the other hand, they're pioneers of software-defined networking, owing to the sheer size of their networks.
I have Apple laptops and iPhones but, my main desktops/workstations are vanilla Debian boxes and always will be.
> You are missing the whole point with Java, it isn't about Java, rather all mainstream languages just like Java only move forward with dirty money (from HN point of view), but hey it is cool to hate Oracle.
No, I don't hate Oracle per se. I only hate their greed for money, especially via Java. I used their ZFS appliances after they acquired Sun, and they were nice up to a point. I applaud them for the enterprise ecosystem around OracleDB, and I like how they managed to fuse Sun's hardware with their software. But I don't like their greed. Maybe that greed is necessary from their point of view, but I don't like it.
Similarly, I'm not keen on nVidia strong-arming everyone and pushing people around, and I don't like their arrogance. Yes, CUDA is nice and it's the de facto standard for now, but that doesn't justify bullying everyone else.
Microsoft also contributes to the Linux kernel; I'm aware of who's doing what.
> Do you also feel like removing Oracle contributions from the Linux kernel?
No, but I feel like you may like replacing it with a Java re-implementation running on a bare-metal HotSpot VM.
Not liking a part of something doesn't mean the dislike has to spread to the whole thing. Do you send your car to the junkyard because you dislike the engine note at a particular RPM? Do you swap your PC for a similar model because its USB ports are a little slow? Same idea.
I don't think many people who want something stable like CentOS, but don't want to pay for a RHEL support contract, would want to pay Oracle for RHEL-but-with-Oracle-sprinkles-on-top.
Not to mention that Oracle is not known to leave money on the table: if they see they can start charging for Oracle Linux because there's no large, well-known free alternative, I wouldn't put it past them.
Put another way, if you jump ship from CentOS because IBM caused Red Hat to change it into a funnel to pay them money, if you landed on Oracle, you might be setting yourself up to do it all over again fairly soon.
> Not to mention Oracle is not known to leave money on the table
You're underselling it: Oracle grabs money with what I would describe as "ruthless aplomb". They've managed to fuck no fewer than 3 orgs I've worked for.
If they ask you for a license count or how many cores are in use, ignore them. Larry Ellison doesn't need another boat.
"Unlike many other commercial Linux distributions, Oracle Linux is easy to download and completely free to use, distribute, and update. Oracle Linux is available under the GNU General Public License (GPLv2). Support contracts are available from Oracle. "
Never, Ever, EVER trust Oracle. Especially with something as important as an open source product. Evidence: Oracle Senior VP Glueck's statement that "There is no math that can justify open source from a cost perspective." No chance you'll ever see me running OEL.
Been burned by them before. Not at liberty to give details, but the outcome is that I never choose Oracle for anything for the rest of my career. Even if it would save time and money.
The fact that you aren't comfortable discussing the details of how you were screwed by Oracle, even anonymously on the internet, is really all anyone needs to know about Oracle.
From their page: does this even read as professional? It sounds like something a startup wrote trying to make themselves look bigger than they are.
Community based sounds better to me.
> But if you're here, you're a CentOS user. Which means that you don't pay for a distribution at all, for at least some of your systems. So even if we made the best paid distribution in the world (and we think we do), we can't actually get it to you... or can we?
My thoughts exactly. All our workstations and small-ish clusters run CentOS (we don’t maintain ourselves the large clusters so these are not our problem). It’s going to be a huge pain.
> Red Hat still has to release the patches, right?
I think that's a gray area. RHEL, for example, has extended-support branches where they produce security updates for old minor releases: you can pay a lot of money and get RHEL 7.2 with security updates. They won't release the sources for those packages unless you ask for them (you as a paying client, not you as a nobody on the Internet). And if you ask for the sources and then keep publishing them on the Internet, so that another entity like CentOS could pick them up and build a "CentOS 7.2 LTS", they will terminate your contract.
So that's a weakness in the GPL. You won't break any law, but they'll simply terminate the contracts of those who publish the sources, so the sources are effectively unavailable to the wider public.
Currently they publish their mainstream-branch sources to the public, but they could stop doing that at any time and provide the sources only to clients, on request.
> They won't release sources for those packages unless you'll ask for those packages (you, as a paid client, not you as nobody in the Internet).
If the code in question is licensed under the GPL and Red Hat isn't the owner of the code, then I as a rando on the Internet can ask them for the source and if they don't provide it, the person who does own the code can sue them and revoke their license to distribute said code. And I'd say that the majority of code in RHEL is not owned by Red Hat.
That is not how the GPL works. The GPL only entitles you to the source code of software for which you have been provided binaries. If the software has not been provided to you in binary form, you have no claim to the source.
This is why the cloud providers can get away with custom in-house patches to the Linux kernel.
Yep, this is actually one of those things Stallman has been saying for decades and people like to ignore: the GPL doesn’t mean all code must be in the public domain, only that users of a given program should be able to modify it. There are a number of ways to allow for that while still keeping distribution restricted.
You can only ask them for the source if you already have the binaries and you've gotten them from Red Hat. If you got the binaries from someone else, you can ask that someone else.
Can you help me understand why GPL v2 3(b) doesn't obligate Red Hat to provide source code for the kernel, as an example, to anyone who asks?
>3. You may copy and distribute the Program (or a work based on it, under Section 2) in object code or executable form under the terms of Sections 1 and 2 above provided that you also do one of the following:
> b) Accompany it with a written offer, valid for at least three years, to give any third party, for a charge no more than your cost of physically performing source distribution, a complete machine-readable copy of the corresponding source code, to be distributed under the terms of Sections 1 and 2 above on a medium customarily used for software interchange; or,
Really? When I do a yum update on my RHEL systems to get the latest updates from RHN, they never download the source code. Now that I think about it, I don't think RH has ever sent me any kind of medium commonly used for software interchange.
GPLv2 was written in 1991. GPLv3 changes the wording in Section 3(a) to "durable physical medium", but at the same time it gives other possibility including "Convey the object code by offering access from a designated place (gratis or for a charge), and offer equivalent access to the Corresponding Source in the same way through the same place at no further charge".
Everyone was doing network distribution of GPL software long before GPLv3 came out, effectively treating a download as a medium customarily used for software interchange. Not a physical one, but GPLv2 does not say anything about that.
I think that access to the private repository is considered a distribution. You have access to the sources with `dnf download --source` or something like that. The fact that those sources originally are on the remote server probably is not significant in 2020.
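For what it's worth, the per-package source access described above looks roughly like this on a subscribed RHEL (or CentOS) box. A sketch, assuming the dnf-plugins-core package (which provides the `download` subcommand) and reachable source repositories; the package name is just an example:

```shell
# Install the plugin that provides 'dnf download' (assumption: not already present).
sudo dnf install -y dnf-plugins-core

# Fetch the source RPM for a package you already have in binary form,
# e.g. bash, into the current directory:
dnf download --source bash

# Unpack the SRPM to inspect the upstream tarball and the distro's patches:
rpm2cpio bash-*.src.rpm | cpio -idmv
```

This only works against repositories your system is entitled to, which is exactly the point being made: access to the sources follows access to the binaries.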
Amazon Linux 2 could be a viable alternative. In addition to being available to run on AWS, it's also available as a container image and in various machine image formats:
AL has had enough work put into it over the years that, while it may have been inspired by CentOS/RHEL originally, calling it a rebadged CentOS is not accurate these days. A full, competent team maintains it, and while it has clearly made similar architectural choices, those are largely for compatibility reasons.
However, I doubt that support is available for anyone not running it on AWS, at least not from AWS. But then again, folks running CentOS weren't paying Red Hat for support either.
I also wonder if the announcement is as bad as people make it sound. I'm not an expert in Linux distros, but my understanding is that AL2 also uses a Streams-like model, in that it provides long-term support (patches for existing software) while also making new software available. While it is inevitably versioned, since artifacts like VMs and containers are published over CDNs ( https://cdn.amazonlinux.com/os-images/2.0.20201111.0/ ), my understanding is that most users are expected to always launch the latest version, relying on its backward compatibility. Perhaps someone who knows the specifics of its release model could comment.
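For anyone who just wants to poke at AL2 before committing, the container image mentioned above makes that cheap. A sketch, assuming Docker is available locally:

```shell
# Pull the official Amazon Linux 2 image and see what it reports itself as:
docker pull amazonlinux:2
docker run --rm amazonlinux:2 cat /etc/system-release

# It ships yum with Amazon's own repositories, so you can inspect
# what package sources and versions it actually provides:
docker run --rm amazonlinux:2 yum repolist
```

This is an easy way to check glibc and package versions against your own workload before deciding whether it's a viable CentOS replacement.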
We had some folks try installing Amazon Linux images on our network. The images spammed the network looking for a nonexistent link-local metadata service, which is how we found out about them.
I tried to find official information about Amazon Linux, but couldn't. Which distro is it based on? (Maybe RHEL 7, but it isn't stated.) And how is it compatible with EPEL and other software built for RHEL when it differs from RHEL? (It uses a different glibc version, at least.)
I'm really curious whether Amazon Linux is accepted by the Linux gurus or not. There seems to be very little documentation for the distribution.
The creation of CentOS Stream provides a new mechanism for partners and community members to add innovation to the next version of RHEL as it’s being built instead of after it’s built. We also recognize that there are different kinds of CentOS Linux users, and we are working with the CentOS Project Governing Board to tailor programs that meet the needs of different user groups.
In the first half of 2021, we will be introducing low- or no-cost programs for a variety of use cases, including options for open source projects and communities, partner ecosystems and an expansion of the use cases of the Red Hat Enterprise Linux Developer subscription to better serve the needs of systems administrators and partner developers. We’ll share more details on these initiatives as they become available. For those converting to RHEL, there is guidance available today for converting from CentOS Linux to RHEL.
LOL at the corporate guff. The primary use case of CentOS is "I want to run RHEL without paying anyone anything". The best way to "serve that need"? Don't kill off CentOS 8.3+.
I mean, that's de facto what CentOS was: RHEL, but unpaid and with different branding. They were built from the (exact) same sources, so similar that you could convert between them by installing or removing a few packages.
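For the CentOS-to-RHEL direction specifically, Red Hat publishes a tool for this. A rough sketch only; the exact repo setup and supported versions vary, and you should treat the commands as illustrative rather than a tested procedure:

```shell
# Illustrative only: converting a CentOS host to RHEL with Red Hat's
# convert2rhel tool. Back up the system first; the tool swaps out the
# *-release/branding packages and re-points the system at Red Hat's
# repositories (a RHEL subscription is required to complete it).
sudo yum install -y convert2rhel   # assumes the convert2rhel repo is configured

sudo convert2rhel --help           # review the options before committing
sudo convert2rhel                  # interactive conversion
```

The reverse direction (RHEL to CentOS) was historically the same idea done by hand: remove the branding/subscription packages and install CentOS's release packages in their place.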
The difference is that CentOS is really free. Free RHEL will be whatever Red Hat wants it to be. Of course it won't be as free as CentOS, but if it's free enough to satisfy most CentOS users, that might be good enough.
I've heard the idea mentioned by several sources. My guess is that unless Oracle does some kind of magic and manages to get anyone to trust them, we'll see a new community project to replace CentOS very soon.
Either that, or Debian's user-base will grow a lot within the next few months. :)
Oh cool. As for CloudLinux, "not free" probably scales for some hosting environments, including non-managed cloud instances.
But something like Springdale, given the resources, might be able to provide that. They're still tracking RHEL 7, though.
Debian and Ubuntu, which offer five years of long-term support, are the next best thing available, and even that is kind of tight for long-term deployments of self-hosted, old-fashioned business software.
Debian is particularly impressive, since they, on paper, aim to support all packages with security fixes, whereas Ubuntu's fully supported set is limited to its much smaller main repo.
OpenSUSE Leap versions seem to get three years, which really isn't enough for software that needs to just work for a long while.
> Debian and Ubuntu, which offer five years of Long Term Support are the next best thing available, and that's already kind of tight for long-term deployments of self-hosted, old-fashioned business software.
Remember that, in Ubuntu, the majority of packages are actually ONLY supported for nine (9) months -- not the full five years!
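You can see this split on any given machine. A sketch, assuming Ubuntu 20.04 or newer (older releases shipped the same tool under the name ubuntu-support-status; the package name used below is just an example):

```shell
# Summarize how many installed packages come from main (Canonical-maintained)
# versus universe/multiverse (community-maintained):
ubuntu-security-status

# Check which archive component a specific package belongs to; packages
# outside main carry a component prefix in their Section field
# (e.g. "universe/..."):
apt-cache show bash | grep -i '^Section'
```

Running this on a typical desktop install makes the "majority of packages" claim easy to verify or refute for your own system.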
> Debian is particularly impressive, since they, on paper, aim to support all packages with security fixes, whereas Ubuntu's main repo is a lot more limited.
What's the track record on that claim?
I'm sure Ubuntu will patch stuff outside of main if some vulnerability shows up that gets patched upstream or elsewhere.
I claim no deep expertise on this, and I assume Canonical has more money to throw at this. Or are there contributions to Debian security in the form of paid personnel?
This is actually quite interesting to me, anyone with real knowledge of the subject is welcome to interject.
Not only does this not "stick it to the man", it's directly addressed in the FAQ. If folks want to boot up another rebuild project, there's nothing preventing that. There are also several existing ones that you could go join.
But the point is indeed that there are resources and infrastructure, so one might be hopeful that there will be a good outcome.
One possible outcome would be increased demand and resources for Debian and/or Ubuntu and I definitely wouldn't mind that (five years of support isn't all that much in IT). Realistically though, a lot of people need RHEL for free and I suspect there will be a way.
> @syshum, yes, but it's not exactly RHEL, and it's not distributed outside AWS
On the first point you are correct. It's not exactly RHEL7.
On the second point, Amazon provides images for running on prem[0]. We run a lot of dev AmazonLinux2 VMs on prem so that the local computing environment matches the deployed EC2 environment.
Yeah, resources and infrastructure are not some magic that only Red Hat can provide. If the sources get released on https://git.centos.org/ or somewhere else, then it may work. Just like the old times [1]
Yes, this hurts. My use case is specific: I'm a dev, but since we're only 2 in the IT service with barely any budget, I'm also a (modest) sysadmin, and they also call me when the printer or the TV doesn't work.
So a few years ago, when Debian decided to have faster release cycles, I migrated all my VMs to CentOS: once the OS is installed, I don't want to think about it for the next 10 years.
I still haven't finished moving all my desktops from Windows 7 to 10, I'm swamped with users wanting to do Zoom / Teams / Skype / whatever video conferences, I have 3 new dev projects for 2021, and now I have to migrate all my CentOS VMs...
Yeah, thank you RedHat I won't forget / forgive that.
So let's recap: you have no budget to spend on Linux, but you pay Microsoft for Windows and probably Office... And now you're angry about allegedly not getting 10 years of support for your OS, which would have let you avoid some work upgrading or migrating the VMs. I suspect you're running VMware as well (and paying for it), instead of KVM or some other open-source hypervisor; and if the VMs are on a public cloud, you're paying for that indirectly too. You may indeed have some additional work, but you've saved a lot by not paying for a Linux distro, Red Hat's or anyone else's, so it should be worth it. Take that into account.
Before giving up on us just yet, I'd encourage you to check out our developer program for proper RHEL that's free. And I'd stay tuned for announcements that are coming in the first half of 2021 (as mentioned in our FAQ). You might find we've got a program for you:
If there truly is a free RHEL release for certain use cases being announced in H1 2021, there's absolutely no excuse not to announce it now. You've terrified a LOT of people. Giving a 'well maybe next year we'll announce a solution' is not a response.
You've burned people. Don't expect to then sell them Aloe Vera.
You don't get it: I have no budget, if it's not free it doesn't happen. And I'm certainly not going to trust Red Hat now that they pulled support 8 years earlier than promised.
I have nothing against you in particular, but if you know the guys that made this decision tell them the same words that Linus Torvalds said to Nvidia.
I will never touch anything Red Hat ever again, because I will remember how, after a fairly sucky 2020, Red Hat made sure my 2021 would not disappoint either.
Well, you've panicked people who are/were moving forward with CentOS 6/7 to 8, and not on RHEL because no budget. "Don't worry, sometime in the next 6 months there might be useful info for you, or there might not".
People aren't going to stick around waiting for that information. You've pulled the rug out from under us and we need to plan sooner rather than later. RHEL isn't do-able due to cost, CentOS isn't do-able because you've just killed it, so away we have to go.
I think people would panic less if the CentOS Linux cancellation were announced at the same time as these upcoming announcements. Without them, there's a lot of uncertainty and it's hard for anyone depending on CentOS 8 to plan.
RHEL licensing is really disappointing for the low end. I use CentOS for my home lab / self-hosted development setup and there's no way I would switch to RHEL. It doesn't matter because I don't spend any money, but I'll probably swap to Ubuntu next time I re-build my server.
Here [1] is an example of what I dislike: that page doesn't explain whether a subscription covers 1 host or all of my hosts, and it doesn't explain whether it needs to be renewed annually or whether I get a perpetual license after the first year of support.
I currently have 11 (extremely light-usage) CentOS VMs running, but almost everything is in Docker containers. I could likely consolidate it all onto a single bare-metal host if I wanted. That would be worse for me, but I could buy a single $800 license instead of $8,800 for 11 of them. It's a moot point, though: $800 is already too much for the value I'm getting out of it.
I could use the developer program, but I use an issue tracker for my own work and back up the whole system nightly. Technically that's production (to me), and I've seen IBM bait and eviscerate someone over licensing with extremely unethical tactics, so I'll never use anything that isn't very explicitly free for production.
I think RHEL is technically superior to Ubuntu, but Ubuntu is a far better product when it comes to support lifecycle, licensing, and support. I can spin up an Ubuntu server and unlimited VMs with the promise of a reasonable lifecycle and the option to click a button and buy support.
Where is that in the Red Hat world? If Red Hat had released a product like CentOS Stream, but with RHEL branding and a dead-simple way to go from a free, community-supported version to a paid, commercially supported one, then for people like me it would make sense to be the "beta" tester. I think that's a fair trade: downtime doesn't have a huge impact on me, and I'm willing to spend time hunting and reporting bugs.
TL;DR: the licensing is a massive hassle and a terrible value proposition for small users. You're not winning any mindshare unless it's as simple as Ubuntu makes it.
> It doesn't matter because I don't spend any money, but I'll probably swap to Ubuntu next time I re-build my server.
It kind of does matter, though, because at least for me, I am much more familiar with my home systems than what I use at work.
I started out using Red Hat at work, so I migrated my home lab to CentOS to gain more familiarity, which meant that when new projects started, I advocated for Red Hat. But if I'm forced to migrate to a different production-grade distro at home and develop expertise with it, then the next time there's a question about which OS to use for a new project, I can imagine myself pushing for whichever one I've spent the last few years tinkering with at home instead.
Agreed! It's the beauty of open source that you can build it your own way. Google and the other giants build their own distros as well, but they have enough talented people to maintain them.
> For my server I want a boring, stable OS, so I'm definitely not using Streams.
Have you considered Ubuntu Server? "Being boring" and "having no vision" are frequent critiques of Ubuntu, which (as we probably all know) can be the highest possible compliment in some scenarios, like servers. They also have a pretty decent LTS.
Dunno. I just spent an hour trying to figure out how to set the DNS server on an interface in Ubuntu Server 20.04. There's no ifconfig, /etc/resolv.conf is out, there's no nslookup, and /etc/network/interfaces is completely gone.
It is like a completely new OS to me, and I've been hacking on UNIXes for 25 years now.
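For anyone else hitting this: Ubuntu 20.04 moved interface configuration to netplan. A minimal sketch; the interface name `eth0` and the nameserver addresses below are placeholders for your own:

```yaml
# /etc/netplan/01-static-dns.yaml (filename is arbitrary)
network:
  version: 2
  ethernets:
    eth0:
      dhcp4: true
      nameservers:
        addresses: [192.0.2.53, 198.51.100.53]
```

Apply it with `sudo netplan apply`; `resolvectl status` then shows the per-interface DNS servers that systemd-resolved actually ends up using.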
I'm not sure the things you find missing are Ubuntu's fault. For example, while it's true that there is no ifconfig by default in Ubuntu (it's provided by a package called net-tools), one can read this:
> In 2009, Red Hat decided to deprecate ifconfig as the default command line network interface management utility, because the “net-tools” package (which provides ifconfig) did not support InfiniBand addresses (commonly used interconnect in high-performance computing applications).
So many of the things that make Ubuntu look like a new OS to you were actually decisions Red Hat made years ago, and they will also be present in newer RHEL versions.
That is a great opportunity to switch to Debian, which can be upgraded in place between major versions and, being completely community-driven, does not suffer from this kind of surprise.
Last time I checked, nothing was actually confined by AppArmor out of the box (IIRC I was looking at the output of ps -eZ and found that AppArmor wasn't actually protecting anything...)
Specifically, in RHEL/CentOS/Fedora I like that everything in the base system is reasonably well confined out of the box, including the random container images that users insist on downloading and running. I don't know if AppArmor is even capable of this:
Both containers are confined by the svirt_lxc_net_t domain, but since they have different labels, they aren't able to interfere with each other, or the host system, even if the process inside the container is running as uid 0.
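To make that concrete, here is a hedged sketch of what the labels look like on a Fedora/RHEL host with SELinux enforcing. It assumes podman is installed; note that recent releases name the domain container_t (the successor to svirt_lxc_net_t), and the image and category numbers are illustrative:

```shell
# Start two containers so their SELinux contexts can be compared:
podman run -d --name a registry.access.redhat.com/ubi8/ubi sleep 1000
podman run -d --name b registry.access.redhat.com/ubi8/ubi sleep 1000

# Both processes get the same type (container_t) but a distinct MCS
# category pair, of the form s0:cNNN,cMMM. The differing categories are
# what prevent a uid-0 process in one container from touching the
# other container or the host:
ps -eZ | grep container_t
```

The same mechanism labels each container's files, so even a root process that escapes its namespace is still blocked by the label mismatch.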
Companies do this all the time especially when there's an acquisition or change in management. In some cases they may technically still support it but with ever growing bugs, security issues and so on.
You're conflating "free software" with "community support". It's possible to run free software with a paid support contract backed by a corporation: that's what RHEL offers, and Ubuntu with their Ubuntu Advantage program.
CentOS offered free software, but with unpaid community support, which isn’t guaranteed at all as there’s no contract.
This is an unpopular opinion but this is why I prefer Ubuntu over Debian - there’s a corporation on the other end that’s being paid to update software, and if you choose, you can always upgrade to paid support that is backed by a legally binding contract.
If you're using open source software and not paying anyone, sometimes shit happens and you will be surprised or disappointed and have no recourse. Even if everyone starts off with the best of intentions.
We could debate forever about whether fault lies with projects overpromising, or users having unrealistic expectations, or whatever else, but I don't think that changes the situation.
If you are paying someone for the software/support, shit still happens, but you have a relationship and ways to get recourse.
Yeah we have some now ageing CentOS 7 crap to clear up. The obvious choice was CentOS 8 but that is now uncertain. I’m glad I was too busy to take this on earlier in the year.
Running a business on a free product. One that you knew Red Hat had acqui-hired. One whose maker you knew had a history of using products as a wedge to push people into a paying bracket.