What's new in Red Hat Enterprise Linux 9 (redhat.com)
97 points by nimbius on May 20, 2022 | hide | past | favorite | 90 comments


Prior to going to the cloud (Ubuntu ended up being easier there, even over paid RHEL), a lot of on-prem businesses are just structured around having a vendor, which basically means someone to call both to figure out what to use (unfortunately often dealing with sales folks) and when issues come up (support, implementation consulting, etc.).

Given margins are often so good on subscription models in a Red Hat-type world, I really wish they had gone a bit heavier on a cloud focus (AWS / GCP / Windows in my case): some free base images, container / VM focused, pushed hard for everything from Docker to cloud.

I thought Ubuntu executed very well here (given that their commercialization has always felt somewhat weak). They got images for Docker / AWS / GCP all first class and available at the basic level. They are also doing some deals with Microsoft around Windows Subsystem for Linux.

Not sure if anyone has done an analysis of Docker image downloads weighted by base distribution, but in my own experience things went from CentOS / RHEL being everywhere to Ubuntu and Amazon's Linux stuff.


Base images were a part of this release, and their developer subscription is free. Not sure if that applies to the various public clouds though. Red Hat's thing is hybrid cloud and being cloud agnostic. Amazon Linux locks you in to AWS (and by the way it's literally just a clone of RHEL) and Canonical is too sketchy for some enterprise customers. Also RH is focused on Podman over Docker, being the developers of it.


Where?

I just went on Docker Hub and searched for Red Hat Enterprise Linux and RHEL.

I get ubuntu with 1B+ downloads. I get oraclelinux, amazonlinux (?!?), alpine, debian and more. If they are releasing Docker base images, they are doing an absolutely terrible job getting them distributed. I'm honestly kind of shocked to see Amazon here; are people developing against Amazon Linux outside of just spinning up hosts on EC2, etc.? Amazing. I might do that myself.

Edit: Amazon Linux looks Fedora-based from 2022 forward, but with 5 years of support; that's not a bad combo if they can deliver. I'm going to try it for my next project. Their Amazon Linux support has been poor at times in the past.


I think quay.io is run by RedHat, so I'd expect RHEL containers to be there if nowhere else.


Seems like it’s at https://hub.docker.com/r/redhat/ubi9 (though nothing pushed there yet, see instead https://hub.docker.com/r/redhat/ubi8 for RHEL8 for now, I guess). See also https://developers.redhat.com/products/rhel/ubi


You can get them from catalog.redhat.com, e.g. for the rhel/ubi9 images:

https://catalog.redhat.com/software/containers/search?q=ubi9...

Red Hat has some sort of partnership with Docker to have them available on Docker Hub; I guess it's just not there yet...

https://www.redhat.com/en/about/press-releases/red-hat-bring...
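For what it's worth, the UBI images are pullable without a subscription from Red Hat's own registry, so a base image doesn't have to come from Docker Hub at all. A minimal sketch of using one (image name and tag are what the catalog lists; check catalog.redhat.com for the current ones):

```dockerfile
# Containerfile: build on the freely redistributable UBI 9 base image.
# No Red Hat subscription or registry login is needed to pull it.
FROM registry.access.redhat.com/ubi9/ubi-minimal:latest

# ubi-minimal ships microdnf instead of full dnf to keep the image small.
RUN microdnf install -y python3 && microdnf clean all

CMD ["python3", "--version"]
```

Builds the same way with either `podman build -t myapp .` or `docker build -t myapp .`, and a plain `podman pull registry.access.redhat.com/ubi9/ubi-minimal` works too.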


I'm curious - why is Canonical considered "sketchy"?


I just don't think they ever made the enterprise side of delivery (training and more) a major priority. Interestingly, in business it's sometimes not London that's seen as their connection but South Africa (the name "Ubuntu", and the founder's background).

Do you buy Red Hat Enterprise Linux, or Ubuntu? Just based on the name, I think RHEL has an advantage personally.


Basically, it's a much smaller company than Red Hat, so it's uncertain whether they can provide the same level of tech support for massive enterprise customers, something Red Hat's been doing for 20+ years. It's also privately owned and controlled by one guy, so it's hard to predict the future of the company.


Red Hat is US owned (IBM). Canonical is headquartered in London, which is close, but not sufficient for projects where no foreign influence is allowed (e.g. nuclear work in Department of Energy)


Right, and I think they missed some major opportunities as a result.

Would be great to have an offering as the basic base image for docker containers (outside of UBI).

They let Project Atomic die.

Then they let CoreOS Container Linux die I think?

They have Fedora CoreOS but don't seem to promote it at all, and I can't find images on Docker Hub for it.

The RedHat stuff is generally paywalled. CentOS is gone.

I know they are pitching the IBM cloud - I don't care about the IBM cloud - though they keep on advertising Watson to me as the solution to every issue.

The OpenShift brand was hurt by how enormously complex it was compared to plain Kubernetes and things like ECS and Fargate.


Project Atomic and CoreOS Container Linux both became Fedora CoreOS. Neither really "died", they were just folded together into a new effort.

Fedora Silverblue is also comparable to Project Atomic for the traditional non-container installed OS scenario.


There are a ton of bad changes in RHEL 9 too, which obviously weren't mentioned here, but were in the long-form release notes that most people don't read all of. Here are some of them:

* No more KVM virtualization on IBM POWER (e.g., Raptor's Talos and Blackbird systems)

* SPICE is removed in favor of VNC, even though VNC doesn't yet support a bunch of SPICE's functionality, such as audio, smart card sharing, and USB redirection

* No more taking snapshots of VMs

* virt-manager is removed in favor of a new Web console, even though it doesn't have all of the features of virt-manager yet

* SSHFS is gone (it's in EPEL now, but a lot of places have ridiculous rules that only official RHEL packages are allowed)


> * virt-manager is removed in favor of a new Web console, even though it doesn't have all of the features of virt-manager yet

Are you sure it was 'removed'? I have CentOS Stream 9 installed and virt-manager shows as a CentOS package (not EPEL). CS9 is not RHEL 9, but I believe virt-manager's support was deprecated.


Oops, I did indeed mix up deprecated with removed.


A few years back during a work lunch I spoke about installing Red Hat 9 from CD on my 350 MHz Pentium II with 64 MB of RAM. A 20-year-old sysadmin interrupted me to say that wasn't possible, as "they're only on Red Hat 7 right now". Explaining how Fedora and RHEL came to be from the original Red Hat made me feel old :-P. Now we've come full circle.

EDIT: At the risk of sounding like "that guy" I'd like to clarify that I don't fault my former colleague for what he said; for nearly 20 years "Red Hat" has been colloquially synonymous with RHEL (vice CentOS and Fedora) and I merely thought this was an amusing anecdote.


And I had one job candidate tell me she had "Linux 7.3" installed, where she meant Redhat 7.3.

And so many years later, I have yet to see Linux 6.


Thank you, I spent some time doubting my sanity about the RH version numbers.


I'd love to see a Red Hat CoreOS outside the context of OpenShift at some point.

Seeing a proliferation of container-first, immutable, and atomic distributions like Fedora CoreOS would be a good middle ground between bare-metal and Kubernetes.

All of my Fedora CoreOS hosts automagically upgrade themselves via rpm-ostree and Zincati, and the fact that the OS is atomic and disconnected from the containers I'm running on top of it gives me a lot of peace of mind in terms of stability.
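For anyone curious what that looks like in practice, Zincati's update behaviour is driven by a small TOML drop-in. This is a sketch based on the Fedora CoreOS documentation (file path and key names as documented there), confining automatic reboots to a weekend maintenance window instead of the default immediate strategy:

```toml
# /etc/zincati/config.d/55-updates-strategy.toml
# Zincati stages rpm-ostree updates in the background, but with the
# "periodic" strategy it only reboots into them during defined windows.
[updates]
strategy = "periodic"

# One-hour window starting Saturday and Sunday at 22:30 UTC.
[[updates.periodic.window]]
days = [ "Sat", "Sun" ]
start_time = "22:30"
length_minutes = 60
```

After dropping this in, `systemctl restart zincati.service` picks up the new strategy.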


It looks like a lot of people here don't understand Red Hat's business.

When billions are on the line, companies don't want you to Google / Stack Overflow for days; they want the problem solved in hours. That's where Red Hat comes in.

It's like auto insurance: you are paying for nothing until a fatal crash happens.


I'm curious: how many companies on here use Red Hat for their server infrastructure?

What are the benefits and costs?


As someone who does both sysadmin work, and programming/modeling&simulation work, at a significant research nonprofit in the US...

RedHat is stable: Which has the benefit that it's stable, and the cost that it's always behind the times.

Security is deadly serious, and Red Hat makes it easy. STIGs are available for RHEL; they're also available for SuSE [very recently] and Ubuntu LTS releases, but we're heavily invested in RHEL infrastructure, training, etc., so switching to Canonical would be expensive and hard. A few years ago I earnestly looked at us coming up to speed on Ubuntu, but with Canonical's recent behaviour around packaging I'm kind of glad we didn't.

We rolled with CentOS in a few places for a while [all the benefits of the training and experience], but the recent changes in that make it useless for our needs.

I've been around for long enough that I remember when Linux sucked; I started using it around 1997. Nowadays, I find that all modern distros are in the same ballpark of "pretty much work pretty much all the time"; so while I don't love RHEL, it's been a safe stable choice and doesn't bite me in the ass.

I think this was linked on HN the other day, and it basically covers anyone using RHEL: https://boringtechnology.club/


  I think this was linked on HN the other day, and it basically covers anyone using RHEL: https://boringtechnology.club/
God, I think you can tell the real engineers who have gone through the trial by fire from the ones who haven't simply by how much they agree with that presentation. At some point in your career, at 2 AM, when you're debugging that fantastic new open-source library for the 10th time (the one you rewrote everything around because it did exactly what you needed, and that made you a hero for a week), you switch from grabbing the latest cool thing to picking those old technologies where everyone knows why they suck.

Because knowing why something sucks when you pick it is more important than knowing why it's good.

RHEL sucks because in 10 years you will be running some horrendously "outdated" software. It's fantastic because over those 10 years you will have made a pile of money selling your product and adding features rather than spending cycles being an upgrade monkey for things you didn't know you needed until someone sold them to you. And your customers? They don't care what OS you're running as long as it's reliable. Which is something RH provides in spades.


Ten years, yep, sounds about right. We're finally getting rid of our last few RHEL6 boxes. It got pretty long in the tooth at the end there, but it never gave me any hassle.


Especially if you know why it sucks you can work around the suckage; but 99% of everything sucks, so if you don't know why the new shiny thing sucks, you're gonna have to find that out yourself, under fire.

(This can even happen between various old and established things; if you know MySQL and all its warts well, perhaps switching to the relatively unknown PostgreSQL "because it's better" isn't the best use of your time. You should be able to explain why and why not before making the switch. When in doubt, do what everyone else is doing - unless it's your differentiator.)


In the software world, RH is not just old, it's practically antique. Their kernel age can really hold you back; compare it to the way SLES releases their minor versions. I've done a lot of work with RH, but I do not recommend it or enjoy it.


If all you are doing is running a webserver, java app, and database the kernel version pretty much doesn't matter. This is the use case most companies are in.


I don't know about "on here" because HN tends towards startups and freelancers, but in the US Enterprise sector, Red Hat is what you buy when you want a vendor-supported Linux in your expensive lights-out datacenters.

If you take a look at the Fortune 500, it wouldn't surprise me if Red Hat was in use by at least 490 of them.


Working for one of the hyperscalers as an architect, this is exactly my finding too. RHEL is the standard, everybody runs it. Exceptions are for SAP (SUSE) and some container workloads. RHEL for all the rest.


Yeah it's common in the traditional, established firms. Something about compliance as well as the fact that some of the proprietary enterprise software packages they use only support RHEL.


I'm a NASA contractor and until recently we used CentOS a lot. With the changes made to CentOS by Red Hat, we are now moving to RHEL or Ubuntu. The Agency prefers to have an OS with official, paid support. (I do not speak for NASA, my employer, or anyone else, and I'm retiring in two weeks, so there.)


> The Agency prefers to have an OS with official, paid support.

How did CentOS fit into that? I guess I would have naively expected you to be on RHEL to begin with.


Where I was, projects started on CentOS. Often once they got big / went to production, they were switched to RHEL.

And devs often used CentOS when playing around (i.e., setting up VMs at home or just randomly) because you didn't need to talk to anyone to do so.


AlmaLinux with paid support on demand by Cloudlinux?


The agency tolerated CentOS but was never happy about it. The fact that it was directly derived from RHEL and could be expected to get timely security fixes probably helped make them comfortable with it. The agency is also concerned about Python 2 remaining the default in both CentOS 7 and RHEL 7. Red Hat has committed to backporting security fixes for Python 2 proper, but that leaves a large universe of add-on packages that might not get patched. Hence the move from both CentOS 7 and RHEL 7 to RHEL 8.


In my experience CentOS basically let you get all the security benefits from RHEL with none of the costs. So it has been popular to minimize budgets.


Sure, but that's the exact opposite of having paid support, which is why I was confused


Hope you enjoy your retirement :-)


Oracle is an option (especially for converting CentOS without reinstalling), but it has become stunningly more expensive.

There used to be a $99/yr "support in name only" option, with no ability to file service requests (community discussion boards only). The basic level of support was $500/yr.

It now appears that basic support is $1,200/yr. Of course, ISO downloads and yum support are still free, but the support contract is now much more expensive.

Oracle also still offers free KSplice kernel updates for Ubuntu, but it appears that Fedora support was removed.


A couple years ago I worked for a different federal agency and RHEL was used extensively, but was being replaced with OpenShift. The base images being used were all over the map and the whole process was quite a convoluted mess.


Many companies are conservative and want a "vendor" for support. The last time I saw Red Hat being widely used was at a defense contractor and a state government agency. This was before Ubuntu was popular, in the late 2000s.

I never saw this "support" actually used. The benefits are more a CYA thing.


The thing is not about calling support but about having a contractual agreement, so you have somebody to sue in case something goes wrong. That makes lawyers happy. And it's useful when offering your own services, as you can state that potential liabilities are covered by the vendor. How well that would play out in reality is a different story, but it gives people some better rest at night.


Absolutely. That's why I said it was more a CYA thing.


It does get used, though. There are some very intractable issues with modern software implementations, especially with OpenShift container storage. Unless you've worked with it tons, it's not exactly something a first-year sysadmin can solve.

Personally, I tend towards Debian for infrastructure, because while I have experience with RHEL, I prefer the Debian layout, package management, and freely available help from very knowledgeable people. Not saying that RHEL are pillars of intransigence, they're not, but I've found it easier to solve my own issues. As an almost three-decade senior sysadmin/devops guy, I know where to go and who to speak with if I run into something really odd. Fortunately, that's rare. Once you've used Linux from a sysadmin POV for a few years, things begin to make sense from a file system, maintenance, and overall "keeping everything alive" approach.

I was mentored back in the 90s by a Debian guru at our local LUG. He's since passed into eternity, but much of what he taught me and others still applies. The beauty of POSIX systems.

Now... when systemd got released, I didn't know what to make of it, but after many years I've gotten used to it. I still prefer the old BSD-style init files and the non-binary approach, but what to do with modernity? It will pass you by if you don't embrace it.


Most places I've worked ran commercially supported Linux on their on-prem production systems, with the "free / community" edition (read: CentOS) on dev/test.

Having a support contract in place was often a requirement for the clients, perhaps due to regulatory reasons.


We have someone to call who knows how to debug I/O perf.


The benefit is support and access to their support pages; you can also make general administration easier through some of their tools. But it's pretty expensive.


That's why we dropped it. It just got to be too expensive. Yes you got access to their support knowledge base, which was sometimes helpful but if you did a little work you could almost always find the answers elsewhere.

The main thing I liked about RHEL was its stability. Not a lot of churn, and for running any other vended software, support for RHEL was never a question.

Converted everything to CentOS. When Red Hat killed that distro, that was a bridge-burning event. We use Ubuntu now.


What tools? Cockpit?


Compared to RHEL alternatives like Alma/Rocky: possibly faster access to patches, support, insights (cloud based analytics and analysis of your workloads/security).

Compared to other Linux distros: some COTS software requires RHEL, and risk-averse orgs probably need to layer RHEL support with their COTS support contracts to meet SLAs/support requirements.


Alma/Rocky are downstream of RHEL. By definition they will never have faster access to patches because they pull them from RHEL.


I guess you misunderstood the direction of the comparison.


Until we moved serverless, we did. Business folks wanted someone to blame when sh*t hit the fan, so they felt it was worth it.

As a developer, I love working with it.

But now nearly everything we deploy is on AWS Fargate and our images are Debian based.


Just curious: why did you switch to Debian, especially with RHEL being more or less free for containers?


A lot of the base images we ended up using are Debian so we just wanted to be consistent, whereas before we were running software from the RHEL repositories.


We use a TON of RHEL, mostly for their support, but also to have a product that is supported. Helps with audits and compliance. I imagine most big places have similar rationales.


Don’t exactly run servers, but they’re FIPS certified and they’re the first (often only) distribution hardware vendors support


Normally it's because the application vendor supports RHEL/CentOS. So that's what you run.


Some software vendors annoyingly require particular distros (like rhel8) to use their support lines


> companies

plenty of activity each day that is not in that category


Looks like a cool release. Note there is a typo on the page saying Node.js 6 when I believe they really mean Node.js 16. Node.js 6 would be horribly out of date for a new release!


Are you sure that's a typo? Being ten revs behind seems like a very common RHEL practice.


Fair reasoning, but it’s a typo. The new features listed on the RH site correspond to Node 16.

Release notes: https://nodejs.org/ko/blog/release/v16.0.0/


> Python 3.9 gets lifetime support in Red Hat Enterprise Linux 9

Wonderful.


Could anyone tell me why Red Hat intentionally uses non-LTS kernels? I can understand why they didn't go with the last super LTS 5.10, but why go for 5.14 over the still supported 5.15? Sure it wouldn't matter in a year but that's still a year of support.


I'm a bit out of the loop on RHEL, but a large reason for the specific kernel versions is to maintain API/ABI compatibility.

One of the reasons we stayed with RHEL-ish installs, at a place where we ran hundreds of RHEL/CentOS instances, was to maintain API compatibility throughout the lifecycle of the OS version.

I've now moved on to places without that specific need, but I still have a few smaller projects where I use RHEL/CentOS/Rocky/etc. because of the stability built into those releases.


I don't see why that would explain why they picked 5.14 rather than the LTS kernel of 5.15, unless LTS support regularly breaks API/ABI compatibility.


It is because they don't need the community's help. They have a huge kernel team, and do all their own backporting. The RHEL 5.14 kernel is quite different from the vanilla 5.14 kernel, making the community's help not that useful.


That part I couldn't tell you; I haven't looked into it much lately.


Stability. The kernels contain tons of backported goodies and bug fixes, and RHEL users dislike change. We use Debian where I work, and it's stable, but not as stable as RHEL. When it absolutely needs to be rock solid: RHEL. I've used Red Hat since 1998, Fedora for personal use.

You'll find some people using Debian Stable for some important things, but with Red Hat paid support you can get help with seriously intractable problems. With Debian there is no paid support except from third parties. Loads of corporations like having a throat to choke when the SHTF.

Forgot to mention that tons of hosting companies using Red Hat/CentOS/variations of RHEL use cPanel. It doesn't run on anything else.


Mid-skill-level admin here, mostly Linux. I put up a Debian server on racked hardware for a consulting client and did almost nothing else but check on it from time to time. Uptime was past 800 days when I took it down; not one reboot. I updated OpenSSL and related packages, and that was about it. AMD64.


3 years of uptime might sound impressive, but this means you keep running the same kernel, and towards the end of that streak, that kernel is pretty much guaranteed to have a widely-known local privilege escalation vulnerability.


You sound smart and are probably technically correct, but the reality is that what replaced this setup was ephemeral VMs run by outsourced contractors in India, and there was spam coming from their setup within weeks of the transition. "Admin" as in: I shared my root password with the crew of contractors in my small company. Like that.


The very reason I will never work for a shop that outsources anything. Especially fintech stuff. You just never know who is going to have access. RBAC is huge where I work.


It would be interesting to see what percentage of meaningful security vulnerabilities couldn't be fixed via live patching. Of course, that does require investing the significant effort needed to get live patching to work, but it is possible.


I only recently learned to use RHEL (did the RHCSA) and from before I've used debian and Slackware. RHEL is really polished, more so than the competition. I asked around why more people aren't using RHEL in Scandinavia, and the reply I got was that it was mostly cultural.

In Europe it's Debian or SUSE, and in the states it's RHEL. I don't have a preference (but Slackware is awesome).


Years ago, for kicks, a few friends and I did the Gentoo thing starting from Stage 1 tarballs. It took my then Toshiba Satellite 12 hours to bootstrap. When I got it all up and running, it looked like anything else. I was happy for the experience, but even with all the tweaks for my specific chipset, RAM, etc., it was no better than Red Hat or Slackware.


There's a clear trend of migrating from SUSE to RHEL in Sweden, at least.


So the LTS kernels have backported features as well? My understanding was they only got bug fixes, but my knowledge primarily comes from Wikipedia.


The notion that RH kernels are stable is patently false. RH constantly backports features to "stable" kernels, and it does cause problems.

I was on the forefront of this when they were backporting container tech to their 2.6.32 kernel.

I've also seen them break userspace repeatedly with kernel changes for selinux within a stable kernel.

But as you say, you can pay for support so you have someone to scream at when things go south.

When it comes to ABI stability, I'd trust the vanilla kernels more than RHEL.


Yeah, Suse has a better kernel method IMHO.

Yelling at RH is about as effective as screaming at paint to dry faster, in my experience. Their support can be very effective, but it's so slow it's mostly useless. They put you through a few layers of "are you this dumb" before you can get anywhere.


What is the status of Rocky Linux?


> now supports kernel live patching

Never heard of this before. Looks like it's called kpatch: https://github.com/dynup/kpatch


Oracle snagged (acquired) the first one for Linux, Ksplice:

https://ksplice.oracle.com/

Canonical paywalled the second (free for "personal" use):

https://ubuntu.com/security/livepatch

kpatch was/is the open, non-patent-encumbered version (IIRC); I thought Red Hat had offered some version for years?

Edit: I've only rarely touched Red Hat in the past few years, and I generally prefer "know that the system boots correctly after (kernel) updates" over "it's running a new (presumably secure) kernel, but might not boot if power fails". It's a trade-off, but I rarely need kernel live patching. Services probably need a restart due to a new SSL library anyway...


Does anyone know if Red Hat enabled landlock in this release? The kernel supports it since 5.13 and they are running 5.14. The kernel config should look something like this:

CONFIG_SECURITY_LANDLOCK=y

CONFIG_LSM="landlock,lockdown,yama,..."
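A quick way to answer this on a running box, rather than digging through the build config, is to ask securityfs which LSMs are actually active. A small sketch (the `/sys/kernel/security/lsm` path is standard Linux; the fallback covers environments where securityfs isn't mounted):

```shell
# Check whether Landlock is among the active LSMs on a running system.
# securityfs exposes a comma-separated list in initialization order.
if [ -r /sys/kernel/security/lsm ]; then
    lsms=$(cat /sys/kernel/security/lsm)
else
    lsms=""  # securityfs not mounted (e.g. minimal container)
fi

case "$lsms" in
    *landlock*) landlock_status="enabled" ;;
    *)          landlock_status="not enabled" ;;
esac
echo "landlock: $landlock_status"
```

The same question can be answered from the packaged build config with `grep LANDLOCK /boot/config-"$(uname -r)"`, which is effectively what the reply below did.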


# CONFIG_SECURITY_LANDLOCK is not set

CONFIG_LSM="lockdown,yama,integrity,selinux,bpf"

on kernel 5.14.0-70.13.1.el9_0.aarch64

So not on ARM at least


Thanks for that info. That's unfortunate.


I don't get this. They upgraded some software versions? What is the deal here? You can just use conda or containers, or compile yourself and add to a PATH env var. What is this, the '90s? What am I missing?


They promise that it will work well together and will provide support if it doesn't.

That's a big deal when you run a big business.


Why use any linux distro ever when you can make your own? /s



