Red Hat Enterprise Linux 8 released (redhat.com)
289 points by lubomir on May 7, 2019 | 146 comments



Wrote this comment a while ago for anyone wondering about this:

Just installed it in a VM, changes that jumped out at me:

• No Python (that you should develop against) installed out of the box. There's a /usr/libexec/platform-python (3.6) that yum (dnf) runs against, and then python2/python3 packages you can optionally install if you want to run python scripts.

• Kernel 4.18

• No more ntpd, chrony only

• /etc/sysconfig/network-scripts is a ghost town, save for a lonely ifcfg file for my network adapter. No more /etc/init.d/network, so /etc/init.d is finally cleaned out. It looks like static routes still go in route-<adapter> and you ifdown/ifup to pull those in (it calls nmcli); a minimal sketch follows this list.

• Pretty colors when running dmesg!
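For anyone curious, a minimal sketch of what the static-route bit looks like (interface name and addresses are made up; the legacy file format and the nmcli equivalent are as I understand them, so double-check against the docs):

    # /etc/sysconfig/network-scripts/route-eth0  (hypothetical interface/addresses)
    192.168.50.0/24 via 10.0.0.1 dev eth0

    # pull it in; under the hood this goes through nmcli, not the old network service
    ifdown eth0 && ifup eth0

    # or skip the legacy file and let NetworkManager own the route directly
    nmcli connection modify eth0 +ipv4.routes "192.168.50.0/24 10.0.0.1"
    nmcli connection up eth0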


> Pretty colors when running dmesg!

Neat, but a great isolated example of the ancient software people who use RHEL have to deal with. RHEL 7 ships dmesg from util-linux 2.23; the "colors by default" feature[1] first appeared in 2.24[2], released on October 21st, 2013, which is around the time[3] the first beta of RHEL 7 came out.

1. https://github.com/karelzak/util-linux/commit/9bc2b51a06dc9c...

2. https://github.com/karelzak/util-linux/releases/tag/v2.24

3. https://en.wikipedia.org/wiki/Red_Hat_Enterprise_Linux#RHEL_...


Most things on RHEL 7 aren't ancient - there are plenty of backports, even major ones.

Some examples:

- OpenSSL rebase to 1.0.2k (for HTTP/2 support).

- overlayfs2 kernel support.

- Kernel eBPF instrumentation.

- Introduction of podman and friends.

- Ansible is kept up-to-date.

- GCC 7 and Python 3.6 via Software Collections (a quick usage sketch follows this list).
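For anyone who hasn't used Software Collections, a rough sketch of the workflow on RHEL 7 (the devtoolset/rh-python collection names here are the ones I'd expect; adjust to whatever your repos actually provide):

    yum install devtoolset-7 rh-python36       # from the Software Collections repos
    scl enable devtoolset-7 rh-python36 bash   # shell with both collections on PATH
    gcc --version                              # now reports GCC 7.x
    python3 --version                          # 3.6.x, system Python untouched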

This includes extensive testing. I have non-production systems on Fedora which run mainline kernels and have seen my fair share of performance regressions and crashes.

I'm assuming there was no notable customer demand for colorful dmesg output.


Some of the backports aren't nearly as performant as a newer kernel, though.

For example, eBPF was backported (to CentOS as well), but when running a syscall-heavy workload in a Docker container on the older kernel, about 50% of the CPU time was spent in the kernel filter.

I ended up moving our entire CI/CD platform to Ubuntu 18.04; the performance issues went away and my workloads now run at full speed.

RHEL 8 ships the 4.18 series of the Linux kernel, which is already EOL upstream. That's a shame, and once again it will fall behind quickly :/


Seriously, why did they not bump it to 4.19? Do they hate bicycles? Do they make money based on the fact that upstream LTS kernels have short shelf lives compared to RHEL's own LTS kernels?


The kernel version was finalized some time before this release when 4.18 was current. Red Hat expends a ton of effort on long term maintenance and huge backports of new features to the kernel, so while I don't want to speak for the kernel team, I don't think the upstream stable kernels bring very much to the table.

Plus (my personal view) what goes into the upstream stable kernel is fairly random based on just mailing list NACKs, whereas what goes into the RH kernel has to pass a massive range of automated tests on a wide variety of real hardware.


Red Hat also certifies a whitelist of symbols that will remain identical in all future releases for the life cycle; stuff like that takes time, so just grabbing the newest LTS kernel the moment it's cut isn't feasible.


> That's a shame and once again it will fall behind quickly

RH uses a kernel with TONS of patches, so the version number isn't critical here.


Their frankenkernel is very hit and miss, backports or not. Docker had to disable a few features because at first they seemed to work, but then turned out to be buggy on RHEL.


But why wouldn't you use Red Hat's build of Docker? I mean if you're already paying them...


Can't remember any real problems with docker on RHEL/Fedora.


I mentioned my sys call filtering issue with eBPF taking up massive amounts of CPU time...


Colorful dmesg output was already available on Cent7, just not the default.


I've always felt that RHEL really nails that old-school corporate Unix feel of having to deal with stodgy tools that are either really old and/or lack basic ease-of-use features.

Reminds me of the time I wrote a script that called 'hostname -x' on SunOS instead of Solaris and it changed the hostname to '-x' and broke X11. RHEL is the nostalgia Linux.

But seriously, has anyone ever empirically verified that the Debian Stable/RHEL model of shipping a bunch of really old packages and then layering years of patches over top actually generates more stable, more secure code?

My intuition after a couple decades of software dev is that bugs will fester longer in the old version and the patches themselves will start having bugs as the top of tree diverges more and more from the shipped package over time.


The main source of stability for RHEL isn't that any one arbitrary version of a package they ship is better than another one, or that their patches on top don't suck. It's that they ship a long-term "stable" (as in "doesn't change much", not "sucks less") set of software for production use.

Thus, if you install some random vendor's shitty software you can rest assured that the version of libcurl and 50 other libraries they depend on is something they themselves have tested on RHEL.

The same goes for hardware that you buy. When you buy e.g. Dell rack-mounted servers you can safely assume that the open source driver version maintained by the vendor shipped as part of the RHEL kernel is something that's seen extensive production use, unlike the latest upstream kernel, or whatever "in-between" Debian et al are shipping.

Am I recommending you use RHEL? No, it's not the right answer for everything, and I certainly have my share of RHEL scars, including a couple of times where a mundane bug in my program turned out to be a kernel bug (one in RHEL's own shitty patches, another "known" bug with their ancient kernel).

But this is the reason to use it, and why some major commercial vendors say "we support Linux: any distro you want, as long as it's on this list of RHEL versions". They just want to deal with those kernel/library versions, not any arbitrary combination out there in the wild.


>layering years of patches over top actually generates more stable, more secure code

Well, I think your definition of 'stable' is different from what RHEL/Debian customers mean. Stable isn't seen as "doesn't have bugs"; it's "works predictably". Which is a subtle but meaningful difference.


> But seriously, has anyone ever empirically verified that the Debian Stable/RHEL model of shipping a bunch of really old packages and then layering years of patches over top actually generates more stable, more secure code?

Debian has released a new stable version every 2 years for the last 14 years. RHEL/CentOS are the only ones on a 3-5 year cycle.


And yet they wait months between freezing the distribution and releasing, because a few troublesome packages have issues.

Someone needs to thaw Debian out.


No?

The fact that there's a freeze to allow for shaking out troublesome issues in a few packages (and possibly discover ones you didn't already find in older ones) without much risk of others newly breaking is a feature, not a bug.

Debian testing/unstable, backports and third-party repos exist if people really want the latest anyone's packaged, or the latest version of one specific thing on their otherwise stable system.

You may disagree with the philosophy, but every part of that behavior is working as intended.


Stable doesn't necessarily mean 'doesn't crash'; what it means is that the API/ABI interface is stable.

E.g.: let's say libfoo.so.1 implements DO_FOO; libfoo.so.2 implements DO_FOO2, but not DO_FOO. In this case, anything you have that links to libfoo.so.1 and needs DO_FOO would have to be patched, recompiled, and shipped out to all your customers. For the distribution provider, this is not really a huge deal. But RHEL is merely the platform. The value-add is that 3rd parties can write software and compile against libs and know they're not going to break arbitrarily.
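To illustrate with that entirely made-up libfoo: the binary records the soname it was linked against, and the distro's promise is that that soname (and its symbols) keeps existing, unchanged, for the life of the release.

    # hypothetical app linked against libfoo.so.1
    $ ldd ./app | grep libfoo
            libfoo.so.1 => /usr/lib64/libfoo.so.1 (0x00007f...)

    # if the distro replaced it with libfoo.so.2 (which drops DO_FOO), you'd get
    #         libfoo.so.1 => not found
    # and the vendor would have to rebuild and re-ship ./app to every customer.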

Similarly, if you've ever written a kernel driver, you'd know that kernel function names and signatures can change from release to release. The same example above applies to kernel code as well. So, compiled binary drivers would have to be patched and recompiled, and shipped out. If you're writing a driver for a network card, would you prefer having to ship your (non bug) driver updates every few months, or every few years?


It doesn't generate more secure code—as you say, patches themselves may have bugs, and way fewer people are looking at the patched branches. Active development happens on HEAD, and dodgy code is often rewritten before anyone goes actually looking for bugs (security or otherwise). Many years ago I helped with a paper on how the practice of applying only "important" security bug fixes doesn't work: https://arxiv.org/abs/0904.4058

But the goal of the long-term-stable approach isn't security or stability per se: it's striking a tradeoff between operator work and risks to security and stability. You could, of course, snapshot Fedora (or Debian testing, or Arch, or whatever) from 2013 and keep running it. Nobody is stopping you, and it'll still run on new machines. And then you have to do zero work to keep your system up-to-date, but you'll likely have tons of security and stability bugs.

On the other extreme, you could run Fedora rawhide (or Debian unstable, or current Arch, or whatever) and update nightly, which would mean you get security fixes as fast as possible (they're almost always developed on HEAD and backported to release branches), and you get performance and stability fixes that people haven't deemed worth backporting, but you also risk API-incompatible changes that break the actual applications you care about.

You'll need to set up really good CI to make sure you have coverage of everything in your application, and it's not just a matter of automation: you'll need a well-staffed team to respond quickly every time that CI goes red, figure out what changed, and update your applications to match. (And, of course, you have the risk of security issues in new code that hasn't been subject to public scrutiny yet—the inverse problem of security issues in old code that's no longer subject to public scrutiny.)

The goal of a long-term stable distro is to be in the middle of those two, to give you something that changes rarely (stability in the sense of "no surprises," not "doesn't crash in prod") but often enough that you get major, identified security fixes and particularly safe performance (and stability-as-in-"no longer crashes in prod") fixes.

And yes, part of the goal of a long-term stable distro is that it provides you measurable security and stability over unmeasurable but potentially greater security and stability. They don't fix every CVE, but they do fix the flashy ones. You can look at it cynically and say, this is the distro for people who want to tell their boss "Yes, we patched Heartbleed and Shellshock" but don't inherently care about security. But on the other hand, flashy vulnerabilities are more likely to be exploited, so it's not a particularly bad tradeoff.


.. booting a machine with Ubuntu 14.04 as this is written: it is NOT so easy as 'snapshot it and run it forever', because the OS people are trying to HELP you, by FORCE, onto a current version. Plus, so much of these machines' success was networking; they are network-based and rely on the network to operate more things than anyone casually realizes..

It is a GOOD thing to run old versions, for a purpose, by your personal choice. It is NOT GOOD to have help by force, and in the US legal system at least, many individual rights are based on this assumption, even with some inevitable negative outcomes. Please note that in many parts of the world, and in many kinds of organization, this trade-off is NOT made, and quite a few fundamental technical decisions are going to be made along the lines of 'do it, there is no choice'.


As you say, it goes both ways. Many kernel vulnerabilities are found and fixed within weeks to months of introducing them, with LTS distros totally unaffected.

And then you have bugs being fixed on master (sometimes silently), and the backport maintainers fail to backport them.


So I hope I can answer some of these [disclaimer: I work for Red Hat]:

Python: This is about the module system. Modules let you install different versions of parts of the stack. For example, different Python, different Apache version, different QEMU. These will move much faster than base RHEL because they're now decoupled. You can install one version of each module from a choice of several versions available at any one time -- it's not parallel install (for that there is still Software Collections). The reason for not having parallel install is basically because people use containers or VMs so they don't really need it, and parallel install brings a lot of complexity.

For Python we tried to remove all the Python dependencies from the base image, but didn't quite manage it because of dnf (although that is in the works, with at least the base dnf 3 being rewritten in C++). So we need a reliable System Python which isn't in a module (else dnf would break if you install modular Python 2.7). Basically, don't use System Python unless you're writing base system utilities; instead, "yum install python3" should pull in the right module.
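To make that concrete, roughly what the workflow looks like (exact module/stream names may differ; check `dnf module list` on a real RHEL 8 box):

    dnf module list python36           # show available streams for the module
    dnf module install python36        # or simply: yum install python3
    python3 --version                  # the modular interpreter, for your own code

    /usr/libexec/platform-python -V    # System Python: reserved for base OS tooling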

Kernel: As usual the version number isn't that interesting, as a lot of work will be done through backports.

ntpd: Can't say I'm very happy about this myself :-(

Network scripts: It's NetworkManager all the way. Again, mixed feelings about this, but I can't say I loved network scripts either.


I have experienced only joy switching from ntpd (and worse, openntpd) to chrony.

Why aren't you happy with the ntpd->chrony move?


I guess it's just what I'm used to. I'm also running my own NTP stratum 1 server from a GPS source using ntpd, but that's on Raspbian (https://rwmj.wordpress.com/2015/07/04/stratum-1-ntp-server-p...)


> For Python we tried to remove all the Python dependencies from the base image

Do you know why? I think it would be cool to not have any interpreted languages in the base image and FreeBSD manages to do that but I don't consider it that critical. For me it would be more interesting to not have Perl at all than Python...

I guess the situation with Python 2.7 on RHEL 7 was/is that painful?


There's been a huge effort to get the base RHEL image size right down, so obviously getting rid of Python would help there. As for why we need to reduce the size of the base image, the answer is - as always - because containers.


I agree with this. My personal opinion is that advanced scripting languages, outside of shells, shouldn't be installed by default.

(Of course, this usually gets killed pretty quickly, as dependency hell quickly brings in advanced scripting languages.)


I'm curious where the line for "advanced" lies, although in principle I'd agree that Python is certainly past it.

But shouldn't interpreters be good for reducing the overall runtime code size in principle, if enough system tools run on them? High-level bytecode can be very compact.

Or better yet, compile natively, but to threaded code, and share the stdlib behind it.


What about Ansible? How does it fit into the "only system-installed Python" picture?


Ansible the client, or the target? Ansible doesn't need anything installed on the target (except sshd). For the client which presumably would be installed explicitly on far fewer machines it would bring in whatever Python it needs. I don't know if it uses System Python or a module however since I don't have it installed on RHEL 8 right now.


Ansible does require that some flavour of Python is available on the target hosts. Without a Python interpreter, you're basically restricted to using the raw module (which, of course, you may use to bootstrap Python by invoking the package manager).

Still, I guess most people don't bother with that and just assume the presence of Python (at least on Linux, the bootstrap-using-raw approach is already required on FreeBSD and others).

https://docs.ansible.com/ansible/latest/modules/raw_module.h...
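A minimal sketch of that bootstrap (host and package names are placeholders):

    # raw needs nothing but sshd on the target, so use it to install an interpreter
    ansible newhost -u root -m raw -a "dnf install -y python3"

    # after that, ordinary modules (which do need Python) work as usual
    ansible newhost -u root -m setup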


> Ansible does require that some flavour of Python is available on the target hosts.

Support for managing windows hosts with ansible is implemented by replacing the use of SSH & python with winrm & PowerShell respectively.


• No Docker. But we got https://podman.io/ instead.


Like Docker, but less insanity. Looks promising.


It makes a lot of sense that OS stuff points to a separate Python interpreter, don't you think? I like this approach.


Red Hat Developer blog post on the topic:

https://developers.redhat.com/blog/2019/05/07/what-no-python...

This is an instance of what they're calling application streams, as explained in another post:

https://developers.redhat.com/blog/2018/11/15/rhel8-introduc...


Exactly, and with App Streams you won't be stuck on an old Python version! This is an awesome feature.

You already have that on CentOS/RHEL 7 with Software Collections, and App Streams make it a first-class citizen.


Oh it definitely does. I remember seriously messing up a Linux Mint installation a while back because I upgraded all of the 3rd party dependencies that were preinstalled (because hey, newer is better, right?).


Yes and no. It absolutely makes sense to ensure that admin scripts used by RHEL run in a predictable, tested environment; even more so since Python has dropped backward compatibility. OTOH, not even being able to rely on Python's presence is exactly the kind of thing that makes Python unsuitable as the shell replacement it is being promoted as.


Installing a python interpreter is one package. If you just want to write a quick script using the standard library, it is cheap as all hell.


Is python being promoted as a wholesale shell replacement? There certainly are plenty of overlapping usecases, but shell is a better fit for many of these tasks. Until something like ipython can replace the interactive shell, I don't see python replacing shell scripts entirely.


Aye; this should have been done a long time ago. I wanted to say 'since the beginning' (whenever that was), but maybe there were good reasons 20 years ago that I can't recall.

It probably made sense when disk space was a lot more constrained.


It just wasn't the practice back in the day. Does redhat ship a "platform-perl"?


As far as I remember, there is no perl at all in minimal install of v7.


Do you think there should be a separate platform-sh interpreter for system shell scripts, and so on? That seems kind of strange to me. Probably the difference is that the shell is a "complete" program, while Python is not.


The difference with system shell scripts is that you can't accidentally clobber libraries that the system tools require to work - shell scripts don't really have any notion of installable libraries.


It's not like you can't clobber PATH with ease and in diverse ways… It's just less easy to unclobber: huge hacks like Nix are born in this quest.


> huge hacks like Nix are born in this quest.

Would you mind elaborating on this? Maybe I'm using Linux wrong, but Nix seems like a huge step forward fixing a lot of my frustrations. Admittedly, I haven't used it in production.

One huge benefit of language-specific package managers is having multiple versions of packages on the same system, so you can choose which you'd like for the project (without changing the OS). I feel like 10 years ago I heard a lot of grumbling that languages like Ruby or Python should just use apt/rpm, but I haven't seen any OS package manager put much effort into this use-case (this is ignoring mac/win support). The closest I've seen is something like Red Hat's Software Collections.

Personally, I feel strongly that any software that's critical to your company should be decoupled from the OS. This conviction is born of painful and much-delayed OS updates, and following it makes things much easier long term.

My personal use-case is that different projects (working on multiple in tandem) need different stacks of versions. Also giving the ability to swap versions on the fly. Here's one package manager specifically designed for it https://github.com/nerdvegas/rez

Swapping versions in Linux is pretty heavy out of the box (with rpm/apt): download, remove the old version's files, write the new version's files. Only one is installed at a time, and A/B-comparing libraries is a pain. For the things I need, I build things into folders like libjpeg-turbo/2.0.2 and nasm/2.14.02 and set the ./configure flags to point to these... basically a more ad hoc approach to what Nix does.
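Roughly what that ad hoc approach looks like for me (the /opt prefix paths are just my convention):

    # build each version into its own prefix
    ./configure --prefix=/opt/libjpeg-turbo/2.0.2
    make && make install

    # point a dependent build at one specific version at configure time
    ./configure CPPFLAGS="-I/opt/libjpeg-turbo/2.0.2/include" \
                LDFLAGS="-L/opt/libjpeg-turbo/2.0.2/lib"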

Where am I going wrong?


There isn't a history of people wanting to update to a newer bash and it causing issues, though.

Generally sticking with whatever bash your distro comes with is fine, whereas the services you deploy often depend on a particular version of python.


There absolutely is a platform sh interpreter. That is why when ksh came out, it was called ksh instead of sh, so that /bin/sh would still function as expected (same with csh, bash, zsh, etc).

And many of these will emulate /bin/sh behavior when called as such.


Ubuntu has dash - a minimal sh implementation - as its /bin/sh.

For the system I think it is a good idea to limit library interactions with things related to keeping the system running or booting.


> • No more ntpd, chrony only

Not surprising. I've been preferring Chrony to ntpd on systems without systemd-timesyncd (like CentOS 6 and 7) for at least two years since I read this Core Infrastructure article about Chrony:

https://www.coreinfrastructure.org/blogs/securing-network-ti...


Anyone have any thoughts on why chrony vs openntpd?

Back when the ntpd security issues became a thing, I evaluated chrony and openntpd as replacements and went with openntpd. It seemed simpler, used fewer system resources, and had the OpenBSD team's reputation behind it.


For me, it comes down to the type of hosts I'm dealing with and how accurate I'd prefer their time to be. Years ago, I ran the reference implementation everywhere... but not anymore.

OpenNTPD's goals are to be "good enough" and provide "reasonable accuracy". On an OpenBSD laptop and several "play" VMs (running OpenBSD), it was indeed "good enough". For individual desktops or laptops and the random "standalone" machine, OpenNTPD is simpler and "just works" (I like that it can "verify" the time using HTTPS hosts of my choosing).

Nowadays, only my stratum 1 NTP servers still run the reference implementation. Everything else -- especially hosts which I may need to correlate events based on timestamps -- runs chrony.

A comparison of the three implementations [0] is available on chrony's website. From a quick glance, I don't see anything blatantly incorrect or "biased". The comparison was discussed here on HN ~18 months ago [1].

Basically, if accuracy to the second is good enough, OpenNTPD is fine. If you want more precision than that, go with chrony. It'll be MUCH more accurate and it really isn't any "harder" than OpenNTPD. You'll probably want to stick with ntpd if you're using reference clocks, although chrony supports a subset of them. If you're a nerd that wants the absolutely most accurate time you can get, Google "PTP 1588" as well.

[0]: https://chrony.tuxfamily.org/comparison.html

[1]: https://news.ycombinator.com/item?id=15324386


Thanks for the detailed response.


YMMV, but in my experience, if for whatever reason your clock is wrong by an hour in one direction (either ahead or behind, I don't remember which), openntpd will take ages to skew it back, whereas chrony (and ntpd) do the right thing.


openntpd can set the clock on startup, but it requires a non-default `-s` option [0]. In chrony it's optional and controlled by an `initstepslew` parameter [1] which also considers a threshold to determine if the clock needs a large adjustment or if it's fine to just skew it as normal.

0: https://man.openbsd.org/ntpd

1: https://chrony.tuxfamily.org/doc/3.4/chrony.conf.html
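Roughly, the two knobs look like this (server names are placeholders; see the linked docs for the exact semantics):

    # OpenNTPD: allow the clock to be set (stepped) once at startup
    ntpd -s

    # chrony, in /etc/chrony.conf: step if the initial offset exceeds 30s, else slew
    initstepslew 30 0.pool.ntp.org 1.pool.ntp.org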


> Pretty colors when running dmesg!

My most disliked feature. The colors in everything always clash with both my background color (the best one for my eyes) and my vision in general. The first thing I do on any new system is to figure out how to turn off the colors. Otherwise I can't see any of the output.
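For dmesg at least, turning them off again is a one-liner (assuming util-linux's dmesg, which takes a --color option):

    alias dmesg='dmesg --color=never'    # e.g. in ~/.bashrc, or pass it per invocation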


The color choices are client-side. You can use something like base16-shell to customize it.


Wouldn't it use the colors your terminal is configured to use?


My terminal isn't configured to use any colors. It's just xterm. The only colors configured are background and foreground, and those colors various tools insist on as default are always clashing with them (and with my vision too). There are no color environment variables enabled, or anything else indicating that colors should be used, but even so colors are coming out of various command line tools lately.


If you run the script described in this post[1] you can display the colors that your terminal is configured to use. Just because you've overridden foreground and background doesn't mean you've altered the other colors xterm uses by default.

[1] https://bbs.archlinux.org/viewtopic.php?id=51818&p=1
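Something along these lines (a rough one-liner, not the script from the post) also shows the first 16 color slots your terminal actually renders:

    for i in $(seq 0 15); do printf '\e[48;5;%dm %2d \e[0m' "$i" "$i"; done; echo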


I think that if you configure the rest of the colors, then the commands will use those. I’ve set my urxvt term to use the solarized theme and I don’t have any problems with viewing colorized output. I’d have to test it, but I’m reasonably sure of this.


> No Python (that you should develop against) installed out of the box.

Great news; I don't like having Python installed by default. All basic Linux services work great without it.


Here are the actual release notes (which don't seem to be linked anywhere from this marketing page):

https://access.redhat.com/documentation/en-us/red_hat_enterp...


Good news: They ship PHP 7.2

Bad news: ...without ext/sodium

That's a frankly irresponsible decision for Red Hat to make.


> That's a frankly irresponsible decision for Red Hat to make.

You say that without knowing anything at all about the situation? If you're a Red Hat customer, you could file a support ticket to get it pulled back in.

Historically speaking, Red Hat is rather conservative about the number of crypto libraries they pull into their system because of the requirement to validate the system for certifications. But if there are legitimate requirements to have it included and managed by the base system, then usually they'll work to fix this if they are informed that it's needed.

Again, if no one has officially requested it, then why would they pull it in?

It can also help to file bugs on RHEL 8 in the Red Hat Bugzilla: https://bugzilla.redhat.com/enter_bug.cgi?product=Red%20Hat%...


> If you're a Red Hat customer, you could file a support ticket to get it pulled back in.

I have never spent $1 of my own money on Red Hat. After seeing this, I never will.


Just so you know, there's a bug report requesting this extension to be enabled and shipped: https://bugzilla.redhat.com/show_bug.cgi?id=1714591


You could also file a bug report in Red Hat Bugzilla even as a non-customer. But clearly you think that Red Hat is being malicious about this, which is definitely not the case.


Red Hat has a long history of harming cryptography.


That's not fair. If you want to blame something for that, blame software patents. Some stuff used to be a huge minefield because of that.


Is it perhaps in a separate package, at least?




At the enterprises I've worked where we've used RHEL... EPEL is not allowed near a system; only officially sanctioned repos and updates.


Isn't EPEL the "officially sanctioned" third party repo?


No, it's maintained by a special interest group within Fedora:

Extra Packages for Enterprise Linux (or EPEL) is a Fedora Special Interest Group that creates, maintains, and manages a high quality set of additional packages for Enterprise Linux, including, but not limited to, Red Hat Enterprise Linux (RHEL), CentOS and Scientific Linux (SL), Oracle Linux (OL).

EPEL packages are usually based on their Fedora counterparts and will never conflict with or replace packages in the base Enterprise Linux distributions. EPEL uses much of the same infrastructure as Fedora, including buildsystem, bugzilla instance, updates manager, mirror manager and more.


Ahhh. More "blessed" than "officially sanctioned" then. :)


One thing to keep in mind if you build a lot of C++ -- this is the first RHEL version to use the C++11 ABI. Be prepared!
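If you still need to interoperate with binaries built against the old ABI, the usual escape hatch is libstdc++'s dual-ABI macro (a sketch; the file name is made up and your build system will differ):

    # compile a translation unit against the old (pre-C++11) std::string/std::list ABI
    g++ -D_GLIBCXX_USE_CXX11_ABI=0 -c legacy_consumer.cpp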


In case someone from RH is reading this: “Get the Study” links to https://www.redhat.com/en/page-not-found


working link: https://www.redhat.com/en/engage/economic-impact-rhel-s-2019...

found by removing the query part


I have no appreciable Linux skills so forgive my naïvety with this question:

In the promo vid on their site, there are a couple of people gaming. Is this alluding to the fact that you can game on RHEL or that it powers the backend of games?

Just curious...


1) There actually are quite a few games available that run natively on Linux these days. Usually not AAA titles but lots of indie games. I've got (checks) about 550 games on Steam, largely through various bundle sales, and something like 30% of them run natively on Linux.

2) Steam now bundles Wine, and lots of games are tested and semi-officially supported with it now, which bumps the playable fraction to more like 60-70%. You can enable it for all games with a settings checkbox, too, and more often than not it works.


Probably a little bit of both.

You can game on RHEL but it wouldn't be my first choice of distro for it - IMO, Ubuntu and Fedora are both better-suited for that task.


For gaming purposes, I also have to recommend Manjaro, purely for how well it handles installing graphics drivers. I wouldn't necessarily recommend it for someone's first Linux install, but once you know the basics (in case something breaks), it provides a better gaming experience out of the box.


Mint has been doing that for a while.

And I believe Ubuntu has finally started doing it as well in the most recent release?


I wouldn't be surprised if Mint and Ubuntu are both in a better state in that regard since I last used them. Last time I used Mint on my gaming rig, the bundled Mint drivers had something funky with them. I don't remember what it was but I do remember I had to reinstall them. This was around Ubuntu's 15.04 I believe?


Why is that? Is it driver-related? Or is it that RHEL is more for stability rather than speed?


RHEL comes with a price tag (although I think they now have free developer licenses.)

Also, RHEL development moves sloooowly. This is a feature and one of the main reasons to go with RHEL instead of not only unsupported distros but also supported-but-faster-moving ones, kind of like Windows LTSB (I know too little about both to compare them, but enough to know that in certain organizations the promise that it will stay the way it is and by default only receive security updates is a huge feature).


They do have their Software Collections with new major releases for nodejs, python etc., though. It is only the base system that moves slowly.


For gaming, Arch.

It's rolling release, including the whole stack that's supporting games. And they have a wrapper package that will install steam and its dependencies.


IIRC there has been some talk that it might be RedHat powering Google Stadia.



Woooooooohooooooooo!

We basically packaged our own RHEL8 on top of 7 and I’m glad we don’t have to do that for 95% of the packages anymore.


They don't appear to have updated their official Docker registry yet but it should hopefully be available soon for anyone who needs to test things:

https://access.redhat.com/containers/?tab=images#/registry.a...

Following the pattern of https://access.redhat.com/containers/?tab=images#/registry.a... and https://access.redhat.com/containers/?tab=images#/registry.a...



Thanks! I hadn't seen that before and it's definitely relevant, especially the free-to-share part.


I only saw one beta. That's crazy if they released after doing only one beta.


Fedora is their alpha, and beta to some extent


Anyone have any insights on how Oracle does their intake of this to create OEL? It's always been a bit of a mystery to me.


Download CentOS

grep -rli 'centos' * | xargs -I @ sed -i 's/centos/Oracle Unbreakable Linux/gi' @

done


Why would you waste cycles invoking grep and xargs, and memory piping data back and forth, when pure sed can do it? ;)


Oracle has some pretty beefy hardware. They can afford the cycles. ;)


It is a little bit more than that.


I know there's quite a bit of testing involved to verify their unbreakable kernel stuff and ksplice stuff is compatible. I would suspect there is also the spacewalk integration stuff that is fairly different than redhat's satellite stuff.


I didn't realize, until recently, that satellite 6 is no longer based on spacewalk.


DTrace esp.


I sure love that "web console".


Looks like a toned-down version of Webmin.



Yeah, I know it's Cockpit. I'm not sure what it brings to the table. It's already possible to lock down Webmin pretty heavily if I want to trust a Windows admin to do Linux.


It's "cockpit"; I regularly give it a try, looking for a better structured, more elegant webmin replacement; alas, cockpit has like 5% of webmin features.


But does it at least do that 5% well?


It's pretty, but not that functional. It tends to naively map underlying functions to buttons, without much thought for actual UI or UX. It's a better, cleaner base than Webmin, but it remains inferior in every other aspect.


With the whole IBM thing going on, I bet CentOS 8 is going to take longer than usual to be released.


Probably Red Hat Universal Base Image would be good enough for development instead of waiting for CentOS; https://www.redhat.com/en/blog/introducing-red-hat-universal...


Nice. I'm currently building an operator and this comes in handy.

Quick question: how do I differentiate between freely available and subscription-only containers on the Red Hat Registry?


If they are in the ubi namespace, they are freely available. The Container Catalog also will tell you if you can pull without a login when you look at the details of a container.
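For example (this image name is my assumption of the ubi8 base image; it should pull without any login):

    podman pull registry.access.redhat.com/ubi8/ubi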


Can these containers be run on hosts that are not RHEL? It seems like it's allowed, but I'm not completely sure I read that right.

I'm also looking to know what packages are available in RHEL8 that are not available to UBI containers. I'm not able to find information as to what subset of the RHEL package universe is available to UBI containers. If you're aware of information on this I'd love to be pointed to it.


The article is clear on it being allowed.


If anything, CentOS 8 will be faster since they won't have to deal with a big migration this time.

IBM has zero incentive to interfere with CentOS, it's the best advertising for RHEL they can get.


The best advertising would be releasing RHEL 8 for free for personal usage. I wonder how many workstation licenses ($300 per year) they are selling.



I would want to use it on at least 3 computers. I don't think the developer license allows that, and registering 3 different accounts is probably an abuse of the system. Also, I don't really do any development for RHEL; I'm just using it for my personal computing needs.


I have a dev license and I can register 16 systems. Your mileage may vary, but it never hurts to try.


I have a single server in my house. Mostly used to back up all my different devices and to run Plex. I run CentOS today. I am not clear on the restrictions and if I would be allowed to use the free RHEL.

With the hassle (subscriptions, restrictions etc..) it isn't worth it.

I do wish RHEL would allow it for usage that doesn't make money, like personal servers.


Do you get updates on the dev license?


Yes


I don't think so - I know the CentOS folk and they are working very hard on a release. There is no interference from IBM, partly because Red Hat hasn't been acquired yet, and partly because why would they kill a cash machine that's proven to work so well? Despite some nonsense you read online, IBM are not stupid.


> why would they kill a cash machine that's proven to work so well?

You haven't been around a merger & acquisition process, I take it. It's usually like the scorpion and frog parable:

A scorpion asks a frog to carry it across a river. The frog hesitates, afraid of being stung by the scorpion, but the scorpion argues that if it did that, they would both drown. The frog considers this argument sensible and agrees to transport the scorpion. The scorpion climbs onto the frog's back and the frog begins to swim, but midway across the river, the scorpion stings the frog, dooming them both. The dying frog asks the scorpion why it stung the frog, to which the scorpion replies "I couldn't help it. It's in my nature."

(from https://en.wikipedia.org/wiki/The_Scorpion_and_the_Frog )


And what is the CentOS 8 ETA? I found nothing on the CentOS web site :-( Am I missing something?


There is no ETA, the CentOS team never gives one and they aren’t giving one for C8. This release has a lot of build structure unknowns, like appstreams, so there’s no telling how long it’ll take.


There is a telling. (Except for CentOS 6. Wonder why it was delayed so much.)

https://en.wikipedia.org/wiki/CentOS#Older_version_informati...

(Expand the table for older releases.)


What is IBM doing with this?


They are buying Red Hat.


They acquired RedHat


The sale is not completed, yet.


Unlikely tbh.


With Shadowman gone and Red Hat now a branch of IBM, this RHEL release is really dampened for me.


Can you expand on that?



Never a better time to try OpenSuse and pitch SLES at work.


OpenSUSE Leap looks interesting. They have a KDE option too, unlike RHEL 8.



