Indeed, Btrfs is uniquely capable and important. It has lightweight snapshots of directory trees, and fully supports NFS exports and kernel namespaces, so it can easily solve technical problems that currently can't be easily solved using ZFS or other filesystems. Btrfs is also used now for ChromeOS's Crostini Linux application container.
It was never emphasized, and probably rarely used by their customers. Prior RHEL releases used ext4, then XFS, by default. People who run RHEL don't want to think about which filesystem to use. They want standard, stable, reliable -- not dozens of options and features.
When you're pushing the container stuff, Btrfs becomes extraordinarily valuable. Being able to snapshot a base thing and then layer on top, and do trivial rebasing, as well as cheap replication of container volumes is _very_ useful.
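For the curious, a sketch of that base-plus-layers workflow with the btrfs CLI (the paths and subvolume names are made up for illustration, and these commands need root on a mounted btrfs filesystem):

```shell
# Create a base subvolume and populate it with a container rootfs
btrfs subvolume create /var/lib/machines/base
# ... unpack an image into /var/lib/machines/base ...

# Snapshotting the base gives you a cheap, writable layer for a container
btrfs subvolume snapshot /var/lib/machines/base /var/lib/machines/app1

# "Rebasing" is just dropping the old snapshot and re-snapshotting a
# newer base; only blocks that actually changed consume extra space.
btrfs subvolume delete /var/lib/machines/app1
btrfs subvolume snapshot /var/lib/machines/base-v2 /var/lib/machines/app1
```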
On my Fedora and EL7 systems, I use Btrfs for the OS and for my containers for those purposes. It's awesome.
Slight correction: we use btrfs for the disk image backing the user's crostini state only. The rest of Chrome OS, and the crostini VM rootfs, still use ext4.
Testing the limits :-) It's interesting because it shows you how things scale, and that information is useful even for normal-but-large filesystems. I've submitted a talk about this to FOSDEM next year.
Do they include other file systems which can provide data checksums? The linked Stratis doesn't seem to have that feature; did they patch the underlying XFS? There don't seem to be that many filesystems with that capability yet.
I guess it depends on which Btrfs features you're using. I remember SUSE, who use Btrfs as the root fs in their enterprise distro, restricting its use to a subset they consider stable.
On the Btrfs status page https://btrfs.wiki.kernel.org/index.php/Status you can see that the developers themselves consider several features "mostly ok", which is probably not an assessment most filesystem users will find reassuring.
I use subvolumes, snapshots, and send/recv functionality for cheap replication of data. Sometimes even on RAIDed disks!
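That replication is roughly this (hostnames and paths are invented for the example; snapshots have to be read-only to be sent):

```shell
# Take a read-only snapshot of the data subvolume
btrfs subvolume snapshot -r /data /data/.snap-day1

# Full send to a remote btrfs filesystem
btrfs send /data/.snap-day1 | ssh backup "btrfs receive /backup"

# Later, send only the delta relative to the previous snapshot
btrfs subvolume snapshot -r /data /data/.snap-day2
btrfs send -p /data/.snap-day1 /data/.snap-day2 \
  | ssh backup "btrfs receive /backup"
```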
There are only two file systems with that feature set today: Btrfs and ZFS. I personally prefer Btrfs because it's way more flexible and shipped with the Linux kernel.
As far as I know VDO only provides compression and deduplication. So no snapshots like in Btrfs and ZFS, although you can use LVM snapshots. Also no RAID features.
Interesting takeaways for me: They still use yum (instead of dnf) for managing packages, and default to nftables as the firewalld backend, while even Fedora (afaik) is still using iptables.
You can type either ‘yum’ or ‘dnf’, but they decided to keep ‘yum’ to avoid breaking a lot of scripts/documentation. ‘yum’ is a compatibility layer over dnf that keeps the old command-line syntax but uses the fast depsolver, weak dependencies and other new features from dnf.
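You can see the compatibility layer for yourself on a RHEL 8 / recent Fedora box (the exact symlink target may vary between builds):

```shell
# yum is just a symlink into dnf; on RHEL 8 it points at dnf-3
ls -l /usr/bin/yum

# Old-style invocations keep working, backed by dnf's depsolver
yum install -y httpd
yum history
```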
Later versions of Python will be supplied as modules. So will Python 2. They are parallel installable with "system-python", which remains at 3.6 for the lifetime of RHEL 8. (Note that in general modules are not parallel installable, but in this one case they are.)
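In practice that means pulling Python in via module streams, something like this (the stream names here are from memory and may differ from what ships):

```shell
# See which Python streams are available
yum module list python27 python36

# Install the Python 3.6 application stream
yum module install -y python36

# Python 2, parallel-installed as its own module
yum module install -y python27

python3 --version
python2 --version
```

Note there's no bare /usr/bin/python by default; you're expected to call python3 or python2 explicitly.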
As a module will it have as many libraries available as the system Python? For example does it come with scipy packages, twisted, pyserial, xlrd/xlwt/openpyxl, gobject (for GTK+ apps), etc?
AIUI the plan is to build everything for each Python module (plus system-python).
However there are two things that mean it's not quite as good as you would like:
(1) Base RHEL 8 will only have a fairly minimal set of Python packages (indeed, packages in general).
(2) Products on top - things like OpenStack - will settle on a particular Python version (ie. module) and only be built against that version, and will require the corresponding Python module to be enabled.
I think Perl and Python were only ever installed by default because there were management utilities (GUI and console) written in them, so they were prerequisites for what was considered a minimal system. RHEL has been pretty good the last few years at paring down the base minimal install to actual essentials. I view getting to the point where Python is not required for a base install as an achievement.
Same here. Ever since someone had the clever idea to write GUI distribution management tools in Python, they have been a pain to use, given how slow some of them are.
It's unlikely Python is to blame for that, unless they implemented something very CPU bound in it. I wrote wxPython apps in 2000 or so and they were very responsive.
It is likely a combo of Python + someone not good at using Python with networking. When working with anything that talks a lot over networks you need to use threads or separate processes to keep things feeling responsive during network delays. Python doesn't make this easy, so lots of devs skip it and it makes the programs feel sluggish.
Hmm, that might make sense on a server, but this is talking about a GUI client. Also, Python has trouble with CPU-bound threads, but not IO-bound ones.
Actually there is a default Python, and it's called platform-python. It lives in /usr/libexec on purpose; it's mainly supposed to be used by platform tooling like dnf/yum etc.
The big takeaway for me is that they consider Podman ready to use because it's honestly amazing. A fully featured container engine that doesn't need any elevated privileges or a central daemon -- using just Linux namespacing features. Once people realize the implications of having containers just being something you can run like any other program with zero fanfare I expect adoption to explode.
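For anyone who hasn't tried it: rootless Podman really is just another program you run, with no daemon and no sudo (the image name below is only an example):

```shell
# As an unprivileged user -- no daemon, no root required
podman run --rm registry.access.redhat.com/ubi8/ubi \
    echo "hello from a rootless container"

# Container state lives under your own home directory,
# and the processes are plain children of your shell
podman ps -a
```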
I suspect that Windows and macOS support is going to be a big impediment to adoption of RH's container tools. Fedora has become a really good OS, and Silverblue is going to be another level of awesome on top, but I don't see huge numbers of developers switching.
> It also includes a new TCP/IP stack with Bandwidth and Round-trip propagation time (BBR) congestion control, which enables higher performance, minimized latency and decreased packet loss for Internet-connected services like streaming video or hosted storage.
I suspect Red Hat didn't rewrite the TCP/IP stack, so can someone translate this from marketing into nerd? Google doesn't seem to know.
As I understand it, there was a new TCP/IP stack added to the Linux kernel somewhere around 2015/2016 that couldn't be backported to the RHEL 7 3.10 kernel without breaking the kernel ABI, which Red Hat tries to change only with major versions.
So the stack is new to RHEL, not to Linux as a whole.
BBR is just a congestion control algorithm that tries to figure out the best sending rate for TCP packets, so you don't have to wait for every ACK after a PSH. It was added to the Linux kernel a couple of years ago; good to see it in RHEL.
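On a new enough kernel (4.9+), switching to it is just a sysctl away; the early BBR docs recommended pairing it with the fq qdisc:

```shell
# Check what's available and what's currently in use
sysctl net.ipv4.tcp_available_congestion_control
sysctl net.ipv4.tcp_congestion_control

# Switch to BBR, with fq as the default qdisc
sysctl -w net.core.default_qdisc=fq
sysctl -w net.ipv4.tcp_congestion_control=bbr
```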
The linux box sitting on my desk at work is running RHEL6 (which is really starting to get super annoying). My IT dept is just now starting to roll out RHEL7.
In our place we still have boxes on RHEL 5, some even on RHEL 4. But Red Hat did keep things more or less the same visually over all these years, unlike Ubuntu, which seems to change its look and feel a bit too much.
People can be snarky at Red Hat for being slow, but this is the exact reason why.
If your company has the engineering prowess to maintain the latest and greatest, that's great. Unfortunately most large enterprises just don't move at that velocity which is who Red Hat is trying to support.
Wow, they based it on Linux 4.18! Good for them! I was worried they’d pick 4.14 or earlier, which would have been regrettable in terms of keeping the speculative-execution attack mitigations maintainable.
Red Hat backports features aggressively and is one of the largest contributors upstream. The kernel version or whether it's LTS upstream doesn't matter.
gcc 8.2! Being on gcc 4.8.5 for RHEL 7 instead of 4.9 was a minor pain point for some c++11 or newer libraries when developing. Nice to see them jump all the way to the 8.x series.
Also great to see XFS gain COW support in RHEL; they're working to make up (in features) for dropping Btrfs. We'll see how Stratis works out.
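The XFS COW support shows up as reflinks. A quick sketch, assuming a filesystem made with reflink=1 (which I believe is the RHEL 8 mkfs default; device paths are examples):

```shell
# Make an XFS filesystem with reflink support and mount it
mkfs.xfs -m reflink=1 /dev/sdb1
mount /dev/sdb1 /mnt

# A reflink copy shares blocks with the original until either
# file is modified -- the copy is instant and takes no extra space
dd if=/dev/urandom of=/mnt/big.img bs=1M count=100
cp --reflink=always /mnt/big.img /mnt/big-clone.img
```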
There is more optimism about the future of Microsoft than the future of IBM. Evidence: Microsoft's stock price has nearly tripled during the last 5 years, whereas IBM's has dropped almost 40% during the same period.
The question about who will leave is always silly because we don’t have privileged info about IBM’s plans for what happens after the acquisition. Things that keep people with RedHat could go away without much notice.
That’s why the uproar and threats to leave over the MS acquisition were dumb.
I'm affiliated with a number of projects that moved off GitHub after the MS acquisition, and I'm aware of a few others. So at least some of us decided to act rather than threaten.
But how many paying customers? It's also slightly different because GitHub may not want to lose non-paying customers either, for the name recognition and to avoid competitors getting visibility. But I don't think the IBM acquisition will sway most people who are using CentOS, since it's already somewhat divorced from RHEL proper.
No, it's incredibly different to compare the number of customers who might leave Red Hat to the number of customers who left Github. The services are not even close to being similar, and the customers are generally not even close to being similar.
> it's incredibly different to compare the number of customers who might leave Red Hat to the number of customers who left Github.
I don't think it's that different. GitHub makes somewhere around 50% of its revenue from enterprise customers.[1] That said, my point was that they aren't very similar, so we seem to be saying the same thing, you just seem to be doing so rather aggressively.
Most companies are more pragmatic than this. Change adds risk, and companies which select RHEL rely on its stability and enterprise support, and will likely continue to do so until there is a real, material reason to reconsider, such as IBM changing the pricing or the service agreement.
(Note as a comparison, individuals who distrust Microsoft moved their projects off GitHub when it was acquired, but most businesses, even those in direct competition with Microsoft, such as Google, did not.)
So I work for Red Hat and the truth is no one will know until after the acquisition, but there are certainly no plans for that, nor have I heard anyone even discussing such a thing. CentOS is a valuable part of Red Hat and an important part of keeping users in the ecosystem even if they're not paying us, so it wouldn't really make any sense. IBM aren't stupid (despite nonsense that people write online), and Red Hat is a cash machine and they'd be stupid if they messed with a proven model which works.
Seems like supporting and incorporating RHEL support into IBM’s business is the reason they bought RHEL. They effectively grew their business contracts with the addition of all the RHEL contracts.
If anything, I would expect business as usual for their business customers. It was a really obvious market move in hindsight.
CentOS is being used by a lot of HPC shops, academia and research units like one where I administer a HPC.
For the number of cores that HPC shops run, the bill would get fat real soon if we had to pay for the OS. I'd imagine that if IBM messed with CentOS, it'd be called out for being incredibly dumb about RH's business, and an open fork of RHEL, one not owned by IBM the way CentOS is, would emerge.
To give you an idea, we run CentOS 7.5 on all compute nodes, but make use of paid for RHEL for our identity server, DB servers and the like. If IBM touches CentOS, the whole domino falls and we start looking for alternatives.
Regardless of who owns Red Hat, they still have to provide the source RPMs for most of the packages. The best they could do is change the build format enough, and often enough, to make it painful to repackage.
I don't envision them doing this, however. That would force a lot of people away from Red Hat based distros rather than guarantee conversion to RHEL entitlements.
IBM has never struck me as that clueless about open source, and I imagine Red Hat management would fight that as much as possible realizing how much of a long term strategic blunder it would be.
From the people I know using huge amounts of centos in production, they're not going to pay for RHEL, if centos becomes non viable they'll be switching to debian-stable and debian-testing.
> Linux containers, Kubernetes, artificial intelligence, blockchain and too many other technical breakthroughs to list all share a common component - Linux
Erm, what? Only two of those are built on Linux. If there are too many to list, at least don't list unrelated technologies.
I'm still incredibly sad about that, especially as Btrfs has become a really solid filesystem over the last year or so in the upstream kernel...
Reference: https://access.redhat.com/documentation/en-us/red_hat_enterp...