> Improved Application Performance and Isolation. Run applications in isolated and secure lightweight containers utilizing SELinux and resource management. Linux containers provide a method of isolating a process and simulating its environment inside a single host. It provides application sandboxing technology to run applications in a secure container environment, isolated from other applications running in the same host operating system environment. Linux containers are useful when multiple copies of an application or workload need to be run in isolation, but share environments and resources. 
Looks like there is a major shift in that they will support containers out of the box now. Hopefully we will see some type of GUI to create containers and manage cgroups. There has also been a major effort put into getting containers working with OpenStack and Docker. You can manually download and compile LXC on RHEL 6.4 today, but it feels like a bit of a hack, since you need to figure out networking and LVM on your own, never mind building base container images. Should be interesting.
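For the curious, the manual route on RHEL 6.4 looks roughly like this (a sketch, assuming you've built and installed the lxc userspace tools yourself and already have a bridge `br0` configured; the template and container names are just examples):

```shell
# Needs root, and an lxc build you installed yourself (not in RHEL 6.4 repos).
# Build a base container from a template shipped with lxc:
lxc-create -n web01 -t centos

# Networking is on you: point the container at your bridge, e.g. by
# adding to /var/lib/lxc/web01/config:
#   lxc.network.type = veth
#   lxc.network.link = br0

lxc-start -n web01 -d      # start detached
lxc-console -n web01       # attach to its console
lxc-stop -n web01          # shut it down
```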
I’m not sure what Systemd version they’ll end up shipping though.
The best part about Systemd is that its default framework for launching and monitoring services is essentially LXC without the padding (the extra PID 1, etc.). This means every service can benefit, and there's no need for the unnecessary abstraction (the container) and all the (mental, not necessarily performance) overhead that goes with it.
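To make that concrete, here's a minimal sketch of a unit file (the service name and path are made up) showing how systemd applies the same cgroup and namespace machinery to an ordinary service, no container runtime involved:

```ini
# /etc/systemd/system/myapp.service (hypothetical service)
[Unit]
Description=Example service with per-service isolation

[Service]
ExecStart=/usr/bin/myapp
PrivateTmp=yes        # private /tmp via a mount namespace
CPUShares=512         # cgroup cpu weight (default is 1024)
MemoryLimit=512M      # cgroup memory cap

[Install]
WantedBy=multi-user.target
```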
Needless to say, I’m quite excited about what is happening on Linux nowadays :)
systemd-nspawn isn't recommended for production use. I can't tell you why that is, but if you search for it, you'll find a few places where Lennart Poettering recommends libvirt-lxc for deployment (which is completely unrelated to the other "LXC" project).
> This means every service can benefit and there’s no need for the unnecessary abstraction
Of course you could run software on bare metal ;-) But containers are a nice way to ship whole projects including the dependencies. Especially if you deploy to lots of machines.
Exactly - if they support and test LXC now, and the market demands Docker a year into the lifecycle (RH generally don't add features between major releases, so it'd have to be worth it for them), it's still doable as it's a userspace addition.
Yeah, I have been doing some research for an upcoming screencast, and this type of idea is called operating-system-level virtualization, and there is a fairly good table comparing the various OSes and their take on OS-level virtualization, e.g. Solaris Containers, FreeBSD Jail, OpenVZ, HP-UX Containers, etc.
Yes, I have CentOS 6 on a Thinkpad x200s with hard drive encryption enabled as my work machine. I find it stable and quite fast. I may leave that machine on CentOS 6 and put 7 on the 'play' laptop.
My point was that giving a choice of desktops is a new departure for Red Hat. Remember that they employ, or have employed, a number of the Gnome developers, and I gather Red Hat has been a major sponsor of Gnome in the past.
Yeah, coming from the RH/Fedora world, it seemed weird when other distros started spinning specialized versions for a different window manager on the desktop, such as Kubuntu. I guess that's because they wanted to have more control over the "experience". RHEL is about getting shit done when you know what you are doing, not holding your hand and making you feel comfortable, so I guess with those different expectations it's a bit easier.
Did they always? I remember it usually being as simple as installing the RPMs, and running the switchdesk utility. I always had to do that (or generally something more involved because of my choice) to run FVWM2.
Ubuntu also allows you to just install the packages for a different desktop; the differently branded install CDs are mostly for people who come from windows or OSX and thus believe that the desktop == the OS and it's impossible to change once installed :P
Yeah - Gnome was definitely the subtle default. IIRC, the Gnome option is called "Desktop", and the KDE option was called "KDE" (or something like that). So unless you had a known preference, you'd end up with Gnome. Contrast this with openSUSE, which has two specifically-named download options.
From the draft of the new storage administration guide:
Btrfs is still being actively evaluated for stability during the Red Hat Enterprise Linux 7.0 beta. The following target use cases will only be fully supported if it passes our tests:
* The system-partition-only use case. This will allow btrfs to be used only for the system installation, not for a user's data. Currently it is unclear whether this will be restricted to a single disk or not.
* Use btrfs for desktop and laptop users including their data partitions.
* Use btrfs as the base file system under scale out "big data" file systems, such as gluster and Ceph.
I was struck by that too, and came here to make the same comment.
Why do you suppose they went with XFS? Ext4 seems like the de facto file system these days, but I know a lot of people seem to prefer XFS for one reason or another. It's a surprise to me that RHEL decided to default to it.
It's all about expected disk sizes in 10 years time (when RHEL 7 main support ends). Ext4 scales up to around 16 TB -- I know it theoretically can support a much larger volume size, but in practice it doesn't work so well. XFS handles tens of terabytes without a problem and Red Hat has been supporting huge XFS volumes for years with many customers (who in earlier versions of RHEL paid extra for that support).
From Chapter 11: Compilers and Tools in release notes:
11.1. GCC Toolchain
In Red Hat Enterprise Linux 7.0 Beta, the gcc toolchain is based on the gcc-4.8.x release series, and includes numerous enhancements and bugfixes relative to the Red Hat Enterprise Linux 6 equivalent. Similarly, Red Hat Enterprise Linux 7 includes binutils-2.23.52.x.
These versions correspond to the equivalent tools in Red Hat Developer Toolset 2.0; a detailed comparison of Red Hat Enterprise Linux 6 and Red Hat Enterprise Linux 7 gcc and binutils versions can therefore be seen here:
Notable highlights of the Red Hat Enterprise Linux 7.0 Beta toolchain are the following:
- Experimental support for building applications compliant with C++11 (including full C++11 language support) and some experimental support for C11 features.
- Improved support for programming parallel applications, including OpenMP v3.1, C++11 Types and GCC Built-ins for Atomic Memory Access, and experimental support for transactional memory (including Intel RTM/HLE intrinsics, built-ins, and code generation)
- A new local register allocator (LRA), improving code performance.
- DWARF4 is now used as the default debug format.
- A variety of new architecture-specific options.
How long were the beta periods for RHEL 4, 5, 6? I know there's no commitment, but I'd like to tighten my vague idea of how long the beta period will be. Right now, I guess more than a day and less than a year.
Can anyone explain a bit more about what happened to Red Hat? I'm a bit behind on the history of this distribution. The last time I read about it, I found out that it was paid, and I never considered it because of that. I'm using Slackware for most of my servers, but I don't know what its target market is or what makes Red Hat Enterprise Linux special.
There are some misconceptions here. RHEL does not cost a cent. Support for RHEL, which includes security fixes through the package manager, costs money. Security fixes and other patches and updates are still released as source by Red Hat, as required by the GPL.
If you really wanted, you could run RHEL with no subscription and compile your own updates from the source that Red Hat releases. In practice, this is next to impossible to maintain as an individual, but it is exactly what CentOS, Scientific Linux, and other related EL distributions do. They remove the Red Hat trademarked logos, compile the code Red Hat releases, and make it available through a generic yum repository that doesn't require a RHEL subscription.
So, in short, RHEL doesn't cost money, support and packaged patches do. CentOS gives you binary and version compatibility of RHEL without the cost.
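The rebuild-it-yourself path those projects follow looks roughly like this (a sketch; bash is just an example package, and a rebuild distro would do this in clean build roots at much larger scale):

```shell
# On a machine with the yum-utils and rpm-build packages installed.
# Fetch the source RPM that Red Hat publishes:
yumdownloader --source bash

# Rebuild binary RPMs from it; stripping trademarked artwork/branding
# packages is the part CentOS and Scientific Linux add on top:
rpmbuild --rebuild bash-*.src.rpm

# Resulting binaries land under ~/rpmbuild/RPMS/ and can be served
# from a plain yum repository (see createrepo).
ls ~/rpmbuild/RPMS/
```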
The biggest reason to get RHEL is certification (assuming you don't need their tech support or updates). A number of third-party vendors will only support one of the Enterprise Linux distributions (Red Hat, SUSE, Oracle Linux, etc.). Now the technical difference between RHEL and, say, Fedora, is that RHEL gives you a long period of bug/security fixes that are "minimally invasive", that is, you can apply them and have a very good chance that your existing configuration and apps still work without a reinstall or reconfiguration. (This is what is meant by "stability".)
For personal use, you can get similar stability from either CentOS or Scientific Linux, although the bug fixes will lag behind Red Hat by a few days (although some critical updates have been released only a few hours after Red Hat).
It's been necessary to pay for RHEL for a while now. When you pay your subscription, you get support from Red Hat as well. However, if you just want to try the OS, you can go get CentOS. Since RHEL is open source, the CentOS team removes the Red Hat trademarks from everywhere and recompiles it.
One market that it seems to be strong in is the defense world, since RHEL is one of the few OSes that gets the various certifications for safety and security.