Otherwise, things are somewhat different. No update-alternatives, vi is vim rather than nvi, and the Apache configuration is drastically different.
The packaging system is a lot stricter: packages may not touch each other's files, and checksums and permissions are stored in the package DB and used for verification on install, uninstall, and upgrade. There's no "recommends" system and there are no interactive package upgrades (no offers to "merge your samba config" or anything like that), because the installer system is made to be usable headless.
In "Enterprise" solutions, there's also support for Satellite ( http://www.redhat.com/products/enterprise-linux/rhn-satellit... ) which is quite nifty.
RHEL has older kernels, with a lot of features and fixes backported from newer development, which makes them very strange beasts.
Overall, I've used both, both have their place, but I really can't stand the fact that .deb archives need extra plugins to simply verify that nothing has fucked with permissions or file contents.
But after a while you realise that rpm -Va is a wonderful tool that you'll miss.
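For reference, a quick verification session looks like this (the output line is illustrative, not from a real system):

```shell
# Verify every installed package: size, mode, owner, checksum,
# mtime etc. of each file is compared against the package DB.
rpm -Va

# Verify a single package, e.g. openssh-server:
rpm -V openssh-server
# Example output:
#   S.5....T.  c /etc/ssh/sshd_config
# S = size differs, 5 = checksum differs, T = mtime differs;
# the "c" marks a config file, where changes are expected.
```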
Debian has used tiny-vim as the default vi for a while now.
In comparison, Ubuntu 10.04 released in the same year as RHEL 6 (2010) has aged much more severely.
e.g. EL 6.4 brings Open vSwitch and improved namespace support in the kernel (particularly interesting for OpenStack fans), among many other things (see the tech notes).
While it may not be as flexible as Debian or have the huge community Ubuntu boasts nowadays, RHEL brings a lot of reassuring predictability and stability to the table. I have come to appreciate this a lot in my years working as a sysadmin.
Also a note on people's praising the "smooth" upgrades between Debian major releases and the less than smooth upgrades between RHEL/Centos major releases:
1 - Please do keep in mind that RHEL releases are very long-lived, and in the meantime technology changes a lot; transitioning from kernel 2.4 to 2.6 was very tricky, for example, as was moving from SysVinit to Upstart (and to systemd in RHEL 7). It's difficult to foolproof such migrations, so the task is left to the sysadmins - plus you always have a LOT of time to plan and do this. And ...
2 - Also, while the upgrade process itself has been successful with Debian and I get a booting OS after a dist-upgrade, there are APIs, ABIs, major versions of software, etc. that have changed in the new OS, potentially breaking a lot of stuff in very ugly ways.
Having said that, both platforms have their place and their strengths, and I think they complement each other very well, to the point that through them we now have a server market massively dominated by Linux. Thank god. ;-)
 - https://access.redhat.com/knowledge/docs/en-US/Red_Hat_Enter...
Having a RHEL install supported for 10 years and then upgrading in-place just doesn't make sense in 90% of the shops using RHEL (that choose it exactly because of those 10 years of support).
I do maintain both rpm and deb packages for the company that I work for, for personal projects (deb only), and for the community. Working on Debian-related projects is always nicer and more enjoyable, at least for me.
I'm not sure how you were managing it, but I've installed very large bases of Red Hat systems, from dozens of individually configured systems to clusters of thousands of nodes. I would say that neither is too difficult for general deployment, but Red Hat's Kickstart system had a leg up on Debian for years.
I'd also say that the yum/rpm repository system is far easier to manage. Setting up internal repos and making custom packages takes so much less work in the Red Hat ecosystem, because the sets of tools are much better integrated. Setting up a custom signed apt repo takes a lot more work, and research into various sources of documentation. Don't even get me going on trying to sign and verify a deb on its own.
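To illustrate how little is involved on the Red Hat side, a minimal internal repo can be sketched like this (all paths, hostnames, and package names below are made up):

```shell
# Build an RPM from a spec file
rpmbuild -ba mypackage.spec

# Create repository metadata over a directory of RPMs
mkdir -p /var/www/html/repo
cp ~/rpmbuild/RPMS/x86_64/*.rpm /var/www/html/repo/
createrepo /var/www/html/repo

# Point clients at it with a small .repo file
cat > /etc/yum.repos.d/internal.repo <<'EOF'
[internal]
name=Internal packages
baseurl=http://repo.example.com/repo
enabled=1
gpgcheck=1
gpgkey=http://repo.example.com/RPM-GPG-KEY-internal
EOF
```

The apt equivalent needs a Release file, per-architecture Packages indexes, and a detached GPG signature, typically assembled with several separate tools.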
> "Having to reinstall 100+ servers from CentOS 5 to CentOS 6 will be remembered as the lowest point in my work experience so far."
I've worked with both very extensively, I just don't see this as the case at all if one is well prepared, and wanted to point out that RHEL systems aren't lacking in that respect. I have numerous ubuntu systems that are lined up for a 10.04 -> 12.04 upgrade, and I don't see it as being any more or less difficult than the RHEL 5 -> 6 transitions I've done, just different.
The differences in packaging systems, and similar are largely something you can learn about. But the idea of reinstalling to upgrade has always struck me as something I'd rather avoid.
That's because that is the safest course of action. You can upgrade between major versions, and instructions on how to are given.
The long term support model of RHEL lends itself more to a decom and migrate OS upgrade schedule instead of follow the latest version.
For example some enterprise backup software I dealt with: installing on RHEL was easy: install the vendor's RPMs. On Debian (Ubuntu) systems, there was a lot of extra fiddling around with "alien" and "dpkg" to get the software installed.
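The "fiddling" in question typically looks something like the following (the package name is made up, and the generated .deb filename may differ since alien bumps the release number):

```shell
# Convert a vendor RPM into a .deb; --scripts also converts the
# maintainer scripts, which usually need manual review afterwards.
alien --to-deb --scripts vendor-backup-1.2.3-1.x86_64.rpm

# Then install the result and hope the scripts behave
dpkg -i vendor-backup_1.2.3-2_amd64.deb
```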
For instance, for a while I couldn't update the kernel on an RHEL5 server that I have, because TSM I/O would drop like a rock. I could switch to newer kernels only after I updated TSM to a newer patch level. No idea what would happen if I happened to run TSM on an "unsupported" distribution...
For a large company with dozens to hundreds of servers, you generally want RHEL (or SLES). Usually you're running RHEL (or sometimes SLES) on Dell or HP servers.
If your servers are having stability problems (or if it is a decided company policy), you want to keep the kernel and packages up to date with all the latest security and other patches for your OS version. You also want to keep the server firmware up to date. So if there's a crash you can tell Red Hat and HP, or Red Hat and Dell (or Suse and HP etc.) that you're running a fully patched server with updated firmware. Of course, you want the kernel to try to dump core before it goes down.
As your servers go from dozens to hundreds to thousands, you will tend to see more and more crashes. At a company with a few servers, you might see a server crash every few weeks or months, at a company with thousands or tens of thousands of servers, you often see more than one server crash a day. You want to look for patterns - if, say, machines with a certain host bus adapter keep crashing, and the core dump seems to point to that, it's something you want to be aware of.
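Getting that core dump on a crash generally means having kdump set up ahead of time. A sketch for RHEL-family systems (service and parameter names as on RHEL 5/6; sizes are illustrative):

```shell
# Reserve memory for the capture kernel via a kernel argument
# in the bootloader config, e.g.:
#   crashkernel=128M

# Enable and start the kdump service
chkconfig kdump on
service kdump start

# After a crash, vmcore files land under /var/crash/ by default,
# ready to hand to the vendor or inspect with the "crash" utility.
```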
Debian has an advantage on package availability: almost everything you need exists in the official repositories. This is extremely handy for those of us that don't like to waste time installing things from source (because it is never about the time to install them, it's about the time to keep them updated).
RHEL repositories aren't as extensive. Even using EPEL (which I feel is indispensable), sometimes things just aren't there. But what's there has 10 years of support, which is very important.
In a setting where you have very little budget for specialized appliances and have to do everything on Linux (servers, routers, firewalls, balancers, ...), and can take the overhead of having to upgrade every 3 years, Debian is a nice choice. The repositories have all the nice extras for all kinds of uses.
If you are using Linux for enterprise servers, where you have either proprietary software or your custom applications require a lot of paperwork to upgrade, it's nice to be able to keep them running for more than a decade. The regular ".x" updates do a good job of keeping things fairly current in terms of supported hardware (and feel), so it's not like things get only security updates.
I don't like "yum" better than "apt-get" (both have advantages and disadvantages), but I do like ".rpm" better than ".deb" from a package builder standpoint. I'm biased, since I have packaged RPMs from scratch but never DEBs. However, when I build already existing packages from source, RPM just seems a cleaner format.
Your mileage may vary.
If you like to configure everything from the CLI, Debian's core configuration files are much nicer than RHEL's. Compare, for example, "/etc/network/interfaces" with "/etc/sysconfig/network-scripts/*". This difference is enough for me to have X11 on my desktop just to be able to run "system-config-network" - on RHEL5, that is, because on RHEL6 it's either editing those awful config files or using the even more awful NetworkManager... bleh!
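For the curious, here is the same static-IP setup on both (interface names and addresses are made up):

```
# Debian: /etc/network/interfaces
auto eth0
iface eth0 inet static
    address 192.168.1.10
    netmask 255.255.255.0
    gateway 192.168.1.1

# RHEL: /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
BOOTPROTO=none
ONBOOT=yes
IPADDR=192.168.1.10
NETMASK=255.255.255.0
GATEWAY=192.168.1.1
```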
The differences come out when you start dealing with lower level things. In those cases I find Red Hat's approaches are more pragmatic (work better in the real world) while Debian's approaches seem to strive for theoretical perfection. RH is optimized for the 90% use case, Debian is optimized for nothing (or is optimized for the 10% use case). Some examples:
Kickstart vs Preseed:
For PXE boot automated installs Red Hat's installer (anaconda) uses Kickstart for the configurations. Debian (debian-installer) uses Preseed (it can actually use some Kickstart configurations, but not enough to eliminate Preseed).
Kickstart is easy; for example, creating disk partitions and file systems can be done in a few human-readable lines (zerombr, clearpart --all, part / --fstype=ext4 --grow).
Preseed makes bizarre choices. You are essentially automating the interactive installer, including having to confirm all those dialog boxes that say things like "Are you sure you want to overwrite your data", "The previous configuration didn't use LVM and the new one does, are you sure you want LVM?", "This is your last chance, your data will be lost". So along with the partition configuration, which is more verbose (and not really human readable), you need lines like (d-i partman/confirm_write_new_label boolean true, d-i partman/choose_partition select finish, d-i partman/confirm boolean true).
It makes no sense that an automated install is not automated, but some will argue it's more powerful.
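Side by side, the contrast in partitioning looks roughly like this (sketched, not complete installer configs; the preseed recipe names are the stock ones):

```
# Kickstart (RHEL): wipe the disk and grow / into it
zerombr
clearpart --all --initlabel
part /boot --fstype=ext4 --size=500
part /     --fstype=ext4 --grow

# Preseed (Debian): the equivalent, plus the confirmations
d-i partman-auto/method string regular
d-i partman-auto/choose_recipe select atomic
d-i partman/confirm_write_new_label boolean true
d-i partman/choose_partition select finish
d-i partman/confirm boolean true
d-i partman/confirm_nooverwrite boolean true
```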
Package init scripts:
Debian WTF?! Debian packages usually use a helper program called 'dh_installinit' to install and manage their init scripts. The defaults of dh_installinit are horrible, and most packages seem to stick to the defaults. It defaults to starting services when the packages are installed, which is rarely what you want to happen. Almost always you want to configure the service and then start it (optimizing for the 10% use case). When you upgrade a package it defaults to stopping the service before the upgrade and starting it again after. Very few services require that. I don't want my web server stopped for a few minutes while packages are upgraded. Conversely, Red Hat actually has a hot binary restart option in its Nginx init script, so you can do a package upgrade and restart with the new binary with no downtime.
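There is an escape hatch, though it is not well advertised: dpkg consults /usr/sbin/policy-rc.d before invoking init scripts, so you can suppress the automatic starts during installs. A minimal sketch (the package name is made up):

```shell
# Tell invoke-rc.d to deny all service actions during installs;
# exit code 101 means "action forbidden by policy".
cat > /usr/sbin/policy-rc.d <<'EOF'
#!/bin/sh
exit 101
EOF
chmod +x /usr/sbin/policy-rc.d

apt-get install some-daemon   # installs without starting the service

rm /usr/sbin/policy-rc.d      # restore normal behaviour
```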
If you want to see how crazy it gets read line #132 of the iptables-persistent init script. They actually disable the stop function (required by LSB) because if they didn't it would disable the firewall during package upgrades: http://bazaar.launchpad.net/~ubuntu-branches/ubuntu/precise/...
Ubuntu is half-baked:
Even Ubuntu's LTS releases have half-implemented features and very little documentation about new/changed features. The most annoying case I encountered was their switch to Upstart as the init system for 10.04 LTS. Some services (like apache2) had normal init scripts in /etc/init.d, others were moved to a new Upstart configuration format (like sshd). The services with the new Upstart format had symlinks in init.d that printed a message that it wasn't a real init script (but didn't actually manage the service). They didn't document this, or tell you how to manage these new init scripts. None of their service management utilities were upgraded to handle the new init system. If you ran 'update-rc.d' it would symlink the non-functional message script into the rc.d directories, so on boot or shutdown it would print a big message and not actually start or stop the service.
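Concretely, on a 10.04 box you ended up juggling two different command sets depending on which camp a service landed in:

```shell
# SysV-style service: managed the old way
/etc/init.d/apache2 status
service apache2 restart

# Upstart-managed service: the init.d entry is just a stub,
# you have to use the Upstart tools instead
status ssh
restart ssh

# Upstart jobs live in /etc/init/*.conf, not /etc/init.d/
ls /etc/init/ssh.conf
```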
On the flip side, Red Hat also switched to Upstart. But Red Hat did it in a totally seamless way: all the init scripts are still fully functional in init.d, and 'service' and 'chkconfig' work as expected. And the system gets all the benefits of Upstart (smarter ordering, faster boots) with none of the downsides. The one minor change that Upstart did bring (the contents of /etc/inittab) is clearly documented: https://access.redhat.com/knowledge/docs/en-US/Red_Hat_Enter...
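On RHEL 6 the entire visible change amounts to an /etc/inittab that only carries the default runlevel; everything else moved to Upstart jobs under /etc/init/ behind the scenes:

```
# /etc/inittab on RHEL 6 (comments trimmed; runlevel may be 3 or 5)
id:3:initdefault:
```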
I think packaging and creating repos in Red Hat is more straightforward than in Debian. Lots of people will disagree. Debian's system seems over-thought, like they tried too hard to make it perfect for everyone, which just ended up making it more complicated for everyone.
All that being said, I still pick Debian-based systems a lot. Mainly when I want a specific package or package version. And when I write Puppet modules I make sure they work on both.
iptables-persistent is installed on 1.17% of machines that supply popcon data. Let's face it, compared to pf iptables sucks, but it sucks a lot less when you use Shorewall.
The apt repo format, with pools and the indexes: it's a nice idea, and I see how pools can save space, but ultimately they add complexity with little in return. The deb control files, all broken up instead of a single spec file: I understand the urge to modularize, but ultimately it makes them harder to work with while adding little value. Same with what they do with apache and nginx, with mods-available, mods-enabled, sites-available, sites-enabled. Again, I understand how that could seem better - more modular, easier to manage - but ultimately, in my experience, it's just a pain to deal with -- they tried too hard.
> I am not sure what "half-backed" means but it does not matter because you are talking about Ubuntu and not Debian.
Half-baked means unfinished or not well thought out. I think Ubuntu gets this because of their strict 6 month release cycle (not the "when it's done" attitude Debian and Red Hat traditionally have). Also they seem to lack attention to detail when it comes to server stuff. No, Debian stable is not half-baked. But the OP specifically asked about "Debian-derived Linux distributions" so I included something Ubuntu specific.
> Do you think debian comes with poor documentation for new/changed features?
While I prefer Red Hat/Fedora [online] documentation, no I don't think Debian documentation is poor. Debian actually updates man pages when they make changes, unlike Ubuntu. The reason I like Red Hat documentation better is that it has a more focused voice, it comes across as official and clear. Debian documentation often comes across as personal opinions, and appears more scattered to me.
> iptables-persistent is installed on 1.17% of machines that supply popcon data. Lets face it compared to pf iptables sucks, but it sucks a lot less when you use shorewall.
Ah yes, pf's syntax and features put netfilter/iptables to shame. I would much rather run pf as a network [gateway] firewall. But I also think you should have a host firewall - they are really needed in the cloud - so iptables it is. According to the popularity contest, only 2.49% have Shorewall and 1.87% have ufw. I'm guessing a lot of people rolled their own with shell scripts. Which is another gripe I have: there is no default persistent firewall, and to me that is crazy. iptables-persistent doesn't even support IPv6 in squeeze (the current stable). It makes me feel like Debian lacks direction - no one cares enough to make it happen, there's no project manager insisting it get done because customers want it, people just do what they want.
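Rolling your own persistence is only a couple of commands, which is probably why so many people do it (file paths here follow the later iptables-persistent layout; squeeze's version only reads a single v4 rules file):

```shell
# Dump the live rules so they can survive a reboot
iptables-save  > /etc/iptables/rules.v4
ip6tables-save > /etc/iptables/rules.v6

# Restore them at boot, e.g. from an init script or an
# /etc/network/if-pre-up.d/ hook:
iptables-restore  < /etc/iptables/rules.v4
ip6tables-restore < /etc/iptables/rules.v6
```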
I don't mean to pick on Debian, Debian is great, I just wish they'd do some things more like Red Hat :)
The short answer: the choice to use Ubuntu in particular was an incredibly bad decision, and we're ripping it out, but it's going to take forever.
A lot of the anti-ubuntu bias you hear (and my own strong anti-ubuntu feelings) are developed from trying to make these things work at scale and in an automated fashion. I'll be the first to admit that you can find lots of smart people (particularly here) who have made this work, but at quite a bit of effort. Woe be unto you if you try to do anything they didn't foresee.
* Their packaging guidelines and standards are quite poor--some packages start the daemons on installation (bad), and some don't, but you'll never know which do and which don't until you install them or inspect the debs, and the universe/multiverse repos are even worse. Additionally, some packages require interactive use on installation (NIS, I'm looking in your direction). Yes, you can preseed these questions away, but I find it astoundingly irritating.
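For anyone who hasn't fought this: the preseeding workaround means feeding debconf the answers before installing. A sketch (the question name and value are illustrative, not verified against the actual NIS package):

```shell
# Answer the debconf questions ahead of time
# (debconf-set-selections comes from the debconf package)
cat > preseed.txt <<'EOF'
nis nis/domain string example.com
EOF
debconf-set-selections preseed.txt

# Then install fully non-interactively
DEBIAN_FRONTEND=noninteractive apt-get -y install nis
```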
* Their work on user experience is to the detriment of everything else you'll need to get a stable, predictable system, and they admit this. They half-finish major system changes, and don't give a shit if it works or is documented. Here's a rich one: http://manpages.ubuntu.com/manpages/precise/en/man8/mountall.... That's one of the most critical pieces of software on the system, and that man page has been unchanged for over 2 years. It does different things depending on what signals it traps, and the only way to find that out was to look through the source. I'd say I generally find something stupid like that every month or so.
* Canonical support is a complete and utter joke. The support contract we signed would have been more useful as toilet paper. My company also has quite a few kernel hackers on staff, and we damn well needed them while we waited for canonical to get off their asses and patch serious issues we had (that were, by the way, fixed in the upstream kernel). On two such occasions, I sent in bug reports pointing to lkml posts that pointed out the issues, I backported the patch directly to the Ubuntu release kernel, and still waited months for any movement on the issues.
* This is not Canonical/Ubuntu/Debian's fault, but it's still an issue. Any third party software that is sufficiently complicated is rarely tested on debian-based distros before it is released. In some cases, this is fine; in other cases, it's a disaster. There are subtle differences in kernel versions, libraries, etc. that will make some software just plain not work. Other software (Infiniband in particular) is a huge hassle to get working just because nobody who packaged it ever cared to make it work on ubuntu. This is less of an issue in the startup/web app world since a lot of that software really is developed and tested on multiple operating systems. My background is in HPC and research/academic computing, and quite a bit of that software is only used on Scientific Linux or some other RHEL derivative. This is changing a bit with the OpenNebula stuff, but not fast enough for my taste.
We don't get support from either Canonical or Red Hat, but I've worked in places that had "support" from both, and I agree with your assessment that support for Ubuntu from Canonical sucks and that support from Red Hat is actually quite good.
Yes, Ubuntu is not afraid to push out completely broken packages like it was Debian experimental. I don't know what to say about that: Test it before deployment on production?
We have a custom package pushing system for our cluster and its nodes. Users don't have shell access to these things, so it's not a huge deal. I've never had the kinds of problems you mentioned above regarding user input. There are ways of getting around those kinds of things.
We will probably still be using CentOS, Rocks, and SciLinux for awhile since it's 3rd party and community supported, like you said. It's my personal opinion that the fundamental design, process, and community surrounding Debian is superior to Red Hat/CentOS, but sometimes you just gotta use what works.
I'm not paid to fix bugs all day to make it work. I'm just paid to make it work. And sometimes CentOS/SciLinux just works.
By contrast, Ubuntu promises that LTS releases will happen exactly every two years, and be supported for three more years after the next LTS comes out. That's an extra two years of breathing room to organise migrations and refresh production machines, which can be pretty important in a large enterprise.
The Precise (LTS) fix for those two iptables modules has still not been released.
I'm just going to ramble on incoherently for awhile until I stop...
My background was originally with FreeBSD, and then I switched to Debian GNU/Linux back in 97/98. There are still a number of things about FreeBSD that Linux doesn't have, and I wish it did; console scrolling behavior, getuid on directories, the network filtering tools, etc etc.
I am going to bash Red Hat/CentOS here a little bit, but I'm not going to hold back criticism against the deb distributions either.
I am not going to address desktop issues here at all, other than to say that I am typing this now from a Debian unstable, which is my primary home workstation. I'm mostly talking about server/shared-host system usage here.
For me, at home, it's all Debian. At work, we have a mix of Ubuntu LTS, CentOS, Rocks clusters, big Lustre filesystems, and some OS X Server. We are in the slow process of installing more Ubuntu and slowly getting rid of the CentOS. The exceptions are the Rocks clusters and where we use Lustre/Infiniband, mostly because of Scientific Linux (forgot to mention that one) and the ready-to-install distros that fit our research needs. Ubuntu/Debian has been getting better at supporting this area, and we will probably re-evaluate this in the future and switch over if we can.
For the uneducated, you can basically break the big Linux distros into two camps: .deb based (Debian, Ubuntu), and .rpm based (Red Hat, CentOS).
I can't really get around saying this any other way, so let's just get to it: CentOS is Red Hat for people too poor/cheap to pay for Red Hat. If Red Hat was free as in beer, CentOS would not exist. There is really no educated argument against this. The only people who I've ever seen bother to argue against this do so because they don't know anything else and take it as a personal attack, which it should not be.
Red Hat as a company is doing really great things for Linux, and almost always has. They employ a lot of top talent and everyone in the Linux/GNU world benefits from it. A lot of people who are using CentOS should be using Red Hat.
Red Hat and its children have replaced their package management tools and formats multiple times now. They did this because the old ones sucked and, for whatever reason, evolutionary progress never happened within those existing tools. This is a huge negative to me, and it's one of the reasons why I started using Debian over FreeBSD in the first place; I saw that Linux was really the future, and that compiling everything all the time was for suckers (sorry, all ye Gentooders).
There are some specific software packages that were made to only run on Red Hat. I'm thinking of Oracle, but there are some other big ERP systems and such things. In these cases, it's either Red Hat or CentOS. Fighting the developers who wrote these big (dumb) software packages to run on just Red Hat isn't worth your time, so just go with an rpm distro and get on with life.
Debian is a huge, complicated, professional community. I don't just use Debian because of the base distribution itself. I use it because of its awesome bug tracker, the users and developers who are involved, the mailing lists, the design decisions, the tiny netboot image that I can install from, the huge number of packages and the sanity of the package management system, its security notification and release system, and the online documentation resources.
This reminds me of the fact that on my home server, I installed it off a Debian CD back in like 2000. Since then, no CD or outside media has ever been involved in upgrading it (even for the transition from i386 to amd64). Debian/Ubuntu upgrades are all over the network and in-place. Contrast this to Red Hat/CentOS where even today, most admins have to back up everything, wipe, and re-install with a CD. I think modern releases FINALLY have the ability to upgrade between major revisions in place, but this has ALWAYS been the case for Debian. "apt-get dist-upgrade" and that's it. Some people will fight me on this, but this happened at work just two weeks ago: our top CentOS sysadmin upgraded a system using a CD because that's the way he had been doing it for the last ten years. In-place major version upgrades are not well supported under Red Hat, and before a couple of years ago, they weren't supported at all.
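The whole in-place procedure on Debian amounts to a handful of commands; a sketch for the squeeze-to-wheezy transition (back up first, and read the release notes before doing this for real):

```shell
# Point apt at the next release
sed -i 's/squeeze/wheezy/g' /etc/apt/sources.list

apt-get update
apt-get upgrade        # safe, non-removing upgrades first
apt-get dist-upgrade   # then the full transition
reboot
```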
Ubuntu came along as an upstart and has created its own community. Ubuntu is a vehicle/tool for Canonical, so you need to understand that it's not driven by the community like Debian is. That being said, Ubuntu's ease of use is why we use it at work, because I don't expect our low-level sysadmins to suddenly become awesome. They need something that was built with ease of use in mind, even at the expense of flexibility.
I have been disappointed in a few things about Ubuntu. I've run into some weird kernel stability problems with Ubuntu that our CentOS hosts never ran into, all on the same hardware. Their insistence on messing with motd in every damn release bugs the hell out of me. The Ubuntu guys spend a lot of time fixing things that ain't broke, so core tools and daemons get replaced every once in a while and I have to re-learn something for what are dubious benefits.
I was one of those people who was unhappy with Ubuntu in the early days, for taking and taking from Debian and giving nothing back. But I also was very sympathetic to why a lot of Debian contributors went over to their side, because of some of the slow progress and procedural community complexity that Debian has.
Debian has a bad name for itself, in that people associate it with being old. This is true, in the "stable" distribution, and in the past, I've been really frustrated by just how long it took certain core packages (kernel, Samba, Apache, etc.) to get from unstable into stable, to the point where a lot of people, myself included, just ran unstable on our servers and dealt with the occasional breakages. You learn a whole lot about how things really work when you install a bad version of mdadm and lvm.
I just run Debian unstable on my desktop and servers. Yes, I've had a few buggy accidents, including one time in the last two years where a remote server lost its networking because of a bug in the ifrename package, but problems like this have been very, very sparse.
My personal experience is that higher quality admins use deb-based distributions over rpm distributions. Almost all of the noob/idiot admins that I've ever met were CentOS/Red Hat people. It's not so much that they had chosen rpm distros, so much as it's the only thing they had ever known, and they didn't seem interested in (or maybe capable of) learning anything else. I often find that rpm distro users think Red Hat = Linux, and in some cases, they are not even aware that anything else exists.
As for CentOS, its community isn't that great. Again, they are just leeching off Red Hat. You can't make design decisions or anything else, because each release is to take what Red Hat puts out, scratch off the Red Hat label, and re-publish as CentOS.
At one point, Lance Davis disappeared FOR MONTHS and seriously threatened the CentOS project and community. That's pretty much all you need to know about CentOS:
I've heard this quote multiple times: "I've heard a lot of people say that Red Hat is the most popular distro, but I don't think I've ever heard anyone claim it was the best."
With Red Hat, you are a customer. With CentOS, you are a user. With Debian, you are part of the community. With Ubuntu you are using Debian for lusers.
ps: I have good things to say about Arch and Gentoo, but I'm an old fart Debian guy.
Use Ubuntu by default. Use Red Hat only if you have to. Use CentOS because you are too cheap/poor to pay for Red Hat. Use Debian if you are a pro. Use OpenBSD if you want to be secure.
We are doing a lot of infiniband, 10Gig networking (with full multi-interface throughput for days at a time), huge filesystems, and crazy other stuff that a regular web server would not do.
Notably, almost all of our web servers have gone Ubuntu for ease of management, since the web guy likes Ubuntu.
But we don't use Ubuntu for their commercial support. My experience there, in the past, has not been good. We ultimately support ourselves.
The answer will be exactly 0. Canonical's focus is on phones, tablets, and desktops. Mark simply does not take the server as a serious use case. Clearly redhat doesn't know what they are doing, they've only built the first billion dollar purely open source company.
Since Red Hat employs a very, very large number of upstream kernel.org Linux kernel hackers, don't you think that maybe, just maybe, they backport some of those newer features into their old stable kernels? As one of the parents alluded to (Open vSwitch and namespacing support), yet again the answer is yes. Pure Debian isn't bad, granted, but it simply doesn't compare to Red Hat. I just don't see them as being in the same league. Canonical is a sales organization parading (poorly) as an engineering organization. However, they did absolutely wonderful things for marketing Linux before Android came out, so they do deserve respect where it is due. What do I know, I've only worked in large multi-thousand-node environments my entire career </snark>
"You can't make design decisions or anything else, because each release is to take what Red Hat puts out, scratch off the Red Hat label, and re-publish as CentOS."
That is the stated purpose of the CentOS project. See the CentOS Overview box on the CentOS Web site. As an end user with a desktop PC, I find that limited scope and predictability reassuring.
It's not a bad thing, but if you already know Debian, I don't think there's a huge reason to switch to CentOS.
Also, no upstart crap ;-)
I have a better experience with actual pure debian.
Anyone know the kernel version bump? It's always hard to figure with the backporting.
Okay tried it on one box