
Tech Choices - Why we use Centos instead of Debian / Ubuntu - ceyhunkazel
http://www.chinanetcloud.com/blog/tech-choices-why-we-use-centos-instead-debian-ubuntu
======
fps
I've been supporting production RedHat and CentOS servers since RH 7 (15
years), and while RHEL 6 is a huge improvement over the bad old days of
up2date and RPM dependency hell, RedHat still has a long way to go to catch up
to where Debian was 10 years ago. The quality of packages on RH is still piss-
poor compared to Debian (regularly missing man pages, broken default
configurations, etc), and if it's not in RH core or EPEL, you're pretty much
building it from source, because finding up-to-date third-party RPMs that
target the right version of RHEL is nearly impossible. yum is enormously
slower than apt (I've clocked it at 10x slower on average on identical
hardware), and rickety as crap. You get into dependency deadlocks regularly
that require you to remove whole swaths of rpms just to clear out a
conflicting dependency. Yum's reliance on an ancient version of Python means
that supporting modern Python applications on RHEL means either building all
your own Python RPMs and installing them in non-standard locations, or
breaking yum.

At my current job, we PXE-install thousands of CentOS compute nodes and never
touch yum again. When it's time to update, we build a new kickstart and
reinstall. That's the only reasonable way to run RHEL/CentOS and not lose your
mind.
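For anyone who hasn't seen one, that hands-off reinstall workflow boils down to a kickstart file along these lines (every value here is illustrative, not our actual config):

```
# Minimal CentOS kickstart sketch -- mirror URL, password, and
# package set are all placeholders.
install
url --url=http://mirror.example.com/centos/6/os/x86_64/
lang en_US.UTF-8
keyboard us
rootpw --iscrypted PLACEHOLDER_HASH
network --bootproto=dhcp
clearpart --all --initlabel
autopart
reboot

%packages
@core
openssh-server
%end
```

Point your PXE boot entry's kernel arguments at that file and the node installs itself with no prompts.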

I have Debian systems that are still running dist-upgraded versions of
10-year-old installs, and the dpkg database is sane and clean. You can't do
that with RHEL.

The unfortunate fact is that, in the enterprise world, people other than
system admins make the decision to run RHEL at the expense of sysadmin time
and sanity.

~~~
mpdehaan2
I take the contrasting view (vastly prefer Enterprise Linux) but write
software that has to support many distributions.

Generally speaking, there are _better_ package review standards applied to
Fedora (the upstream of Red Hat and CentOS), and you don't have things like
debconf that make figuring out how to do a non-interactive install confusing.
This means that in general there is less "randomness" than in your typical apt
package.

Kickstart also makes it easier for users to bootstrap installations than
preseed files do, which in many cases require scripts to be written all on a
single line. Writing preseed files causes me immense pain.

.rpmnew is also a nice system for replacing configuration files. By contrast,
I've had apt purge _fail_ when files were deleted prior to running the purge.

Ultimately I'm not sure what kind of repositories you are working with, but
that may be the problem.

I would choose CentOS or RHEL every time.

Many slowness issues can be solved by maintaining a local mirror and also by
disabling the fastestmirror plugin -- while perhaps good for slow connections,
the mirror speed checks cause added delays.
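Concretely, the plugin can be switched off either per-invocation or permanently (paths are the stock CentOS 6 ones):

```
# One-off:
#   yum --disableplugin=fastestmirror update
#
# Permanent -- edit /etc/yum/pluginconf.d/fastestmirror.conf:
[main]
enabled=0
```

With a known-good local mirror there's nothing for the plugin to measure anyway.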

------
languagehacker
I see no discussion of package management in here, which is kind of a bummer.
My experience with yum has been pretty crummy and I don't think it's improved
over the last several years. That alone should drive someone developing
software away from working on that platform.

CentOS and Red Hat are great if you have a single package you need to deploy
to a machine and all the dependencies are already there or easy to pull down
from yum. You get the advantage of not having a capricious OOM killer process
that can randomly take down your system. But if you're actually in the process
of building something, working on CentOS is just a pain in the ass.

~~~
SwellJoe
Just a counterpoint: I love yum. It is, by far, my favorite package management
option. Managing yum repos is a breeze compared to every other option (the
toolchain is one command, versus a half dozen commands and hours of manual
labor to set up a new repo for Debian/Ubuntu). I also prefer it from the
end-user perspective.

apt-get is fine, and I wouldn't be at all unhappy with servers running Debian,
but yum is my preference.
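To make the "one command" claim concrete: publishing a yum repo is basically createrepo plus any web server, with clients pointed at it by a .repo file (all names and URLs below are illustrative):

```shell
# Build repo metadata over a directory of RPMs.
createrepo /var/www/html/myrepo/

# On each client, drop in a repo definition.
cat > /etc/yum.repos.d/myrepo.repo <<'EOF'
[myrepo]
name=My local repo
baseurl=http://repo.example.com/myrepo/
enabled=1
gpgcheck=0
EOF
```

Re-run createrepo whenever you add or replace RPMs and clients pick up the changes on their next metadata refresh.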

~~~
mercurial
As a Debian user, I agree that you can't beat the ease of setting up a repo on
RHEL/CentOS. On the other hand, I'm not fond of the .spec format and working
with yum is not particularly pleasant. Not to mention that Debian ships with a
lot more packages.

~~~
shredfvz
I've provisioned and maintained dozens of servers all running Arch, and
currently have four home PCs running it. I've found Pacman to be one of the
easier and more pleasant package managers to live with.

You can sort through tens of thousands of existing PKGBUILDs on the AUR [1],
which typically makes it quick and easy to start packaging software for Arch.
You can even sync a flat text file database of all official Arch packages with
abs [2]. The ability to reverse engineer every PKGBUILD for a wide variety of
software is a major plus in my book.

Writing a PKGBUILD takes roughly the same effort as compiling software from
source with a bash script. PKGBUILDs are simple to write, and there's just
enough "magic" in Pacman to keep things sane.

Nine times out of ten, the PKGBUILD writing process boils down to copy/pasting
directions from README or INSTALL files. It's like a bash script, except more
'done-for-you'. Finding exact dependencies is typically a cinch with the AUR
and tools like packer.
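As an illustration of how thin the format is, a complete PKGBUILD for a typical make-based project can be this short (package name, URL, and checksum are made up):

```shell
# Minimal PKGBUILD sketch -- every value here is illustrative.
pkgname=hello-example
pkgver=1.0
pkgrel=1
pkgdesc="Example package"
arch=('x86_64')
url="https://example.com/hello"
license=('MIT')
source=("https://example.com/hello-$pkgver.tar.gz")
sha256sums=('SKIP')   # real packages pin a checksum here

build() {
  cd "hello-$pkgver"
  make
}

package() {
  cd "hello-$pkgver"
  make DESTDIR="$pkgdir" install
}
```

Run makepkg in the directory containing it and you get an installable package; that's the whole workflow.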

Maintaining a rolling distro can be a labor of love, but if you love the
system, you'll find it may significantly increase your overall sanity. It's
another way of doing things, but I consider choice of distro to be one of the
more important decisions to make, and IME, Arch has been such a significant
departure from other distros that not trying it in a serious capacity is
roughly equivalent to never trying Vim or Emacs in your career. Which is to
say, I think it's a mistake not to at least see what it may offer you,
especially if you're in any doubt.

I hope my positivity is only seen as that: positivity, and not overzealous
dedication to one specific toolset. Arch probably isn't a panacea, nor will I
claim that it's a perfect fit for your way of doing things. But I have never
found something as pleasant as Pacman to work with. In fact, I never would've
even tried Arch if I hadn't been slightly flustered by the seemingly
irrational exuberance random sysadmins displayed for the system.

1: [https://aur.archlinux.org](https://aur.archlinux.org)

2:
[https://wiki.archlinux.org/index.php/Abs](https://wiki.archlinux.org/index.php/Abs)

------
nsmartt
> _[W]e need reliability and predictability over a large variety of systems
> over many years. We need strong support by most of the world 's software
> vendors and open source project. We need documentation, tools, and global
> resources for the most commonly used systems._

In which of these does Debian fall short?

~~~
skrause
Compared to RHEL/CentOS, the Debian release cycle is actually very short, with
support for an older version ending just one year after the release of a new
stable version. So on average you will only have about 3 years of support for
each release, compared to at least 10 years for RHEL. That's why there has
been some discussion about a possible Debian LTS release:
[http://lwn.net/Articles/565007/](http://lwn.net/Articles/565007/)

~~~
SwellJoe
Lifecycle shouldn't be underestimated. The average life expectancy of a server
is over 3 years. I'm pretty much running on a five-year cycle right now for
most of mine. It's a huge time cost to have to deploy a new OS more frequently
than you deploy new servers.

------
enduser
FreeBSD has a very stable core distribution with an up-to-date rolling release
"ports" system for additional software. It's great for these purposes. It also
has excellent documentation, stable native ZFS, and compatibility with Linux
binaries.

~~~
SwellJoe
I've found FreeBSD to be extremely fragile, with regard to updates. It can
easily become unusable if you initially install from binary packages and then
move some things to ports builds (library dependencies aren't handled
appropriately, as far as I can tell). System upgrades (from 8 to 9, for
instance) are also scarier and more prone to failure than any Linux distro
I've used.

While I have a lot of respect for FreeBSD's developers, I'd be unwilling to
deploy it for production on servers; this is at least partially my own
inexperience with the system...but I'm simply afraid to trust a system that
makes it so easy for me to shoot myself in the foot.

Even with my own very limited use, I've never had a FreeBSD system that didn't
end up utterly trashed eventually (in such a state that I chose to reinstall
rather than try to fix it, because I had no idea where to start on fixing it;
I'm sure a more experienced FreeBSD user would have been more capable of
getting the system working again). I can't imagine what would happen if I were
using it heavily, without first spending months or years learning how to avoid
the pitfalls I run into so readily.

I had a similar level of experience with Debian (which is to say, not much),
but never had a problem keeping it running. My experience is much deeper on
Red Hat-based distros (I've managed CentOS and RHEL servers, and before that
Red Hat Linux servers, for almost two decades), so I can't really compare it
to FreeBSD, but I know our customers rarely run into OS issues on RHEL or
CentOS, and we have a lot of them. CentOS represents more than half of our
user base.

~~~
rgbrenner
I used fbsd for 15+ years, 10+ years in production... and found it to be very
stable.

I never experienced the problem you mentioned with packages... but packages
are frozen at release.. so they're never updated... so you end up having to
use ports for everything anyway. I always disliked having to compile every
single package from source. If you have a large number of systems, it's worth
setting up a build server and creating your own packages.

Major upgrades ARE scarier than on Linux. They have a fairly new binary
upgrade tool that I never used.. upgrades using buildworld generally work
ok.. just update your source tree, wait a few days for anyone to report bugs
on the mailing list, and then run buildworld. You can't do this over ssh
though.. you need console access. Again, if you have a large number of
machines, setting up a build server is worth it.
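For readers who haven't done it, the source-based upgrade being described follows roughly the Handbook sequence of that era (abbreviated; the mergemaster steps and the single-user reboot are where console access matters):

```shell
cd /usr/src          # after updating the tree to the target release branch
make buildworld
make buildkernel
make installkernel
reboot               # come back up in single-user mode
mergemaster -p       # merge config files needed before installworld
make installworld
mergemaster          # merge the remaining config files
reboot
```

If installworld or mergemaster goes wrong mid-sequence, you're fixing it from the console, which is why doing this over ssh is a bad idea.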

With that said though... FBSD has been slowly losing out in my company. 10
years ago I used it everywhere... then fbsd neglected the desktop (not enough
resources in the project), and Linux got so far ahead that I was pretty much
forced to switch to Linux on every workstation. Then a few years ago, I found
fbsd wasn't stable when virtualized.. ran a custom build on Xen for a while..
but eventually moved to Debian.. so it's now about 90% Linux.

------
redhat-reallyp
This is a content-free article. It boils down to "we've committed to CentOS,
so that's what we prefer to use."

The claim that "in our experience, [Debian/Ubuntu] are not nearly as stable or
trouble-free as RHEL/CentOS" says more about their lesser experience with
those systems than it does about Debian or Ubuntu.

~~~
larrys
"Many people ask us why not use Debian-based systems such as Debian or Ubuntu
server. We do support these if there is no other choice, but in our
experience, they are not nearly as stable or trouble-free as RHEL/CentOS."

So while they do have "less experience", they claim to have _ongoing
experience_ with Debian or Ubuntu. But we don't know what that means: it could
mean 1 machine per year or 10 per month.

Now, if they hadn't needed to support essentially any Debian/Ubuntu, that
might mean they are stale in that area. So without qualification of their
exact experience ("how often"), it is really hard to tell the bias in that
statement, wouldn't you agree?

~~~
redhat-reallyp
"without qualification of their exact experience ("how often"), it is really
hard to tell the bias in that statement, wouldn't you agree?"

No, because many people have experience which contradicts theirs, and they
don't provide any support for their claim: no information on what the
stability issues or troubles they encountered were, no information on the
causes and how they were specific to the OS in question as opposed to e.g.
user error, no information about how CentOS would do better on that issue,
etc.

All the indications are that they're making a very typical sort of claim that
people make when trying to defend a decision that they're not qualified to
defend. They provide no reasons to take it seriously. As I said, it's content-
free.

------
stormbrew
Really the only major things that annoy me[1] about using ubuntu LTS as a
server are:

\- The fact that packages which install server software autostart the server
on install, and the mechanism to disable this can vary from package to
package. I'd rather install-configure-then-start than
install-disable-configure-then-start.

\- The mechanisms for setting options for packages at install time are geared
towards interactive use. You have to stop it popping up a curses dialog, and I
don't think there's an easy way to pre-configure the options, so you just wind
up with the defaults if you do.

\- The version of upstart in 12.04 is really limited, and the mix of packages
using old and new style init can be frustrating at times.

[1] Perspective check: I'm mostly a software developer, but I also admin my
own servers for personal projects as well as having it as a peripheral aspect
to my job at times.

~~~
riskable
Wow. I am the author of a popular app-that-runs-as-a-daemon and I just went
through a HUGE debate with some folks about whether or not my package should
auto-start on boot by default.

I eventually settled on this: If you're using Upstart you get auto-start by
default. For everything else you'll have to turn on auto-start at boot via
whatever mechanism is normal for your distribution.

That means for Ubuntu you get auto-start on boot the moment you install the
package. For Debian you'll use update-rc.d and for RHEL-based distros you'll
use chkconfig.
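For reference, the per-family enablement commands being described look like this (daemon name illustrative):

```shell
# Debian/Ubuntu (sysvinit-style):
update-rc.d mydaemon defaults

# RHEL/CentOS:
chkconfig --add mydaemon
chkconfig mydaemon on
```

So users on those distros get one extra post-install step, while Upstart users get the service immediately.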

I have no idea if I made the right decision though since there's no standard.
Sometimes it annoys the hell out of me that a package configures itself to
start at boot and other times I wish it _did_ auto-configure itself to start
at boot. It all depends on the package and what I want the server to do
(primarily).

~~~
stormbrew
One of the nice things about upstart is that it helps this situation out a
lot. Disabling a service before you install the package is as simple as
creating a /etc/init/servicename.override file with "manual" in it. Unlike
some other solutions (like placing a full configuration file before install)
this doesn't trigger a conflict with the package installation.
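Per the parent's description, the whole mechanism is one file (service name illustrative):

```shell
# Create the override before the package lands.
echo manual > /etc/init/mydaemon.override

# The install then proceeds without the job being enabled at boot.
apt-get install -y mydaemon
```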

The thing is that, for a desktop user, starting the service is fine. A lot of
the time it's what you want. And that's Ubuntu's main raison d'être.

Also note that autostarting when _installing_ is different from autostarting
when _booting_. It's the former that I find frustrating, not so much the
latter. Generally speaking, I have plenty of opportunity to tweak the boot
behaviour between installing and rebooting (assuming the server ever reboots
at all).

------
dandrews
Red Hat puts a lot of effort into building its enterprise distro. Yeah, it's
mostly built out of FOSS parts, I get that, but distros are nontrivial and
costly undertakings.

I've always had a bit of a problem with CentOS, which as I understand it takes
Red Hat's work, removes the proprietary parts, and redistributes it
free-as-in-beer - perhaps depriving Red Hat of a sale or two. It's not exactly
theft, but it's not sporting either.

Would somebody straighten me out? Am I wrong to avoid CentOS on social
principle? What does Red Hat think of CentOS?

~~~
samarudge
Red Hat loves CentOS. To quote their CEO:

> CentOS is one of the reasons that the RHEL ecosystem is the default. It
> helps to give us an ubiquity that RHEL might otherwise not have if we forced
> everyone to pay to use Linux. So, in a micro sense we lose some revenue, but
> in a broader sense, CentOS plays a very valuable role in helping to make Red
> Hat the de facto Linux.[0]

[0] [http://readwrite.com/2013/08/13/red-hat-ceo-centos-open-source](http://readwrite.com/2013/08/13/red-hat-ceo-centos-open-source)

~~~
toomuchtodo
Somewhat like why Adobe didn't care if you pirated Photoshop. If you weren't
making money with it, they'd rather you knew Photoshop than something else
(maybe GIMP?). Having mindshare (ugh, I hate that word) definitely helps when
it comes time to get the credit card out for tools you'll need for a paying
project.

------
jerrac
At work I run close to 30 VMs on Ubuntu. I chose Ubuntu for two reasons: I've
been running it on my desktop for years and thus know how it works, and CentOS
did not have the newer versions of various packages I needed in its default
repos.

For the couple of years I've been doing this, I can't think of any downtime
that was caused by the OS itself.

Heck, even my desktop hasn't had OS caused downtime.

I would bet my experience would be the same if I was using CentOS.

Also, there are so many different ways you can automate your server and app
deployments, that your choice of OS should really depend on what app you are
running. A cutting edge app might need the latest Ubuntu release to get the
latest packages. Another app might need the stability of an LTS release. And
another might only work with an RPM based OS and specific versions of
dependencies that are only found on CentOS.

Anyway, that's my opinion and experience. I am curious if there's
statistical/scientific evidence supporting the idea that CentOS/RHEL is more
stable than Debian/Ubuntu.

------
dave1010uk
Is it generally considered better to go with a better-supported OS (e.g.
CentOS or an Ubuntu LTS) and use extra package sources for non-OS programs, or
to use a more up-to-date OS that's already tested with the latest programs?

We made this choice recently, choosing Ubuntu 12.04 and a few (popular but
unofficial) PPAs, rather than use Ubuntu 13.04.

~~~
kevinxucs
Wise choice, because the non-LTS versions of Ubuntu are basically a piece of
shit.

~~~
SwellJoe
It's not that non-LTS releases are "shit"... just that you'd be a fool to sign
up for replacing your OS every 18 months on a server that will be in service
for more than 3 years (which is the average lifespan of a server, and that
number seems to be growing).

~~~
toomuchtodo
Depends on the environment.

I manage a large AWS environment at my current gig, and all of our code is in
git. With configs in Puppet and code in git, I can deploy to new OS version
instances in the background, do testing/QA, and then swap the old OS
environment for the new one, all transparently to the user.

Now if you're talking bare metal, yes, it makes sense to go with LTS releases.

It all depends on what your equipment lifecycle is.

------
Glyptodon
I've usually hated using RPM/Red Hat-based distros because they tend to create
giant headaches whenever you need a relatively recent release of a library or
application, and because I just don't like their package manager.

I think most people's problems with Ubuntu having too short a release cycle
come from installing the current version of Server instead of the LTS version.
I've known shops that stupidly used non-LTS Ubuntu server releases, and then
kept them running after their support cycle ended...

I think maybe that's what happens when people who don't really know how the
Ubuntu ecosystem works just start using stuff.

------
sorin-panca
Why not have a tested Gentoo image, and give all servers two root partitions
to alternate between on upgrades? The image can then be updated, tested and
deployed (using PXE) whenever you feel the need. Also, using Puppet or Chef,
or even emerge with binary packages (via an in-house deployment solution), one
can install supplemental packages on a running system if needed. In my
experience, having a fully source-based Linux system is good for performance,
security and - last but not least - stability.

------
fbueno
It seems that you are not familiar with Debian's testing program.

------
ceyhunkazel
I respect RHEL/CentOS, but for ease of use (packaging system, vast number of
packages, easy upgrades) I prefer Ubuntu/Debian for my servers and desktop.

------
luikore
"Stable" sometimes means people are more familiar with the bugs than with the
bug-fixes... Conservative tech choices amount to: "I'm only familiar with X,
so I don't choose Y, unlike you guys who are familiar with both." -- This is
OK because the benefit of better tech doesn't matter much for their business.

~~~
omtinez
For a second there I thought that you were gonna start another Wayland
flamewar...

------
pdfcollect
Isn't Red Hat more expensive than Windows? I hope Debian gets better than Red
Hat, if it's not already :).

------
ilaksh
Has anyone had any reliability or other problems with Ubuntu 12.04 LTS?

~~~
jerrac
I chose the wrong option when installing a grub update yesterday. Repairing
from a live CD didn't seem to work, so I'm currently backing up my data and
will install from scratch - something I've been planning to do eventually
anyway, since this system was upgraded from 11.10.

I also run 30ish VMs with it at work. Only real issue I've ever run into is
how the /boot partition fills up with old kernels...

So, I have yet to see any proof that Debian is less stable than Red Hat, or
the other way around. Heck, how would you even measure that?

------
mesutillegal
Thank you very much :)

