
Ubuntu continues to rule the cloud - reddotX
http://www.zdnet.com/article/ubuntu-linux-continues-to-rule-the-cloud/
======
codewithcheese
A big reason why it dominates the cloud is tons of web developers use Ubuntu
as their desktop Linux of choice. It's really very convenient to develop and
deploy on the same OS. I don't think it's accurate to say "The desktop is a
nice add-on, but it's not Canonical's focus nor should it be."

~~~
mreiland
I run Ubuntu locally because it tends to work better as a VMware guest. You
can get other distros to work with the quality of Ubuntu, but not without a
lot more hassle.

If I get to choose I'll run Arch Linux, but inside VMware it's Ubuntu hands
down.

~~~
otis_inf
I use Mint 17.2 inside VMware and haven't had a single problem. Not sure what
VMware version you're using, but 'hands down' suggests other distros have a
lot of problems, and I don't see them.

~~~
tribaal
That would be because Mint is pretty much Ubuntu (it uses the Ubuntu archives
and packages).

There is added UI customisation on top, but that's irrelevant to it working
better or not as a VMware guest.

~~~
otis_inf
I'd still be interested in what massive probs other distros have on e.g. VMware?

~~~
mreiland
Your wording there 'massive probs' is disingenuous. What I said was simply
"works better", I never said anything about major problems. In fact, I
specifically said the other distros can be made to work just as well, but it
takes more effort.

But to answer your question: I've found desktop performance to be better in
Ubuntu for the same specs, many distros don't play well with pause/unpause
(it can cause time skew that messes with things like HTTPS certs), and things
of that nature. Then of course there's the "easy install", although a few
other distros feature it as well.

Nothing that doesn't have fixes, but as someone who is more interested in
getting the work done than tinkering with the Linux install, I just go with
Ubuntu.

------
dijit
So, before I knew what Linux was (and was teased on various forums) I ordered
some free CDs from Ubuntu (I didn't have the internet at home). Eventually
I got them; Ubuntu 5.04 I think (Hoary Hedgehog, from memory).

When I received them I was pleased, everything worked.. well, not everything,
but it sorta worked! I had a desktop environment and a command line and I felt
a small sense of accomplishment because I'd navigated the strange menus
safely before Anaconda or full-framebuffer installers... Because of the peer
pressure I learned how to do my bits, and I carried on.

Later in the year I found Fedora, and blue is a nicer colour than brown (I was
young and fickle), but it was less user friendly, so I committed to learning
it and getting off the "Noob Friendly" Ubuntu OS.

Many years later I got a small laptop for my mother. At this stage in my life
I was "awoken" and I knew the power a machine could hold if it ran Linux, so I
put Ubuntu on it. She's not the most technically apt lady in the world but was
able to do most things with ease, and I put that down to having a "Good UX
outside Microsoft" (since most people who learn the Microsoft way are
generally committed to a mindset, and anything outside of that is pushed away).

A few issues with Flash, some performance hiccups on some websites that seemed
to try and avoid supporting linux in strange ways (that I take for granted I
know how to bypass) and eventually the machine gave up the ghost.

I bought a new machine and put Ubuntu on it (13.10 I think) and she was
somewhat less than pleased: the UX had changed, she didn't know what was
available anymore, nothing was organised in a way she understood.. and so I
installed Mint. She's now happy.

So I'll say this for Ubuntu: they put Linux in the hands of people who we
should really be targeting. It allowed me access to Linux, acting as a base
plate and later acting as a full-blown system for someone who was not
interested at all in computers. And they pushed a trend for that, so we should
all be thankful.

~~~
dijit
A follow on from this story and many moons after my "fickle" switch to
Fedora/RHEL.

At this point in my life I'd been involved in a half dozen large companies and
used Linux at enormous scale.

I moved to a company that was using Ubuntu LTS (10.04, old at the time) in
production. It was heavily invested, and I expected that wouldn't change, as
developers were very hesitant to change to Debian (which is too old/doesn't
make things easy enough) or CentOS/RHEL, which suffers the same issues and has
the added benefit of SELinux (which I'm an advocate of understanding
rather than disabling).

I go through my daily security advisories and a local privilege escalation
means all our virtual machines and virtual machine hosts are affected. Luckily
it's patched, as 10.04 is still supported, so I run apt-get update; apt-get
upgrade and send out an email saying the server will be down for 30 minutes
while it receives patches.

I was wrong, it was down for 6 hours.

Unfortunately someone upstream caused that particular kernel update to rebuild
all the initramfs images on the machine, and had also renamed lvm2 to lvm, so
now my drives wouldn't mount.

On any kernel version/initramfs version..

Normally you can drop to a shell, load the module, mount the drives and
continue startup, but unfortunately that stopped a lot of things from loading,
such as the bonding we had in place on the NICs.

Obviously I didn't know why it broke at the time and was attempting to get
help from #ubuntu on Freenode. The response was:

"Sometimes it's better not to know why it broke"

That server was smoothly running CentOS that same day, and I managed to get
all the Ubuntu VMs back up and working.. and replaced apt-get with a shell
script which simply echoes "don't do that".
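That "replace apt-get with a warning" trick is trivial; a minimal sketch,
using a temp dir that shadows the real binary on $PATH (the path is
illustrative, on a real box it'd be something like /usr/local/bin):

```shell
#!/bin/sh
# Shadow apt-get with a wrapper that refuses to run (illustrative sketch).
STUB_DIR=$(mktemp -d)              # stand-in for /usr/local/bin on a real box
cat > "$STUB_DIR/apt-get" <<'EOF'
#!/bin/sh
echo "don't do that" >&2
exit 1
EOF
chmod +x "$STUB_DIR/apt-get"
export PATH="$STUB_DIR:$PATH"      # the stub now wins over /usr/bin/apt-get
```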

So in my opinion, support and enterprise use is where it falls down.

~~~
slgeorge
It's a lovely story, thanks for sharing it.

On "support and enterprise" not being ready: the only comment I'd make is
that what you tested was the community user support, e.g. on IRC.

In open source there's fundamentally a trade-off between "money for time, or
time for money". In a production enterprise environment where it's urgent to
always be available, getting professional support makes sense. Then, when you
have an issue, rather than the uncertainty of a community channel you can get
hold of professional support from the experts in the software.

That's true of any of the major distributions and a lot of other important OSS
software used in production.

Note: I have a bias on this point since I set up Canonical's professional
support and consulting organisation.

------
MarionG
Debian still dominates the web server market, but Ubuntu is catching up there
too: [http://w3techs.com/technologies/details/os-linux/all/all](http://w3techs.com/technologies/details/os-linux/all/all)

~~~
olalonde
I highly doubt those stats are accurate.

    
    
    > Unix is used by 67.1% of all the websites.
    > Linux is used by 35.9% of all the websites.

This adds up to 103%, so I guess Linux is included under Unix? That means 31.2%
of servers are running a non-Linux Unix? Seems a bit high...

~~~
X-Istence
The wording on the site is bad. That figure means that 35.9% of the sites
surveyed are hosted on Linux. Linux is considered a sub-category of Unix on
w3techs.

See also: [http://w3techs.com/technologies/details/os-unix/all/all](http://w3techs.com/technologies/details/os-unix/all/all)

~~~
mikeash
45.2% of UNIX web servers are of an "unknown" variant? How can you detect
"UNIX" but not have any idea which UNIX it is? I'm going to hazard a guess
that you'd be able to get numbers of similar accuracy by inspecting
/dev/random.

Edit: poking around the site a bit more, I notice that they count Darwin under
UNIX but OS X is a separate top-level category alongside Windows and UNIX
(albeit one with less than 0.1% share). WTF are they _doing?_ This makes no
sense.

~~~
realityking
Technically you could run a non-OS X Darwin. Maybe that's the explanation?

~~~
mikeash
Could be, but it still doesn't make sense. It's the same underlying stuff, so
either they're both UNIX (what I'd vote for) or neither is.

------
geoffroy
I'm not an expert on Linux server-side distros. I'm using Ubuntu Server and I
haven't seen any cons so far. Any hints?

~~~
lucaspiller
One issue I've found is support for old releases. Ubuntu only has a 5 year
support life cycle, whereas CentOS / RHEL have a 10 year support life cycle.
For most people this isn't an issue, but in the enterprise it definitely is.

I recently had to move a bunch of Ruby 1.8 applications (where it didn't make
financial sense to upgrade them) to new servers. They wouldn't even run on
Ubuntu 10.04, whereas CentOS 5.5 is still receiving security updates.

~~~
pmontra
But Ruby 1.8 is EOL. I also have some customers running Rails 3.0 with 1.8.7.
I told them they have to rely on good luck not to be hacked, and since they
also made the financial decision not to upgrade, they are running an
unsupported language version on maybe an unsupported OS (can't remember which
OS they're using).

~~~
krunaldo
Assuming they are using RHEL/CentOS 6:

You can get supported Ruby 1.9.3 on RHEL 6 or CentOS 6 via Software Collections:
[https://www.softwarecollections.org/en/scls/rhscl/ruby193/](https://www.softwarecollections.org/en/scls/rhscl/ruby193/)
or
[https://wiki.centos.org/AdditionalResources/Repositories/SCL](https://wiki.centos.org/AdditionalResources/Repositories/SCL)
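For reference, once the SCL repo from one of those links is enabled, using the
collection looks roughly like this (take the exact package/collection names
from those pages; this is a sketch, not a verified transcript):

```shell
yum install ruby193              # the collection package, per the links above
scl enable ruby193 -- ruby -v    # run one command against the SCL ruby
scl enable ruby193 bash          # or open a shell with it first on $PATH
```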

Unless they are unwilling to upgrade their Rails and are using the Ruby
version as an excuse :) Best of luck to you!

~~~
darkr
Software Collections aren't "supported" in the same fashion that core Red Hat
packages are (i.e. timely security fixes, backported if need be, for the
lifetime of the OS release).

From the ruby193 SCL page you linked to:

"Community Project: Maintained by upstream communities of developers. The
software is cared for, but the developers make no commitments to update the
repositories in a timely manner."

~~~
krunaldo
Yeah, if they are in CentOS land it will be a bit touch and go (as usual).

But if you have a Red Hat subscription they are fully supported. I should have
pointed that out in my first comment though, thanks for bringing it up :)

"All Red Hat Software Collections components are fully supported under Red Hat
Enterprise Linux subscription terms of service. Components are functionally
complete and intended for production use. " [0]

[0]
[https://access.redhat.com/products/Red_Hat_Enterprise_Linux/Developer/#rhscl=&dev-page=5](https://access.redhat.com/products/Red_Hat_Enterprise_Linux/Developer/#rhscl=&dev-page=5)

------
jafingi
I'm sticking with CentOS / RHEL on my cloud servers. I've tried Ubuntu Server,
but I like CentOS more. Also, 5 years of LTS is just not enough for production
environments IMO.

------
brillenfux
I will never forget how they handled the Oracle Java license change debacle.
No matter what they are saying now, that was a terrible show of ignorance.

They might be a solid choice in the future, but as of yet I haven't seen ANY
reason to use it over Debian or CentOS.

And cloud deployments will abandon Ubuntu for something smaller soon enough.

------
dogma1138
Why is this surprising? It was, and for the most part still is, the only Linux
distro that provides actual LTS releases with 5 years of support guaranteed
for free.

Sure, you can buy RHEL, SUSE and some other commercial releases, but those
cost money; alternatives like Debian LTS are very new initiatives, and Debian
Stable otherwise only provides support between stable releases, 6-18 months.

Ubuntu guarantees security updates for the OS and common components like
Apache; others don't.

If you are an organization, that's very important, especially if you need to
comply with various regulations, e.g. PCI-DSS.

Ubuntu is also one of only a few distros that is supported across all cloud
providers; it was the 1st distro to be supported on Azure, and many of the
smaller cloud providers start with Ubuntu or use Ubuntu as the core of their
in-house Linux guest.

Amazon might have been able to topple Ubuntu with their Amazon Linux AMI, but
as it is not available for download and you cannot run it outside of AWS, it
will never reach any true leadership position. If you can't have it in house
for development you will choose something else, and the 1st rule of DevOps is:
deploy what you develop on....

In a few years CoreOS might get a big enough market share, but currently
CoreOS is too complicated, and it locks you into using containers, which is
overkill for most cloud deployments these days unless you are a huge
enterprise. If you are running a small web portal on 1-5 servers, Docker and
other containers will just get in the way.

------
jerrac
Other comments have touched on most of the reasons why Ubuntu is a good
choice, or why it would be a bad choice in some situations.

For me, it was the OS I was used to. And, as I've had to deploy a few CentOS
and OES servers as well, I much prefer how Ubuntu/Debian configures things.

Apache, PHP, networking, cron, etc. are all much easier to configure and
harden on Ubuntu than on CentOS. The only thing I've found CentOS does better
is starting and stopping iptables, and that's solved with a quick apt-get
install iptables-persistent.
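For the record, that quick fix looks roughly like this (the rule is just an
example; on trusty-era Ubuntu the save step is provided by the package's
netfilter-persistent tooling, older releases use the init script instead):

```shell
apt-get install iptables-persistent            # offers to save current rules
iptables -A INPUT -p tcp --dport 22 -j ACCEPT  # example rule to persist
netfilter-persistent save                      # restore automatically on boot
```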

Most of this opinion comes from writing Ansible roles that work on both Debian
and Red Hat systems. Ubuntu was always easy to get right. CentOS always had
some weird thing that required an annoying amount of work to work around.
(Like it doesn't run the Postfix smtpd in a chroot while Ubuntu does, meaning
I had to have different Postfix settings in master.cf on CentOS than I do on
Debian.)

~~~
antod
I tend to agree with all that.

My first Linux was Red Hat 5.1 (not RHEL), but I ended up switching to Debian
Slink and OpenBSD for whatever reason. Since then I just personally find
Red Hat-based distros a bit weird or clunky (based on lack of familiarity),
and prefer to stick with the Debian/Ubuntu side of the fence.

Reason to prefer Ubuntu LTS over Debian Stable: fixed 5yr support period
instead of a variable period.

Reason to prefer Debian Stable over Ubuntu LTS: all packages in the repo are
in the same security patching regime. With Ubuntu you have to be a bit careful
using packages from Universe or Multiverse.

~~~
jerrac
I did not know Debian didn't have fixed lifetimes for Stable.

Though they do have some form of LTS. [0] I do find it odd that the Debian
Security team doesn't manage it.

Part of my Red Hat dislike is unfamiliarity. But when I sit down and think
about why I prefer how Ubuntu does something vs. how CentOS does something, I
usually find Ubuntu methods make more sense.

For example, how the default Apache mods are enabled. Ubuntu has the mods-
enabled directory. CentOS has the LoadModule lines and mod config in
httpd.conf. So to disable an unneeded module, you have to find the LoadModule
lines, remove them, and then find all the various default config that is now
broken. In Ubuntu, you just remove the symlinks in mods-enabled.
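A toy model of the difference, run against throwaway paths under /tmp rather
than a live server (a2dismod effectively just removes the symlink; the CentOS
side is the LoadModule hunt described above):

```shell
# Debian/Ubuntu: a module is enabled iff a symlink exists in mods-enabled.
mkdir -p /tmp/apache2/mods-available /tmp/apache2/mods-enabled
touch /tmp/apache2/mods-available/status.load
ln -sf ../mods-available/status.load /tmp/apache2/mods-enabled/status.load
rm /tmp/apache2/mods-enabled/status.load  # what "a2dismod status" boils down to

# CentOS: the same change means commenting out LoadModule lines in httpd.conf.
printf 'LoadModule status_module modules/mod_status.so\n' > /tmp/httpd.conf
sed -i 's/^LoadModule status_module/#&/' /tmp/httpd.conf
```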

[0] [https://wiki.debian.org/LTS](https://wiki.debian.org/LTS)

~~~
antod
A new Debian Stable comes out when it is ready - after a long freeze.

Originally (from memory) Debian stable was only supported until the next
stable release which could be anywhere from 18-36 months later (release
timeframes seem more consistent these days).

Then they started supporting an oldstable (the previous stable release) for an
extra 1yr timeframe.

Debian LTS is a newer initiative, but it doesn't quite have the same security
service level as say Debian Stable. Not all patches make it to LTS, and those
that do arrive noticeably later. Debian is trying to find ways to improve this
though.

------
Zigurd
There's also Android development driving use of Ubuntu as a desktop OS for
developers. I do enough embedded work that I need to build Android-based
embedded systems for some projects. Increasingly, mobile software projects
need to be "full stack," too, with purpose-built servers for app projects.

------
geggam
When interviewing candidates, if you correlate the "only Ubuntu" folks to
skill sets, it becomes obvious that anyone can use Ubuntu. Even in the cloud.
The lowest threshold to entry to the cloud obviously should be the most
common.

------
hharnisch
With containerization growing in popularity, I suspect this will change.
Ubuntu-based images are huge and include more features than needed to run
basic web servers.

~~~
dopamean
Last I checked there was a very pared down version of Ubuntu. I've used it for
running VMs that didn't need a gui or anything like that.

~~~
hharnisch
Could you share a link to docker hub? I'd love to check that out.

------
toddan
I don't get why Canonical doesn't invest in the development experience. Forget
the Ubuntu mobile bullshit, it won't grab the market anyway.

But there are tons of developers out there that use Ubuntu. They have a great
opportunity to create a complete IDE-to-cloud platform, much like Visual
Studio and Azure, but with open source tools and a great Linux system.

------
wtbob
It's funny how choice of distro can be so revealing. For example, when I see
Red Hat, it's pretty obvious that the software is from, or the person works
for, a Big Enterprise, with Big Serious Enterprisey Stuff, and doesn't really
care about open source, let alone free software (as an aside, that's kinda
sad: Red Hat was the first distro I used, and I loved it for years longer than
I should have).

Or when I see Debian, I know that the system or person actually _is_ serious,
is likely to have a good operations and sysadmin mindset and will probably
Just Work™.

Or when I see Gentoo, I see a kindred, albeit younger, soul.

When I see 'FROM ubuntu' in a Dockerfile, my heart sinks: it's likely written
in JavaScript or Ruby; it likely Greenspuns heavily. For just about any server
load, vanilla Debian is going to be a superior choice: better-engineered,
smaller and lighter-weight. As far as I can see, there is almost _never_ a
good reason to use Ubuntu; it's just that in the eyes of so many folks it _is_
Linux.

Oh well, at least it's not OS X.

~~~
SEJeff
For a company that supposedly doesn't care about open source, they have many,
MANY more open source developers than Canonical does, or likely ever will
have. Also, I suspect quite a few Red Hat employees would take strong offense
at your comment that they don't care about Free Software. Funny, Red Hat never
had the kind of licensing issues Canonical did, with terms that weren't GPL
friendly, which Canonical has since fixed:
[https://www.fsf.org/news/canonical-updated-licensing-terms](https://www.fsf.org/news/canonical-updated-licensing-terms)

I'll totally agree with you on Debian vs Ubuntu, but on Red Hat vs Debian...
we can agree to disagree. Here is a paste of an email I wrote up to some
coworkers on the subject:

======================================================

RPM starts off with the concept of pristine sources. It is vehemently rejected
that a maintainer of an RPM package use a non-upstream tarball or change
_anything_ without a patch that is in version control. This is not the case
with dpkgs.

RPM

===

• Stores ownership, permissions, and checksums in the rpmdb. This allows for
tools like [1] which are entirely impossible to re-create as a dpkg
equivalent. There is no equivalent of the insanely handy "rpm -V" in dpkg.
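To show what that buys you, an illustrative "rpm -V" run (package name and
output invented for the example; each letter in the flag column marks one
mismatch against what the rpmdb recorded at install time):

```shell
rpm -V openssh-server
# Sample verify output for a locally edited config file:
#   S.5....T.  c /etc/ssh/sshd_config
# S = size differs, 5 = MD5 checksum differs, T = mtime differs.
```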

• The checksum is part of building an RPM package. The "debsums" functionality
is not actually required for all packages. In fact, until Ubuntu got their act
together and started fixing a LOT of stuff, many Debian packages didn't have
their checksums in the db.

• Changes are atomic. They use the bdb transactions and rollback. Either
something was installed (via cpio) or it was not. The Debian package manager
uses flat text files! Those flat text files live under /var/lib/dpkg [Figure 1
below]. There are about 3-4 of these files per package, and they oftentimes
get corrupted, resulting in impossible-to-uninstall packages. This simply
doesn't exist with rpms.

• The Debian package format lacked multiarch support until Jan 31, 2012 [2].
Up until this version, installing a 32-bit deb on a 64-bit operating system
involved creating a full 32-bit system chroot (I shit you not!), with hundreds
of megabytes of silliness. RPM has had multiarch basically since the majority
of Fedora compiled with 64-bit compilers (around early 2005).

• Package creation. RPM packages have one command, rpmbuild, for creating a
binary or source rpm. You have a single file, ${package_name}.spec, and the
source tarball. That is all you need for an rpm. If you want to build the
package in a "clean room" chroot, you can use mock, which runs as non-root.

For creating a deb package, you have dpkg-source, dpkg-buildpackage, dch,
etc. For debs: do you use debhelper or do you use cdbs? What version of
debhelper, what version of cdbs? Which is deprecated and which is the
"preferred" way? You have to edit the control file, the rules file, the
package list file; the changelog has to have the perfect format or all hell
breaks loose, etc. If you don't put the exact same info in the control file
and the package.dsc, woe be unto you! Once you've got that all done, you have
to create a "debian source package". Don't get me started on that stupidity;
seriously, it is worse than this entire thread.
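To give a feel for the single-file format, a hypothetical minimal spec (every
name and path here is invented for illustration; you'd build it with
"rpmbuild -ba hello.spec" next to the tarball):

```
Name:           hello
Version:        1.0
Release:        1%{?dist}
Summary:        Toy package to illustrate the spec format
License:        MIT
Source0:        hello-1.0.tar.gz

%description
Toy package; everything rpmbuild needs lives in this one file.

%prep
%setup -q

%build
make

%install
make install DESTDIR=%{buildroot}

%files
/usr/bin/hello
```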

• Multiple utilities. There is rpm. Then there is dpkg, dselect, dpkg-query,
dpkg-reconfigure, dpkg-deb. dpkg-<TAB><TAB> results in 35 results on my test
Ubuntu box, almost 30 for deb<TAB><TAB> (mostly debconf stuff), and almost 70
for dh_* for debhelper grossness. There are a couple of helpers like rpmquery
(a shortcut for rpm -q) or rpmverify (a shortcut for rpm -V), but they are
symlinks back to rpm for convenience. One utility, one man page, less
ambiguity.

• Templating. An RPM spec file is simply a shell script with some
substitutions. Debian packages are all built using some extremely customized
autotools and autotools-like macros, each with conflicting versions and
competing implementations (try to figure out whether you're supposed to use
cdbs [3] or debhelper [4]). Both debhelper and cdbs exist because it is so
impossibly hard to build Debian packages by hand without some serious pain.
There are macros for RPM spec files, but not even remotely of the same
complexity or necessity.

• With dpkg, it is possible to get into states which are impossible to
resolve with the CLI utilities (even dpkg --configure -a). This always results
in having to manually edit the pre/post hacky scripts under /var/lib/dpkg and
is serious voodoo black magic that only experts should ever do. The problem is
that it isn't uncommon. If the pre/post scripts ever do fail badly enough that
you can't fully remove a package, you can do rpm -e --noscripts. Debian
packages have this "rc" state where they are partially installed /
uninstalled, but not fully either. Then you have to purge them using dpkg
--purge, and that is assuming that you've successfully hacked up the scripts
that read the plain text files under /var/lib/dpkg. The entire design is
unbelievably fragile. They make it a bit less fragile by writing an ENORMOUS
Debian packaging policy [5] to try to get people to work around silly
limitations in the software via policy. This is against one of the fundamental
design choices of rpm, which is that package installs should be atomic. It is
either installed, or not installed. There is no "I'm half installed" status
for rpm packages.
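For completeness, the usual incantations for the messes described above
("some-package" is a placeholder, not a real package):

```shell
dpkg -l | awk '/^rc/ {print $2}'  # list packages stuck in the "rc" state
dpkg --purge some-package         # purge leftover config/maintainer scripts
rpm -e --noscripts some-package   # rpm's escape hatch when scripts are broken
```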

Hopefully this is a reasonable technical defense of rpm's superiority over
dpkg. What blows my mind is that Ian Murdock made Debian after Red Hat Linux
existed but before YellowDog wrote yum. He had time to study the internals of
RPM and design something superior. Instead, he went NIH and invented something
that on most levels to this day is still technically inferior. The yum vs apt
debate isn't nearly as lopsided: old apt > old yum, but new yum is
unbelievably > new apt. That is for another day, and only if you're
interested.

[1]
[http://www.digitalprognosis.com/opensource/scripts/restoreperms](http://www.digitalprognosis.com/opensource/scripts/restoreperms)

[2] [https://lwn.net/Articles/485349/](https://lwn.net/Articles/485349/)

[3] [http://build-common.alioth.debian.org/cdbs-doc.html#id2504855](http://build-common.alioth.debian.org/cdbs-doc.html#id2504855)

[4] [https://joeyh.name/code/debhelper/](https://joeyh.name/code/debhelper/)

[5] [https://www.debian.org/doc/debian-policy/](https://www.debian.org/doc/debian-policy/)

~~~
worklogin
I don't have the knowledge to support or contest anything except your last
sentence. yum's UI is the only thing I like more than apt-get's. But yum
doesn't have the concept of depends vs. recommends, so installing nginx, for
example, REQUIRES pulling down GeoIP libraries that are entirely unnecessary.
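apt at least exposes that split on the command line, so soft dependencies can
be skipped per install (nginx reused from the example above; whether a given
library is a hard Depends or a soft Recommends is the packager's call):

```shell
apt-get install nginx                          # pulls Depends plus Recommends
apt-get install --no-install-recommends nginx  # hard Depends only
```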

~~~
darkr
RPM 4.12.0 brought in support for weak dependencies, which implements the tags
Recommends, Suggests, Supplements and Enhances, providing functionality
analogous to apt's:
[http://rpm.org/wiki/Releases/4.12.0](http://rpm.org/wiki/Releases/4.12.0)

This is in Fedora 21, so should be in RHEL 8.

~~~
lisivka
The newest version of RPM, 4.13, will implement boolean dependencies, e.g.
Requires: (foo if bar) or Requires: (foo or bar or baz), which are far more
usable than soft dependencies.

