
What I learned while securing Ubuntu (2015) - jessaustin
https://major.io/2015/10/14/what-i-learned-while-securing-ubuntu/
======
rdtsc
And that is a major reason the military and lots of enterprise customers run
on RHEL. It is the seemingly uncool and boring stuff like having STIGs,
FIPS 140-2 certification, EAL4, etc.

And it spreads like a virus, in a way. Say you use Alpine / Gentoo / Ubuntu.
You get your first enterprise or DOD customer. They want an on-prem version of
your service. Now you either have to not sell it to them (but it so happens
those customers also come with loads of cash they're willing to drop in your
lap), or support both RHEL and Ubuntu. So after a while you start to wonder
why bother supporting both, and you switch to a CentOS/RHEL-only stack.

I've seen it happen, people grumbled and were angry, but they still would
rather have loads of cash dropped in their laps than stick to their preferred
OS.

A couple of years back, I remember Mark Shuttleworth inquiring about what it
would take for Ubuntu to get some uptake with US govt customers. I remember
emailing him a list of some of those requirements, and yeah, it is very
expensive and time-consuming to get those red-tape stamps on your product. I
don't know if anything has happened with Ubuntu in that regard since.

(You can also get exceptions depending on who you sell to, but that only
happens if you have friends in high places who can grant them. Say you sell to
special-ops teams, for example.)

~~~
chrisper
My only problem with CentOS is that it is "outdated." A few times this was
annoying because I hit a bug that was fixed in newer versions. I understand
that being outdated is a feature of CentOS, but it comes with a trade-off.

I switched my servers to Ubuntu for now.

~~~
merb
That's not true compared to Ubuntu. CentOS 7 included Java 8 while Ubuntu
14.04 did not, and both were released in 2014. And on CentOS 6 I'm not sure,
but I think EPEL (which is far more official than any Ubuntu PPA) has OpenJDK
8 for CentOS 6. There is some software that is maybe "old" but not "outdated"
on CentOS, but as far as I know most software runs really well on CentOS. The
only things with poor support on CentOS at the moment are OpenSSL 1.0.2 and
newer nginx/HAProxy.

~~~
chrisper
How is it not true? CentOS is still at version 7 with a 3.x kernel, while
Ubuntu is now at version 16.04 with a 4.x kernel. Many os-level packages are
not going to get updated until a new major version of RHEL comes out.

~~~
6581
Why are you mentioning 7 and 16.04? Those numbers mean nothing when you
compare different distributions.

~~~
marcoperaza
Those are the current versions of each. The comparison is between the kernel
versions, not the distro version numbers.

~~~
6581
"still at version 7" strongly implies a comparison of the distro version
numbers.

~~~
JdeBP
No, it does not. It implies that one has been at a particular level for some
time, whilst the other has only reached its current level recently. chrisper
was _very clearly not_ comparing the release numbers, as xe said. One only
need read xyr prior comment in this very same thread where xe was talking
about outdatedness of what is currently available.

------
ckastner
> On the Ubuntu side, you can use the debsums package to help with some
> verification:

> [debsums invocation]

> But wait — where are the configuration files? Where are the log and library
> directories? If you run these commands on an Ubuntu system, you’ll see that
> the configuration files and directories aren’t checked.

Of course it can check them. Quoting debsums(1):

    
    
      OPTIONS
           -a, --all
                  Also check configuration files (normally excluded).
    
           -e, --config
                  Only check configuration files.
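
For reference, typical invocations look like this (a quick sketch; "postfix"
is just an illustrative package name):

      # verify every file shipped by the package, configuration files included
      debsums -a postfix

      # list only the configuration files that differ from the packaged versions
      debsums -ce postfix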

------
the_common_man
From a comment in the blog:

    
    
      You can decide if a daemon should start on install by adding a
      /usr/sbin/policy-rc.d file. It is described in
      /usr/share/doc/sysv-rc/README.policy-rc.d. Notably, just putting
      "exit 104" should give you something similar to what you expect on
      Red Hat.
    
      You can get the original MD5 sums of the configuration files with
      dpkg --status. You could also just install tripwire, which would
      cover all files.
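
For what it's worth, those original MD5 sums live in the Conffiles: field of
the package's status entry; a rough sketch (package and file names are just
illustrative, and only files the package marks as conffiles appear there):

      # dpkg records the original md5sum of each conffile it shipped
      dpkg --status apache2 | sed -n '/^Conffiles:/,/^[^ ]/p'

      # compare against what is currently on disk, e.g.
      md5sum /etc/apache2/apache2.conf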

~~~
regularfry
Policy-rc.d is an _awful_ UI for a choice admins often want to make.

~~~
vbernat
How is that an awful UI? What should it be? It's a flexible way to get what
you want. You want to automatically start the daemon on next boot, but not on
install? You can do it. You want to prevent a daemon restart on upgrade? You
can do it. You want to whitelist some daemons, like the SSH server? You can.

Ubuntu inherits this behaviour from Debian. Some people think that daemons
should be started after install because the user installed the daemon to run
it; some others don't. A choice has to be made. Packages where the daemon is
an optional part of the functionality usually come with either the daemon
disabled (e.g. rsync) or with the daemon in a separate package.
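
For what it's worth, the whitelisting case can look something like the sketch
below. Per README.policy-rc.d, exit code 101 means "action forbidden by
policy" and 104 means "action allowed"; the matching on the initscript ID is
illustrative only:

      #!/bin/sh
      # /usr/sbin/policy-rc.d -- called as: policy-rc.d [options] <initscript id> <action> ...
      # skip any leading options such as --quiet
      while [ "${1#--}" != "$1" ]; do shift; done
      case "$1" in
          ssh) exit 104 ;;   # allow the SSH daemon to be started/restarted
          *)   exit 101 ;;   # forbid everything else from starting automatically
      esac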

~~~
regularfry
> How is that an awful UI?

Let's start with the easy stuff: how long did it take you to notice that the
example in the GP's comment was wrong? Did you have to check Google, or do you
have the `policy-rc.d` magic number exit codes memorised?

The `policy-rc.d` mechanism moves the decision as to whether to run a daemon
on installation to a _really_ unexpected place. What should it be? Well, what
I always ended up googling for was an option in the `apt` system, because
that's what handles installation. I want `apt-get install foo --no-start-daemon`.
I want something self-documenting I can set in `/etc/apt/preferences`, if I
ever want the choice persisted.

> You can do it

doesn't make for a good UI. It makes for a Turing tarpit. Systems should make
the common case easy; in my experience the common case is _either_ you want a
fully-working system, with everything started automatically and usable
defaults in place, _or_ you want services stopped so that you can configure
them and start them later. I've never heard of anyone wanting per-service
whitelisting - not to say that people don't do this, just that it's
sufficiently rare in my experience that having the default optimised for that
use case _sucks_.

> A choice has to be made.

I'm not arguing about the default. I'm saying that the specific mechanism you
have to use to change the default behaviour shouldn't take googling to figure
out.

~~~
JdeBP
> _What_ [place] _should it be?_

Perhaps

    
    
        echo disable 'foo.*' >> /etc/systemd/system-preset/20-regularfry.preset
    

which, as can be seen, some people hope Ubuntu will one day respect, per its
manual?

* [https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=772555](https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=772555)

* [http://manpages.ubuntu.com/manpages/xenial/en/man5/systemd.p...](http://manpages.ubuntu.com/manpages/xenial/en/man5/systemd.preset.5.html)

Your several cases are, respectively, the aforegiven for the administrator-
chosen per-service controls,

    
    
        echo enable '*' >> /etc/systemd/system-preset/99-default.preset
    

for the fully auto-starting system (unless specifically overridden) specified
by the administrator,

    
    
        echo enable '*' >> /usr/lib/systemd/system-preset/99-default.preset
    

for the fully auto-starting system (unless overridden by administrator or
specific packaging) specified by the operating system builders,

    
    
        echo disable '*' >> /usr/lib/systemd/system-preset/99-default.preset
    

for the fully disabled requiring manual start system (unless overridden by
administrator or specific packaging) specified by the operating system
builders, and

    
    
        echo disable '*' >> /etc/systemd/system-preset/00-admin.preset
    

for the fully disabled requiring manual start system specified by the
administrator.
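
(For completeness, the preset files above are normally exercised with
something like the following; whether package installation on Ubuntu actually
honours them is exactly what the bug report linked above is about, so treat
this as a sketch.)

      # re-apply all preset files to the installed units
      systemctl preset-all

      # or apply and inspect a single unit
      systemctl preset foo.service
      systemctl is-enabled foo.service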

~~~
regularfry
Perhaps, and I realise this may be a revolutionary thought, this is not
somewhere systemd needs to exert itself. The `system-preset` interface is
barely any better than `policy-rc.d` from the point of view of someone
installing a package.

~~~
JdeBP
You'll find yourself in opposition to Major Hayden, the author of the
headlined article, who wrote in May:

> _automatic management of running daemons shouldn’t be handled by a package
> manager._

There's also the argument that determining whether a daemon should or should
not be automatically started _very much is_ the province of the service
management subsystem, as automatic service start/stop/restart is very much
what service management properly deals in.

~~~
regularfry
You're confusing interface with implementation.

------
Daviey
I'm not sure there is anything new here... Ubuntu did start down the path of
getting FIPS 140-2 and EAL, but I don't believe it was ever completed. The
main reason for this is that paying users aren't screaming for it. It is a
significant investment for a low return.

There have been a number of external efforts at STIG compliance, but nothing
formal. You can find a bunch of them on GitHub.

Most of the technical items that Major raises aren't really issues IMO.

    
    
      - Services starting by default: Don't like it? Use policy-rc.d to stop it.
        This can be done as part of the install, or via Ansible prior to
        installing packages.
      - AIDE doing a full scan: Change the config, but as a default, covering
        everything is safer than leaving gaps (considering his effort is part of
        an Ansible project, this would seem logical!).
      - Verifying packages: Don't rely purely on the package manager for this!
        Use tripwire, which is explicitly designed for it... and debsums (with
        -a, which was omitted in the article [but isn't perfect]).
      - Firewall: permissive by default is perfectly acceptable for most
        installations.  That is what post-install config is for.  The target
        audience of Ubuntu Server is cloud, where IaaS-provided firewalls
        (security groups) provide the default protection, and for bare metal
        there should really be hardware firewalls as the first line of defence.
        However, running # ufw default deny switches to a whitelist (see the
        sketch just after this list).
      - LSM: AppArmor is a pretty good default, but SELinux can be switched to
        pretty easily.  The main issue against SELinux is that many find it hard
        to use, which means systems are left insecure.  As an out-of-the-box
        solution, AppArmor does provide some confinement, which for many is
        enough.  When was the last time you saw a how-to that started with
        disabling the LSM? (Not as common as it used to be.)
    
    
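
A minimal sketch of the ufw switch mentioned in the firewall item above (the
SSH rule is just an illustrative example so you don't lock yourself out):

      # move to a default-deny (whitelist) policy for incoming traffic
      ufw default deny incoming
      ufw default allow outgoing

      # explicitly allow what you actually need, e.g. SSH, then enable
      ufw allow ssh
      ufw enable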

Generally, the out-of-box security experience isn't awful, as it fits the
majority of users. Hardening on a per-deployment basis is expected, but there
hasn't been enough standardisation around this - the Ansible work Major is
doing will be a good contribution here.

Whilst I'm a bit down on the actual content of this article, I'm really
excited that Major is working on this, and it will be a good thing for Ubuntu
when his work is finished.

-- former Ubuntu Server Engineering Manager, currently working in cloud/security.

------
daveguy
He brings up the fact that Ubuntu doesn't enable the firewall by default. This
is just a poor decision on the part of the Ubuntu maintainers. Sure it doesn't
have any ports open by default, but installing a poorly configured package
(that does leave a port open) means a potential security hole that could
easily be prevented. Incoming firewall should be the rule, not the exception.

~~~
GauntletWizard
So, what you've done is add a new, non-obvious step to every installation,
that will probably get worked around by the package maintainers very shortly
after (they'll add a step to the installer to add a firewall rule). Your
suggestion is pure human churn, and I seriously doubt that it will make it any
more likely that admins will pay attention)

~~~
daveguy
1) It is an obvious step for system administrators, and for package
maintainers too.

2) Red Hat and all the other Linux systems don't seem to have a problem with
this simple additional step. Ubuntu even built a wrapper to make it easy to
configure iptables firewalls (ufw) -- but it isn't enabled by default.

3) Better that maintainers add the additional step (which will remain correct)
than have each administrator do it (unless needed). That requires maintainers
to _at least give a passing consideration to security_. Without it, you set it
up and yay, it works! No thinking required! No thinking about whether you may
have just left an untested system wide open. I would much prefer an extra step
to default insecurity.

Even Microsoft gets the default firewall thing.

------
jbicha
Virtually everything here also applies to Debian. aide, postfix, and debsums
are completely unmodified from Debian in Ubuntu. dpkg is virtually the same.

The only part that is different is that Debian doesn't enable either AppArmor
or SELinux by default, and SELinux on Ubuntu is slightly different from
SELinux on Debian.

------
ericcholis
> The AIDE package is critical for secure deployments since it helps
administrators monitor for file integrity on a regular basis. However, Ubuntu
ships with some interesting configuration files and wrappers for AIDE.

Critical for secure deployments, and hosted on SourceForge...

~~~
seanp2k2
Please excuse my ignorance, but what are better options for code hosting and
collaboration?

~~~
gravypod
Anything that doesn't alter your binaries from time to time.

I seem to remember that being a "Feature" of a desperate SourceForge not too
long ago.

~~~
x1798DE
Allegedly the most recent owners have reversed the earlier policies and are
trying to reclaim SourceForge's previous sterling reputation, FYI. Seems
reasonably credible.

That said, I'm one of those people who bothers to check the signatures on
binaries before installing them, so as long as I trust the signature and dev,
doesn't much matter to me if I downloaded it from xxx-malware-binaries.biz.ru.

~~~
vacri
You get that 'for free' with apt-based systems, as packages are GPG-signed by
default, and apt will throw an error if the signature is missing or not in
your trust store.

------
pnathan
Fascinating that the lede here is the certifications. While some entities put
great stock in so-and-so certification, I have generally not put any weight in
certification X or Y, as they are pretty easy to manipulate if you are
interested in doing so.

~~~
Spooky23
I understand the cynicism over certifications like this, but they have value.

In the case of FIPS 140-2, you have the assurance that the crypto module, say
OpenSSL, was verified to work consistent with the FIPS spec, and that it was
built in a manner consistent with that test.

Now I get that FIPS has its issues, but the vast number of organizations lack
the knowledge, skill and time required to evaluate the correctness of crypto
code.

~~~
pnathan
Yes, I'm fully aware that OpenSSL is an _excellent_ piece of software, well
certified... Then again, are the _updates_ to OpenSSL FIPS-certified?
Certifications sometimes expire when the things they cover are updated.

TBH, the certifications seem like they are mostly a con to push bad quality to
ignorant people. I think if I were securing Ubuntu, I'd analyze it based on
threat models for the organization, then derive the specified behavior from
that; only then would the manuals for the software come out. Analyze based on
the needs of the enterprise I am involved in... Maybe this is too much of a
hacker attitude, but I really don't see why you'd start with certs or even
_care_. Better to have a trusted firm evaluate for _your_ organization, IMO.

~~~
Spooky23
We're all familiar with the quality piece of software OpenSSL is.

Most people regard OpenSSH as very well written software. But guess what? Poor
procedural controls at Debian meant that lots of systems generated weak keys.
Certification of a known, verified build process helps with that kind of
problem.

~~~
nickpsecurity
Remember that those are the most broken of the certification criteria. They
just say it has certain features with a structured development process.
Jonathan Shapiro had a nice write-up after Windows got EAL4+:

[https://web.archive.org/web/20040214043848/http://eros.cs.jh...](https://web.archive.org/web/20040214043848/http://eros.cs.jhu.edu/~shap/NT-EAL4.html)

Whereas, medium to high assurance evaluations were more meaningful as they had
stronger assurance requirements. Precise ways of looking at system, careful
implementation, every feature corresponding to a requirement, covert channel
analysis, pentests, SCM, trusted distribution, and so on. _Those_ say quite a
bit about what level of trust one can put into software. That's how it was
originally meant to be done. So, you can put a little more trust in the TOE.

Whereas, for EAL4, there's no reason to believe it's secure at all given the
criteria are designed for "casual or inadvertent attempts" to breach security.
Internet and insider threats are slightly more hostile than that. ;)

------
jiang01
That's a kludgy workaround for the package manager (arguably incorrectly)
starting services before you have configured them; wouldn't it be preferable
to fix that problem instead?

~~~
choosername
Absolutely. It's a helpful default if you have no idea how to run it
otherwise. It's a far worse trap if this behaviour comes unexpectedly.

------
skywhopper
I wish this article had gone a bit deeper than just being a couple of first-
glance whinges. Sure the daemon autostart is a reasonable complaint, but it's
easily worked around. Presumably the folks who are concerned about locking
their OSes down this tightly are not building their images on a public
network.

------
sirmike_
This is a good article, but the author missed the broader CONSUMER/dev consumer
mission of Ubuntu. Its sole purpose and number one priority is usability
first, followed and shaped by quick deployment as number two. Ultra-top-tier
hardened security isn't in the top mix. Easy answer. Having said this, I'm
glad someone is taking a hammer and chisel to it so that it can be a better
platform. The same thing can be said of Windows or Mac: consumer-facing,
off-the-shelf platforms simply weren't designed for hardened security.

~~~
rodgerd
> the broader CONSUMER/dev consumer mission of Ubuntu.

The consumer mission which is enhanced by insecure-by-default package
installation?

~~~
sirmike_
Enhanced and driven by ease of use. I'm not sure what parts you mean by
default package install: the default out-of-the-box setup with guaranteed
regular security updates, or is this a criticism of how Ubuntu achieves that?

------
webwanderings
Is there a good read anywhere on hardening RHEL?

~~~
amjo324
Although not a good read (in terms of being engaging or interesting), you'll
find that a lot of security professionals will use something like the Center
for Internet Security (CIS) benchmark when doing a formal audit or
configuration review of RHEL (or any major Linux distribution for that
matter). They will run a command line tool that will check the system's config
against every item in the benchmark. The tool will generate a report with
pass/fail outcomes for each item plus hardening advice. It's not perfect but
it can be a decent starting point before you do further manual analysis of
your system.
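
As one concrete illustration of that kind of automated check: on RHEL/CentOS
the SCAP tooling can be driven roughly as below. (The CIS benchmark itself is
usually checked with CIS's own tool; the package names, profile, and content
path here are assumptions that vary by release, so treat this as a sketch.)

      # install the scanner and the bundled security-guide content
      yum install -y openscap-scanner scap-security-guide

      # evaluate the running system against a profile and write an HTML report
      oscap xccdf eval \
          --profile xccdf_org.ssgproject.content_profile_pci-dss \
          --report /tmp/oscap-report.html \
          /usr/share/xml/scap/ssg/content/ssg-rhel7-ds.xml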

More info about the CIS benchmarks:
[https://benchmarks.cisecurity.org/downloads/benchmarks/](https://benchmarks.cisecurity.org/downloads/benchmarks/)

The RHEL 7 benchmark:
[https://benchmarks.cisecurity.org/tools2/linux/CIS_Red_Hat_E...](https://benchmarks.cisecurity.org/tools2/linux/CIS_Red_Hat_Enterprise_Linux_7_Benchmark_v1.0.0.pdf)

------
geerlingguy
Note that finally, with 16.04, you can block the (ridiculous) auto start
behavior of daemons installed via apt using systemd features:
[https://major.io/2016/05/05/preventing-ubuntu-16-04-starting...](https://major.io/2016/05/05/preventing-ubuntu-16-04-starting-daemons-package-installed/)

It's always angered me that packages start running before I have a chance to
configure them... With some, like MySQL, it means if I want to change certain
settings, I also have to clean up some other files before restarting the
service or the package will not start. On CentOS/RHEL, this is never an issue.
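
One commonly used approach on systemd-based releases (not necessarily the one
in the linked article) is to mask the unit before installing; whether a given
package's maintainer scripts respect the mask can vary, so this is a sketch:

      # prevent the unit from being started when the package installs it
      systemctl mask mysql.service

      # install, configure at leisure, then unmask and start
      apt-get install -y mysql-server
      systemctl unmask mysql.service
      systemctl start mysql.service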

~~~
JdeBP
Masks would be overkill, were the designed mechanism widely available on
Ubuntu 15/16. See [http://askubuntu.com/questions/779554/how-to-install-apt-pac...](http://askubuntu.com/questions/779554/how-to-install-apt-package-without-starting-systemd-process#comment1175118_779554).

------
mercurial
I'd be interested to see a comparison with NixOS, seeing as you have to
explicitly enable services and everything is declarative (assuming you use the
declarative configuration).

------
orbitingpluto
Click on his resume to be entertained.

~~~
feld
Nice, a man page for a resume is thoroughly amusing.

~~~
outworlder
And relevant to his area of expertise.

------
brians
Perhaps the reason for a short Ubuntu hardening guide is that these changes
are already incorporated in the code. I understand the value of a uniform base
configuration for a large network—say, 1000 systems or more. But surely it's
better to put that in the postinst scripts of packages.

I view the "hardening guide" requirements, particularly PCI 2.2, as an ugly
jobs program.

------
anonbanker
Ew. Definitely not interested in replacing a Gentoo Hardened (grsecurity, PaX)
install with this Redhat/NSA toy. Good luck to everyone diving down this
rabbit hole of pain.

------
polard2
> Since Ubuntu doesn’t come with a firewall enabled by default, your postfix
> server is listening on all interfaces for mail immediately. The mynetworks
> configuration should prevent relaying, but any potential vulnerabilities in
> your postfix daemon are exposed to the network without your consent. I would
> prefer to configure postfix first before I ever allow it to run on my server.

Technically, if security is your focus, then shouldn't one of your first
actions after setting up a new machine be to set iptables default action to
drop all incoming new,invalid packets anyway? I mean, I generally install the
server with nothing. Set up iptables. Then install packages and open ports.
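
A rough sketch of that kind of baseline (the rule order and the SSH allowance
are illustrative; don't lock yourself out of a remote box):

      # drop invalid packets, keep established flows, loopback and SSH,
      # then make DROP the default for everything else incoming
      iptables -A INPUT -m conntrack --ctstate INVALID -j DROP
      iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
      iptables -A INPUT -i lo -j ACCEPT
      iptables -A INPUT -p tcp --dport 22 -m conntrack --ctstate NEW -j ACCEPT
      iptables -P INPUT DROP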

~~~
smacktoward
_> Technically, if security is your focus, then shouldn't one of your first
actions after setting up a new machine be to set iptables default action to
drop all incoming new,invalid packets anyway?_

But why expect the user to do that themselves? Shouldn't security be the
default, with a secure configuration provided out of the box?

Giving the user a secure configuration which they can then opt out of if they
wish does better by the user than giving them an insecure configuration and
then asking them to opt in to security does.

~~~
vacri
If the user can't do that, then the user is not going to have the chops to
configure Postfix anyway.

Seriously, Debian is the upstream distro and it does this, and it's the other
Linux grandparent alongside RHEL. It's been around for decades, used in
production the same way RHEL and BSD are, and we haven't had Debian-based
boxes compromised left, right, and centre with these 'insecure' defaults. The
threat is imaginary, and not borne out in real-world numbers.

------
vacri
Oh, god, the lead is that "starts before configuration" non-issue. If you're
not properly firewalled to begin with, why are you installing applications?
And seriously, how hard is it to put the config file in place _before_
installing the package instead of _after_ it (apt won't overwrite it), if
you're that damned concerned about the milliseconds of "vulnerability" between
your configuration management tool installing the package and then plonking
down your config and restarting it?

And if you're manually configuring your hardened servers instead of using a
configuration tool, then you have significantly greater problems than 'omg!
starts too early!'. And even then, if you're manually installing something and
you think it's going to have a preconfig vuln, then just add '&& service
thingy stop' after your install line.

Seriously, this is a vim-vs-emacs-style non-complaint, for when you can't
think of any actual issues.
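
For completeness, the "config first, then install" flow looks roughly like
this (package and file names are illustrative; dpkg notices the pre-existing
conffile, and --force-confold keeps your copy instead of prompting):

      # put your configuration in place before the package is installed
      install -D -m 0644 files/nginx.conf /etc/nginx/nginx.conf

      # install while keeping existing configuration files, and stop the
      # service immediately if you still want to adjust things before it runs
      apt-get install -y -o Dpkg::Options::="--force-confold" nginx \
          && service nginx stop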

