What I learned while securing Ubuntu (2015) (major.io)
297 points by jessaustin on June 10, 2016 | 107 comments



And that is a major reason military and lots of enterprise customers run on RHEL. It is this seemingly uncool and boring stuff like having STIGs, FIPS-140-2 certifications, EAL-4 etc.

And it spreads like a virus, in a way. Say you use Alpine / Gentoo / Ubuntu. You get your first enterprise or DoD customer. They want an on-prem version of your service. Now you either don't sell it to them (but it so happens those customers also come with loads of cash they're willing to drop in your lap), or you support both RHEL and Ubuntu. So after a while you start to wonder why bother supporting both, and you switch to a CentOS/RHEL stack only.

I've seen it happen, people grumbled and were angry, but they still would rather have loads of cash dropped in their laps than stick to their preferred OS.

A couple of years back, I remember Mark Shuttleworth inquiring about what it would take for Ubuntu to get some uptake with US government customers. I remember emailing him a list of some of those requirements, and yeah, it is very expensive and time-consuming to get those red-tape stamps on your product. I don't know if anything has happened with Ubuntu in that regard since.

(You can also get exceptions depending on who you sell to, but that only happens if you have friends in high places who can grant those. Say you sell to special-ops teams, for example.)


My only problem with CentOS is that it is "outdated." A few times this was annoying because I had hit a bug that was fixed in newer versions. I understand that outdated is a feature of CentOS, but it comes with a trade off.

I switched my servers to Ubuntu for now.


There is https://access.redhat.com/documentation/en-US/Red_Hat_Softwa... but we didn't use it. Not sure how successful others are with it.

But yeah it was a major pain. Remember all the time wasted getting things to work or compile on outdated kernels or standard library...


That's not true compared to Ubuntu. While CentOS 7 included Java 8, Ubuntu 14.04 did not, and both were released in 2014. On CentOS 6 I'm not sure, but I think EPEL (which is far more official than any Ubuntu PPA) has OpenJDK 8 for CentOS 6. Some software on CentOS may be "old" but not "outdated"; as far as I know, most software runs really well on CentOS. The only things with bad support on CentOS at the moment are OpenSSL 1.0.2 and newer nginx/haproxy.


In general, EPEL should never override packages from the base system. Other repos do that, but EPEL's mission is to provide extra packages, not newer packages.

Otherwise it creates a mess. Someone adds EPEL, updates, and now their software, which was built against a base package, is broken because EPEL upgraded libc or something like that.
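Admins who want to enforce that policy themselves can exclude or deprioritize EPEL in the repo definition. An illustrative fragment (not EPEL's shipped config; the `priority` option assumes yum-plugin-priorities is installed):

```ini
# /etc/yum.repos.d/epel.repo (illustrative fragment)
[epel]
name=Extra Packages for Enterprise Linux 7
metalink=https://mirrors.fedoraproject.org/metalink?repo=epel-7&arch=$basearch
enabled=1
gpgcheck=1
# With yum-plugin-priorities, base repos (typically priority=1)
# always win over EPEL for any package present in both:
priority=10
# Or hard-exclude specific packages you never want from this repo:
exclude=nginx*
```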


How is it not true? CentOS is still at version 7 with a 3.x kernel, while Ubuntu is now at version 16.04 with a 4.x kernel. Many OS-level packages are not going to get updated until a new major version of RHEL comes out.


Why are you mentioning 7 and 16.04? Those numbers mean nothing when you compare different distributions.


I am not comparing the numbers in a mathematical way (I am not saying 16 is better than 7). I meant more like that packages get updates to next major versions when the OS jumps to the next major version. Somehow, I felt that Ubuntu releases new major versions more often than RHEL. But checking the release dates it seems that I was wrong.


Actually 7 is released on 2012 while 16.04 is released on 2016. You could compare 14.04 and 7 but not 16.04 and 7. Eventually you could compare CentOS 8 when it comes out this year.


No, RHEL 7 was released in June 2014, not 2012. There's no sign of RHEL 8 yet either; it would be quite surprising for it to come out this year.

https://access.redhat.com/articles/3078


> Actually 7 is released on 2012 while 16.04 is released on 2016.

Observe that that was precisely chrisper's original point.


Those are the current versions of each. The comparison is between the kernel versions, not the distro version numbers.


Even the kernel version isn't particularly helpful, as Red Hat is backporting new features into the version shipped by RHEL 6 and 7.

One specific example is the SO_REUSEPORT socket option which was added to Linux kernel 3.9, and was subsequently backported and became available since RHEL 6.5, which uses version 2.6.32 of the Linux kernel.[0]
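Whether a given kernel actually honors that option can be probed from userspace rather than by reading the version string; a quick sketch (not from the article, and it assumes python3 is available):

```shell
# Probe for SO_REUSEPORT support at runtime (illustrative).
# On kernels/libcs without the option, setsockopt raises an error.
python3 - <<'EOF'
import socket
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEPORT, 1)
    print("SO_REUSEPORT: supported")
except (AttributeError, OSError):
    print("SO_REUSEPORT: not supported")
finally:
    s.close()
EOF
```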

[0] https://en.wikipedia.org/wiki/Red_Hat_Enterprise_Linux#Kerne...


"still at version 7" strongly implies a comparison of the distro version numbers.


No, it does not. It implies that one has been at a particular level for some time, whilst the other has only reached its current level recently. chrisper was very clearly not comparing the release numbers, as xe said. One only need read xyr prior comment in this very same thread where xe was talking about outdatedness of what is currently available.


Which reminds me of 1999 when Slackware jumped from version 4 to 7, purely to keep up with competition.


I think, at least for RHEL, that is a feature.


And Debian stable; see JWZ's rant about that regarding xscreensaver...


You could always run Fedora Cloud: https://getfedora.org/en/cloud/

But you're back in the world of not having all those certifications which enterprise customers may want. On the other hand it shares common package infrastructure with RHEL so that's a bonus.

We used Fedora Cloud for a bit and I liked it. Be prepared for cutting-edge though; for example, journald rather than syslog.

We switched to Ubuntu for reasons not related to Fedora in particular, but just the ease of running an Apt repository on S3 vs. Yum/Dnf.


Cutting edge is terrible for running a business on though.


This is a good point. At Userify, we support RHEL 7+ (also CentOS), Ubuntu 14.04 LTS, and Debian Jessie for the on-site server product (we support a broader range of managed nodes). About two-thirds of our customers use either RHEL 7 or CentOS, but we do have customers running Ubuntu (for instance) in both HIPAA and PCI environments.

(We also provide cloud-based SSH key management at no charge with upgrades available, but mostly our customers there are startups, although we do have a couple of public companies on our cloud SaaS platform. Since our revenue is largely driven by enterprises and mid-sized companies that want to run their own in-house and/or in their own clouds, we'll try to accommodate any OS they prefer: normally, that's RHEL, but increasingly Ubuntu, Amazon Linux, and CentOS.)


Thanks - this is very informative, and from my previous experience at VMW I can confirm this is absolutely right. P.S. Is there a way to know who you are? I cannot recognize your username, and I would be interested in knowing who you are and whether we can get in touch.


Never mind that the chairman of the RH board is a retired general...


It's how you get contracts. You pay them, then they really pay you. Watchdog groups call it The Revolving Door.


Or lobbying. Or graft/bribery.

I suppose it depends on whether you are in the private sector, the public sector, or law enforcement.


Pro tip: do everyone available for best results. ;)


Shuttleworth inquired with you personally?


It was an open request to the community from his blog post. I emailed him first; that's how the conversation started.

http://www.markshuttleworth.com/?s=government


How does FreeBSD stack up?


In the same category as Ubuntu. There are some things you can do to market your product as an "embedded" device, if it truly is so, but that is a bit tricky. And of course, you can get exceptions; that's tricky too, and doesn't scale.


> On the Ubuntu side, you can use the debsums package to help with some verification:

> [debsums invocation]

> But wait — where are the configuration files? Where are the log and library directories? If you run these commands on an Ubuntu system, you’ll see that the configuration files and directories aren’t checked.

Of course it can. Quoting debsums(1):

  OPTIONS
       -a, --all
              Also check configuration files (normally excluded).

       -e, --config
              Only check configuration files.
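Under the hood, that flag just widens a checksum comparison: debsums compares the MD5 sums dpkg recorded at install time against the files on disk. A minimal stand-alone simulation of the idea (a temporary file stands in for a real conffile; not debsums itself):

```shell
# Simulate what debsums -a does: record a checksum, then detect drift.
tmp=$(mktemp)
echo "PermitRootLogin no" > "$tmp"          # pretend this is a conffile
recorded=$(md5sum "$tmp" | cut -d' ' -f1)   # dpkg stores this at install time
echo "PermitRootLogin yes" > "$tmp"         # someone edits the file later
current=$(md5sum "$tmp" | cut -d' ' -f1)
if [ "$recorded" != "$current" ]; then
    echo "FAILED: $tmp"                     # debsums reports mismatches similarly
fi
rm -f "$tmp"
```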


From a comment in the blog:

  You can decide if a daemon should start on install by adding /usr/sbin/policy-rc.d file. It is described in /usr/share/doc/sysv-rc/README.policy-rc.d. Notably, just putting "exit 104" should give you something similar to what you expect on Redhat.

  You can get the original MD5 sums of the configuration files with dpkg --status. You could also just install tripwire that would cover all files.
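A minimal sketch of that first suggestion (written to the current directory here for illustration; the real file lives at /usr/sbin/policy-rc.d, and the exit codes come from README.policy-rc.d, where 101 is the conventional "action forbidden by policy" code):

```shell
# Veto all automatic daemon starts during package installation (sketch).
cat > ./policy-rc.d <<'EOF'
#!/bin/sh
exit 101   # "action forbidden by policy" per README.policy-rc.d
EOF
chmod +x ./policy-rc.d
# invoke-rc.d calls it as: policy-rc.d <initscript id> <action>
./policy-rc.d postfix start || echo "start vetoed (status $?)"
# prints: start vetoed (status 101)
```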


I believe that the point of the blog post is that Ubuntu gets it wrong for some important defaults (like automatically starting services when installed). It's nice to have the possibility of overriding those defaults, but the distribution would be better if this wasn't necessary.


Policy-rc.d is an awful UI for a choice admins often want to make.


How is that an awful UI? What should it be? It's a flexible way to get what you want. You want to automatically start the daemon on next boot, but not on install? You can do it. You want to prevent daemon restart on upgrade, you can do it. You want to whitelist some daemons, like the SSH server, you can.

Ubuntu inherits this behaviour from Debian. Some people think that daemons should be started after install because the user installed the daemon to run it; others don't. A choice has to be made. Packages where the daemon is an optional part of the functionality usually come with either the daemon disabled (e.g. rsync), or with the daemon in a separate package.
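The SSH-whitelist case, for instance, is only a few lines. An illustrative script (exit codes per README.policy-rc.d: 104 means "action allowed", 101 "action forbidden"; written to the current directory here rather than /usr/sbin):

```shell
cat > ./policy-rc.d <<'EOF'
#!/bin/sh
# $1 is the initscript/unit id, $2 the action (start, restart, ...)
case "$1" in
    ssh) exit 104 ;;   # allow the SSH server to start automatically
    *)   exit 101 ;;   # forbid everything else
esac
EOF
chmod +x ./policy-rc.d
rc=0; ./policy-rc.d ssh start || rc=$?
echo "ssh -> $rc"          # 104: allowed
rc=0; ./policy-rc.d apache2 start || rc=$?
echo "apache2 -> $rc"      # 101: forbidden
```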


> How is that an awful UI?

Let's start with the easy stuff: how long did it take you to notice that the example in the GP's comment was wrong? Did you have to check Google, or do you have the `policy-rc.d` magic number exit codes memorised?

The `policy-rc.d` mechanism moves the decision as to whether to run a daemon on installation to a really unexpected place. What should it be? Well, what I always ended up googling for was an option in the `apt` system, because that's what handles installation. I want `apt-get install foo --no-start-daemon`. I want something self-documenting I can set in `/etc/apt/preferences`, if I ever want the choice persisted.

> You can do it

doesn't make for a good UI. It makes for a Turing tarpit. Systems should make the common case easy; in my experience the common case is either you want a fully-working system, with everything started automatically and usable defaults in place, or you want services stopped so that you can configure them and start them later. I've never heard of anyone wanting per-service whitelisting - not to say that people don't do this, just that it's sufficiently rare in my experience that having the default optimised for that use case sucks.

> A choice has to be made.

I'm not arguing about the default. I'm saying that the specific mechanism you have to use to change the default behaviour shouldn't take googling to figure out.


> What [place] should it be?

Perhaps

    echo disable 'foo.*' >> /etc/systemd/system-preset/20-regularfry.preset
which (as can be seen) some people hope Ubuntu will one day respect, per its manual?

* https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=772555

* http://manpages.ubuntu.com/manpages/xenial/en/man5/systemd.p...

Your several cases are, respectively, the aforegiven for the administrator-chosen per-service controls,

    echo enable '*' >> /etc/systemd/system-preset/99-default.preset
for the fully auto-starting system (unless specifically overridden) specified by the administrator,

    echo enable '*' >> /usr/lib/systemd/system-preset/99-default.preset
for the fully auto-starting system (unless overridden by administrator or specific packaging) specified by the operating system builders,

    echo disable '*' >> /usr/lib/systemd/system-preset/99-default.preset
for the fully disabled requiring manual start system (unless overridden by administrator or specific packaging) specified by the operating system builders, and

    echo disable '*' >> /etc/systemd/system-preset/00-admin.preset
for the fully disabled requiring manual start system specified by the administrator.


Perhaps, and I realise this may be a revolutionary thought, this is not somewhere systemd needs to exert itself. The `system-preset` interface is barely any better than `policy-rc.d` from the point of view of someone installing a package.


You'll find yourself in opposition to Major Hayden, the author of the headlined article, who wrote in May:

> automatic management of running daemons shouldn’t be handled by a package manager.

There's also the argument that determining whether a daemon should or should not be automatically started very much is the province of the service management subsystem, as automatic service start/stop/restart is very much what service management properly deals in.


You're confusing interface with implementation.


I'm not sure there is anything new here... Ubuntu did start down the path of getting FIPS-140-2 and EAL, but I don't believe it was ever completed. The main reason for this is that paying users aren't screaming for it. It is a significant investment for a low return.

There have been a number of external efforts at STIG compliance, but nothing formal. You can find a bunch of them on GitHub.

Most of the technical items that Major raises aren't really issues IMO.

  - Services starting by default: Don't like it, use policy-rc.d to stop it.  This can be done as part of the install, or via Ansible prior to installing packages.
  - AIDE doing a full scan: Change the config, but as a default, covering everything is safer than leaving gaps (considering his effort is part of an Ansible project, this would seem logical!)
  - Verifying packages: Don't rely purely on the package manager for this!  Use tripwire, which is explicitly designed for this.. and debsums (with -a, which was omitted in the article [but isn't perfect])
  - Firewall: permissive by default for most installations is perfectly acceptable.  That is what post-install config is for.  The target audience of Ubuntu Server is cloud, where IaaS-provided firewalls (security groups) provide the default protection, and for bare metal there should really be hardware firewalls as the 1st line of defence.  However, running # ufw default deny  switches to whitelisting.
  - LSM: AppArmor is a pretty good default, but SELinux can be switched to pretty easily.  The main issue against SELinux is that many find it hard to use, which means that systems are left insecure.  As an out-of-the-box solution, AppArmor does provide some confinement, which for many is enough.  When was the last time you saw a how-to that started with disabling the LSM? (Not as common as it used to be.)
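On the AIDE point above, narrowing the scan is a small edit to the selection lines. An illustrative /etc/aide/aide.conf fragment (paths are examples; syntax and the predefined "R" rule group per aide.conf(5)):

```
# Check these trees with the predefined "R" rule (perms, inode, checksums, ...)
/etc    R
/bin    R
/sbin   R
# Skip churn-heavy locations to avoid noisy reports
!/var/log
!/tmp
```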

Generally, the out-of-box security experience isn't awful, as it fits the majority of users. Hardening on a per-deployment basis is expected, but there hasn't been enough standardisation around this, which makes the Ansible work Major is doing a good contribution.

Whilst I'm a bit down on the actual content of this article, I'm really excited that Major is working on this, and it will be a good thing for Ubuntu when his work is finished.

-- former Ubuntu Server Engineering Manager, currently working in cloud/security.


He brings up the fact that Ubuntu doesn't enable the firewall by default. This is just a poor decision on the part of the Ubuntu maintainers. Sure, it doesn't have any ports open by default, but installing a poorly configured package (one that does leave a port open) means a potential security hole that could easily be prevented. An incoming firewall should be the rule, not the exception.


So, what you've done is add a new, non-obvious step to every installation, which will probably get worked around by the package maintainers very shortly after (they'll add a step to the installer to add a firewall rule). Your suggestion is pure human churn, and I seriously doubt it will make it any more likely that admins will pay attention.


1) It is an obvious step for system administrators, and for package maintainers too.

2) Red Hat and all the other Linux systems don't seem to have a problem with this simple additional step. Ubuntu even built a wrapper to make it easy to configure iptables firewalls (ufw), but it isn't enabled by default.

3) Better that maintainers add the additional step (which will remain correct) than have each administrator do it (unless needed). That requires maintainers to at least give passing consideration to security. Without it, you set it up and yay, it works! No thinking required! No realizing that you may have just left an untested system wide open. I would much prefer an extra step to default insecurity.

Even Microsoft gets the default firewall thing.


I don't think a firewall has ever protected me from anything. It's only ever caused confusing error messages, usually without actually even indicating that the firewall's the problem.

That's not something I'd want to see in 'Linux for human beings'. If my mother is going to use it on her desktop, things should just work.


Windows has a firewall by default: outgoing is open, incoming is closed except for established connections. That is the same default firewall config as most Linux distros. It is also probably the default for OSX -- does your mom have trouble with her current system? Because it probably has a firewall configured.

Ubuntu is one of the very few modern operating systems (distributions) that contains this easy-to-fix security flaw.

Also, you probably wouldn't know if a firewall protected you unless you have a habit of poring over network logs.

Your wifi router definitely has this default configuration which certainly does protect you on a regular basis.


Actually, OSX didn't enable it by default last time I checked. And the last Windows machine I touched had about 300 exceptions to the default deny-incoming rule -- exceptions the user had not opened themselves. Ubuntu would need some kind of phone-OS-like permission system for default deny to be both secure and grandma-friendly.


You're right. OSX firewall is off by default -- also a poor decision.

The outgoing exceptions on Windows are probably because a Windows machine defaults to deny for both incoming and outgoing, which does require exceptions. Windows doesn't phone home for additional exceptions -- it asks you.

The default on Ubuntu should be like all other Linux distros, where all unsolicited incoming connections are blocked, related incoming connections are allowed, and outgoing connections are allowed.
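That default policy is only a handful of rules. In iptables-restore format it looks roughly like this (illustrative, trimmed):

```
*filter
:INPUT DROP [0:0]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [0:0]
# allow loopback and replies to connections this host initiated
-A INPUT -i lo -j ACCEPT
-A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
COMMIT
```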

This would not require any difficult configuration on the part of the average home user.

It seems to me that most people who are blindly defending Ubuntu's current setup do not understand how networking and firewalls work, or the better options. That, or they do not understand how wide open the Ubuntu defaults are.

There is certainly a more secure, easy-to-use option that Ubuntu is ignoring, but which all of the other Linux distros use by default.


> That or they do not understand how wide open the Ubuntu defaults are.

Fair enough. I'd love to learn. What threats does the firewall even defend against?

> The default on Ubuntu should be like all other linux distros where where all unsolicited incoming connections are blocked.

I don't understand why I should care. Evil hackers can send all they want, but the only things listening on my end should be those that I explicitly installed or enabled. The firewall seems redundant as the set of running applications and my set of firewall exceptions should match exactly.


> Fair enough. I'd love to learn. What threats does the firewall even defend against?

Many Debian/Ubuntu packages have a dependency on postfix, even some trivial ones. Installing those packages installs postfix (which at least asks you to configure it during install). Having a firewall active means that your grandmother won't have services listening that you didn't realise she had running.


A firewall is a component of a layered defense. One of the points about Ubuntu was starting the service immediately after install, was it not?


I fear that if Ubuntu enabled a firewall with default deny on incoming packets, packages would start adding the allow rules in their install scripts.


I am reminded of https://news.ycombinator.com/item?id=11667485 at this point.


I picked my own mother for a reason. She's a very successful geological engineer and quite an intelligent woman. However, she is not a sysadmin and is definitely not going to know to configure iptables if Ubuntu software doesn't work.


Coupled with the aptitude methodology of package management, which enables/runs services on installation.


Is that true even for the "Server" distribution, or just the desktop spins?


This Ubuntu Server Guide would indicate that the default firewall configuration is "off" for Ubuntu Server distributions as well:

https://help.ubuntu.com/lts/serverguide/firewall.html


> installing a poorly configured package (that does leave a port open) means a potential security hole that could easily be prevented.

Please share with us what magic you have that prevents human error locally, but is defeated by human error in packaging.


Well, I wouldn't call a firewall magic, but that's what it does. If a package is set up so the software listens on 0.0.0.0 (all interfaces) on a particular port, then the firewall will prevent incoming attacks. Opening the firewall port is an extra step that requires two misconfigurations rather than one. Look up fail-safe for more reading.


Virtually everything here also applies to Debian. aide, postfix, and debsums are completely unmodified from Debian in Ubuntu. dpkg is virtually the same.

The only part that is different is that Debian doesn't enable either apparmor or selinux by default and selinux on Ubuntu is slightly different than on Debian.


>The AIDE package is critical for secure deployments since it helps administrators monitor for file integrity on a regular basis. However, Ubuntu ships with some interesting configuration files and wrappers for AIDE.

Critical for secure deployments, and hosted on SourceForge...


Most users will get it through a distro package manager.


Please excuse my ignorance, but what are better options for code hosting and collaboration?


Anything that doesn't alter your binaries from time to time.

I seem to remember that being a "feature" of a desperate SourceForge not too long ago.


tl;dr: Slashdot and SourceForge have a new owner, which removed all this nonsense from those sites, including the adware.

I believe the average HN reader - including you - should be interested in this then: https://www.reddit.com/r/sysadmin/comments/4n3e1s/the_state_...


Allegedly the most recent owners have reversed the earlier policies and are trying to reclaim SourceForge's previous sterling reputation, FYI. Seems reasonably credible.

That said, I'm one of those people who bothers to check the signatures on binaries before installing them, so as long as I trust the signature and dev, doesn't much matter to me if I downloaded it from xxx-malware-binaries.biz.ru.


You get that 'for free' with apt-based systems, as packages are GPG-signed by default, and will throw up an error if the signature is missing or is not in your trust store.


They've been bought by a different company since then, and they've cleaned it up. https://news.ycombinator.com/item?id=11860752


Any company that resorts to practices that low will never regain my trust.

Even if they were bought out by RMS himself, I'd never host any of my projects there.

It's just something I don't think I'll ever support after what happened; it's a matter of principle.


Who exactly has lost your trust? A company is a group of people, but a brand is not. Companies can sell brands off to other companies. You can choose to trust—or not trust—a company, but if a brand moves from one company to another, the trust (or lack thereof) adhered to the brand because of the company shouldn't come along with it.

For an example: IBM made a brand of laptop called ThinkPad. Now Lenovo makes a brand of laptop called ThinkPad. IBM's ThinkPads were excellent; Lenovo's ThinkPads are average. They're two different products, produced by two different companies, that happen to share a brand because one company sold that brand to the other. The brand conveys absolutely no information about whether you should trust the product. The company owning the brand conveys 100% of the information.


Trying to say "this brand was bought by another entity so now all that negative brand equity and well-deserved consumer ire it had earned before is null and void" seems a bit like trying to have your cake and eat it too.

No employees came along with the brand? None of the decisionmakers responsible for the previous SourceForge derp over the past couple of years? It's a clean slate - tabula rasa? That's fine, but it'll take time for people to warm up to that idea. That's what happens when you abuse trust.


What you think logically should happen is not what actually happens. Read about the accounting, marketing, and legal aspects of what are called goodwill and brand equity.


You get it! This is what I'm referring to.

The brand is tarnished in my mind.


And I was, conversely, not talking about what actually happens by default, but rather what way you need to force yourself to think if you want to avoid bias when doing something important involving the corporate reputations of others—like, say, picking stocks.

If you have both information about a "brand", and information about the actual group or individuals underlying it, giving any weight to the brand will give a suboptimal result and let people exploit you. The reputational information has some value, but only if you have no "observational" information. Trying to use them both together is double-counting. Just use the information about the individuals and throw the brand information away.

In other words, basically http://lesswrong.com/lw/lx/argument_screens_off_authority/, but with a more generalized halo effect in place of "authority."


The acquiring company removed that feature the moment they acquired SF.


Fascinating that the lede here is the certifications. While some entities put great stock in this or that certification, I have generally not put any weight on certification X or Y, as they are pretty easy to manipulate if you are interested in doing so.


I understand the cynicism over certifications like this, but they have value.

In the case of FIPS 140-2, you have the assurance that the crypto module, say OpenSSL, was verified to work consistent with the FIPS spec, and that it was built in a manner consistent with that test.

Now, I get that FIPS has its issues, but the vast majority of organizations lack the knowledge, skill, and time required to evaluate the correctness of crypto code.


Yes, I'm fully aware that OpenSSL is an excellent piece of software, well certified... But then, are the updates to OpenSSL FIPS-certified? Sometimes certifications expire for updates to things.

TBH, the certifications seem like they are mostly a con to push bad quality to ignorant people. I think if I was securing Ubuntu, I'd be analyzing it based on threat models for the organization, then derive the specified behavior based upon that; only then would the manuals for the software come out. Analyze based on the needs of the enterprise I am involved in... Maybe this is too much a hacker attitude, but I really don't see why you start with certs or even care. Better to have a trusted firm evaluate for your organization IMO.


We're all familiar with the quality piece of software OpenSSL is.

Most people regard OpenSSH as very well written software. But guess what? Poor procedural controls at Debian meant that lots of systems generated weak keys. Certification of a known, verified build process helps with that kind of problem.


Remember that those are the most broken of the certification criteria. They just say it has certain features with a structured development process. Jonathan Shapiro had a nice write-up after Windows got EAL4+:

https://web.archive.org/web/20040214043848/http://eros.cs.jh...

Whereas, medium to high assurance evaluations were more meaningful as they had stronger assurance requirements. Precise ways of looking at system, careful implementation, every feature corresponding to a requirement, covert channel analysis, pentests, SCM, trusted distribution, and so on. Those say quite a bit about what level of trust one can put into software. That's how it was originally meant to be done. So, you can put a little more trust in the TOE.

Whereas, for EAL4, there's no reason to believe it's secure at all given the criteria are designed for "casual or inadvertent attempts" to breach security. Internet and insider threats are slightly more hostile than that. ;)


Because certifications are a cover-your-behind thing for management in case something blows up.

It allows their lawyers to go "we are compliant with X, Y and Z so the fault is not with our product/service".


That's a kludgy workaround for package managers (arguably incorrectly) starting services before you have configured them; wouldn't it be preferable to fix that problem instead?


Absolutely. It's a helpful default, if you have no idea how to run it otherwise. It's far worse a trap if this behaviour comes unexpectedly.


I wish this article had gone a bit deeper than just being a couple of first-glance whinges. Sure the daemon autostart is a reasonable complaint, but it's easily worked around. Presumably the folks who are concerned about locking their OSes down this tightly are not building their images on a public network.


This is a good article, but the author missed the broader CONSUMER/dev consumer mission of Ubuntu. Its sole purpose and number-one priority is usability first, followed and shaped by quick deployment as number two. Ultra-hardened security isn't in the top mix. Easy answer. Having said this, I'm glad someone is taking a hammer and chisel to it so that it can be a better platform. The same thing can be said of Windows or Mac: consumer-facing off-the-shelf platforms simply weren't designed for hardened security.


I'm convinced that being a strong deployment target is Ubuntu's #1 priority. Look at the deal with Microsoft - Ubuntu on Windows only benefits Ubuntu because it will convince some Windows-based developers to choose Ubuntu Server for deployment for dev/prod parity.

While I agree that Ubuntu has a priority of being a usable desktop system (especially for developers), I believe being a good deployment target is more important to them.


> the broader CONSUMER/dev consumer mission of Ubuntu.

The consumer mission which is enhanced by insecure-by-default package installation?


Enhanced and driven by ease of use. I'm not sure which part you mean about default package installs: the default out-of-the-box setup with guaranteed regular security updates, or is this a criticism of how Ubuntu achieves this?


Is there a good read anywhere on hardening the RHEL?


Although not a good read (in terms of being engaging or interesting), you'll find that a lot of security professionals will use something like the Center for Internet Security (CIS) benchmark when doing a formal audit or configuration review of RHEL (or any major Linux distribution for that matter). They will run a command line tool that will check the system's config against every item in the benchmark. The tool will generate a report with pass/fail outcomes for each item plus hardening advice. It's not perfect but it can be a decent starting point before you do further manual analysis of your system.

More info about the CIS benchmarks: https://benchmarks.cisecurity.org/downloads/benchmarks/

The RHEL 7 benchmark: https://benchmarks.cisecurity.org/tools2/linux/CIS_Red_Hat_E...
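For what it's worth, the kind of command-line check described above can be run on RHEL with OpenSCAP against the SCAP Security Guide content. A sketch only: the package names, content path, and profile ID below are assumptions that vary by distribution and version, so check `oscap info` against your own datastream first.

```shell
# Install the scanner and the SCAP Security Guide content (RHEL/CentOS)
yum install -y openscap-scanner scap-security-guide

# List the hardening profiles shipped in the RHEL 7 datastream
oscap info /usr/share/xml/scap/ssg/content/ssg-rhel7-ds.xml

# Evaluate the system; emits per-item pass/fail results plus an HTML report
oscap xccdf eval \
    --profile xccdf_org.ssgproject.content_profile_cis \
    --results results.xml \
    --report report.html \
    /usr/share/xml/scap/ssg/content/ssg-rhel7-ds.xml
```

The report.html output is the pass/fail-per-item report the parent describes; run it as root so the checks can read system state.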


The DISA STIG was my starting point when deploying RHEL6 boxes. I don't believe the STIG for v7 is available yet, but I haven't checked in quite a while.

The CIS Benchmark is also quite extensive, and there is a lot of overlap between the two.



Note that finally, with 16.04, you can block the (ridiculous) auto start behavior of daemons installed via apt using systemd features: https://major.io/2016/05/05/preventing-ubuntu-16-04-starting...

It's always angered me that packages start running before I have a chance to configure them... With some, like MySQL, it means if I want to change certain settings, I also have to clean up some other files before restarting the service or the package will not start. On CentOS/RHEL, this is never an issue.
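For the record, one way to block that autostart on Debian/Ubuntu (a sketch of the distro's own policy-rc.d hook, not necessarily the systemd method in the linked post) is to install a policy script that invoke-rc.d consults before starting anything from a maintainer script; exit code 101 means "action forbidden":

```shell
# Build the policy file; on a real system it lives at /usr/sbin/policy-rc.d
# and must be installed as root *before* running apt-get.
cat > policy-rc.d <<'EOF'
#!/bin/sh
# Refuse all service start/stop requests from maintainer scripts.
exit 101
EOF
chmod +x policy-rc.d

# invoke-rc.d treats exit status 101 as "action forbidden by policy"
./policy-rc.d
echo "policy-rc.d exit code: $?"
```

Remember to remove the file afterwards, or no package will ever start its services on install again.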


Masks would be overkill, if only the designed mechanism were widely available on Ubuntu 15/16. See http://askubuntu.com/questions/779554/how-to-install-apt-pac... .


I'd be interested to see a comparison with NixOS, seeing as you have to explicitly enable services and everything is declarative (assuming you use the declarative configuration).
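For reference, a NixOS configuration makes that explicit: nothing listens unless the configuration declares it, and NixOS ships with its firewall on by default. A sketch, with option names from the NixOS module system:

```nix
# /etc/nixos/configuration.nix
{ config, pkgs, ... }:
{
  # Services are off unless explicitly enabled here...
  services.postfix.enable = true;

  # ...and the default-on firewall still needs the port opened.
  networking.firewall.allowedTCPPorts = [ 25 ];
}
```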


Click on his resume to be entertained.


Nice, a man page for a resume is thoroughly amusing.


And relevant to his area of expertise.


Perhaps the reason for a short Ubuntu hardening guide is that these changes are already incorporated in the code. I understand the value of a uniform base configuration for a large network—say, 1000 systems or more. But surely it's better to put that in the postinst scripts of packages.

I view the "hardening guide" requirements, particularly PCI 2.2, as an ugly jobs program.


Ew. Definitely not interested in replacing a Gentoo Hardened (grsecurity, PaX) install with this Redhat/NSA toy. Good luck to everyone diving down this rabbit hole of pain.


> Since Ubuntu doesn’t come with a firewall enabled by default, your postfix server is listening on all interfaces for mail immediately. The mynetworks configuration should prevent relaying, but any potential vulnerabilities in your postfix daemon are exposed to the network without your consent. I would prefer to configure postfix first before I ever allow it to run on my server.

Technically, if security is your focus, then shouldn't one of your first actions after setting up a new machine be to set the iptables default action to drop all incoming new/invalid packets anyway? I mean, I generally install the server with nothing, set up iptables, then install packages and open ports.
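That baseline is only a few lines. A minimal default-drop sketch (run as root, and keep console access handy in case you cut off your own session; the SSH rule is an assumption about what you want left open):

```shell
# Default policies: drop everything inbound/forwarded, allow outbound
iptables -P INPUT DROP
iptables -P FORWARD DROP
iptables -P OUTPUT ACCEPT

# Loopback and already-established traffic stay allowed
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT

# Explicitly drop invalid packets
iptables -A INPUT -m conntrack --ctstate INVALID -j DROP

# Then open ports one by one as services get installed and configured
iptables -A INPUT -p tcp --dport 22 -m conntrack --ctstate NEW -j ACCEPT
```

Only after this would packages get installed and their ports opened.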


> Technically, if security is your focus, then shouldn't one of your first actions after setting up a new machine be to set iptables default action to drop all incoming new,invalid packets anyway?

But why expect the user to do that themselves? Shouldn't security be the default, with a secure configuration provided out of the box?

Giving the user a secure configuration which they can then opt out of if they wish does better by the user than giving them an insecure configuration and then asking them to opt in to security does.


If the user can't do that, then the user is not going to have the chops to configure Postfix anyway.

Seriously, Debian, the upstream distro, does this, and it's the other Linux grandparent alongside RHEL. It's been around for decades, used in production the same way RHEL and BSD are, and we haven't had Debian-based boxes compromised left, right, and centre because of these 'insecure' defaults. The threat is imaginary, and not borne out in real-world numbers.


> Since Ubuntu doesn’t come with a firewall enabled by default, your postfix server is listening on all interfaces for mail immediately.

But Ubuntu doesn't come with a postfix server enabled by default, so that's not really a concern, is it?


Installing it on Ubuntu does enable and start it by default, however. Here is its guide:

* https://help.ubuntu.com/lts/serverguide/postfix.html

Here is where it is enabled by default:

* https://git.launchpad.net/postfix/tree/debian/postfix.postin...

And here is where the installation process auto-starts the server:

* https://git.launchpad.net/postfix/tree/debian/postfix.postin...


Well I never. TIL. I've always installed and started it because I assumed it wasn't already running.

Thanks!


Oh, god, the lead is that "starts before configuration" non-issue. If you're not properly firewalled to begin with, why are you installing applications? And seriously, how hard is it to put the config file in place before installing the package instead of after it (apt won't overwrite it), if you're that damned concerned about the milliseconds of "vulnerability" between your configuration management tool installing the package and then plonking down your config and restarting it?

And if you're manually configuring your hardened servers instead of using a configuration tool, then you have significantly greater problems than 'omg! starts too early!'. And even then, if you're manually installing something and you think it's going to have a preconfig vuln, then just add '&& service thingy stop' after your install line.
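Concretely, both tricks look something like this (paths are illustrative; per the parent comment, apt won't overwrite a config file that's already in place):

```shell
# Option 1: drop the hardened config in place *before* installing;
# dpkg treats the existing /etc/postfix/main.cf as a conffile to keep.
install -D -m 0644 ./hardened-main.cf /etc/postfix/main.cf
apt-get install -y postfix

# Option 2: stop the service immediately after the install line.
apt-get install -y postfix && service postfix stop
```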

Seriously, this is a vim-vs-emacs-style non-complaint, for when you can't think of any actual issues.



