And it spreads like a virus, in a way. Say you use Alpine / Gentoo / Ubuntu. You get your first enterprise or DoD customer. They want an on-prem version of your service. Now you either have to not sell it to them (but it so happens those customers also come with loads of cash they're willing to drop in your lap), or support both RHEL and Ubuntu. So after a while you start to wonder why bother supporting both, and you switch to a CentOS/RHEL stack only.
I've seen it happen: people grumbled and were angry, but they'd still rather have loads of cash dropped in their laps than stick with their preferred OS.
A couple of years back, I remember Mark Shuttleworth inquiring about what it would take for Ubuntu to get some uptake with US govt customers. I remember emailing him a list of some of those requirements, and yeah, it is very expensive and time-consuming to get those red-tape stamps on your product. I don't know if anything has happened with Ubuntu in that regard since.
(You can also get exceptions depending on who you sell to, but that only happens if you have friends in high places who can grant them. Say you sell to special-ops teams, for example.)
I switched my servers to Ubuntu for now.
But yeah, it was a major pain. I remember all the time wasted getting things to work or compile on outdated kernels or standard libraries...
Otherwise it creates a mess. Someone adds EPEL, updates, and now their software, which was built against a base package, is broken because EPEL upgraded libc or something like that.
Observe that that was precisely chrisper's original point.
One specific example is the SO_REUSEPORT socket option, which was added in Linux kernel 3.9 and subsequently backported, becoming available as of RHEL 6.5, which uses version 2.6.32 of the Linux kernel.
But then you're back in the world of not having all those certifications that enterprise customers may want. On the other hand, it shares common package infrastructure with RHEL, so that's a bonus.
We used Fedora Cloud for a bit and I liked it. Be prepared for cutting-edge, though: journald rather than syslog, for example.
We switched to Ubuntu for reasons not related to Fedora in particular, but just the ease of running an Apt repository on S3 vs Yum/Dnf.
(We also provide cloud-based SSH key management at no charge, with upgrades available, but mostly our customers there are startups, although we do have a couple of public companies on our cloud SaaS platform. Since our revenue is largely driven by enterprises and mid-sized companies that want to run it in-house and/or in their own clouds, we'll try to accommodate any OS they prefer: normally that's RHEL, but increasingly Ubuntu, Amazon Linux, and CentOS.)
I suppose it depends on whether you are in the private sector, the public sector, or law enforcement.
> [debsums invocation]
> But wait — where are the configuration files? Where are the log and library directories? If you run these commands on an Ubuntu system, you’ll see that the configuration files and directories aren’t checked.
Of course it can. Quoting debsums(1):
-a, --all
    Also check configuration files (normally excluded).
-e, --config
    Only check configuration files.
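So a quick audit that includes conffiles looks roughly like this (openssh-server is just an illustrative package name):

    debsums -a openssh-server   # verify all files shipped by the package, conffiles included
    debsums -e                  # check only the configuration files of every installed package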
You can decide whether a daemon should start on install by adding a /usr/sbin/policy-rc.d file. It is described in /usr/share/doc/sysv-rc/README.policy-rc.d. Notably, just putting "exit 104" in it should give you something similar to what you expect on Red Hat.
You can get the original MD5 sums of the configuration files with dpkg --status. You could also just install tripwire, which would cover all files.
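For example (package name illustrative; the Conffiles field lists each configuration file alongside the md5sum it originally shipped with):

    dpkg --status openssh-server | awk '/^Conffiles:/{f=1;next} /^[^ ]/{f=0} f'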
Ubuntu inherits this behaviour from Debian. Some people think that daemons should be started after install because the users installed the daemon to run it; some others don't. A choice has to be made. Packages where the daemon is an optional part of the functionality usually come with either the daemon disabled (e.g. rsync), or with the daemon in a separate package.
Let's start with the easy stuff: how long did it take you to notice that the example in the GP's comment was wrong? Did you have to check Google, or do you have the `policy-rc.d` magic number exit codes memorised?
The `policy-rc.d` mechanism moves the decision as to whether to run a daemon on installation to a really unexpected place. What should it be? Well, what I always ended up googling for was an option in the `apt` system, because that's what handles installation. I want `apt-get install foo --no-start-daemon`. I want something self-documenting I can set in `/etc/apt/preferences`, if I ever want the choice persisted.
> You can do it
doesn't make for a good UI. It makes for a Turing tarpit. Systems should make the common case easy; in my experience the common case is either you want a fully-working system, with everything started automatically and usable defaults in place, or you want services stopped so that you can configure them and start them later. I've never heard of anyone wanting per-service whitelisting - not to say that people don't do this, just that it's sufficiently rare in my experience that having the default optimised for that use case sucks.
> A choice has to be made.
I'm not arguing about the default. I'm saying that the specific mechanism you have to use to change the default behaviour shouldn't take googling to figure out.
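(For the record, the incantation that actually does what the GP intended is roughly the following; per README.policy-rc.d, exit code 101 means "action forbidden by policy", which is what suppresses daemon starts during package installation. The fact that this has to be looked up rather proves the point.)

    cat > /usr/sbin/policy-rc.d <<'EOF'
    #!/bin/sh
    # refuse all invoke-rc.d actions during package installs
    exit 101
    EOF
    chmod +x /usr/sbin/policy-rc.d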
echo disable 'foo.*' >> /etc/systemd/system-preset/20-regularfry.preset

Your several cases are, respectively, the aforegiven for the administrator-chosen per-service controls, and:

echo enable '*' >> /etc/systemd/system-preset/99-default.preset       # administrator: everything on
echo enable '*' >> /usr/lib/systemd/system-preset/99-default.preset   # distribution: everything on
echo disable '*' >> /usr/lib/systemd/system-preset/99-default.preset  # distribution: everything off
echo disable '*' >> /etc/systemd/system-preset/00-admin.preset        # administrator: everything off
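(Preset files in /etc override identically named files in /usr/lib, and among the matching files the lexicographically earliest one wins, which is why an 00-admin file trumps the 99-default ones. You can re-apply the whole policy to already installed units with something like:)

    systemctl preset-all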
> automatic management of running daemons shouldn’t be handled by a package manager.
There's also the argument that determining whether a daemon should or should not be automatically started is very much the province of the service management subsystem, since automatic service start/stop/restart is exactly what service management properly deals in.
There have been a number of external efforts at STIG compliance, but nothing formal. You can find a bunch of them on GitHub.
Most of the technical items that Major raises aren't really issues IMO.
- Services starting by default: Don't like it? Use policy-rc.d to stop it. This can be done as part of the install, or via Ansible prior to installing packages.
- AIDE doing a full scan: Change the config, but as a default, covering everything is safer than leaving gaps (considering his effort is part of an Ansible project, this would seem logical!)
- Verifying packages: Don't rely purely on the package manager for this! Use tripwire, which is explicitly designed for it, and debsums (with -a, which was omitted in the article, though debsums isn't perfect).
- Firewall: permissive by default is perfectly acceptable for most installations; that is what post-install config is for. The target audience of Ubuntu Server is the cloud, where IaaS-provided firewalls (security groups) provide the default protection, and bare-metal machines should really have hardware firewalls as the first line of defence. That said, running `ufw default deny` switches you to a whitelist (a minimal sketch follows this list).
- LSM: AppArmor is a pretty good default, but you can switch to SELinux pretty easily. The main issue with SELinux is that many find it hard to use, which means systems are left insecure. As an out-of-the-box solution, AppArmor does provide some confinement, which for many is enough. When was the last time you saw a how-to that started with disabling the LSM? (Not as common as it used to be.)
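A minimal ufw whitelist along the lines of the firewall point above might look like this (the SSH port is just the obvious example to keep open):

    ufw default deny incoming
    ufw default allow outgoing
    ufw allow 22/tcp   # keep SSH reachable before enabling
    ufw enable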
Whilst I'm a bit down on the actual content of this article, I'm really excited that Major is working on this, and it will be a good thing for Ubuntu when his work is finished.
-- former Ubuntu Server Engineering Manager, currently working in cloud/security.
2) Red Hat and all the other Linux systems don't seem to have a problem with this simple additional step. Ubuntu even built a wrapper to make it easy to configure iptables firewalls (ufw), but it isn't enabled by default.
3) Better that maintainers add the additional step (which will remain correct) than have each administrator do it (unless needed). That requires maintainers to at least give a passing consideration to security. Without it, you set it up and yay, it works! No thinking required! No thought that you may have just left a system wide open with an untested setup. I would much prefer an extra step to default insecurity.
Even Microsoft gets the default firewall thing.
That's not something I'd want to see in 'Linux for human beings'. If my mother is going to use it on her desktop, things should just work.
Ubuntu is one of the very few modern operating systems (distributions) that contain this easy-to-fix security flaw.
Also, you probably wouldn't know if a firewall protected you unless you have a habit of poring over network logs.
Your wifi router definitely has this default configuration which certainly does protect you on a regular basis.
The outgoing exceptions on Windows exist probably because a Windows machine defaults to deny for both incoming and outgoing, which does require exceptions. Windows doesn't phone home for additional exceptions -- it asks you.
The default on Ubuntu should be like all other Linux distros, where all unsolicited incoming connections are blocked, related incoming connections are allowed, and outgoing connections are allowed.
This would not require any difficult configuration on the part of the average home user.
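For the curious, the default being described corresponds roughly to this (a sketch, not any distro's literal ruleset):

    iptables -P INPUT DROP        # drop unsolicited incoming by default
    iptables -P FORWARD DROP
    iptables -P OUTPUT ACCEPT     # allow all outgoing
    iptables -A INPUT -i lo -j ACCEPT
    iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT   # allow related/established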
It seems to me that most people who are blindly defending Ubuntu's current setup do not understand how networking and firewalls work, or what the better options are. That, or they do not understand how wide open the Ubuntu defaults are.
There is certainly a more secure, easy-to-use option that Ubuntu is ignoring, but which all of the other Linux distros use by default.
Fair enough. I'd love to learn. What threats does the firewall even defend against?
> The default on Ubuntu should be like all other Linux distros, where all unsolicited incoming connections are blocked.
I don't understand why I should care. Evil hackers can send all they want, but the only things listening on my end should be those that I explicitly installed or enabled. The firewall seems redundant as the set of running applications and my set of firewall exceptions should match exactly.
Many Debian/Ubuntu packages have a dependency on postfix, even some trivial ones. Installing these packages installs postfix (which at least asks you to configure it during install). Having a firewall active means that your grandmother won't have services listening that you didn't realise she had running.
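(If you want to check what actually ended up listening after an install, something along these lines will show you:)

    ss -tulpn   # or netstat -tulpn on older systems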
Please share with us what magic you have that prevents human error locally, but is defeated by human error in packaging.
The only part that is different is that Debian doesn't enable either AppArmor or SELinux by default, and SELinux on Ubuntu is slightly different from how it is on Debian.
Critical for secure deployments, and hosted on SourceForge...
I seem to remember that being a "feature" of a desperate SourceForge not too long ago.
I believe the average HN reader - including you - should be interested in this then: https://www.reddit.com/r/sysadmin/comments/4n3e1s/the_state_...
That said, I'm one of those people who bothers to check the signatures on binaries before installing them, so as long as I trust the signature and dev, doesn't much matter to me if I downloaded it from xxx-malware-binaries.biz.ru.
Even if they were bought out by RMS himself, I'd never host any of my projects there.
It's just something I don't think I'll ever support after what happened; it's a matter of principle.
For an example: IBM made a brand of laptop called ThinkPad. Now Lenovo makes a brand of laptop called ThinkPad. IBM's ThinkPads were excellent; Lenovo's ThinkPads are average. They're two different products, produced by two different companies, that happen to share a brand because one company sold that brand to the other. The brand conveys absolutely no information about whether you should trust the product. The company owning the brand conveys 100% of the information.
No employees came along with the brand? None of the decisionmakers responsible for the previous SourceForge derp over the past couple of years? It's a clean slate - tabula rasa? That's fine, but it'll take time for people to warm up to that idea. That's what happens when you abuse trust.
The brand is tarnished in my mind.
If you have both information about a "brand", and information about the actual group or individuals underlying it, giving any weight to the brand will give a suboptimal result and let people exploit you. The reputational information has some value, but only if you have no "observational" information. Trying to use them both together is double-counting. Just use the information about the individuals and throw the brand information away.
In other words, basically http://lesswrong.com/lw/lx/argument_screens_off_authority/, but with a more generalized halo effect in place of "authority."
In the case of FIPS 140-2, you have the assurance that the crypto module, say OpenSSL, was verified to work consistent with the FIPS spec, and that it was built in a manner consistent with that test.
Now I get that FIPS has its issues, but the vast majority of organizations lack the knowledge, skill, and time required to evaluate the correctness of crypto code.
TBH, the certifications seem like they are mostly a con to push bad quality onto ignorant people. If I were securing Ubuntu, I'd analyze it based on threat models for the organization, then derive the specified behavior from that; only then would the manuals for the software come out. Analyze based on the needs of the enterprise I am involved in... Maybe this is too much of a hacker attitude, but I really don't see why you would start with certs or even care. Better to have a trusted firm evaluate it for your organization, IMO.
Most people regard OpenSSH as very well written software. But guess what? Poor procedural controls at Debian meant that lots of systems generated weak keys. Certification of a known, verified build process helps with that kind of problem.
Whereas medium- to high-assurance evaluations were more meaningful, as they had stronger assurance requirements: precise ways of looking at the system, careful implementation, every feature corresponding to a requirement, covert channel analysis, pentests, SCM, trusted distribution, and so on. Those say quite a bit about what level of trust one can put into the software. That's how it was originally meant to be done. So you can put a little more trust in the TOE.
Whereas, for EAL4, there's no reason to believe it's secure at all given the criteria are designed for "casual or inadvertent attempts" to breach security. Internet and insider threats are slightly more hostile than that. ;)
It allows their lawyers to go "we are compliant with X, Y and Z so the fault is not with our product/service".
While I agree that Ubuntu has a priority of being a usable desktop system (especially for developers), I believe being a good deployment target is more important to them.
The consumer mission which is enhanced by insecure-by-default package installation?
More info about the CIS benchmarks:
The RHEL 7 benchmark:
The CIS Benchmark is also quite extensive, and there is a lot of overlap between the two.
This is a good one I recently found.
It's always angered me that packages start running before I have a chance to configure them... With some, like MySQL, it means that if I want to change certain settings, I also have to clean up some other files before restarting the service, or the service will not start. On CentOS/RHEL, this is never an issue.
I view the "hardening guide" requirements, particularly PCI 2.2, as an ugly jobs program.
Technically, if security is your focus, shouldn't one of your first actions after setting up a new machine be to set the iptables default action to drop all incoming new/invalid packets anyway?
I mean, I generally install the server with nothing, set up iptables, then install packages and open ports.
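e.g. opening up only what the newly installed package needs afterwards (nginx and port 80 are just examples):

    apt-get install -y nginx
    iptables -A INPUT -p tcp --dport 80 -m conntrack --ctstate NEW -j ACCEPT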
But why expect the user to do that themselves? Shouldn't security be the default, with a secure configuration provided out of the box?
Giving the user a secure configuration which they can then opt out of if they wish does better by the user than giving them an insecure configuration and then asking them to opt in to security does.
Seriously, Debian is the upstream distro that does this, and it's the other Linux grandparent alongside RHEL. It's been around for decades, used in production the same way RHEL and BSD are, and we haven't had Debian-based boxes compromised left, right, and centre by these 'insecure' defaults. The threat is imaginary, and not borne out in real-world numbers.
But Ubuntu doesn't come with a postfix server enabled by default, so that's not really a concern, is it?
Here is where it is enabled by default:
And here is where the installation process auto-starts the server:
And if you're manually configuring your hardened servers instead of using a configuration tool, then you have significantly greater problems than 'omg! it starts too early!'. And even then, if you're manually installing something and you think it's going to have a preconfig vuln, then just add '&& service thingy stop' after your install line.
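i.e. something like (postfix purely as an example):

    apt-get install -y postfix && service postfix stop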
Seriously, this is a vim-vs-emacs-style non-complaint, for when you can't think of any actual issues.