FWIW, I’ve worked at many RedHat shops over the years and I’ve never seen one where disabling SELinux wasn’t a normal part of provisioning a server. I haven’t seen the same thing with AppArmor (although I admit I have less visibility into debian systems administration). YMMV but it seems to me that a component which is so inconvenient that it’s normally disabled doesn’t provide much security in the end.
Security folks usually are detached from the actual reality, unfortunately.
Yes, people/sysadmins should take the time to properly configure SELinux when things don't work, instead of just disabling it for good. I tried for a whole year at a place where we used CentOS, and then finally I gave up; too many hours wasted finding the right configuration for each new program.
I feel like I'm taking crazy pills in this thread. SELinux is so easy to set up in anything RHEL 7+. Everything from the distro works ootb, the auditor will tell you exactly what caused the error, it will give you the commands to fix it, and you can label programs you don't want to deal with with unconfined_t. There's no reason to completely disable it, you lose all the benefit of all the software Red Hat engineers have already made work with it.
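For anyone who hasn't seen it, the loop looks roughly like this. A sketch only, assuming the stock policycoreutils/setroubleshoot tooling; "mylocal" is a made-up module name, and the commands only do anything on an SELinux-enabled host:

```shell
# Typical SELinux denial triage; guarded so it is a no-op elsewhere.
if command -v audit2allow >/dev/null 2>&1 && [ -e /sys/fs/selinux ]; then
  ausearch -m AVC -ts recent                          # show recent denials
  ausearch -m AVC -ts recent | audit2allow -M mylocal # draft a local module
  # Review mylocal.te before loading it (the generated rules can be broad):
  #   semodule -i mylocal.pp
  selinux_host=1
else
  selinux_host=0  # no SELinux tooling here; commands above are illustrative
fi
```

On RHEL-family systems, `sealert -a /var/log/audit/audit.log` is what gives the friendlier write-up with suggested fixes.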
Have you ever read the commands the auditor gives you? They can be laughably broad, barely short of just giving the app unconfined permissions. If you're just blindly copy-pasting what it tells you, you might as well just disable it.
Yeah, they're not great recommendations sometimes, but they do have the advantage of always allowing the behavior, which I think is meant to avoid making a frustrated ops person even madder. But I disagree on the "you might as well disable it": now you've lost the policies on the thousands of packages you didn't make exceptions for. Even if $company_app is running basically unconfined, at least sshd is still locked down.
Coming from the IT operations side of things, most developers I work with are unable to tell you how to get their application through a dead-simple stateful firewall, much less any kind of OS-level control scheme like SELinux or AppLocker.
Watching a 15 minute selinux tutorial video will give you a moat ahead of 90% of the community but it won't matter because management kind of agrees that anything that slows you down has to go, and security policy is ultimately just a type of insurance rather than a revenue generating activity. Disabling selinux reduces cost today so we might as well go for it.
I do think it is worth having on any public webserver since it's only a matter of time before your app gets popped and you want that sucker in jail, but I gave up on internal servers a long time ago.
As an ops person, is it not your profession to make applications work in the real world with firewalls, load balancing, certificates, and mandatory access control? I have never worked somewhere where that wasn't part of the sysadmin/ops/SRE/devops role. Where devs do it, it's only because you're small enough to lack the specialists.
Yeah, when I was an infrastructure engineer, this was definitely part of the work I was expected to do, though eventually I turned it into educating and supporting developers in understanding security technologies and leveraging them for their application development. But that's just because I wanted to do it that way.
Typically, developers have to tell the operations/sys admin/devops people a lot of things to get their application running in test/pre-production and production. Here are some examples:
- Network ports the service listens on
- What security permissions does the application need
- What commands have to be run so the application starts
- What platform does the service use? Java, Node, C#, C/C++, Go, something else?
- What Git repository or repositories contain the service's code?
- How does the build work?
- What needs to be deployed to the machine?
- Any configuration changes the application needs
There are also a lot of joint decisions where the operations engineer and the software engineers have to work together. Here are some examples:
- What cloud will the team use? (AWS, Azure, GCE, etc.)?
- What cloud technologies will the team use?
- What database will the team use?
- How will logs and alerts work?
- How will the on-call rotation work?
My main point is you cannot just tell an operations person to deploy something and expect good results. They will have a lot of reasonable questions and software engineers should be able to answer them.
When we onboard, say, M365, they provide a list of 50 or so URLs that we can expect their service to use. Sure, we could get the common ones out of the way on our own, but 3 months from now someone is going to click a weird button that deeplinks to some oddball one-off domain that they had lying around for some reason. It's nice of Microsoft to let us know all the URLs they are associated with in advance.
Is there any reason that you do not know what your application can be expected to connect to? You wrote the thing, right?
I work with a lot of external vendors who offer self-hosted software and a really common refrain is "it must be a network problem" but these guys are universally unable to describe their application's networking requirements.
"Tell me what outgoing ports the application needs" is what results in blank stares the most. It's embarrassingly basic, but maybe it has something to do with "senior" devs only being on the job for a mere 5-8 years. That is the amount of time good cybersec people spend on understanding security after a decade in dev (or in ops).
Ohh! I’ve been at the other side of this discussion.
It usually goes like:
“What are the outgoing ports?” “1024-65535, I mean, the app is using X language’s standard library to make an HTTPS request.”
“What are the IPs we have to whitelist?” “You can either allow app.example.com or take AWS’s IP range JSON file and allow all of those, we don’t control what IP gets assigned to AWS’s API Gateway service”
Then some cloud provider’s SA/SE gets looped in to say the same stuff to the security team.
Some exec then gets escalated and approves this as a risk.
Yeah, OS firewalls have limited use in the modern app stack. It's not just HTTP(S), you've got cache and database in there too. If any ops person asked me this question I'd take it as a bad sign. Like, you're worried about exfil on an application by application basis in your prod deployment, which I assume is all VPC'd and not SSHable? There are better ways to spend your time.
I keep SELinux enabled at all times, but it does break quite often. To stick with the Fedora example: wg-quick (WireGuard) does not work out of the box.
On openSUSE MicroOS, which employs SELinux, boot takes about 5 minutes on every kernel change because of the home relabel. I hear you that they probably do it wrong, but that's what you get with SELinux. Not enough to push me to disable SELinux, but maybe enough to avoid SELinux distributions in the future.
I agree. I have worked at various companies that use Red Hat/CentOS extensively and the only time I ever saw someone turn Selinux off was on RHEL 6. Ever since then, it has been easier and easier to use. Not saying it is perfect for everyone, but it does work and can be made to work well.
You must have been extremely lucky because I've had multiple apps just trigger endless SELinux warnings on RHEL8 (Rustdesk is an example) and I very much subscribe to the views in this article:
I'm not going to waste my time fighting SELinux to stop non-existent threats (I'm just using a desktop and I'm not a high profile target). Too many false positives and I'll just turn it off. And in my experience there are always too many false positives.
The documentation is atrocious, and usually won't say things like "label your program unconfined_t" because they don't want you to do that ever. Also, tutorials -- even RedHat's -- are always some variation of "here's how to use audit2allow." That is very much not what I want. I want to create a reusable policy that I can apply to many hosts via Ansible or as part of an RPM package I created. I've never been able to figure out how to do that because it is always drowned out by SEO spam that barely scratches the surface of practical usage.
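For what it's worth, the reusable-policy workflow the tutorials skip does exist: you keep a `.te` source under version control, build it into a `.pp` module, and ship that module to every host. A minimal sketch; "myapp" and its types are made-up names, and the build step assumes the selinux-policy-devel package:

```shell
# Write a minimal policy source for a hypothetical daemon.
cat > myapp.te <<'EOF'
policy_module(myapp, 1.0.0)

type myapp_t;
type myapp_exec_t;
init_daemon_domain(myapp_t, myapp_exec_t)

# Grow this file by reviewing real denials, e.g.:
#   ausearch -m AVC -ts recent | audit2allow -r
EOF

# On an SELinux host with selinux-policy-devel installed:
#   make -f /usr/share/selinux/devel/Makefile myapp.pp
#   semodule -i myapp.pp
# The built myapp.pp is the artifact you distribute, e.g. loaded with
# `semodule -i` from an RPM %post scriptlet or an Ansible task.
```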
It's painfully obvious to me that the people who create SELinux and its documentation live in some alternate universe where they don't do anything the way I do, so I just turn it off.
Not excusing that state of documentation by any means, but a good starting point for understanding the actual policy for me was "SELinux System Administration" (ISBN 978-1-80020-147-7).
It won't carry you all the way to applying policies via Ansible or RPM packages, but definitely took me from running random audit2allow commands to taking a more holistic view of my SELinux policies.
It also looks like a long read but if you fast-forward through chapters that aren't relevant to you (looking at you IPSEC) it isn't such a slog.
(This was in 2016.) Yes, usually the logs pointed you in the right direction, but it still made things more complicated and triggered the "lazy attitude" in many people (or, at least, in me).
I maintained policies for multiple proprietary products. It took several months to get a grasp of how SELinux works and what it wants from me. It's quite far from easy.
> it will give you the commands to fix it
Usually it's crap you should not apply to the system.
The "CIA triad" definition of security (Confidentiality, Integrity, and Availability) is most often violated by loss of the Availability component. Very often by "security" mechanisms that effectively serve as a Denial of Service attack on the users.
A system which is easy to use securely will stay secure, a system which is difficult to use securely will be insecure.
I always pushed back hard on vendors who wanted me to disable SELinux on my RHEL boxes. It's unacceptable to disable default OS security protections to make an application function. It's no different than demanding an app run as root.
Indeed, disabling SELinux is like following instructions for PHP applications and running "chmod -R 777 /var/www".
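There is also a middle ground short of disabling everything when one vendor app misbehaves: put only that app's domain into permissive mode. A sketch, where "vendorapp_t" is a hypothetical domain name:

```shell
# Per-domain permissive mode: denials for this one domain are logged but not
# enforced, while every other domain (sshd_t, httpd_t, ...) stays enforcing.
if command -v semanage >/dev/null 2>&1 && [ -e /sys/fs/selinux ]; then
  semanage permissive -a vendorapp_t  # errors harmlessly if the type is absent
  semanage permissive -l              # list currently-permissive domains
  applied=1
else
  applied=0   # not an SELinux host; shown for illustration only
fi
```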
I used to work at a payment provider and we had to deal with lots of monitoring and security stuff. Some of it was (obviously) busywork and needless checkbox filing, but other parts were genuinely useful. Setting up systems was tedious and difficult, but ultimately worthwhile and necessary.
This reminds me of my college years: the one time we used Fedora and someone accidentally set it up with SELinux enabled, we spent hours pulling our hair out trying to figure out why nothing worked, only to finally realize SELinux was the culprit and we needed to turn it off.
SELinux has been enabled on Fedora for as long as I remember.
SELinux is complex, badly documented, policy code is obscure macro incantation, and basic debugging tools often aren't installed out of the box on server distros (such as audit2allow). But for the day to day administration of systems, policies are included for distribution packages and most issues can be fixed by enabling a boolean here and there, and relabeling files.
The principles, basic admin & debugging part can be learned in a couple of hours, and when you have custom service software, you can throw it in /opt and have it run unconfined (ie: not subject to SELinux rules).
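To make the boolean-and-relabel and "/opt unconfined" points concrete, a sketch; the /opt/myapp path and the httpd boolean are illustrative, and bin_t simply leaves the binary running in whatever domain invokes it (unconfined for a normal admin login):

```shell
# Common day-to-day fixes, guarded so this is a no-op off SELinux hosts.
if command -v semanage >/dev/null 2>&1 && [ -e /sys/fs/selinux ]; then
  # Flip a distro-provided boolean persistently:
  setsebool -P httpd_can_network_connect on
  # Give custom software under /opt a persistent label, then apply it:
  semanage fcontext -a -t bin_t '/opt/myapp/bin(/.*)?'
  restorecon -Rv /opt/myapp
  tuned=1
else
  tuned=0   # no SELinux here; commands above are for illustration
fi
```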
"The principles, basic admin & debugging part can be learned in a couple of hours,"
In principle, yes. In reality I've gone looking for a resource that I could do this with and come up short.
(I am starting to get really annoyed at things where I can find a million "paste this bit to do that thing" and there are so darned many of those on the internet that any hope of finding a good resource that gets to the underlying structure such that I can figure out how to do these things myself is virtually nil. It seems like this is getting worse as the search engines continue their trend of taking your query as a vague suggestion of the sort of thing you're looking for.)
Maybe I'll write that guide, and then (fingers crossed) people will find it via search engines (before all search engines become just an LLM frontend).
Well, if you do, my email is in my profile and I'll be happy to be an advance proofreader if you're interested. I've got a couple of teens in a very similar position as well, so I can even provide multiple people's point of view. I have pretty much the exact right amount of experience for that; I've been in there, I've done things, even completed a couple of non-trivial projects (nothing amazing, but, more than just "a pad with a pocket in it"), I recognize I'm confused, I don't know where to proceed from there.
I mentioned elsewhere in the thread, I (well, my employer) picked up a copy of a book on SELinux System Administration (that's the title) and it has served just this function for me.
It won't make you an expert but it takes the voodoo out of the whole process, if that makes sense. And it is reasonably short if you skip the stuff that you're (probably) never going to configure like passing labelled traffic between hosts with IPSEC.
It's baffling to me that SELinux's UI is like...the best we can apparently do?
The underlying concepts of SELinux aren't so hard but trying to manage it in any sort of coherent way is a nightmare - up to and including the provisions in it for a network based policy server component which just never appeared.
And it sucks! In theory it does so many things we really really want, and should do more. Like I as a user have a great interest in ensuring my home directory files follow sensible markings based on their content - my SSH keys, AWS keys, or banking files all exist in different logical zones of control.
And this is a concept SELinux can handle...but the tools are just so bad at surfacing it.
Reminds me of git itself, you read about its internals it sounds easy. You start trying to figure out how to map commands in a way that makes sense, it baffles you when things break.
Because that's not their job. The NSA sponsored the development of SELinux because they needed something to solve their problems. The current state of SELinux does that. Why would they spend resources solving problems that they don't have?
Kind of a weird counter-question? Why would any business spend more effort on building a tool than is necessary to make the tool they need? The NSA doesn't care about brownie points.
I wouldn't say it's that drastic. Also, SELinux can give you a false sense of security. It's best to harden the system overall instead of relying on one security feature (however good it might be).
It's not necessary; it's a stupid dick move. cPanel was just not capable of tuning the SELinux profiles for their services; I've worked there.
My servers all run with SELinux; it's really trivial. Just the ssh client and Tailscale recipes are missing by default. SELinux gives you precise choices if something is rejected.
IMO, it's because relying on filesystem labels and compiled policies (SELinux) ended up being a poor design choice vs defining the access in easy to understand policy files (AppArmor).
AppArmor is easier to understand because it is simply less restrictive, and in that way it is a less effective solution. I would not call SELinux a poor design choice because of that. You can't do things with AppArmor that you can with SELinux.
You can make your security as granular as you like, but it's just like any other architecture in that you have to come up with good abstractions that make it usable. SELinux is simply poorly designed.
It's because SELinux wasn't really designed for "sysadmins"; it was designed for "governments", or organizations that need to meet a specific level of security as a contractual/legal requirement. SELinux came out of the NSA and is based around the Trusted Computer System Evaluation Criteria (aka the Rainbow Books) and later the Common Criteria. If you look at 'Trusted Solaris' (or IRIX, AIX) you'll see very similar systems.
Is this poor design or simply, not designed _for_you_?
I agree, it's a royal pain to manage, and it might be overkill for a small shop trying to lock down their web server. Thankfully there are other solutions, and operating systems, that may better fit your use cases.
Well put, so to rephrase: SELinux is not for most people and corporations; therefore, it is sensible to just disable it, making RHEL less secure than Debian in practice.
Very much the wrong takeaway. SELinux is absolutely for people and corporations and has been for most of its existence, and no, it doesn't make sense to disable it any more than it makes sense to run as root because it's convenient.
If you are looking for a justification to excuse bad security practices, you won't find it in the origin story of SELinux.
It's very much for people who are trying to lock down their systems. It's also very much for people who want to meet the Common Criteria. It can be both. But for some people, it's very much overkill.
I mean yeah. It's software designed for compliance. It's technically capable of any kind of restriction a bureaucrat might envision, so it's the best thing available for the kind of checkbox security needed in a regulated industry.
I find it enlightening to read what kinds of justifications the proponents of SELinux use. It's never about the quality of the software; it's about how there's more band-aid tooling to make it easier to work with, or about how it's not as bad as it was, or that it gives you all these knobs and levers to have more control. It's not what you focus on when you're serious about quality software engineering.
Imagine if we were talking about something like Gnome or the Windows 11 interface: yeah, the interface is a real pain to navigate, but we added even more menus and buttons and the rightclick menu is twice as long now, so you can do even more stuff with it, and we even added Clippy back in to help you when you get stuck!
Yes, SELinux is enormously complex and typically obtuse. However, it's difficult to imagine a much more "elegant" solution for the role SELinux serves. Linux, and Unices in general, are simply not designed for security. Indeed, the virtualization movement was largely driven by process isolation being so poor in mainstream operating systems.
SELinux is designed to fulfill two primary goals. First, to secure the messy and complicated Linux architecture. And second, to be flexible enough to accommodate (highly) complex security architectures, as well as potentially unique and/or unforeseen needs. With that in mind, it's difficult to imagine any equivalent being practically simpler and/or more elegant than SELinux.
The primary problem with SELinux is the broad lack of experience amongst users and sysadmins, opaque documentation, and primitive tooling. And in many ways, it is a vicious circle: if SELinux were used everywhere, improvements to its documentation and tooling would naturally follow.
Some variants of Unix are designed for security; OpenBSD comes to mind. And Theo is on the record eviscerating the notion that virtualization be used as a security measure. Something about complexity being counterproductive to a secure system.
You're describing the linux architecture as messy and complicated, but that describes the SELinux architecture as well; if complexity & mess are bugs that should be squashed in pursuit of security, SELinux is ill-suited to the task.
> And second, to be flexible enough to accommodate (highly) complex security architectures, as well as potentially unique and/or unforeseen needs.
>> It's technically capable of any kind of restriction a bureaucrat might envision
Sounds like we're on the same page there. Or at least looking at the same phenomenon.
Your last paragraph is definitely outside the pattern of justifications I listed, but it's not much better: you're just blaming the users. Sysadmins use all kinds of complex software to accomplish any number of delicate tasks - if the tool is well-built, they don't tend to complain that it isn't. SELinux is not. Don't blame the user when the tool's at fault.
> Some variants of Unix are designed for security; OpenBSD comes to mind.
This is fundamentally not true. Don't buy into the aggressive marketing. OpenBSD has a less secure design than pretty much any modern Linux. Their reputation for security is based on disabling things by default when it wasn't common 20 years ago, that's pretty much it.
First of all, "OpenBSD stands on the logic of its own merits"? What in the actual heck?
OpenBSD has had two remote holes in its default install and, importantly, *no mechanisms in place to restrict what can then be done*. That's not in line with a secure system.
You're vastly overstating and assuming the merits OpenBSD has, and then even worse, assuming logic exists based on that to support your position. I would say it doesn't, and I challenge you to show your work and demonstrate otherwise.
SELinux is a mature product that has seen widespread use in enterprise deployment and has real-world examples of stopping attacks that OpenBSD couldn't hope to on its best day.
If you want to go by merit and logic and not assumption and marketing, then SELinux will come out on top every time. It's actual, provable, tested security, not dreams and half-measures.
> ...Theo is on the record eviscerating the notion that virtualization be used as a security measure. Something about complexity being counterproductive to a secure system.
Hypervisors predate Unix; in fact they practically predate general-purpose operating systems as a whole. The reason hypervisors came first is because they are substantially simpler than an OS. Tens of billions of dollars have been spent on virtualization technologies because of their reliability.
Sure, a virtual machine running on a type-2 hypervisor like KVM looks like a complete mess. But the sad part is that such an architecture is easier to secure than an operating system, whether OpenBSD or otherwise. Raadt may disagree, but AWS, Azure, GCP, etc depend on hypervisors/virtualization, not OpenBSD.
> You're describing the linux architecture as messy and complicated, but that describes the SELinux architecture as well...
Well, yes. SELinux has to cope with the deficiencies of Linux. I'm not pretending otherwise.
> ...if complexity & mess are bugs that should be squashed in pursuit of security, SELinux is ill-suited to the task.
The problem is that the market settled on Linux, despite its "complexity & mess". SELinux isn't nice but it's the most concrete solution available today. Indeed, after twenty years of criticism, no one has been able to design a competent replacement or alternative.
>> It's technically capable of any kind of restriction a bureaucrat might envision
> Sounds like we're on the same page there. Or at least looking at the same phenomenon.
There is a material difference between our viewpoints. The government has systems running in a dizzying amount of configurations and environments. Moreover, government agencies and government contractors operate in a far more dangerous security environment than the vast majority of private companies. Perhaps your workplace doesn't need SELinux, but companies like Lockheed Martin or Aerojet Rocketdyne definitely do.
> ...you're just blaming the users.
Maybe it seems like splitting hairs, but I'm not blaming anyone. Rather, I was lamenting over the bad situation of things. After all, I've been there as a new sysadmin. Trying to grok SELinux for the first time is not a pleasant experience.
> Don't blame the user when the tool's at fault.
Look, if security isn't important for someone and/or their organization, then fine, don't bother. However, we have seen time and time again that compromises in supposedly "unimportant" systems end up causing quite a bit of harm.
Ultimately SELinux exists because of the shortcomings of Linux itself. Nothing has replaced SELinux because genuinely securing a Linux system is extremely difficult and it fundamentally cannot be made simple.
Yes, money has been spent, and cloud infrastructure built, on hypervisors because of their reliability, and because they are selling virtual machines. But reliability, while paramount, is not security, and the goal is to sell VMs. OpenBSD, with its focus on security over performance, is the wrong tool for the job.
> coping with the deficiencies of linux
Good designs can take a messy domain and provide a clean interface on top of it. SELinux does not.
SELinux is partly a big matrix of tags, with definable security associations between any two such tags. That's great when a bureaucrat in a defense contractor security department writes up some new policy definition - you never know what they might come up with - and I would not make the mistake of assuming security is well-informed on the internals of Linux when they write those rules. SELinux is elegantly designed to be as granular as needed to accommodate that. But that's what it's designed for: checkbox security within massive agencies.
> it's all we have
Yes, I agree, it's the only thing that can accommodate that kind of security; since few people outside regulated industries are interested in catering to it, there's not a lot of push to make something else to fill that niche.
> the user or the tool
Arguing that security is important is not relevant to justifying SELinux. I will agree though, it's definitely very hard to twist a Linux system into a shape that fits bureaucratic security policies.
And if you want to see something that is the pinnacle of design in this space, go no further than openbsd pledge and unveil. Out of band policies is an ugly way of doing this.
Sorry, I was responding to the parent's question about why one was disabled over the other. Yes, SELinux is more capable, at the cost of additional complexity. I think it's debatable how many companies need that complexity, especially outside of the federal space.
I’d bet money the main practical purpose SELinux serves is to check boxes when negotiating government contracts, in a way that’s familiar and can be called a standard.
Then in practice someone ends up writing a couple policy statements and filing a couple forms then disabling it anyway, nearly every time.
If that’s the case it doesn’t need to actually work in practice, just hypothetically.
I've never seen SELinux as a requirement for any auditing, and I've done a fair amount of auditing.
It's not the only project like it, it's the one that is most well known because it has the NSA attached and because it got incorporated into the main kernel.
It works in practice, absolutely, but most people are too intimidated or lazy to put in the effort to learn it.
Having read the article I could not find any example of what is impossible in AppArmor, just a statement repeated in various ways that SELinux is easier to provide a secure-by-default environment with the closest thing to justification being that SELinux models things with types whereas app armor deals with restrictions on specific applications. I’m sure this all makes sense to someone already well-versed in the space, but I’m left with the same question as OP.
> I could not find any example of what is impossible in AppArmor,
AppArmor is simply less granular. For example, it doesn't provide true RBAC or MLS security models. It also uses paths instead of inodes, so a hard link can be used to override some policies.
So it just depends on what the exploit or attack is trying to do. If an attacker gets root and is trying to overwrite a file, they may be able to. Maybe they can't, but they could probably still execute any code they can write and compile themselves. And perhaps they can write to other files and do damage.
SELinux and similar systems allow a lot more granularity. Programs and users can only talk to exactly what they are allowed to talk to, and maybe you want to limit access to, say, append instead of full write access.
It just allows a lot more granularity and restriction, that's the difference.
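The paths-versus-inodes point is easy to see with plain coreutils; no MAC involved here, just demonstrating that a hard link is a second name for the same inode, which is why purely path-keyed rules need explicit link handling while inode labels travel with the file:

```shell
set -eu
tmp=$(mktemp -d)
echo "sensitive" > "$tmp/protected"
ln "$tmp/protected" "$tmp/alias"     # hard link: second path, same file
ino1=$(stat -c %i "$tmp/protected")  # inode number via the original path
ino2=$(stat -c %i "$tmp/alias")      # inode number via the new path
[ "$ino1" = "$ino2" ] && echo "same inode"
# A rule written against .../protected says nothing about .../alias;
# an SELinux label is on the inode, so it applies through either name.
rm -rf "$tmp"
```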
> The link rules can get pretty granular and seem explicitly designed to prevent that scenario.
It's still an inherent weakness. No getting around that really.
> Assuming the AppArmor profile allows writing to and executing the same files. Which isn't particularly common.
I don't really want to try and come up with examples just so you can show there might be some hacky way of accomplishing something similar to what SELinux can offer - it would be missing my point.
Point is there's a lot more you can do under SELinux than AppArmor. AppArmor isn't as granular and you can't lock down a system to the same extent, period. Is it good enough? Sure. Is it better than nothing? Absolutely. Is it comparable to an optimized SELinux config? Not remotely.
Hacky way to accomplish something? Literally every example you gave of AA not being "granular" enough was flat misinformation. There are dedicated rules to prevent writing and executing the same file, prevent using hardlinks to gain privileges, and prevent overwriting a file that should be append only. No hacks here. Just facts.
> Literally every example you gave of AA not being "granular" enough was flat misinformation.
No, there was no misinformation, and this stance you're committed to defending is one of the most bizarre stances I've ever come across.
There can be no question that SELinux is significantly more granular than AppArmor any more than there is that the earth is not flat. Looking at the introductory documentation for both systems should be more than enough to make that abundantly clear to anyone.
> There are dedicated rules to prevent writing and executing the same file, prevent using hardlinks to gain privileges, and prevent overwriting a file that should be append only. No hacks here. Just facts.
So just before I put more effort into replying to you, I want to be 100% clear on your stance. If I am paraphrasing or misconstruing, please correct.
It seems like you are claiming that AppArmor's handling of hardlinks is not any sort of vulnerability or weakness, cannot be, and has never been bypassed? Is this a fair reading of your position?
My position is that you haven't demonstrated a practical example of SELinux being able to constrain a workload that AppArmor doesn't have parity with, i.e. you haven't responded to my initial question:
Can you offer some examples of things you can restrict with SELinux that you wouldn't be able to with AppArmor?
Only valid answer in the thread has been port bindings - AppArmor's network rules don't allow restricting port number, but SELinux can do that.
You tried to claim that SELinux could prevent processes from overwriting files instead of just appending to them while AppArmor could not do the same, but that statement of yours was easily disprovable -- the man page of apparmor.d shows that append-only rules are supported. If you don't want me to call your statement misinformation, then maybe invent another word because that is the only word I have to describe what you said.
> My position is that you haven't demonstrated a practical example of SELinux being able to constrain a workload that AppArmor doesn't have parity with, i.e. you haven't responded to my initial question:
I listed some of the ways AppArmor falls short, which you dismissed.
If I know an object is 3x3x3 feet, and we have a box that is 5x5x5, and another object that is 7x7x7, I don't need to thoroughly test every aspect of these items, or even see them, to know one of them won't fit in the box.
> Only valid answer in the thread has been port bindings
Not true. AppArmor lacks several of the models SELinux has, and thus, as has been said, is less granular, and thus, as has also been said, it covers less area than SELinux does. AppArmor doesn't even consider user accounts as far as policy decisions go, and you can't bind policies to user objects. You realize already what a limitation that is, right?
It's sufficient to look at the designs of both systems to see this, see where one falls short, and not need practical examples to understand.
If you want practical, real-world examples of SELinux blocking something AppArmor couldn't, as I said to someone else, a comparison of Debian and Red Hat security advisories should show this: I would expect Debian to be able to say an issue isn't a threat with AppArmor enabled far less often than Red Hat can say the same for SELinux.
But, you want a setup. OK. Does AppArmor allow you to basically take root out of the equation entirely, by assigning only the capabilities a user needs to run specific programs (e.g. binding a port under 1024) to a non-root account? Does it then allow severely limiting the root account so it can't really do anything, making 'getting root' pointless because you can eliminate the entire concept of an all-powerful account? No, it doesn't, and there is plenty more it doesn't allow, because it's a simpler and more limited system by design.
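As an aside, outside of either MAC system, plain Linux file capabilities illustrate the low-port primitive being discussed here: granting just the bind-below-1024 power to one binary, with no other root privileges attached (the binary path below is hypothetical):

```
# grant only CAP_NET_BIND_SERVICE, so a non-root user can bind port 80
setcap 'cap_net_bind_service=+ep' /usr/local/bin/mydaemon
getcap /usr/local/bin/mydaemon    # verify what was granted
```

What an SELinux policy adds on top of this, per the argument above, is being able to gate which domains and users may exercise such powers at all.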
> You tried to claim that SELinux could prevent processes from overwriting files instead of just appending to them while AppArmor could not do the same, but that statement of yours was easily disprovable
You're right, this was my mistake. AppArmor either didn't have that functionality the last time I really played with it, or I forgot it had it. That's a bad example, sure, but the overall point is still perfectly valid.
> If you don't want me to call your statement misinformation, then maybe invent another word because that is the only word I have to describe what you said.
As far as AppArmor being able to enforce append only functionality, sure. As far as anything else, not so.
I was surprised by his praise of MCS. We ran into it when reusing the same podman volume across containers. It's been a couple of years already, but it was not really explained in the documentation, only in a blog post by a RH employee. One weird thing is that the labels are random, but the range of possible values is rather small, so a determined attacker could brute-force them. Also, we always had a mix of files with and without MCS labels on the volume. IIRC moving or copying files led to different results. It's not clear to me why a copy should be protected differently than a moved file; they seem of similar sensitivity to me.
It's been a while and we hacked around it; I don't remember how. At least it wasn't the usual #1 solution: disabling SELinux altogether.
At least as far as the filesystem labels are concerned, the designers of SELinux consciously chose to use inode-based labels instead of path-based policies because the latter can be dodged via hard links. For this reason, it's best to disallow hard linking when using AppArmor, while such a restriction is unnecessary under SELinux.
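The distinction is easy to see on any Linux box, no SELinux required: a hard link is just a second name for the same inode, so an inode-attached label follows the data wherever it is linked, while a path-based rule on the original name says nothing about the alias. A minimal illustration:

```shell
# create a file and a second name (hard link) for it
tmp=$(mktemp -d)
echo secret > "$tmp/orig"
ln "$tmp/orig" "$tmp/alias"

# both names resolve to the same inode, so an inode-based label (SELinux)
# covers both, while a path rule written against "orig" misses "alias"
stat -c %i "$tmp/orig" "$tmp/alias"
```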
I just feel SELinux adds too much burden for sysadmins. I use CentOS + SELinux on one of my VPSes and it's already painful. I was a sysadmin in university labs for some years. I did what I think is reasonable: set up a firewall, limit root access, never trust lab servers to the extent that I'd forward my SSH agent to them... But I don't want users coming to me every time they want to run a custom / proprietary program, with me spending hours writing and debugging MAC rules.
And I don't agree with the article that containers do not add security. Container runtimes implement namespace isolation, seccomp filters, etc., which reduce the attack surface compared to running the software directly on the host OS. More importantly for this discussion, they are convenient for sysadmins.
There is no perfect security anyway. And I don't sacrifice convenience for national security level security :)
The k8s nodes in Oracle's system ship with SELinux (permissive mode). One of those nodes was extremely slow, and we found out it was due to SELinux. We had to turn SELinux off completely and reboot the machine, and, well, our pod start time dropped from 5 minutes to a few seconds.
That much difference in start time with no other changes would suggest to me something (or multiple somethings) calling out in a way blocked by SELinux, and timing out rather than failing quickly. You might want to check that you don't have any undesirable calling-home going on from some of your containers.
I think the article summarises the situation well:
> The policy language and tooling is cumbersome, obtuse, and is about as appealing as filling out tax forms.
If a security framework is so terribly complex and hard to use, then people won’t use it.
OTOH, look at how OpenBSD or OpenSSH approach security: simple primitives which are well documented and easy to understand.
The only reason SELinux even works in the few scenarios where it does is that the operator had an immense amount of resources to pour into it. This itself is another sign of how bad the design is: it's so complex that no small team of humans has ever been able to use it.
Pledge and unveil make a ton of sense, because they move the responsibility to the developer, who should know the application better than the systems administrator.
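For readers unfamiliar with the OpenBSD model, this is roughly what that developer opt-in looks like in practice. A sketch (OpenBSD-only, so it won't build elsewhere; the path and promise strings are invented for illustration):

```c
/* OpenBSD-only sketch: the developer declares, up front, what the
 * process may touch; the kernel kills it on any violation. */
#include <err.h>
#include <unistd.h>

int main(void) {
    /* filesystem visibility: only /var/www, read-only */
    if (unveil("/var/www", "r") == -1)
        err(1, "unveil");
    if (unveil(NULL, NULL) == -1)   /* lock out further unveil() calls */
        err(1, "unveil");
    /* syscall classes: stdio plus read-only file access, nothing else */
    if (pledge("stdio rpath", NULL) == -1)
        err(1, "pledge");
    /* from here on, even fully compromised code in this process cannot
     * open sockets, exec, or write anywhere */
    return 0;
}
```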
Sometimes, when the developers make a mistake, which is unavoidable in a large project, it is nice to be able to lock down applications as the administrator. I just don't think SELinux is the right tool, because the chance of you making a mistake in the configuration is pretty high. The functionality is there, but it needs to be easier to write policies, and maybe that comes at the cost of some flexibility.
> the developer who should know the application better than the systems administrator
On the other hand, the administrator knows their system better than the developer. There could be certain network connections or file paths that you want to block on one system but not on another.
The OpenBSD approach isn't even in the same league. Not only is it developer opt-in, it is also limited to enforcing or restricting syscalls; that's it. If you have a root RCE in, say, sshd, pledge won't help you. SELinux will.
The infra team at my work is now keeping it on for new EL9 installs due to pressure from the security team, but for the past 10-15 years they kept it disabled.
I'm hoping it sticks. Just check audit logs when you get an error, it is not that hard, right?
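For anyone landing here, the loop really is short on an EL host. A sketch, assuming the audit tooling (policycoreutils and friends) is installed; the module name below is made up:

```
# show recent denials, with an explanation of each
ausearch -m AVC -ts recent | audit2why

# if the denied behavior is legitimate, generate and load a local module
ausearch -m AVC -ts recent | audit2allow -M my_local_policy
semodule -i my_local_policy.pp
```

As noted elsewhere in the thread, read what audit2allow generates before loading it; the rules it proposes can be much broader than what you actually need.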
SELinux suffers from a reputation problem. It gained that reputation early on, while default policies were still very immature and overly restrictive.
One crucial change for the better was leaving third-party software in a permissive state. From that point onwards, disabling SELinux is cargo-cult sysadmin'ing.
SELinux is not hard if you understand its basic principles. But no one bothers, because SELinux is the bogeyman.
Yes, writing policies means getting knee-deep in macros, and it's hard because many services try to access anything and everything. But almost no one needs to write a policy.
At most you need to tell SELinux that some non-default directory should have some label. That's not hard.
But that's exactly what I would like to do! I've never seen a real guide for how to set up a policy for a custom daemon I wrote myself, or for software that doesn't ship with a policy.
It's true that there is a lack of simplified documentation. But that lack is also the result of the folks that would otherwise contribute to such documentation not even giving SELinux a chance.
Many years ago I decided to face the Bogeyman and went from knowing very little about SELinux to writing a policy from scratch in about a month. The policy is simple enough (but realistic) that it might help in the absence of a guide:
> At most you need to tell SELinux that some non-default directory should have some label. That's not hard.
In my experience, it's not just directory labels ("semanage fcontext -a -e ..." and friends). You also need once in a while to set some booleans ("semanage boolean ..."). Yes, it's not hard once you know about it.
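The boolean side of that, concretely (the httpd boolean below is a stock example from the Red Hat targeted policy; which tunables exist depends on the policy your distro ships):

```
# list tunables and their current values
getsebool -a | grep httpd

# persistently flip one, e.g. let httpd make outbound network connections
setsebool -P httpd_can_network_connect on
```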
When you are working in an industry where people's lives are at stake, then it matters. It matters less if your business is just selling internet widgets or whatnot.
Because pretty much everyone on the internet tells you to disable SELinux instead of trying to understand it. I'm always rolling my eyes when I open some deployment instruction for RHEL (clones) and they have as step one: Disable SELinux.
Few will instead read the RHEL provided documentation. Then they could maybe figure out whether there's simply a tunable (getsebool -a) which would enable the desired behavior, or if properly labeling files (semanage fcontext / restorecon) would do it, or even take the steps to add to an existing policy to allow for a specific scenario which somehow was not implemented. Even adding your own policies "from scratch" is certainly doable and provides a great safety net especially for networked applications.
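The labeling case mentioned above, sketched for a non-default web root (path and type are the stock httpd example; adjust for your service):

```
# record that /srv/www should carry the httpd content label...
semanage fcontext -a -t httpd_sys_content_t '/srv/www(/.*)?'

# ...and apply it to the files already there
restorecon -Rv /srv/www
```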
Anyway... we all know disabling security or not implementing it in the first place can really save you a lot of time. At least in the short run.
> I’ve never seen one where disabling SELinux wasn’t a normal part of provisioning a server
This is so funny, because whenever I suggest Fedora Silverblue to a moderately experienced Linux user who wants a simple distro, the first thing I do is recommend switching SELinux to permissive mode, and I get a bunch of comments hand-wringing about how you shouldn't do that.
It's almost like a silent filter working in the background of your OS that doesn't even tell you when it blocks something is a pretty user hostile feature and no one wants to learn how to speak SELinux so they can effectively use it.
Sometimes it seems like Linux people don't want others using it. Even when they belong to evangelist platforms, they like to create huge barriers for entry and then blame new users for not "getting it."
It's a symptom.
It seems that label-based MAC is too tightly coupled with other parts of the system. With path-based MAC, there's no need to disable it (entirely), because each part has its own separate, strict scope.
I was expecting to see something about how Debian's updates are slow. Instead I learned something about SELinux, which is cool. However, I don't think it's fair to extrapolate from this that Debian is less secure in general. A case has been made here that Debian is less secure for containers and server usage. For desktop users who just want sandboxed applications, I don't think Red Hat's SELinux implementation does much to protect them.
Sidenote: I don't like the implication that community-driven projects are inherently less secure.
> Lack of Resources: Debian as a community-driven project lacks the resources to develop and maintain comprehensive security policies comparable to those provided by Red Hat.
> It's hard to completely dismiss the idea that SELinux was a NSA plot to keep userspace capabilities out of reach on consumer OSes.
It should be trivial to dismiss given the widespread usage and real world advantages it provides.
And no, a single use server doesn't make SELinux useless. It still means SELinux can lock down whatever services are offered on that box better than pretty much anything else can.
Indeed, since the dawn of virtualization and automated deployment, shared servers are a legacy behavior. Well, in Debian's world, at least: with RHEL, you may pay per instance, so there is a financial incentive to share said instances.
Ergo, RHEL and friends are inherently less secure than Debian.
SELinux still offers a lot of additional protection in the case of RCE. There are literal examples of it working in the wild, e.g.
For several versions of the OS, this worked quite well, but once dual-sim devices started coming out, this became more problematic. Furthermore, when SELinux became common on Android, this became more problematic since the radio SELinux context that rild started with was too restrictive for the implant to function. - RoidRage Bootstrap Methods (https://wikileaks.org/ciav7p1/cms/page_28049453.html)
If you look at the detail pages, you’ll see that “not yet assigned” doesn’t mean that a fix hasn’t been implemented yet. But you are right that not all CVEs get fixed as quickly as I claimed. However, my experience has been that high-profile ones that surface in tech news usually are.
> Sidenote: I don't like the implication that community-driven projects are inherently less secure.
I don't like it either, but it may be true anyway. Although I don't think it would be resources so much as focus. The Debian community is not that small.
Yup. I love Debian and use it on all my home computers. I think the author hit it on the head when he described the security as inconsistent. Some maintainers put a great deal of thought into the security implications of the software they are packaging, including contributing to the AppArmor profile. Others ignore it, and others yet are openly opposed to it.
RedHat can declare that everything on the system is going to have SELinux policies following consistent guidelines on what to lock down, and all employees will work with the security team to make this happen. That is harder to do in a community driven project like Debian where ownership and work is widely distributed and entirely voluntary. It can really only happen when the goals are already a strong part of the culture and there is buy-in for specific rules to achieve those goals. For example, Debian's strong free-software requirements have been there from the beginning and so most Debian volunteers are self-selected to agree with or at least tolerate them, and even that has frequent arguments. Security culture is much more mixed, and there are a lot of people in the free software community who think that security starts and ends with fixing bugs when they are found, and push back hard on suggestions that anything more is needed. It is going to take a long time to change that culture.
I don't like the implication either. And I agree with you that focus is different. It seems unfair to compare Debian and Red Hat this way. One is a "bottom-up" DIY distro where you can start with almost a kernel and basic userspace and build up. The other is a more mature product targeted at commercial, public-facing infrastructure.
The former strongly implies that, if you're using it for the latter case, then you really had better know what you're doing. But this capability/competence versus task-fit gets glossed over in the paragraph where the author basically says: because Red Hat chose to be a bag of dicks, jumping ship to Debian is the "logical move". It isn't if you don't know what you are doing. And it's sad that RH exited this space leaving a civil cybersecurity hole. The lack of a truly Free and "OOB secure" OS seems the case in point.
There are other reasons to doubt the security of Debian, but "you're using it wrong" isn't the best one to discuss.
> Sidenote: I don't like the implication that community-driven projects are inherently less secure.
As a heavy open source contributor, I don't like it either. But I'd be kidding myself if I thought volunteers approach all aspects of software development with the same rigor as someone doing it professionally. I'm guilty of that myself; I do the things I find fun, and often don't do the things I find tedious (or have to force myself to do them because I know that future-me will be pissed off at present-me if I don't).
Still, though, there are plenty of for-profit organizations out there that don't feel it's cost-effective to be rigorous about security or some other thing. And many (most?) developers and ops people are evaluated not on how bug-free and secure their work product is, but by how quickly it gets done and shipped to customers.
> For desktop users who just want sandboxed applications, I don't think Red Hat's SELinux implementation does much to protect them
Does, like, anything on mainstream Linux distributions really sandbox applications by default?
Let's say I run a browser, a mail client, Signal, Discord, whatever on my laptop. If one of them has a code execution vulnerability, does anything prevent that app from reading/writing all of my home directory, take screenshots, send keystrokes to other applications etc?
I haven't used anything but Linux on my laptops and PCs for at least a decade, and I genuinely don't know the answer. Back when I started with Linux, the answer was surely "no", but maybe something has improved in this regard?
Flatpak apps are sandboxed to some degree, though it is pretty common for them to request access to a bunch of locations they don't really need, so that the developer doesn't have to make any code changes from the non-Flatpak version.
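You can at least inspect, and tighten, those requested permissions per app; a sketch with a made-up app ID:

```
# show what the app's manifest asks for
flatpak info --show-permissions com.example.App

# narrow it for your user: drop blanket home-directory access
flatpak override --user --nofilesystem=home com.example.App
```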
I don't know much about the specifics, but I think Wayland fixes a lot of the security problems related to keylogging and screenshotting.
> > Lack of Resources: Debian as a community-driven project lacks the resources to develop and maintain comprehensive security policies comparable to those provided by Red Hat.
Given that Google uses Debian internally for their workstations [1], employs a number of Debian developers [2], and has discovered and fixed security issues in Debian [3], I find this argument to be entirely disingenuous.
Sure, Red Hat has a well funded security team. But so does Google, and all of the other Debian users in "big tech".
I disagree that it's disingenuous. I would love to see Google and other corporations that make use of Debian fund the development of good default AppArmor profiles for many common daemons. Right now they simply don't exist and users are left to fend for themselves.
The point made in the article is that security is hard and often thankless work. So it's not something that's conducive to volunteers doing in their free time often. It does take funding to move the needle on this here, and I think Red Hat is proof of that.
> Containers are increasingly the preferred method for developers to deploy their software – myself included. A common misconception is that if you run something in a container, it’s inherently secure. This is absolutely not true. Containers by themselves do not solve a security problem. They solve a software distribution problem. They give a false impression of security to those that run them.
To the extent that containers are a software distribution method outside of a single authority, they are a security nightmare. They are the exact equivalent of shipping a developer's laptop off to the datacenter and replicating it as a production image.
> They are the exact equivalent of shipping a developer's laptop off to the datacenter and replicating it as a production image.
If you're building your containers on a developer laptop and then pushing them to the registry from there, yes.
You can also not do that and instead have all builds happen on a CI server that isn't ever touched directly by anyone, like you should really be doing to build any artifact that gets deployed to production, container or otherwise.
The proviso could have been read either way, and your claim that it's an exact equivalent of shipping off a developer laptop makes no sense if what you meant was "you're downloading untrusted code from strangers". I read it first the way you apparently meant it but chose to respond to the meaning that made your second sentence make sense rather than the one that made it a non sequitur.
Using images from untrusted sources is a not-quite-exact equivalent of downloading code directly from npm and shipping it off to production.
Depends how you set it up. Mostly people doing this properly would build their own images in a CI environment under their control. At least that's how I do it.
The reason Docker containers are absolutely everywhere is that they're a convenient way to ship software that skirts around the fact that most Linux distributions are spaghetti balls of needless complexity, with distribution- and version-specific crap that you need to deal with.
Back in the day I had to package up my software as an rpm to give it to our ops department who would then use stuff like puppet to update our servers. I also got exposed to a bit of puppet in the process. Not a thing anymore. Docker is vastly easier to deal with.
From a security point of view, the most secure way to run Docker containers is some kind of immutable OS that only runs containers, which is probably neither Red Hat nor Debian based, because having package managers on an immutable OS is kind of redundant. That is more or less what most cloud providers do to power their various Docker-capable services. And of course the OS is typically running on a VM inside another OS that is probably also immutable.
Docker removed the need for having people customize their servers in any way. Or even having people around with skills to do things like that.
Being container focused also changes the security problem from protecting the OS from the container to protecting the container from the OS. You don't want the OS compromised and doing things it shouldn't be doing that might compromise the one thing it is supposed to be doing: running your docker containers. Literally the only valuable thing it contains is that container.
And it indeed matters how you build and manage those.
> They are the exact equivalent of shipping a developer's laptop off to the datacenter and replicating it as a production image.
I hear that a lot, but it's not really true, or it is true only if the developer created the image manually. Does anyone do that?
As soon as you use a Dockerfile you have reproducible builds, allowing you to use a different base image, or even perform the installation without containers at all.
> As soon as you use a Dockerfile you have reproducible builds
That is extremely optimistic. As soon as you do anything involving an update - `apt-get update` or similar - it's not reproducible any more, and of course you do need to do those things in most images.
And if you don't need to do that, you can probably avoid doing the whole Dockerfile thing in the first place (although that may not be so easy if you're not set up for it).
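The usual partial mitigations are pinning the base image by digest and pinning package versions; a sketch (the digest and version strings are placeholders, and even this only narrows the window, since mirrors eventually drop old package versions):

```
# pin the base image by immutable digest rather than a mutable tag
FROM debian:bookworm-slim@sha256:<digest-of-a-known-build>

# pin package versions so apt installs the same bits on each build
RUN apt-get update \
 && apt-get install -y --no-install-recommends curl=<known-version> \
 && rm -rf /var/lib/apt/lists/*
```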
> As soon as you use a Dockerfile you have reproducible builds
Depends on how you build your containers. If you have a build step, which pulls your dependencies from a trusted source and versions are locked down, then MAYBE. I've seen developers have all that in place, then in their deployable container they start by doing "apt-get update && apt-get upgrade" in the Dockerfile and install some runtime dependency that way.
There is also another problem, which I believe is what OP is referring to: people will write docker-compose files, Helm charts, and what-have-you, which pull down random images from Docker Hub, never to upgrade them, because that breaks something, or because "it's a container, it's secure". Fair enough, if you pull down the official MariaDB image, or Keycloak, you still need to upgrade them, and often, but they are mostly trustworthy. But what happens when your service depends on an image created by some dude in Pakistan in 2017 as part of his studies, and it has never been upgraded?
I had this discussion with a large client. They were upset that we didn't patch the OS right when new security updates came out, which to me was pointless when they shipped a container with a pre-release version of Tomcat 8 that was already 18 months out of date and had known security flaws.
I always saw this as a mistake. We basically all use containers (well, here; in the real world I almost never encounter devs who even know what they are, let alone have worked with them), and a lot of these containers are made by vendors and maintainers, so why can't containers have this rigidity and thus be secure by default? Solve both distribution and security at the same time. It would be easier to set rules for containers, since they have restricted functionality, so at least you'd know that if you fire up application Bla, it is rock solid by default, instead of having to assume it is worthless security-wise. As much of what's on Docker Hub is commercial, wouldn't this be a pretty basic demand to have?
Wait, the author is criticizing Debian for not having as heavy-handed a system as SELinux enabled out of the box? That thing that causes so much pain that everyone disables it immediately unless they have fairly extreme security needs?
Never heard of anyone suggesting to disable AppArmor.
As for the efficacy of the two, I'm less interested in the feature sets of the two. I think what'd be more interesting is replicate exploitation scenarios with their default policies and see which subsystem succeeds in mitigating the exploit and which fail.
> I'm less interested in the feature sets of the two. I think what'd be more interesting is replicate exploitation scenarios with their default policies and see which subsystem succeeds in mitigating the exploit and which fail.
The feature set is exactly what dictates which systems are more likely to prevent exploitation, though.
AppArmor simply isn't as granular, and it is simpler to bypass (e.g. by making a hard link to a file to get around an AppArmor path-based policy).
AppArmor may be good enough in many situations, but SELinux gives you much more control, so you can get much closer to complete protection against unknown situations.
Still don't care about the feature set, show me a useful benchmark. If SELinux prevents a hypothetical that's great, but security is a tradeoff and I'd opt for convenience and simplicity to sacrifice potentially negligible risk.
For example, I'm seeing that SELinux didn't mitigate ShellShock where AppArmor did (despite being an attack vector that isn't really common). But these are the things I want to know.
> Still don't care about the feature set, show me a useful benchmark.
"I don't care to understand the differences in security these systems provide."
That's what you're saying here. Which means you're not going to be evaluating these systems in any way that matters.
> If SELinux prevents a hypothetical that's great, but security is a tradeoff and I'd opt for convenience and simplicity to sacrifice potentially negligible risk.
You're asking for benchmarks, but already you're willing to dismiss the results, because you don't really care about them; you care about "convenience and simplicity" and security being good enough, right?
In that case, sure AppArmor is good enough for most people.
But so is a flimsy chain lock. If you want to actually secure something, and not just deter, you'd want a deadbolt and a sturdy door, right?
The point is, the feature sets matter, precisely because so many attacks are hypothetical. You have to see and speculate what attackers might do and have things in place to prevent that. SELinux facilitates that a lot more than AppArmor does.
That's the fundamental point here, and not something you will likely find a nice graph to support. If you ask an AI to generate one for you it might be able to though. If you really still think you need it.
> For example, I'm seeing that SELinux didn't mitigate ShellShock where AppArmor did (despite being an attack vector that isn't really common).
Well that's nonsense. SELinux can protect against anything AppArmor can since AppArmor provides only a subset of features.
I sense you might not be interested in this since you've said you just want benchmarks, but here's a page from Red Hat that not only explains how SELinux prevents Shellshock from doing damage, but even walks you through exploiting it on an SELinux-enabled system so you can test it yourself [1]. There's also a blog post from Dan Walsh explaining how SELinux constrains Shellshock [2].
I'm also less confident AppArmor is as effective against container escape exploits like these [3] [4]
> But these are the things I want to know.
Mmm. Well, there are no benchmarks. But if you do the research you will find the examples you want. SELinux will have substantially more examples because it's in wider use, thanks to Red Hat and Android.
If you really want to compare, look at the Debian and Red Hat security advisories: count how many Red Hat has saying that SELinux provides protection if enabled, versus how many Debian and Ubuntu have where they are unable to say the same for AppArmor.
But really, again, you should bother to understand the feature sets and the actual technology. The approach you want, looking for benchmarks, is not necessarily going to be accurate or representative, and I say that confident that your methodology would still show SELinux as the better option (in terms of security, not usability/convenience).
The key is that you should strive to understand systems you want to use, not just look for a blog article that can provide justification for an intuition or desire.
> Well that's nonsense. SELinux can protect against anything AppArmor can since AppArmor provides only a subset of features.
This is from Dan Walsh who is quite famous in the SELinux community [1]. See the part "Why didn't SELinux block it?"
Sure, you can configure SELinux with a stricter policy, but virtually nobody does that in practice; they use the defaults, mostly because custom SELinux policies are really only written to fix stuff the default policies break, usually via audit2allow or whatever it's called.
> I sense you might not be interested in this since you've said you just want benchmarks
If the EDR/antivirus industry can have various test suites and testing organisations, the same methodologies can apply to OS security subsystems. Take default RH/SELinux, Ubuntu/Debian & AppArmor, and for shits and giggles OpenBSD, see how they fare against exploits and vulnerabilities harvested over the years, and make reproducible labs. That's what I'm asking; I'm not going to scour security advisories to do tit-for-tat comparisons, I would rather see a well-thought-out approach to this.
We do this kind of comprehensive benchmarking with all sorts of software, such as compression, cryptographic libraries, compilers / languages, etc. Traditionally this would've been harder in the days when virtualisation and utility computing were nascent, but the infrastructure part is pretty achievable these days. Someone just needs to expend the effort (and I'm not volunteering).
> Dan Walsh who is quite famous in the SELinux community
Yes, I know who Dan Walsh is, it's part of why I linked to his article in this case.
> See the part "Why didn't SELinux block it?"
Yes... did you understand it? Or did you just read the title and assume you were right?
SELinux didn't prevent the exploit code from running, which is exactly normal and exactly the same as AppArmor.
What it did was prevent the code from being able to do anything.
So in that sense, SELinux absolutely stopped Shellshock to the same extent AppArmor would have.
> If the EDR/antivirus industry can have various test suites and testing organisations, the same methodologies can apply to OS security subsystems.
There's fundamentally less need, because these are open source systems and the design is sufficient to judge. You're just being incredibly lazy and justifying not wanting to learn these designs, or you otherwise don't want to put in the work to do so.
Imagine you see the design for a speedboat and a submarine. It's sufficient to look at the designs alone to see that one cannot operate submerged for an extended period of time. There is no need for tests to demonstrate that point.
The reason for EDR/antivirus test suites is because most solutions were closed source black magic and some kind of comparison is needed. That isn't as true in this case.
> Let's see what happens with your default RH/SELinux, Ubuntu/Debian & AppArmor, and for shits and giggles OpenBSD, and see how they fare against exploits and vulnerabilities harvested over the years, and make reproducible labs.
This already exists in the form of security advisories; as I said, you are just being very lazy. Feel free to check, or ask an LLM to do it for you: ask how many Debian security advisories were mitigated by AppArmor vs how many Red Hat says were mitigated by SELinux. OpenBSD doesn't apply since it has no similar system.
> I'm not going to scour security advisories to do tit for tat comparisons, I would rather see a well thought approach to this.
Right, as I said you're being lazy. The information exists, you just don't want to put in the effort, you want a nice blog post you can blindly refer to.
> Just need someone to expend the effort (and I'm not volunteering).
No one will because there is no need. The designs of these systems show what they can and cannot restrict and that is enough. You should put in, at least, a minimum of effort to understand why these abstract systems are not really comparable to antivirus solutions and why the testsuites you think make sense, don't.
Can't say that I haven't disabled AppArmor on a server or two to make things work short term. Fixing AppArmor policies is a bit easier than fixing SELinux policies, though.
Not everyone. I suspect those that don't aren't as vocal about it as those that do.
As a datum, I have a laptop that's running Fedora, the install is on the order of ten years old (routinely upgraded to new releases), and it's never had SELinux disabled.
I fell in love with this article at this sentence:
> Still. Many in the open source community have interpreted Red Hat’s decision for what it really was: A dick move.
I've had a short essay in draft for a while about the difficulty of a small business trying to make money using The Red Hat Model (https://opencoreventures.com/blog/2023-04-red-hat-model-only...). Red Hat seem like an outlier who're doing well with that model, but smaller places like Sidero or Bitfield had to find other ways to monetise their open source efforts, and sometimes that had pushback from the community.
Red Hat, though, were acquired by IBM, and IBM made it harder for an otherwise thriving ecosystem to exist. Not impossible, but harder. IBM makes money hand over fist (billions according to https://www.ibm.com/annualreport/). Was there really a reason to make Red Hat harder to redistribute? The interviews I've read come down to "our Red Hat team works hard and we don't want to give that away to low effort projects", though if you've got an interview with a different perspective I'd love to read it.
IBM acquired Red Hat in 2019, at a point when IBM's revenue had been stuck at "why isn't our revenue going back up?" for eight years straight, in the hopes that controlling Red Hat would let them squeeze dollars out of it by making it a premium offering for multinationals and governments. Looking at their revenue since, there's a small trend upward. So was there a reason? Unfortunately, yes. Did it work out? That's harder to say, but IBM themselves would probably say yes to that one, too.
I’m pretty sure the Red Hat model is to profit off community effort while creating convoluted complications in the name of security, so they can send their highly paid consultants to your business and get paid even more.
Was it professional when they let SSH vulnerabilities exist in RHEL7, forcing perfectly usable machines to upgrade to 8 for remediation?
Don’t get me wrong, they’re the new “nobody got fired for” company (technically still the same). That doesn’t imply Debian and Ubuntu are less secure except in name. Go to Google cloud and see what CIS hardened images exist.
Your perspective is an oversimplification if not completely wrong.
This is the most bizarre thing I've read in ages, and so incredibly wrong and unrepresentative of the current state of reality that I'm having a hard time wrapping my head around it.
> Lack of Resources: Debian as a community-driven project lacks the resources to develop and maintain comprehensive security policies comparable to those provided by Red Hat.
And Linux in general has less resources to develop and maintain comprehensive security policies comparable to those provided by Microsoft.
Yet here we are, with Microsoft products so "secure" that they're insecure unless you have a PhD in b****, being so convoluted and over-built that people have to migrate away from them just to recover the actual security they used to enjoy back when they could wrap their heads around the whole stack.
If devs want things to be more secure, stop developing more acronyms and just educate the userbase on the acronyms they already have.
After letting Russians waltz into their C-level emails (as well as those of US gov't 365 users) and steal Windows source code using basic password spraying for over six months before patching the hole, "Microsoft" and "secure" shouldn't be in the same sentence ever again.
Wasn't caught for two months and wasn't fixed until months after. How is Microsoft allowed anywhere *near* the bidding process for gov contracts anymore?
Same. People will always scream "it's not that hard just RTFM", but it's actually quite complex AND unique to RedHat's world. So of course when you are in a company that has a fleet of a mix of Ubuntu and Debian and RedHat, which is more common than you'd think, it becomes the oddball server nobody likes working on. And nobody wants to spend hours learning it in and out for just that. I don't think I ever worked at a shop that didn't end up disabling it completely out of frustration.
I never had any real problems with selinux, I've been using CentOS since version 5 something and with even just a cursory understanding of selinux I got by. Plus you could just disable it entirely by changing one setting so distro hopping for just this one thing seems a bit extreme.
The same reason why many people choose WhatsApp, Telegram, Slack, Discord over things like Signal or Matrix. They are just easier to use. It is about priorities. Maybe some day we solve the usability problems.
I get it for Matrix, but Signal really has had the same user experience as WhatsApp for years now. But anyway, your point still stands. That's why user-friendliness is an important part of security (and why Signal's work is so important for secure messaging apps).
I think it is still impossible to back up one's own messages in Signal and then restore them on another phone. It was possible on Android via root but basically impossible for unrooted phones, which is a dealbreaker for the Apple devices my friends and family use.
Signal has to provide 100% of the features and convenience of Whatsapp and some more without compromising security for it to be a viable alternative.
At most a little over one year ago, I installed Signal Desktop to open a link in a message I had received on my desktop. This is, apparently, deliberately unsupported, since the app claims that "[f]or your security, chat history isn't transferred to new linked devices". So no, the user experience of WhatsApp is miles ahead of Signal, at least if you want to use a real computer.
1. You can enable SELinux on debian if you want to.
2. I've never had a conversation with anyone who is enthusiastic about SELinux.
3. I've never run into someone who was good at explaining SELinux policies, how to create them, update them, or explain their decision process other than "well... the app seems to need to do x, so we should let it."
4. I have run into plenty of people that disable SELinux out of the gate to avoid the headache of it.
5. I have run into plenty of people that avoid Redhat distros.
This is akin to someone writing an article about how Oracle and Microsoft got databases wrong because they didn't embrace some security feature that only DB2 has and that more than half of DB2 users out there think is a giant pain in the neck.
> Learn the basics of SELinux, including type enforcement, Multi-Category Security (MCS) Enforcement, and Multi-Level Security (MLS) Enforcement, with the help of some friendly cats and dogs!
It would be interesting to know if anyone has numbers on actual security issues in practice.
Complexity is generally really bad for security. It results in people working around the system or just turning it off. Security is not just "in theory" - a perfectly secure system that most users disable is an insecure system.
It reminds me a bit of the idea of making people change their password every month. Sure, in theory it reduces the time a compromised credential can be abused for. In practice, though, it means nobody can remember their password, so people start using really poor passwords and writing them down on post-it notes. The net result is much worse security practically speaking, even if it's better theoretically.
Debian uses AppArmor by default, probably because of the Canonical influence (there are more Debian developers and maintainers paid by Canonical than by RedHat).
But you can run Debian with SELinux (as well as with other LSMs, MACs, etc like Tomoyo).
At my last jobs, we disabled any of SELinux, AppArmor and Auditd on Debian/Ubuntu, just for the sake of performance. And we never detected any security issue for our usage and requirements. So I'm not an expert in this field.
Not sure what the purpose of the article, or the whole blog, is. Are you trying to influence the choice of Debian vs RHEL vs Oracle Linux somewhere? As I'm not sure, I'll stop here.
I came in expecting not to, but I completely agree with this article. I moved off RHEL after using CentOS exclusively for a decade because of the changes in 2023. I loved SELinux, it's a technology you need to sink your teeth into if you want to understand it, but it has decent tooling (like audit2why) and isn't too hard to modify to get working if needed (SELinux booleans are a powerful way to modify base policies without having to recompile).
I do a lot in Kubernetes, and there's been more than one CVE with a line like "Affects all versions of docker/containerd, unless running SELinux," which gave me a lot of reassurance that the effort put into making SELinux work was worth it.
Now that I'm on Debian, I'm slowly building a new set of policies for the OS. Thankfully SELinux has an excellent reference policy[1] to base off of. I'm hoping my new Debian base images for my homelab & elsewhere will have a nice restrictive default SELinux policy by the end of the year. I hope there's more community effort here as well; AppArmor really can't compare to SELinux, which is absolutely necessary for proper security.
Honestly I'd love it if the wider community took another stab at replacing SELinux with an LSM that had similar functionality but better tooling and design. I'd pick it up in a heartbeat, but right now SELinux is what we have.
> I do a lot in Kubernetes, and there's been more than one CVE with a line like "Affects all versions of docker/containerd, unless running SELinux," which gave me a lot of reassurance that the effort put into making SELinux work was worth it.
I've seen this too, but I usually see AA mentioned in the same situations as an equivalent mitigation to SELinux.
I can't say I've seen the same, so I dug into it. There's a good list from RedHat[1] on CVEs SELinux mitigated. I went through them:
- CVE-2016-9962 - Bypasses AppArmor [2], mitigated by SELinux
- CVE-2022-0492 - AppArmor and seccomp also protect against it
- CVE-2019-5736 - Mixed, blocked by the default SELinux policy in RHEL (not Fedora), not blocked by the default AppArmor policy[3]
- CVE-2021-3156 - This one is not a good one for RedHat to put on the list. SELinux by default doesn't protect against it, Debian 10 at the time had a Linux security feature enabled (fs.protected_symlinks) that helped mitigate it, and additionally CVE-2021-23240 came out which had similar effects but only occurred on SELinux systems.
- CVE-2019-9213 - Not mitigated by AppArmor, mitigated by SELinux
- CVE-2019-13272 - Not mitigated by AppArmor, not mitigated by default SELinux policy, but easy to mitigate by enabling boolean. I'd consider this a win for SELinux, but only just.
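For reference, flipping such a boolean is a one-liner. The `deny_ptrace` boolean named below is an assumption based on Fedora/RHEL policy; check `getsebool -a` on your own system to confirm it exists there:

```sh
# List ptrace-related booleans and their current values
getsebool -a | grep ptrace

# Persistently (-P) deny ptrace to confined domains,
# mitigating ptrace-based privilege escalations
setsebool -P deny_ptrace on

# Verify the new value
getsebool deny_ptrace
```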
While digging into this more, I came across this BlackHat talk[4] which really quantifies how SELinux improves security (though doesn't contrast it with AppArmor). I also came across a paper on usability of SELinux and AppArmor[5] which brings up an interesting point: If the tool is too complex, even if it's more powerful, more often than not it won't end up having better results.
That's all to say, I think if you're willing to invest a lot of time into it (say you want to make security your niche in your development career), SELinux is still the best. But I can see why many may gravitate towards AppArmor so as to not make perfect the enemy of good. That said, I still wish Debian had a choice between the two, right now SELinux isn't really doable without a lot of work.
SELinux can be frustrating without the proper background about what it is, how it works, and how it helps you. There is a surprising amount of tooling for it actually.
This is a surprising article because I kind of see this in light of the old Linux/BSD wars?
“Red Hat owned making this policy apply to most of the popular software they distribute. On Debian the users have to set everything up.” — this sentiment is directly parallel to how BSDs see themselves as providing a whole consistent operating system, Linux meanwhile just wants to ship a kernel.
“Debian doesn't care enough about security.” — says everyone who runs OpenBSD.
“With SELinux policies, containers are isolated from the system.” — you could almost say they are “in jail,” maybe we could package this up as a syscall, hm, but what to call it...
IDK what BSD looks like in 2024, but in ~2004 you would have seen this exact same article about Debian, but comparing to FreeBSD instead of RHEL.
"On Debian the users have to set everything up.” — this sentiment is directly parallel to how BSDs see themselves as providing a whole consistent operating system, Linux meanwhile just wants to ship a kernel."
Linux is just an OS kernel. If you want a consistent OS, use RHEL, Ubuntu, Fedora, Android or something else.
> The ugly truth is that security is hard. It’s tedious. Unpleasant. And requires a lot of work to get right.
I use Red Hat-based distributions at work and Debian/Ubuntu in my personal life. A few years ago, I bit the bullet and learned enough of SELinux to run my workstation and all my servers in enforcing mode. The author of this article is right to credit Red Hat for all the work they’ve done to provide users with default SELinux policies that work out of the box. At one time, I considered installing SELinux on my Debian system and modifying Red Hat’s policies to work with the Debian packages. I realised how much work would be involved so I chose the path of least resistance: AppArmor (which does the job).
Debian is a desktop operating system for human persons who are responsible for their computers. Red Hat is an enterprise operating system for corporate persons, where the human persons using the computers are not responsible for or in control of them. It's apples and oranges.
These aren't "attack surfaces left exposed" this is "users allowed to control their own computer and decide for themselves". And I notice the vast majority of this complaint about insecurity is not about running applications on Debian or RHEL, but instead about the systems built up for running things containerized and trying to mitigate all the problems that causes. Debian concentrates more on actually having an OS you can run applications on rather than a system for deploying containers.
>In the end, the choice between Debian and Red Hat isn’t just about corporate influence versus community-driven development. It’s also a choice between a system that assumes the best and one that prepares for the worst. Unfortunately in today’s highly connected world, pessimism is a necessity.
In the end it's about whether you think you should control your computer or whether someone else will control your computer. Pick appropriately for context.
I suspect Debian is used on more server installs than desktop ones. While it doesn't come with enterprise support options like RedHat it is most certainly used on servers, many of which are in corporate environments and are running multiple services (in containers often) or are otherwise multi-user.
That may be true for Debian itself (although I know a lot of people who have been running it for years as their daily system and still are to this day, including myself for 15+ years and counting), but Debian is also the base for many other distributions, including Ubuntu and its derivatives (like Mint), which are mostly used on desktops rather than servers.
If someone means “Debian and derivatives” then they should say “Debian and derivatives” not just “Debian” IMO, particularly when comparing to RedHat which also has a number of significant derivatives.
TBH I've always considered Ubuntu (and by inference its derivatives) more of an “inspired by” in relation to Debian, given it is generally closer to Testing than Stable and has many notable changes on top, more so as the bigger changes have increased over time (snaps being so ingrained that they are almost required, for one).
SELinux is a Mandatory Access Control (MAC) system. MAC is not that useful for most servers:
The real risk comes from network-facing services and they are much better protected by seccomp and cgroups, usually configured in systemd, and Debian uses that extensively.
Seccomp can even block vulnerable system calls outright; SELinux is not able to do that.
For systemd services, there's also the option to use the service unit directives that systemd provides to limit caps (CapabilityBoundingSet, NoNewPrivileges, etc), filesystem access (ProtectSystem, ReadWritePaths, etc), other "system" access (ProtectProc, PrivateUsers, RestrictNamespaces, etc) and syscalls (SystemCallFilter). I find these an easier and more direct way to harden than writing apparmor / selinux profiles.
`systemd-analyze security <service unit name>` gives a nice list of things to consider tweaking. You don't have to fix everything or pay attention to the exposure ratings, just use it as a guide.
I did this for chrony, haproxy, nginx, tor and unbound on my Debian router. I also have some timer units to run scripts to update DNS blocklists and such, which have the same kind of hardening. For the services, some of them have caveats and can't be fully hardened, eg unbound needs PrivateUsers=no because it fails if it can't find an unbound:unbound user to switch to, even if it was already started as unbound:unbound by systemd. And SystemCallFilter makes it easy to get overzealous and only allow the exact set of syscalls that you see the service making, only to have a service update or glibc update that starts making a new syscall and requires another round of debugging, so do it in moderation :)
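As a concrete sketch, a hardening drop-in along the lines described above might look like this. The directive names are real systemd options; the service name and paths are illustrative assumptions, not anyone's actual config:

```ini
# /etc/systemd/system/nginx.service.d/hardening.conf (illustrative)
[Service]
NoNewPrivileges=yes
CapabilityBoundingSet=CAP_NET_BIND_SERVICE
ProtectSystem=strict
ProtectHome=yes
ReadWritePaths=/var/log/nginx /var/cache/nginx
PrivateTmp=yes
ProtectKernelTunables=yes
RestrictNamespaces=yes
SystemCallFilter=@system-service
SystemCallErrorNumber=EPERM
```

After `systemctl daemon-reload` and a restart, `systemd-analyze security nginx.service` shows the improved exposure score. `SystemCallErrorNumber=EPERM` makes filtered syscalls fail with an error instead of killing the process, which softens the overzealous-filter problem described above.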
With the sheer volume of local exploits found in the Linux kernel, I don't really consider these SELinux/AppArmor mitigations to be that useful. Sure, they reduce the attack surface a bit, but if I actually need isolation between workloads, it's best to do it below the kernel (with a VM).
If an attacker gets execution in userspace, it's best to assume they can also get into the kernel via some 0-day local privilege escalation...
I think that both AppArmor and SELinux are unusable in practice due to the lack of good tools for generating those configurations.
There need to be better graphical tools for this, like a "profiler" that watches a process for a period of time, flags errors in the config, and incrementally adds permissions while the process is running.
In my opinion, systemd sandboxes are where it's at. [1] They are seccomp based sandboxes, but have a lot of isolation and sandboxing features that are very easy to use, and they can also be incrementally enhanced with both SELinux and AppArmor profiles.
The discussion of container security is completely lacking any mention of userns remapping, which is an excellent container security feature that's extremely easy to enable and use compared with AppArmor/SELinux (and would work extremely well in parallel with them).
That way if someone does manage to break out of a container they have the privileges of a dummy user that doesn't exist on the host, so unless they are using a kernel exploit, they don't have any privileges to be able to do any damage.
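On Docker, for example, enabling it is a small daemon-config change (the `default` value tells the daemon to create and use a `dockremap` user; a sketch, not a hardened setup). In `/etc/docker/daemon.json`:

```json
{
  "userns-remap": "default"
}
```

After restarting the daemon, root inside containers maps to an unprivileged subordinate UID range taken from `/etc/subuid` and `/etc/subgid` on the host.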
I disabled selinux after learning about dontaudit rules and having them waste my time.
That's not to say on very specific systems that need to be hardened, I do enable selinux and am glad it's an option. And if I have to use a security layer, I take the object based selinux over the path based apparmor.
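For anyone hitting the same thing: dontaudit rules silently swallow denials, so a service can fail with nothing in the audit log. They can be toggled off while debugging:

```sh
# Rebuild the policy with dontaudit rules disabled so hidden
# denials start appearing in /var/log/audit/audit.log
semodule -DB

# ...reproduce the failure and inspect the audit log...

# Rebuild with dontaudit rules enabled again
semodule -B
```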
RHEL doesn’t have any indication as to how the contrib repositories work. SELinux might be nice, but it won’t have any impact if I install a malicious package. And yes, I need contrib as I need the ability to use graphical drivers and play proprietary formats.
For my personal machines, I have an early install script that turns SELinux off and makes the system boot with mitigations=off. Personal machine, behind my own firewall etc. I'd hate to be a system administrator, and despite my existing habits, this article convinces me that, if I was, I would try to actually understand SELinux.
I'm way more worried about how a compromised xz-utils made it past the package maintainer and into the Debian repos. Mitigating supply chain attack vectors like this seem like the bigger priority by far and low hanging fruit. I don't follow Debian leadership but haven't come across any reaction or policy change to address this from them?
If you care about security, consider the security-oriented Qubes OS relying on hardware virtualization and running everything in Debian and/or Fedora VMs: https://qubes-os.org. My daily driver, can't recommend it enough.
I wonder if this article title is a deliberate reference to 'The Insecurity of OpenBSD' article which also addressed a lack of SELinux or similar systems.
Honestly? SELinux is a nice idea that's practically unusable, like many others; Solaris RBAC, a "so important security feature" that essentially no one uses for real, to name another.
Today most security breaches come from crappy applications with an immense set of dependencies, put into production because someone wants them. There is no protection for those, and adding long and painful system-level configuration is only a way to end up with badly configured systems on top.
Debian's issues are more in complex custom setups: preseed is a nightmare compared to NixOS, and that matters much more than SELinux, which is regularly disabled on most deploys anyway.
Meanwhile, everyone agreed that the convenience of magic integration with browsers and other things was more critical than security in the default config, for a password manager of all things, when Debian changed the default keepassxc package to omit the optional plugins that add attack surface. Not unavailable, just not installed by default.
I wonder how many people that agree with this nonsense position also agreed with the keepassxc nonsense position.
I've seen poorly implemented SELinux policies make a workstation unusable and fill up /var with audit.logs that are tens of gigs.
You have to do 100% coverage testing on whatever program you're using. (Good luck if you don't have the source code.) Otherwise, you don't have any guarantee that your program won't seemingly be killed randomly.
Good luck, x2, if you have some snake oil "endpoint security" that keeps overriding your SELinux policy changes.
I really don't get this hatred of systemd. Systemd solves many problems that have plagued Linux forever.
Sure, SysV-init was super simple, and it was great back in the 1980s or even the 1990s when your server ran just a handful of daemons.
But systems have gotten more complex and featureful over the years. In 2024, my standard Fedora Linux desktop has 73 daemons running in the background... Dealing with SysV's single-threaded, start-and-forget architecture is not great on modern systems.
You might say, well, why not Upstart!? Well. Upstart added a lot of complexity to the init process, while adding very little overall benefit.
Systemd added a lot of complexity, no doubt about that. But, it also added a TON of features that reduced complexity elsewhere on the system. I mean, just to rattle a few things off:
* systemctl. Holy fuck! A SINGLE command, that works across all modern Linux to control system services.
I do not miss the days of service, chkconfig, /etc/init.d/xxx start, update-rc.d, insserv, rc-update, rcconf, sysv-rc-conf, ntsysv.
And every distribution having their own special init scripts that worked in a very specific way to that distro.
A great example of this is RHEL5's sshd sysv init script, a 500!!! line shell script to start the ssh daemon... Compared to RHEL9's sshd 24 line systemd unit file.
* cron - scheduled tasks being handled by a screwball 3rd party service that had no built-in method of ensuring a cronjob would be retried in the event of a failure... systemd timers fixed that nonsense.
* at - see above
* autofs - Here's a lovely service that is stupidly complex. Whenever I have to setup autofs I get the feeling that the developers purposefully asked themselves "what can we do to make our application hard to configure?" then, once they had an answer to that question, they went back to the drawing board and asked, "Okay, configuring autofs is hard, but, what can we do to make it even more obtuse?". systemd automounts are drop-dead simple, a 5 line unit file saying what to mount and on what condition.
* Parallelization and dependencies - Systemd can start services in parallel and only after their dependencies are ready, unlike SysV’s linear approach. This isn't just for faster boot times; it's also about reliability. Ever had a service fail because another wasn’t ready? Systemd handles that for you.
* systemd has overall good built-in security! All services are started in their own cgroup and with posix capabilities and r/w access to the filesystem easily restrictable.
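To make the comparison concrete, here's roughly what the unit, timer, and automount cases above look like in systemd's declarative format. Names and paths are illustrative, not taken from any distro's actual files:

```ini
# mydaemon.service — replaces a multi-hundred-line init script
[Unit]
Description=My daemon
After=network.target

[Service]
ExecStart=/usr/sbin/mydaemon -D
Restart=on-failure

[Install]
WantedBy=multi-user.target

# backup.timer (separate file) — replaces a cron entry;
# Persistent=true runs a missed activation at the next opportunity
[Unit]
Description=Daily backup

[Timer]
OnCalendar=daily
Persistent=true

[Install]
WantedBy=timers.target

# data.automount (separate file) — replaces an autofs map;
# activates the matching data.mount unit on first access to /data
[Unit]
Description=Automount /data

[Automount]
Where=/data
TimeoutIdleSec=600

[Install]
WantedBy=multi-user.target
```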
You claim systemd is a "badly written multifunction blob" but that's mostly not true, look at `/usr/lib/systemd/systemd-*` and `/usr/bin/systemd-*`. Systemd is split out into multiple purpose-specific executables in almost every place where it makes sense to do it.
I could go on all day, but I'll stop here. I agree that the system isn't perfect, and they've made some choices I disagree with. But overall, systemd is a MASSIVE improvement over everything that came before it.
Never used this. All my wealth is still mine. No one stole it. Two decades of Linux on the desktop, some machines almost always on. If the risk is lower than one incident per two decades, it's not worth learning. Insecure Debian it is.
Would generation of SELinux policies be a good use case for LLMs?
"Generate a SELinux policy for daemon X. This daemon accesses it's config file in /etc and it's runtime data in /var/x. It listens on network. All other activities should be disabled"
While I agree the syntax of the policy is a big part of the difficulty, I think it's equally difficult for many apps/services to find out what activities it needs.
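For what it's worth, the target output of such a prompt isn't huge. A minimal type-enforcement module for a hypothetical daemon might look like the sketch below. The interface names follow the SELinux reference policy; treat this as a starting point to compile (`checkmodule`/`semodule`) and iterate on, not a working policy:

```
policy_module(mydaemon, 1.0.0)

# Domain for the daemon and a type for its executable
type mydaemon_t;
type mydaemon_exec_t;
init_daemon_domain(mydaemon_t, mydaemon_exec_t)

# Read-only access to its config under /etc
type mydaemon_conf_t;
files_config_file(mydaemon_conf_t)
allow mydaemon_t mydaemon_conf_t:file read_file_perms;

# Full management of its runtime data under /var
type mydaemon_var_t;
files_type(mydaemon_var_t)
manage_files_pattern(mydaemon_t, mydaemon_var_t, mydaemon_var_t)

# Allow listening on the network
allow mydaemon_t self:tcp_socket create_stream_socket_perms;
corenet_tcp_bind_generic_node(mydaemon_t)
```

Everything not allowed here is denied by default, which is the "all other activities should be disabled" part; the hard bit, as noted, is discovering which interfaces the app actually needs.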