Personally, I'll be keeping it disabled. I used to use SELinux, but eventually stopped: I didn't find the constant headache of figuring out whether something not working was related to SELinux to be worth it.
If you want to do something weird like allowing nginx or Apache to serve your home directories, it takes maybe five minutes to google the correct SELinux context to set on the directory and apply it; otherwise any standard installation works out of the box.
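For the Apache-serving-home-directories case specifically, the whole thing is roughly the following (a sketch assuming Fedora/RHEL-style policy; the exact type names and booleans vary by distro and policy version):

```shell
# Allow httpd to read home directories at all (policy boolean):
sudo setsebool -P httpd_enable_homedirs on

# Label per-user web content so the httpd domain may read it:
sudo semanage fcontext -a -t httpd_user_content_t '/home/[^/]+/public_html(/.*)?'

# Apply the new labels to existing files:
sudo restorecon -Rv /home
```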
There's literally no reason to turn it off nowadays, other than laziness/unwillingness to learn.
Unlike with most open source software, for things like SELinux that disable a host of stuff by default, documentation cannot be an afterthought. And why invent weird, vague terminology? What is a "context"?
Things are complex as it is and people need to get things done. When, more than once, you have to waste man-hours that could be spent more productively only to figure out that the problem is caused by SELinux 'silently', without any proper error messages or logging, you do the sensible thing and disable it.
> documentation cannot be an after thought
Take a look at selinux-policy-doc, it's extensive.
> what is a context
It's a type, though 'type' already means something else when talking about files. It's just a label that describes what kind of file it is: user_home_dir_t is user files, httpd_sys_content_t is content that can be served by Apache, nfsd_fs_t is files that can be exported by NFS.
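You can see these labels with the -Z flag most coreutils grew for SELinux (sample output below is illustrative, not guaranteed verbatim):

```shell
# The context is the user:role:type:level tuple; the third field is the type.
ls -Z ~/notes.txt
#   unconfined_u:object_r:user_home_t:s0 /home/alice/notes.txt
ls -Zd /var/www/html
#   system_u:object_r:httpd_sys_content_t:s0 /var/www/html
```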
> proper error messages or logging
Every single denial is logged and it contains all the details why it was denied. You're asking for SELinux to log why it wasn't allowed which is done by setroubleshoot but isn't possible in general.
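Concretely, querying those denials looks something like this (requires root and a running auditd; a sketch, not the only way to do it):

```shell
# Ask the audit subsystem for recent AVC (access vector cache) denials:
sudo ausearch -m AVC -ts recent

# Or grep the raw log directly:
sudo grep 'avc:.*denied' /var/log/audit/audit.log
# Each record names the process (comm=), the source and target contexts
# (scontext=/tcontext=), and the denied permission ({ read }, { write }, ...).
```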
Not quite true. Some need to have debug turned on.
See my other comment.
Additionally, if you're running a desktop system and software installation is a frequent task, you'll have to deal with that constantly, often many times for the same application. At some point you're going to give up and say it's not worth it. Most people can't even stand the GUI based notifications of Outpost or ZoneAlarm in my experience.
Also, it's probably the best protection from a zero-day exploit there is.
I can see an argument for a defense in depth approach here, but if there's only one service per VM and that service is already restricted to only the stuff it needs to talk to on the network level I'm not convinced it makes much sense. As far as I know at least, SELinux won't be able to prevent kernel exploitation or the like.
If you already broke into the service and can make syscalls, well, there's really not much gain from that under my architecture at least - you can still only access what's already in that one service.
Then again, there were plenty of exploits against generic filesystem code or common calls like poll...
Things you cannot limit in this way, because they are used by the apps in question.
But that's what selinux limits -- the "blast radius" from a compromised service.
If your Web server is on a private network with your app server and your load balancer, there's no way it can attack your database server. Basically, policies are implemented at the network and host level rather than the process and file level.
In contrast, network interfaces are well controlled. If people started with 'deny all' in their fw permissions then added the minimal necessary holes then lateral movement would be a lot harder.
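That "deny all, then add minimal holes" stance looks roughly like this as an nftables sketch (the ports and chain names here are examples, not a recommendation for any particular setup):

```shell
# Default-drop inbound policy, then punch only the holes you need:
sudo nft add table inet filter
sudo nft add chain inet filter input '{ type filter hook input priority 0; policy drop; }'
sudo nft add rule inet filter input ct state established,related accept
sudo nft add rule inet filter input iifname lo accept
sudo nft add rule inet filter input tcp dport { 22, 443 } accept
```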
I think the best argument for SELinux/AppArmor is that a lot of application-specific security hardening can be packaged into standard configs. In contrast, there is little to orchestrate this for VMs/containers/physical hosts. Inter-VM/container network control seems difficult (Open vSwitch).
If we could have easily configured, hypervisor enforced firewalling and encrypted tunneling then that would be more useful confinement.
Sometimes it's as simple as not having time for that. If I've urgent work to do, I can't stop my work and tell my boss/client "hang on, I need to learn to configure SELinux first!".
The fact that it DOESN'T work out of the box immediately and needs me to configure it (regardless of how well documented it is) is its major pitfall. These things need to work without adding ANY burden to the user. Otherwise, they'll end up disabled.
audit2allow and audit2why are often helpful but at least as often they offer the wrong solution.
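The usual round-trip with those tools, for reference (review the generated rules before loading anything; as said, the suggestion is often too broad or plain wrong):

```shell
# Explain why recent denials happened:
sudo ausearch -m AVC -ts recent | audit2why

# Generate a candidate local policy module from the same denials:
sudo ausearch -m AVC -ts recent | audit2allow -M mylocal

# Review what it would actually allow before loading it:
less mylocal.te
sudo semodule -i mylocal.pp
```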
Did you know that some selinux violations are not even reported by default?
This was the case for me; SELinux transitions are prevented by a filesystem mounted _nosuid_. The failure is not reported because transition failures are apparently so common and unimportant that SELinux does not report them by default.
Took me a looong time to troubleshoot this.
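For anyone hitting the same wall: denials suppressed by dontaudit rules can be surfaced temporarily, which is usually the fastest way to troubleshoot this class of problem:

```shell
# Rebuild the policy with dontaudit rules disabled, so hidden denials
# start showing up in the audit log:
sudo semodule -DB

# ...reproduce the failure, then check ausearch / audit.log...

# Rebuild normally, re-hiding the (genuinely noisy) suppressed denials:
sudo semodule -B
```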
If it were actually that easy, then you wouldn't need this step since SELinux would just do it for you. Clearly that isn't the case.
Maybe it could be made more convenient by a tool that parses the log and automatically runs the commands when you answer the question with "y".
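A minimal sketch of what such a helper's parsing step might look like: pull the interesting fields out of one AVC record and show what was denied before asking to run audit2allow. The sample log line is made up for illustration, not taken from a real system:

```shell
# Hypothetical sketch: extract fields from one AVC denial record.
line='type=AVC msg=audit(1500000000.000:42): avc: denied { read } for pid=1234 comm="httpd" name="index.html" scontext=system_u:system_r:httpd_t:s0 tcontext=unconfined_u:object_r:user_home_t:s0 tclass=file'

# Denied permission, source process name, and target type:
perm=$(printf '%s\n' "$line" | sed 's/.*denied { \([^}]*\) }.*/\1/')
comm=$(printf '%s\n' "$line" | sed 's/.*comm="\([^"]*\)".*/\1/')
ttype=$(printf '%s\n' "$line" | sed 's/.*tcontext=[^:]*:[^:]*:\([^:]*\):.*/\1/')

echo "$comm was denied '$perm' on type $ttype"
# A real tool would now prompt [y/N] and, on "y", pipe the denials
# through audit2allow -M and load the module.
```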
Right, it requires intervention by a person who doesn't actually understand the minutiae of what the software is trying to do, so obviously they can make a properly informed choice as to what is and is not a security violation. Modern operating systems are so broken...
Supposedly it should see the card arrive through hotplug and add it to the device list, but that has never happened.
I did have to kill pulseaudio first though.
might try that if you have not already
But also, it's worth remembering that this is a proposal for an experiment. One of the things they're going to judge is how much it affects regular users.
In my case impact to the business would likely be less if my dev machine was compromised than if prod was. It'd depend on the dedication and skills of the attacker though.
That seems crazy to me, as I'm able to SSH into prod with a single command and password. I guess I'm further on the ops side, but still.
Personally, I wish more operating systems came with an installer checkbox for "this will be used as a VM within an environment that already has its own security mechanisms; stop doing anything related to security/multitenancy (i.e. automatic login, no sudo password, no screen lock, etc.) and just let the host handle all that." Every time I set up a development VM I have to go to a bit of trouble to create an environment almost-but-not-quite like that, thwarting all the carefully-designed-in, yet redundant, security.
Makes me miss MS-DOS and Windows 3.1/95 a bit; those were truly "Personal Computer operating systems", in the sense that they assumed that there was a 1:1 correspondence between physical access to the hardware and the privilege to read/write/execute arbitrary data on said hardware. It'd be neat to have a version of Linux (or Windows 10) with that sort of "retro" security model (but still with process sandboxing; just because any user has rights, doesn't mean any program should, because programs aren't always the authorized agents of a user's will.)
Either way, it sounds like Vagrant may help. Most baseboxes target headless server dev and don't have a DE but a few do, and of course you can build your own base box and then customise via a provisioner for each different project.
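Getting started is just a couple of commands; the box name below is a placeholder, pick whichever base box suits the project:

```shell
# Drop a Vagrantfile into the project and boot the VM:
vagrant init ubuntu/jammy64   # box name is an example; a few boxes ship with a DE
vagrant up

# Per-project customisation goes in a provisioner block in the Vagrantfile,
# e.g.: config.vm.provision "shell", path: "setup.sh"
```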
There are some repos I have admin access to; to those I could push malicious code directly, execute a build, and authorize a push to production.
Most products I work on have a fairly long build time and that'd be a fairly noisy process (emails sent off). It'd work if you are just trying to get a shell, but it'd assuredly alert someone that something bad is happening.
Password? Are you not using keys?
Yes yes, YubiKeys or smartcards exist... but it is the same second factor: what you have. A password is what you know.
Biometrics theoretically are what you are but really what you have on yourself.
Isn't AppArmor overly broad for something like that to be true? Exploited server software would happily provide access to the data of all users, since usually they are all within the same trust boundaries anyway.
I think it's not a good idea to enable "security features" simply because it feels like they help, without even a threat model.
Instead of having one self-contained app that either works or doesn't on your AppArmor-enabled OS, you have "dependencies".
If a media-player you download doesn't support AppArmor, you can just use one that does. I think it's harder to avoid dependencies like that, if a library used by a whole bunch of programs doesn't support AppArmor.
not so much the fact that they are using dynamic libraries, but the foolish idea that you should keep all of your applications and libraries in a single namespace in a messy pile. plan 9 showed y'all the way, but you don't listen, b/c y'all are obstinate.
Limbo, the main language for Inferno, only makes use of dynamic loading.
Well, there are totally different reasons (code quality and silly politics) preventing this one from being adopted.
Also, both kinds of policy are resident in files. I don't understand your point there.
edit 2: Oh, guess seccomp doesn't use eBPF yet? Suppose that raises a bunch of questions about permissions necessary for specifying programs that might dereference kernel pointers and such by emitting `bpf()` calls.
Ideally, every single application would be sandboxed in some way.
Now imagine raising that pain to the power of 10, but instead of only weird USB or "device not found" errors nothing works anymore because "AppArmor" is blocking syscalls. I can already see the StackOverflow answer that just tells you to set it to permissive, and you know what, that fixes it every time, too!
This is just a non-starter. Maybe they should look at deploying ASLR first, it's not 2000 anymore you know. (Update: they just managed to get this done with stretch, released 4 months ago! That's only like 7 years late for a crucial actual security feature)
Last I checked, the Year of Linux on the Server has been every year for many, many years. And it's the servers and their vulnerabilities which are exposing tens of millions users information at a time, not individual Linux desktop users.
Too late for servers, too early for desktops. That always felt like a succinct summary of SELinux.
That's a big jump. Any stats on this so-called container usage by "everyone"? Even here in the HN bubble, not everyone is running containers.
Dunno, taking @revelation's word for it? Mentioned they were missing it. It sounds funny to me though. I'm not a Debian user.
edit: The Debian page on security features confuses me (https://wiki.debian.org/Security/Features).
But I guess pampering devops is so much more lucrative in the kudos department...