Let's enable AppArmor by default (why not?) (debian.org)
106 points by boramalper 8 days ago | 80 comments





If people rant and rave about how hard PulseAudio is to configure, then I can only imagine how much pushback there will be about having to configure AppArmor etc. Just saying.

Personally, I'll be keeping it disabled. I used to use SELinux, but eventually stopped: the constant headache of trying to figure out whether something that wasn't working was caused by SELinux or not just wasn't worth it.


Disabling SELinux is a mistake, especially now that there are tools that will scan the audit.log and tell you exact commands on how to fix each thing it finds.

If you want to do something weird like allowing nginx or Apache to serve your home directories, it takes maybe 5 minutes to Google the correct SELinux context to set on the directory and apply it; otherwise any standard installation works out of the box.
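Something like this, on a Fedora/RHEL-style system (the boolean and type names below come from the stock targeted policy; the path is a placeholder):

    # Allow Apache to serve content out of home directories
    sudo setsebool -P httpd_enable_homedirs on

    # Label a specific directory as web content, persistently, and apply it
    sudo semanage fcontext -a -t httpd_sys_content_t "/home/alice/www(/.*)?"
    sudo restorecon -Rv /home/alice/www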

There's literally no reason to turn it off nowadays, other than laziness/unwillingness to learn.


I can think of a great reason to disable SELinux: you actually use Linux as a workstation and don't want to fight with it every time you install a new application.

do you also do chmod -R 777 /?

That seems excessive. Why not just run as root?

Pretty sure that actually breaks stuff

This is correct.

But you don't. Almost all 'applications' run unconfined.

I can think of a great reason not to disable it: exposing $HOME to the world.

Have had SELinux for almost 10 consecutive years. $HOME has never been exposed. I've no idea what you're going on about.

Of course not, you have it enabled. Read OP properly?

Whoops, typo, I meant to say I've had it disabled all that time.

SELinux will become useful when it precisely lists out what it disables, with transparent explanations for the assumptions made, and gives clear error messages or alerts, with proper logging, when it has just disabled something.

Unlike with most open source software, for things like SELinux that disable a host of stuff by default, documentation cannot be an afterthought. And why invent weird, vague terminology? What is a "context"?

Things are complex as it is and people need to get things done. When you have to waste man-hours that could be used more productively, more than once, only to figure out the problem is caused by SELinux "silently", without any proper error messages or logging, you do the sensible thing and disable it.


> what it disables with transparent explanations

setroubleshoot

> documentation cannot be an afterthought

Take a look at selinux-policy-doc, it's extensive.

> what is a context

It's a type. But 'type' as it refers to files means something else. It's just a label that describes what kind of file it is: user_home_dir_t is user files, httpd_sys_content_t is content that can be served by Apache, nfsd_fs_t is files that can be exported by NFS.
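To make that concrete, you can see and change those labels directly (paths and output below are illustrative):

    # Show the label (user:role:type:level) on a file
    ls -Z /var/www/html/index.html
    # -> unconfined_u:object_r:httpd_sys_content_t:s0 index.html

    # Temporarily relabel a file; use semanage fcontext + restorecon to persist
    sudo chcon -t httpd_sys_content_t /srv/site/index.html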

> proper error messages or logging

Every single denial is logged, and the log contains all the details of why it was denied. You're asking for SELinux to log why something wasn't allowed, which is done by setroubleshoot but isn't possible in general.
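For what it's worth, pulling those denials and their explanations out looks roughly like this on a box with auditd and setroubleshoot installed:

    # Raw AVC (denial) records from the audit log
    sudo ausearch -m avc -ts recent

    # Human-readable analysis plus suggested fixes
    sudo sealert -a /var/log/audit/audit.log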


> Every single denial is logged

Not quite true. Some need to have debug turned on. See my other comment.


How about lack of necessity? I isolate my systems in many ways, using VMs, tunnels and network isolation, but SELinux just doesn't fall into that; if anything's getting compromised, it's the service in question that's the problem. Just because something exists doesn't mean you have to learn it, and it doesn't make you lazy for refusing to do so. Proper protections extend far beyond using tools like this.

Additionally, if you're running a desktop system and software installation is a frequent task, you'll have to deal with that constantly, often many times for the same application. At some point you're going to give up and say it's not worth it. Most people can't even stand the GUI based notifications of Outpost or ZoneAlarm in my experience.


SELinux is all about minimizing and isolating collateral damage after a service is compromised, so if you care about that at all I'd suggest leaving it on.

Also, it's probably the best protection from a zero-day exploit there is.


Yeah, what I'm saying is that I've already done so myself and have no need for SELinux. Proper network and VM isolation is far better than SELinux. I don't want it forced on me and will disable it. Not out of laziness, but due to the lack of a purpose.

Security is best implemented in layers. VMs are not impermeable. Case in point: the VENOM vulnerability: https://access.redhat.com/articles/1444903 , which was thwarted by SELinux's protections for virtual machines (sVirt).

That's pretty sweet, if I had to manage my own VMs I'd definitely look into SELinux for that feature.

I can see an argument for a defense in depth approach here, but if there's only one service per VM and that service is already restricted to only the stuff it needs to talk to on the network level I'm not convinced it makes much sense. As far as I know at least, SELinux won't be able to prevent kernel exploitation or the like.


It absolutely prevents kernel exploitation by stopping certain syscalls. I guess you can argue that it doesn't stop "The exploit" in syscalls it allows but it does prevent syscalls from being run.

I'm referring to remote kernel exploits - things that would introduce a new attack vector. Say a bug in a network driver or protocol that would already be at ring0 - SELinux does not prevent such an attack. Something that would allow a new way into a host other than the service in question. If you have ring0 you can just disable SELinux.

If you already broke into the service and can make syscalls, well, there's really not much gain from that under my architecture at least - you can still only access what's already in that one service.


Proper VM isolation with SELinux is better than either separately. What if you get a QEMU VM breakout?

It's quite possible, and easy, to disable SELinux with any local root kernel exploit. The only thing it might make harder is the actual act of exploitation, by shutting down syscalls and sensitive parts of the filesystem. Compared to those, VM exploits are rare. You typically need one of those to attempt a breakout anyway.

Then again, there were plenty of exploits against the general FS or common calls like poll... things you cannot limit in this way because they are used by the apps in question.


> it's the service in question that's the problem ..

But that's what selinux limits -- the "blast radius" from a compromised service.


Precisely what I'm saying, the limitation to the blast radius is already there if you've done the isolation yourself and it's only that one service that gets hit.

If your Web server is on a private network with your app server and your load balancer, there's no way it can attack your database server. Basically, policies are implemented at the network and host level rather than the process and file level.


You're entirely correct, but we've become infected with SOE-think. Eventually you are always going to have a compromised node, and defence/detection in the node is hard and constantly evolving. The kernel attack surface is still large and complicated.

In contrast, network interfaces are well controlled. If people started with 'deny all' in their fw permissions then added the minimal necessary holes then lateral movement would be a lot harder.
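As a rough sketch of that "deny all, then poke minimal holes" approach with plain iptables (addresses and ports are placeholders):

    # Default-deny inbound; keep loopback and established connections working
    iptables -P INPUT DROP
    iptables -A INPUT -i lo -j ACCEPT
    iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT

    # Only the holes this host actually needs, e.g. SSH from a management subnet
    iptables -A INPUT -p tcp -s 10.0.0.0/24 --dport 22 -j ACCEPT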

I think the best argument for selinux/apparmor is that a lot of application specific security hardening can be packaged into standard configs. In contrast, there is little to orchestrate this for VMs/containers/physical hosts. Inter-VM/CNTR network control seems difficult (open vSwitch).

If we could have easily configured, hypervisor enforced firewalling and encrypted tunneling then that would be more useful confinement.


The risk is that an app compromised could lead to local access which allows local (root/kernel) exploits.

Building policy by just auditing is a bad idea - you'll end up with overly general roles. Policy building is best left to the author of the applications so that they can test against it and lock it down.

So what's your suggestion? Just disable SELinux for that application if it's failing?

> There's literally no reason to turn it off nowadays, other than laziness/unwillingness to learn.

Sometimes it's as simple as not having time for that. If I've got urgent work to do, I can't stop my work and tell my boss/client "hang on, I need to learn to configure SELinux first!".

The fact that it DOESN'T work out-of-the-box immediately and needs me to configure it (regardless of how well documented it is) is its major pitfall. These things need to work without adding ANY burden to the user. Otherwise, they'll end up disabled.


I mostly agree with you but sometimes it is extraordinarily difficult to find out what went wrong.

audit2allow and audit2why are often helpful but at least as often they offer the wrong solution.
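For reference, the usual (and, as said, sometimes misleading) workflow is roughly this, where "mylocal" is just an arbitrary module name:

    # Turn recent denials into a local policy module and load it
    sudo ausearch -m avc -ts recent | audit2allow -M mylocal
    sudo semodule -i mylocal.pp

    # Or just ask why the denials happened
    sudo ausearch -m avc -ts recent | audit2why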

Did you know that some selinux violations are not even reported by default?

This was the case for me: SELinux transitions are prevented on a filesystem mounted _nosuid_. The failure is not reported because transition failures are apparently so common and unimportant that SELinux does not report them by default.

Took me a looong time to troubleshoot this.
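If anyone hits something similar: the trick that eventually surfaces these hidden denials is to rebuild the policy with the dontaudit rules disabled (assuming a reasonably recent policy toolchain):

    # Make normally-suppressed (dontaudit) denials show up in the audit log
    sudo semodule -DB

    # ... reproduce the failure and read the log ...

    # Put the dontaudit rules back when done
    sudo semodule -B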


> Disabling SELinux is a mistake, especially now that there are tools that will scan the audit.log and tell you exact commands on how to fix each thing it finds.

If it were actually that easy, then you wouldn't need this step since SELinux would just do it for you. Clearly that isn't the case.


SELinux can't tell which of those accesses were intended, and which were actual security violations it should keep preventing. That requires human intervention.

Maybe it could be made more convenient by a tool that parses the log and automatically runs the commands when you answer the question with "y".


> SELinux can't tell which of those accesses were intended, and which were actual security violations it should keep preventing. That requires human intervention.

Right, it requires intervention by a person who doesn't actually understand the minutiae of what the software is trying to do, so obviously they can make a properly informed choice as to what is and is not a security violation. Modern operating systems are so broken...


I still can't get PA to directly work with my USB sound card. I ended up configuring alsa dmix for it (which is definitely not easy to configure), and pointing PA at dmix.

Supposedly it should see the card arrive through hotplug and add it to the device list, but that has never happened.


I have a media box which decided it was time to install PulseAudio for Kodi, which then proceeded to completely screw over audio over HDMI. (No idea why it was refusing to play audio until I disabled PulseAudio.)

I've not had issues with this on Fedora (24+).

I did have to kill pulseaudio first, though.

Might try that if you have not already.


dmix is hard to configure?

Configuring ALSA to work properly with my USB sound card was the single hardest thing I did on my Gentoo box. The documentation ranges from terrible to nonexistent and it is unclear how to generalize many of the example configurations.

Having done all three, I'd order them from hardest to easiest as SELinux, PulseAudio, AppArmor. Of the three, AppArmor has the most user-friendly tools for applying changes. It also allows you to easily turn off a single profile without disabling the whole system. Given that, I don't expect that many issues.

But also, it's worth remembering that this is a proposal for an experiment. One of the things they're going to judge is how much it affects regular users.
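For reference, the per-profile switches mentioned above come from apparmor-utils and look roughly like this (the ntpd profile name is just an example):

    # Put a single profile into complain mode: log violations but don't block
    sudo aa-complain /etc/apparmor.d/usr.sbin.ntpd

    # Or disable just that one profile while leaving AppArmor on overall
    sudo aa-disable /etc/apparmor.d/usr.sbin.ntpd

    # See which profiles are currently enforcing or complaining
    sudo aa-status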


Ubuntu has used AppArmor for years. Debian users will benefit from those years of use.

People rant and rave about how pulseaudio is broken on their system. Pulseaudio is pretty easy to configure for most things. Sometimes it even (unexpectedly) configures itself!

Good idea for the server at least. For my development machine I need to be able to disable AppArmor easily (when it breaks my desktop). Increasing security sometimes decreases usability, especially when I just need Debian/Ubuntu for development and browsing.

Isn’t compromising your development machine way worse than a random server?

My dev machine can't access prod data. My dev machine doesn't deploy to prod, the attack vector would be inserting malicious code, which would need to pass CR before being compiled on a build server and then deployed to prod.

In my case impact to the business would likely be less if my dev machine was compromised than if prod was. It'd depend on the dedication and skills of the attacker though.


I'm really curious if it's actually true your development machine can't pivot into prod. You don't have sufficient access on the build/ci/deploy servers to pivot into prod without code review? You don't have passwords saved/stored that are reused in critical/prod infrastructure? You have no way of accessing prod?

That seems crazy to me, as I'm able to SSH into prod with a single command and password. I guess I'm further on the ops side, but still.


Depends on what you mean by "development machine." My Macbook has those credentials on it, but the Ubuntu VM running on it, which I do most of my development within, does not.

Personally, I wish more operating systems came with an installer checkbox for "this will be used as a VM within an environment that already has its own security mechanisms; stop doing anything related to security/multitenancy (i.e. automatic login, no sudo password, no screen lock, etc.) and just let the host handle all that." Every time I set up a development VM I have to go to a bit of trouble to create an environment almost-but-not-quite like that, thwarting all the carefully-designed-in, yet redundant, security.

Makes me miss MS-DOS and Windows 3.1/95 a bit; those were truly "Personal Computer operating systems", in the sense that they assumed that there was a 1:1 correspondence between physical access to the hardware and the privilege to read/write/execute arbitrary data on said hardware. It'd be neat to have a version of Linux (or Windows 10) with that sort of "retro" security model (but still with process sandboxing; just because any user has rights, doesn't mean any program should, because programs aren't always the authorized agents of a user's will.)
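For what it's worth, the manual version of that "let the host handle security" setup usually boils down to a couple of one-liners; a sketch assuming a GNOME guest and a user called dev:

    # Passwordless sudo for the dev user
    echo 'dev ALL=(ALL) NOPASSWD:ALL' | sudo tee /etc/sudoers.d/dev-nopasswd

    # No screen lock, no idle blanking
    gsettings set org.gnome.desktop.screensaver lock-enabled false
    gsettings set org.gnome.desktop.session idle-delay 0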


Wait, you're using a VM for dev, using the VM GUI/desktop environment?

Either way, it sounds like Vagrant may help. Most baseboxes target headless server dev and don't have a DE but a few do, and of course you can build your own base box and then customise via a provisioner for each different project.


There is a pretty strong ops/dev divide where I work.

There are some repos I have admin access to; for those I could push malicious code directly, execute a build, and authorize a push to production.

Most products I work on have a fairly long build time and that'd be a fairly noisy process (emails sent off). It'd work if you are just trying to get a shell, but it'd assuredly alert someone that something bad is happening.


> I'm able to SSH into prod with a single command and password.

Password? Are you not using keys?


If you have a passwordless key, you're nullifying half of the security.

Yes yes, Yubikeys or smartcards exist... but it is the same second factor: what you have. A password is what you know. Biometrics theoretically are what you are, but really what you have on yourself.


Yeah, passwordless keys suck, I agree. It's a little better because my hard disk is encrypted, but still not enough. I just got my Yubikey in the mail, and I want to move everything to Yubikey (for GPG) & an SSH agent, then I can redo all my keys; but I'm dragging my feet a little.
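For anyone without a hardware token yet, the baseline is just a passphrase-protected key plus the agent (file path and comment are placeholders):

    # Generate a key that is encrypted at rest with a passphrase
    ssh-keygen -t ed25519 -a 100 -f ~/.ssh/id_ed25519 -C "work laptop"

    # Unlock it once per session via the agent
    eval "$(ssh-agent -s)"
    ssh-add ~/.ssh/id_ed25519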

"it can prevent exploited server software from accessing data it does not own and executing arbitrary code"

Isn't AppArmor overly broad for something like that to be true? Exploited server software would happily provide access to the data of all users, since usually they are all within the same trust boundaries anyway.

I think it's not a good idea to enable "security features" simply because it feels like they help, without even a threat model.


That's not how AppArmor works, provided you lock down your server software properly. Say the server running is NTP: that NTP server (/usr/sbin/ntpd) is only able to read /etc/ntp/*, only able to write /var/log/ntp*, and only able to execute /usr/sbin/ntpd. Now you've radically limited what an exploit of this particular server can mean.
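Roughly, the profile for that would look like the sketch below (illustrative only, not the profile Debian/Ubuntu actually ships for ntpd):

    # /etc/apparmor.d/usr.sbin.ntpd (sketch)
    #include <tunables/global>

    /usr/sbin/ntpd {
      #include <abstractions/base>

      # configuration: read-only
      /etc/ntp/* r,
      # logs: write-only
      /var/log/ntp* w,
      # its own binary: map and execute
      /usr/sbin/ntpd mr,
    }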

Not a good example, chmod, chown and chroot accomplish pretty much the same thing.

How? AFAIK, the only way to do it is to create a different user for each app, like Android, which seems absurd to me.

As an Ubuntu user, I suffered from AppArmor a few times and removed it. However, these days it seems to work. Worth a try, Debian.

It looked like AppArmor was done for. Thanks to OpenSUSE for saving it!

I wonder if the way most Linux "apps" work right now is what's keeping Linux from adopting more advanced security measures by default, such as Grsec, AppArmor, SELinux, or even ASLR...

Instead of having one self-contained app that either works or doesn't on your AppArmor-enabled OS, you have "dependencies".

If a media-player you download doesn't support AppArmor, you can just use one that does. I think it's harder to avoid dependencies like that, if a library used by a whole bunch of programs doesn't support AppArmor.


i'm pretty excited about flatpak et al, because gnu/linux will finally get application sandboxing that is reasonably easy to enforce. the dynamic lib bullshit that the community got hooked on has been a huge impediment to implementing good application-level security on the linux desktop imo

not so much the fact that they are using dynamic libraries, but the foolish idea that you should keep all of your applications and libraries in a single namespace in a messy pile. plan 9 showed y'all the way, but you don't listen, b/c y'all are obstinate.


Did plan 9 not have dynamic libs, or were they namespaced in some way? Would love a link to read about this

Plan 9 had dynamic libraries.

Limbo, the main language for Inferno, only makes use of dynamic loading.

http://www.vitanuova.com/inferno/docs.html


> such as Grsec

Well, there are totally different reasons (code quality and silly politics) preventing this one from being adopted.


There's some kind of bind here between "putting burden on end-users" and "putting burden on application developers." Either you (a) ship $LSM with some defaults that are necessarily general (so as to avoid breaking applications) and let the user fit filters to their circumstances, or; (b) push for developers to write/maintain filters baked into their applications (ie. using some kernel features like `seccomp` and what-have-you).
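On the (b) side, one middle ground that already exists is letting the packager ship the filter next to the service rather than in the application code, e.g. a systemd drop-in using its seccomp support (the service name here is hypothetical):

    # /etc/systemd/system/myservice.service.d/hardening.conf
    [Service]
    # Allow only the common service-related syscall groups; others fail with EPERM
    SystemCallFilter=@system-service
    SystemCallErrorNumber=EPERM
    NoNewPrivileges=yes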

seccomp and LSMs do different (and complementary) things.

How so? I understand they're different, but I thought there was some overlap. Don't LSMs work by hooking syscalls anyway?

Seccomp can only disable syscalls. LSMs are more nuanced. The most important difference is that it is easy to check if seccomp is available, and the policy is not kept in or on a file.

I was under the impression that seccomp was a bit more flexible (via ptrace() and BPF fanciness) - although I guess you'd need other co-operating processes in userspace? I've only played around with it a tiny bit.

Also, both kinds of policy are resident in files. I don't understand your point there.


It is flexible to an extent, but you still can't e.g. dereference a pointer to a struct passed as an argument. And that's where much of the interesting detail is for many calls.

Thanks for the clarification, although I recall messing with eBPF and kprobes before - pretty sure you can dereference pointers?

editx2: Oh, guess seccomp doesn't use eBPF yet? Suppose that raises a bunch of questions about permissions necessary for specifying programs that might dereference kernel pointers and such by emitting `bpf()` calls.


Please do, that is already how I use Ubuntu since quite a few LTS releases.

Ideally, every single application would be sandboxed in some way.


The Linux desktop of today still occasionally requires teaching users how to install new udev rules so they can properly use that USB serial converter or other gadget sanely.

Now imagine raising that pain to the power of 10, but instead of only weird USB or "device not found" errors, nothing works anymore because "AppArmor" is blocking syscalls. I can already see the StackOverflow answer that just tells you to set it to permissive, and you know what, that fixes it every time, too!

This is just a non-starter. Maybe they should look at deploying ASLR first, it's not 2000 anymore you know. (Update: they just managed to get this done with stretch, released 4 months ago! That's only like 7 years late for a crucial actual security feature)


Since the Year of Linux on the Desktop has still not come, perhaps major distributions would be wise not to use that as a reason not to implement security features.

Last I checked, the Year of Linux on the Server has been every year for many, many years. And it's the servers and their vulnerabilities which are exposing tens of millions of users' information at a time, not individual Linux desktop users.


Last I checked everyone running Linux on the Server has skipped ahead to containers so now very little of these syscall filtering games even apply anymore.

Too late for servers, too early for desktops. That always felt like a succinct summary of SELinux.


> Last I checked everyone running Linux on the Server has skipped ahead to containers so now very little of these syscall filtering games even apply anymore.

That's a big jump. Any stats on the so-called container usage by "everyone"? Even here in the HN bubble, not everyone is running containers.


Huh, didn't know that Debian was missing ASLR. Any idea about the reasoning there?

What makes you think debian is missing ASLR?

> Update: they just managed to get this done with stretch, released 4 months ago! That's only like 7 years late for a crucial actual security feature)

Dunno, taking @revelation's word for it? They mentioned it was missing. It sounds funny to me, though. I'm not a Debian user.

edit: The Debian page on security features confuses me (https://wiki.debian.org/Security/Features).


Funny thing is that udev is in the hands of the systemd devs, operating under the Freedesktop banner, so one would have thought they had the desktop in focus.

But I guess pampering devops is so much more lucrative in the kudos department...



