Subuser turns Docker containers into normal Linux programs, limiting privileges (subuser.org)
224 points by bpierre on Feb 12, 2016 | 72 comments



This is really cool! The first time I tried to run a GUI app in a container, I discovered there are a LOT of practical gaps between "okay, I have containers" and "I want to run a graphical application". Subuser bridges those gaps nicely.

My favorite part of reading the docs for subuser is how completely direct and honest they are about the security of its various features. E.g., if you enable some features of X bridging, you're accepting security risks, because that's how X works. (And if you don't need those features, don't enable them; it looks like subuser has it down to a one-word config flag, which is amazing.) By comparison, if you start out on your own... there are lots of brief tutorials on the internet about using Xephyr, or xpra, or doing something really wildly unsafe like just plain mounting the X sockets in. But not only are all these things rough to get started on by yourself, the brief explanations thrown around are often not clear on the security implications. Subuser seems like a great way to do the right things... out of the box. And with excellent truth in packaging.
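
(For reference, the bare-bones xpra route from those tutorials looks something like this; the display number :100 is arbitrary:)

    # run the untrusted app against its own xpra display, not your real X server
    xpra start :100 --start-child=xterm
    # view it from your real session; only pixels and input cross the bridge
    xpra attach :100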

I've accumulated a lot of scripts in my personal `~/bin` over the last couple of years that do stuff like "mount cwd and interesting dir $x, make sure the image with $prog is here or fetch it, now pass args and invoke $prog with $HOME=..." -- and while these have been really useful to me, they've been ad-hoc and almost impossible to usefully share with others. Subuser's format for making valuable configuration like this shareable is extremely interesting.
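
(To picture what those scripts look like, here's roughly the shape of one -- the image name and program are hypothetical:)

    #!/bin/sh
    # Wrap a containerized program so it feels like a normal command:
    # mount the current directory, keep HOME inside it, pass args through.
    IMAGE=example/myprog
    docker pull "$IMAGE" >/dev/null 2>&1 || true  # refresh the image if we can
    exec docker run --rm -it \
        -v "$PWD":/work -w /work \
        -e HOME=/work \
        "$IMAGE" myprog "$@"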


FWIW I think this is pretty cool. The main docker use-case is not desktop software, and some of its choices are unlikely to suit that use-case well.

So, good to see a project look at that aspect. I would quite like to be able to run all my apps which access untrusted services (e.g. all browsers) in a container which I can easily wipe/reset.


You are correct that Docker has some server-centric design choices. But luckily, the Docker project has split off runc (Docker's container runtime), and since images are just bare directory trees, there's really no fear of being boxed in if Docker turns out to be a bad choice...


spectacular. now build a package repo, and give me "sbu-get install postgresql-9.5" which runs on both fedora and debian... and we are golden.

I wonder how it compares with zeroinstall or nix


Are you used to downloading a zip or tar.gz from upstream, full of bundled dependencies?

I don't like that.

As this is open source and I get the source, I will download a full OS image or containerized binaries from third parties instead. Mmmm, so spectacular.

Apps running in chroots? We've had that since the '90s. The relaxed security model of Xorg and its client/server architecture was there already, and it was one of the reasons to leave Windows 3.11/95 behind.

But marketing campaigns have a spectacular effect in the current money-driven culture. Yes. Their network effect is spectacular, I would say.

Edited: missing question mark


Subuser is not a revolutionary piece of software. You are right that it is a LOT like a zip or tar.gz file full of bundled dependencies. It is, however, a big improvement on just that. Subuser contains the subusers, so they cannot mess with the rest of your system, making it a lot easier to trust those bundled dependencies. It also provides some rather primitive update mechanisms for those bundled dependencies. But its main advantage is the containment and the "blank slate".

In the future, I would like to expand on the vision, to include the ability to do deduplication, so that all those zip files with bundled dependencies take up less space. And deduplication done right will save bandwidth too. Indeed, when you have these "deduplicated zip files", with deduplication done algorithmically, you'll save even MORE space and MORE bandwidth than with a traditional dependency-resolving package manager. Take Debian, for example. Debian uses packages. These packages have dependencies. The dependencies are shared. This saves space over a case where you have a bunch of zip files with unshared dependencies. But when you update a dependency, even if only one line of one file has changed, that dependency must be downloaded completely anew. But with algorithmic deduplication, updating one line of one file means you only have to download that one line of one file anew.

Space savings are there for a similar reason.
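
(For a concrete analogy: rsync's delta-transfer algorithm already works this way at the level of individual files -- the hostname and paths below are made up:)

    # first sync: everything is transferred
    rsync -av mirror.example.org::images/myapp/ ./myapp/
    # after upstream changes one line in one file, re-running it
    # transfers only the changed blocks of that file
    rsync -av mirror.example.org::images/myapp/ ./myapp/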


It is revolutionary - if you manage to build the ecosystem. Remember that Docker itself was not considered revolutionary ("yawn.. Use lxc").

I would really recommend you look at nix, zeroinstall, click packages and http://0pointer.net/blog/revisiting-how-we-put-together-linu...


zeroinstall and click are interesting. But zeroinstall doesn't do anything on the security front. Click is rather confusing; I haven't found a webpage for it yet, or any information about how to, say, install and use a click package on Debian (I think this is because it's not possible/supported). However, I don't see nix or Red Hat's various efforts (they have announced a new universal package format every year or two for a decade now) as very serious. The problem with these efforts is that they always want to impose some opinion on HOW things should be packaged. And I don't think that is useful as a global standard.


You should really Kickstarter this project if you are serious. I think this can really be something if pushed hard. For all the hate it gets, systemd is a one-man effort that changed Linux. And so was git.

There are people out there who would love to financially support this if you ask and have a good sense of what you want to do. IMHO your post above (about zeroinstall and click) is jumping the gun.

Would love to see what you come up with.. once the excitement has died down ;)


I keep wondering how much traction systemd would have gotten without having a long-standing project like udev latched onto it (never mind logind, the consolekit replacement).

I'm just waiting for some project to be wholly dependent on the existence of networkd or some such...


0pointer... after breaking half of the internet... and needing to patch the other half... now silently releases a PID 2 (see the release notes for the last release), because people were right about the architecture.

Fortunately, I'm still free to use my own non-market-targeted, self-written configuration management tool to pick how I boot my personal systems.

Sorry, will not buy more blog posts from that domain.


> to include the ability to do deduplication

But... Won't it be an ugly kludge?

From what I understood, Subuser introduces the duplication problem just because of the way it works. It didn't exist with traditional package management when done right (because shared components are packaged separately), and it didn't exist with Docker (because it uses layers). If software (any software) essentially adds some issue and then tries to fight it off, it's very likely that things will be ugly in an architectural sense.

> But when you update a dependency, even if only one line of one file has changed, that dependency must be downloaded completely anew

Are you aware of debdelta?


> and it didn't exist with Docker (because it uses layers)

There's a big spectrum of 'solving the deduplication problem' and Docker is towards the 'when all you have is a sledgehammer' end.

Saying "you can manually arrange Dockerfiles so that they cooperate and share layers" is not solving things in an architectural sense! You essentially have to be using a single custom tool (Dockerfiles) to create your images, and then you need to apply thinking power to consider how best to arrange your images (e.g. having a 'base' package.json to install a bunch of things common to many apps, then an additional package.json per app).

It's getting better with 1.10 (layer content is no longer linked to parent layers, so ADDing the same file in different places should reuse the same layer) but it's still pretty imperfect. I created https://github.com/aidanhs/dayer to demonstrate the ability to extract common files from a set of images and use it as a base layer, which is another improvement. Even better would be a pool of data split with a rolling checksum, like the one bup creates - short of domain-specific improvements (e.g. LTO when compiling C), I think this is probably the best architectural thing you can do.
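
(If anyone wants to see what that bup-style pool looks like in practice, its command set is tiny -- paths here are illustrative:)

    bup init                         # create the deduplicated store
    bup index ~/images/app-v1
    bup save -n app ~/images/app-v1  # chunks the tree with a rolling checksum
    bup index ~/images/app-v2
    bup save -n app ~/images/app-v2  # stores only chunks not already in the pool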


Oh, sorry. Yes, I think I'm with you on this. I didn't mean to comment on the quality of this approach. I just meant that Docker has its means of avoiding duplication by having shared base layers - so we can say the problem [mostly] didn't exist in the systems Subuser started from - but it's a completely different matter whether the approach it takes is good or not.


No, algorithmic deduplication is not a kludge! It is the opposite of a kludge. It is a beautiful way of letting the computer do hard work for you! Rather than trying to deduplicate things by hand (aka traditional dependency management), you let the computer do it for you :)


Oh, no, I guess I wasn't clear on what I mean. Sorry.

Data compression (which deduplication is essentially a subset of) absolutely isn't a kludge. I meant that introducing duplication (by design) and then inventing some workaround to get back to square one - that's the suspicious part. It adds complexity where there could be none.


Square one being, in your opinion, cramming all of the dependencies together in one place? There is a big advantage to immutable dependencies, though... Your code NEVER breaks. If the dependencies are immutable, and the architecture stays the same, then your code will run. Of course, you can still use apt-get or another package manager to build your images, so the whole updating-of-dependencies thing is no worse than where you started. It's just that you have the option of not changing things, and not breaking things as well.


I must be misunderstanding something.

Isn't it Subuser that crams all the dependencies together in one place (the image)? So there was that proposal that images be deduplicated, in the sense that shared files are automatically detected and stored only once, saving disk space. But then, isn't it the case that Subuser is completely unaware of any metadata a particular file may have, so it can't really tell the difference between libz and libm, or know that all those binary-different libpng16.so files have a 100% compatible ABI and are interchangeable (but not all libpng12.so files do)?

From what I understood, Subuser is a package manager (plus permission manager) that doesn't know a thing about what it's packaging - only the large-scale image.

Package managers keep all libraries separate; that's the whole point of why package managers were invented in the first place. If a package manager uses some database, dependencies may lie together in the filesystem, but they're completely separate in the package manager's database. If a package management system uses the filesystem as its database, then packages are separate in that regard, too. There's immutability as well, sometimes enforced, sometimes along the lines of storing `dpkg -l | grep '^ii' | awk '{ print $2 "=" $3 }'` output to save the exact state.


I will not trust some binary logic that interacts with my data just because it runs with a different numeric uid on my system. That's the point.

People are calling operating-system images, pulled from third-party repositories under certain terms, "containers".

If we leave all that aside: yes, it's better if your Firefox cannot read your ssh private key, for example. But I don't need Docker for that, sorry.

I only need a third-party blob when I don't want to embrace the postgresql source code, which is free, and its manual, and thereby make myself more free tomorrow.

But the... how many millions was it?... dollar campaign behind certain technologies (and providers) has more power than any reasoning, thanks to network effects.


Running something in subuser is no more of a binary blob than installing it through apt-get. Indeed, it uses apt-get by default! https://github.com/subuser-security/subuser-default-reposito...


Your needs and point-of-view are pretty specialised. But just because you don't need to use it doesn't mean no one else could benefit.


You realize that I (the author of subuser) literally live off of money I beg from my parents? I get NO money from developing this. In terms of Xorg security, I've worked on that: http://subuser.org/news/0.3.html#the-xpra-x11-bridge


So you put your efforts into market-trending topics.

I can understand.

A money-driven culture does not exclude those who don't have money; they are simply part of the game.

This doesn't solve any problem that I currently have, or that I can't solve using tools I've already tested in my own usage, but I hope you had fun writing the software and sharing it.


Is there a better forum for "truly free software"? I honestly don't know of one, and I would love to find it if it exists. Personally, I too am dissatisfied with the commercial aspects of startupy-venture-opensourcedom. There is a great deal of dishonesty and tension there. The itch that I'm scratching here is, for example, the problem of the FreeCAD project, which is really hard to build and run from source as it uses some non-standard runtime dependencies. The typical solution has been to have all the developers run Ubuntu. Subuser should make things better, because we can run Ubuntu in subuser and then run subuser everywhere... This is better than using virtual machines due to the performance and integration requirements of the software at hand.

The other thing that is good about subuser is the isolation it provides. If you download some new FreeCAD plugin from some random person, and that person made a mistake and their plugin damages your system, then that sucks. But subuser can help contain that damage. It also provides some protection against malicious software. Short of a kernel bug, it's pretty hard to break out of subuser's containers. So if you EVER download development versions of software from non-vetted sources, you should consider installing and running that software through subuser. It will make you safer, and if enough people do it and make subuser images, then it'll also save everyone in the free software world a lot of time :)


Hey, timthelion, sorry if any comment was harsh.

I did take a better look at your software, even though I avoid Docker, and it looks nicely engineered and documented.

Congrats. Really. Thanks for your effort and for sharing it.

Not to make just another comment: regarding the apt-get stuff in other comments, I've built my packages and repos the same way since the '90s... but I was not an early Docker adopter, because I was unable to run private registries. I can't compare them, sorry :-)

Have a nice day, and sorry for hijacking your nice project with my docker rage.


For postgresql, it would likely be better to use Docker directly. Subuser adds support for things like sound and GUI to Docker; for a simple database, you don't need subuser.


I already run my servers on docker. I was alluding to a containerized package manager.

Will the golden age of a single package manager across distros finally be possible through Docker?


Maybe... I think that if the question is "through runc-compatible containers" then the answer is a definite YES, but through Docker it's probably a no. Even the Docker folks tell me that I should stop using Docker and start using runc. Docker has a lot of design baggage from the server space, and when you cut all that away, you don't get much but a tiny bit of code that creates a namespace ;). But that tiny bit of code is really, really revolutionary :D It is the difference between secure and reproducible isolation and "normal unix"...

What I would like to have in the future, which Docker does not provide, is deduplication. If I have two images with the same file in both, then that should take up no more space than if there was one file in one place. See https://www.youtube.com/watch?v=2Sc6_XgNXyk


Well, I mean... Docker does provide deduplication on the same base image. So two containers created "FROM ubuntu:14.04" which are both reading from (for example) /etc/issue are using the same data. But you specifically mention across different images, and I just don't think there are enough similarities to warrant that functionality. I mean how many files are the same across CentOS 7 and Ubuntu 14.04? I just don't see the benefit that you do.


What about Nix (https://nixos.org/nix/)?


That's essentially what Docker Hub already is. You can do

    docker pull postgres

and get a postgres service which runs in a container.


Nah. With the command "apt-get/yum/dnf/etc. install postgres && service postgres start" I will have a postgres service up and running. The command "docker pull postgres" will just download the postgres image, which I will then need to run with a lot of parameters, which I have to figure out.


you are kidding right?

the commands needed are right there on the docker hub page

https://hub.docker.com/_/postgres/
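
(For the record, the quick start there boils down to roughly one line -- quoted from memory, so check the page itself:)

    docker run --name some-postgres -e POSTGRES_PASSWORD=mysecretpassword -d postgres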


No. I will never read this page, because "yum install postgres && systemctl enable postgres" just works, even in a docker container (with systemd).


Awesome! Now that I know this I can finally get to learn docker. The functionality provided by this tool is exactly what I was doing with several shitty Python scripts. Thanks mate!


I hope you find it useful. Please file a bug report if you have any problems!


Great, so you made Capsicum with Docker.

https://www.cl.cam.ac.uk/research/security/capsicum/

Docker docker docker docker


How does Capsicum solve X11 insecurity? Subuser uses xpra...


The X11 part would be solved by xpra. Capsicum, however, feels like the better tool for the job of privilege isolation here. One of the more important things is that you're able to lock down access much more finely than you can with docker. You can use this to ensure that the locked down application can only communicate with xpra and put files only in certain folders, for example, without being able to see or interact with other processes.

Your approach has the problem of inheriting Docker's insecurities. Because everything within Docker has to be managed either by root or by someone in the docker group, you expose a greater attack surface: if a malicious app is able to get hold of the docker socket file, it now owns your system. A capability-based security system, on the other hand, wouldn't be able to touch the docker socket, even if it were run as root.


But wouldn't it be easier to chroot/limit a program with a pre/post hook script and leverage existing package managers?


Perhaps - and most folks do (in principle) just want to put files on a computer and start a process.

But contexts change, and needs have become more sophisticated, particularly needs around isolation, consistency, and elasticity. I think the capabilities of traditional Unix kernel mechanisms, packaging systems, and userland tools have lagged behind demand.

There's still opportunity for these needs to be met with refinements to standard 'nix mechanisms. I'd contribute to such a project, if it became the explicit design direction of a major distro.


Subuser DOES leverage existing package managers. All subuser "packages" are just scripts that call existing package managers: https://github.com/subuser-security/subuser-default-reposito...


Your comment relates more, I think, to the general Docker project than subuser specifically.

The answer to your question is that it all depends on your definition of "easier".

docker/Docker Hub hide a lot of complexity in Linux namespacing/cgroups/seccomp filtering. You can absolutely achieve the same goal with native Linux tools, as long as you have the time and inclination to learn those tools.
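
To give a flavor of those native tools, here's a minimal sketch with util-linux's unshare (flags per unshare(1); needs root or user namespaces enabled):

    # give a shell its own PID, mount, and network namespaces
    sudo unshare --fork --pid --mount-proc --net /bin/bash
    # inside: ps sees little but this shell, and the only interface is a down loopback
    ps aux
    ip link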


Of course not! The existing package managers don't have a cute logo and an unrealistic valuation.


Unrealistic valuation? Is that what you call a couple hundred "stars" on GitHub? If you ask me, stars are hardly worth squat. Perhaps a pull request every month or two... But I'm glad to see subuser getting some attention on HN now. Perhaps the PRs will start rolling in :D


I guess it would only be fair for Docker to cut you a couple of million, seeing as they're swimming in cash (apparently?)


Unfortunately for me, I was asked to leave the Docker project after I got upset about SUSE wanting to add EULAs to Docker images: https://github.com/docker/docker/issues/7153

So I guess I don't get any millions. :(


Like I said: Docker wants to be a linker and a binary format:

http://adamierymenko.com/docker-not-even-a-linker/

If everything (mysql, pgsql, etc.) were a library you could achieve much of what it does with -static and privilege dropping on launch.


How is Docker + Subuser different than just using firejail?


firejail is security-centered. It allows you to secure the apps you already have on your system.

Zeroinstall is distribution-centered. It allows you to easily distribute software across distros.

Subuser is like zeroinstall + firejail.
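
For comparison, sandboxing an app you already have with firejail is a one-liner (flags per firejail's man page):

    # run the system firefox with a throwaway home directory and no network
    firejail --private --net=none firefox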


I was always taught that Docker is not suitable for securely separating stuff, just for compartmentalisation. Something about running as root iirc?


> Something about running as root iirc?

The daemon runs as root, so your containers are only as secure as the daemon itself.


> > Something about running as root iirc?

> The daemon runs as root, so your containers are only as secure as the daemon itself.

This is not correct. The daemon only starts and monitors processes in this case. You can start a process as a different user. You can also use user namespaces to make "root in the container" not root on the host.


Nm, misread.


You may be thinking about this issue: http://reventlov.com/advisories/using-the-docker-command-to-... -- which, incidentally, is the first thing that came to my mind, too. I'm not familiar with docker, but I'm unsure how subuser could work around this.


Subuser doesn't work around this issue. As a USER, you can become root using docker. However, that doesn't mean that your SUBusers can become root. And that's what you are trying to protect your system from: the "less trusted" code that is running in the subusers.


You may be thinking of the lack of user namespacing in Docker. Until v1.10 root in a container was the same user as root outside the container.

This did not automatically allow a contained user to break out, but it made it easier.

In 1.10 user namespacing reduces this risk as the root user in a container can be remapped to another user account outside the container.
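
Concretely, remapping is enabled when starting the daemon (the 1.10-era invocation; "default" tells Docker to create and use a subordinate "dockremap" user):

    docker daemon --userns-remap=default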


What I would very much like is a program that allows me to run a Windows program like Photoshop on Linux through something like Docker. I know I can run PS with Wine and CS2, for instance, but due to professional needs I need CC... Gimp won't cut it. This would free me to a great extent from Microsoft and Apple.

I understand this is solely for Linux programs?


If you want to run a Windows program on a Linux machine, you'll have to run it through a full copy of Windows in a virtual machine. Docker isn't going to help you here...



So wine can do it? That's pretty sweet.

It seems like it would make sense to combine this with subuser to make sure you keep a known-good version of wine and its configuration. I've known wine compatibility to drift unpleasantly from time to time, and I've accidentally lost obscure config changes I made once...

(I have fond memories of a time when I could play Supreme Commander on linux in wine... but I've since lost that machine, and I haven't been able to reproduce the dang setup again since :( If subuser had existed back then, I'd probably have a snapshot and I'd be happily playing right now...)


> So wine can do it? That's pretty sweet.

Probably not. Technically, Wine has been able to run various versions of Photoshop for years. But it was never stable or complete enough to use professionally, with Wacom tablets and such.


I still can't understand why this can't work with docker-machine on OS X. Or can it?


There is not any really strong reason why it cannot work. I just haven't got an OS X machine, and no PRs have come in yet.


Very cool stuff. How compatible is this with docker-machine?



From that page:

Warning: Being a member of the docker group is equivalent to having root access.

Not a great way to limit privileges.


The user using this has to be in that group; the programs being run do not.

For a single-user machine, it may be okay. But I don't trust Docker's security very much.


any particular reasons you don't trust Docker security?


The daemon has had many security issues. They also only recently grew support for ACLs (which are only available as plugins), so any user that can write to the Docker socket is essentially equivalent to root.


I'm not a docker user, but I think that with binaries I can always still use capabilities to limit what they can do.

I suppose most of them will be needed in this case, and it's still a concern.

Just wanted to remind people about Linux capabilities after reading your comment:

    man -k cap | grep capabilities
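
And to drop one rather than just list them, a quick sketch (capsh invocation per capsh(1); this fails on systems where ping needs a raw socket):

    # run a command with CAP_NET_RAW removed from the bounding set
    sudo capsh --drop=cap_net_raw -- -c 'ping -c 1 127.0.0.1'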


It works with Docker. If you write a Dockerfile for a program, and create a permissions.json file, then your program will work with subuser. Subuser is a "very thin layer" that makes it easier to use Docker for desktop apps.
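
For a taste of what goes in that permissions.json, something like the following -- the field names are recalled from the subuser docs, so treat them as approximate and check subuser.org for the real schema:

    {
      "description": "A simple text editor",
      "maintainer": "you",
      "executable": "/usr/bin/myeditor",
      "access-working-directory": true,
      "gui": {"clipboard": false},
      "sound-card": false,
      "allow-network-access": false
    }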



