
And then of course the system inside the container needs to start up, configure itself, run init scripts, and so on. Did you count that in those 100 syscalls?

Take the example here: https://github.com/kstaken/dockerfile-examples/blob/master/n...

Which does something a lot of these functions will do: get nodejs and use it to run a function. On my machine, just the apt-get update instructions alone, ignoring actually running the function (because it's insignificant), make close to 1e6 syscalls.
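
For reference, a rough way to reproduce that kind of count (assuming a Debian/Ubuntu environment with strace installed; the exact number will of course vary by machine and mirrors):

    # Count syscalls across apt-get and all of its child processes;
    # strace -c writes the summary table to stderr on exit.
    strace -f -c apt-get update 2> syscall-summary.txt
    cat syscall-summary.txt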




Lightweight application containers do not run init or anything like that! They're just chroots but with isolated networking, PIDs, UIDs, whatever.

For example, on my FreeBSD boxes, I have runit services that are basically this:

exec jail -c path='/j/postgres' … command='/usr/local/bin/postgres'

Pretty much the same as directly running /usr/local/bin/postgres except the `jail` program will chroot and set a jail ID in the process table before exec()'ing postgres. No init scripts, no shells, nothing.
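
As a sketch of what such a run script looks like (the parameter names are real jail(8) parameters, but the values and extra parameters here are made up for illustration, not the actual service file):

    #!/bin/sh
    # hypothetical runit service, e.g. /var/service/postgres/run
    exec jail -c \
        path='/j/postgres' \
        host.hostname='pg.example' \
        exec.jail_user='postgres' \
        command='/usr/local/bin/postgres'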


I don't understand the criticism. FreeBSD jail is more like chroot than like a container. A container, as I understand it, runs its own userland. Otherwise, you can't really isolate programs in it. If that postgres was compiled with a libc different from the one on the host system, or, let's say, required a few libraries that aren't on the host system, would it run?

Does it have its own filesystem that can migrate along with the program? Does it have its own IP that can stay the same if it's on another machine?


You're correct. Containers do contain their own userlands, a fact many gloss over. PgSQL will have to load the containerized copies of all its libraries instead of the shared libraries provided by the outside system.

This is often done via a super thin distribution like Alpine Linux to keep image size down, despite the copy-on-write (COW) functionality touted by Docker that's supposed to make sharing layers cheap.

The difference is that unlike a fully virtualized system, the container does not have to execute a full boot/init process; it executes only the process you request from within the container's image. Of course, one could request a process that starts many subservient services within the container, though that is typically considered bad form.
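
A quick way to see that "no boot, just your process" behaviour (assuming a working Docker install; the image choice is arbitrary):

    # The only process inside the container is the one we asked for:
    # ps itself shows up as PID 1, with no init and no other services.
    docker run --rm alpine ps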

What people really want is super cheap VMs, but they're fooling themselves into believing they want containers, and pretending that containers are a magic bullet with no tradeoffs. It's scary times.


Even a basic chroot runs its own userland! "Userland" is just files.

In my example, /j/postgres is that filesystem that can migrate anywhere. (What's actually started is /j/postgres/usr/local/bin/postgres.) Yeah, you can just specify the IP address when starting it.
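
A hedged sketch of what migrating it might look like (the hosts, addresses, and the rsync step are mine, purely for illustration):

    # Copy the jail's filesystem tree to another machine...
    rsync -a /j/postgres/ otherhost:/j/postgres/
    # ...then start it there, giving it whichever address it should answer on
    # (ip4.addr must already exist on an interface, or pass the interface parameter).
    jail -c path='/j/postgres' ip4.addr='192.0.2.10' command='/usr/local/bin/postgres'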


What system? Your link just starts a nodejs binary, no init process. And you also don't seem to realise that a Docker image is built only once? Executing apt happens when building the image (and is cached in case a rebuild happens later), not when starting the container.


These steps are only run for the initial creation of the container image. Running the container itself only executes the last step from that file: the node binary.
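
To make the build/run split concrete (the tag name is arbitrary; docker build and docker run are the standard commands):

    # Build the image once: the apt-get / npm steps in the Dockerfile run here,
    # and each step is cached as a layer for later rebuilds.
    docker build -t node-hello .

    # Start a container from that image: only the final CMD/ENTRYPOINT process
    # is executed, none of the build steps are re-run.
    docker run --rm node-hello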



