Hacker News
Explaining Docker and Containers (youtube.com)
44 points by rschachte 7 months ago | 19 comments

Am I the only one who thinks the Docker vs. VM slide is just wrong? Nothing runs "on the Docker daemon" at all, and even the simplest diagrams from Docker Inc. describe the Docker Engine as an abstraction between the OS and the containers. That framing has gaps too, but it's a much better analogy for all the Linux namespacing, cgroups, etc. that actually happen in the kernel. The daemon doesn't run anything; it just maintains state.

IMHO these are the kinds of simplifications that just make people more confused about a complex topic like this in the long run. In addition, the last points apply fully to micro-VMs as well, but are presented as a unique benefit of Docker. I can't like it, unfortunately.

This video is just ... all over the place. It's too advanced for a basic audience and too simple for people who have been admins for years. I get what he's trying to say and it's a good production job, but it's trying to hit too wide an audience.

I have almost no experience with Docker, so this may be a beginner's question. How would one get to the output of a Docker container, such as logging? On a (virtual) OS I can log on, but would I have a similar option with a Docker container? Or would I have to redirect all logging to a central server or database?

You can log inside the container as normal (if it is a long-running/persistent one) and exec a shell into it to work with the logs; or you can log to a mounted volume (a local drive, another container used for storage, a remote drive, etc.); or you can send the logs to a central log server or an Elasticsearch database to get a search engine/dashboard for your logs. There are more options, but the best one depends on your use case.
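A few of those options as docker CLI commands, purely illustrative (`myapp` and `myapp-image` are hypothetical container/image names, and all of these assume a running Docker daemon):

```shell
# Stream whatever the containerized process wrote to stdout/stderr
docker logs -f myapp

# Open a shell inside the running container to inspect log files in place
docker exec -it myapp /bin/sh

# Mount a host directory into the container so log files outlive the container
docker run -d --name myapp -v /var/log/myapp:/app/logs myapp-image
```

The `docker logs` route is why the common convention is to have the containerized process log to stdout/stderr rather than to files.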

You can think of a Docker image as a static template of instructions. A container is a running instance of the Docker image. This running instance is not very different from a virtual machine (I am abstracting many details here). You can exec a shell into a running container much like you'd ssh into a normal virtual machine.

This whole channel is brilliant. Reminds me of the quote: "If you can't explain something in simple terms, you don't understand it well enough."

Funny statement, as he didn't understand it well enough.

I initially read it as "If you can explain something in simple terms, you don't understand it well enough," and I found it very true.

When I run a docker container, is it booting an entire OS (minus kernel) to run that command?

docker run will run a little base image (such as ubuntu), but the host kernel is indeed shared.

The Docker Client and the Docker Daemon (together called the Docker Engine) are running on the Host OS.

Each container shares the Host OS kernel.

Not only is it sharing the kernel but, to the original question, it's not booting an entire OS. It's running a single process: your application. There is no overhead of another init process (although some people use dumb-init or tini for basic process and signal management).

Now, a container will have an entire copy of the basic OS it builds from (Ubuntu, Alpine, etc.), so its dependencies are self-contained and it should run the same anywhere. However, those dependencies are usually installed with apt-get, yum, or apk commands in the Dockerfile, so if you don't do rebuilds and there's a security issue in a dependency, it can be difficult to detect.
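For example, a minimal Dockerfile along these lines (base image, packages, and `myapp` are all illustrative) bakes its dependencies in at build time, which is exactly why an image that never gets rebuilt can quietly keep shipping a vulnerable package:

```dockerfile
# Hypothetical example: dependencies are fetched once, at build time
FROM ubuntu:18.04
RUN apt-get update && apt-get install -y --no-install-recommends \
        ca-certificates curl \
    && rm -rf /var/lib/apt/lists/*
COPY myapp /usr/local/bin/myapp
CMD ["myapp"]
```

The resulting image is frozen until the next `docker build`, so a fix published upstream for one of those packages never reaches containers started from the old image.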

Security problems are still isolated to that container, but the container shares a kernel. If someone can get root inside your container and use a cgroup exploit, they could break out of the container. It's unlikely, and the layering does provide protection, but it's something to be aware of.
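One common mitigation (a sketch, not the video's advice; names are hypothetical) is to not run as root inside the container in the first place, so an attacker who compromises the process has less to work with:

```dockerfile
# Hypothetical tail of a Dockerfile: drop root before the app starts
RUN useradd --no-create-home appuser
USER appuser
CMD ["myapp"]
```

User namespaces and seccomp profiles add further layers on top of this, but a non-root `USER` line is the cheapest first step.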

The process in the container also runs at the same speed as a native process running outside the container.

> Not only is it sharing the kernel, but to the original question, it's not booting an entire OS. It's running a single process; your application.

This is the first time I've been anywhere close to understanding the benefits of Docker (I've been watching and not understanding for years!). Thanks for your comment

When you've been in Java version dependency hell, then you replicate the same install with Docker and succeed in 5 minutes, it all becomes very clear. I am very, very far from being a Docker expert, but I love it.

so if my base image is, say, ubuntu, does `docker run` run the entire ubuntu os and then also run the process on top of that os?

Container ~= a process that has been isolated from the host system using Linux-specific kernel features. If you launch systemd and all the Ubuntu services, then yes, it will effectively run the whole OS; otherwise, it's just a process like any other, with no overhead. Why the image? It contains the filesystem of your container: all the libraries your process needs to convert its logic into kernel syscalls, all in their own filesystem bubble, so there are no clashes, wrong versions, etc.

Imagine that you copied all the files from your friend's hard drive onto your computer that is running a different version of linux and then executed a single program from it. In that case, you've copied an entire operating system onto your computer, but you certainly didn't execute another operating system.

Of course, the program would likely fail to start because things would be in unexpected paths, it'd see unexpected versions of libraries, etc. So now imagine that there's a Linux feature that allows you to remap filesystem paths for this process and all its children. This is called a mount namespace, but the old school feature called chroot achieves similar goals. If you then add on a few more isolation features, you have a modern container.
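A rough sketch of that chroot idea as shell commands (illustrative only: it assumes the copied filesystem lives under a hypothetical `/mnt/friend-drive`, and it needs root):

```shell
# Remap "/" for one process and its children to the copied filesystem,
# then run a program from it -- same kernel, different view of the files
sudo chroot /mnt/friend-drive /bin/ls /
```

Modern container runtimes go further than chroot by also unsharing the mount, PID, and network namespaces and applying cgroup limits, but the core trick is the same remapped view of the system.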

It's possible to do this because both of these systems are Linux systems, and the Linux kernel's program-facing surface area maintains compatibility across a wide variety of versions, so programs don't care very much about the underlying kernel version. With containers, the same kernel is just presenting a different view of the system to different applications.

Depends what you mean by running an entire ubuntu os.

It's not running systemd; it's just running the one process you asked for. It would be as if you booted the Ubuntu kernel and, instead of running systemd to start all sorts of services, you ran the single process you wanted. You're running on "ubuntu", but not really: the filesystem is provided by Ubuntu, but you're not using the normal Ubuntu environment with all the services that would start at boot.
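You can see this for yourself (illustrative; requires a Docker daemon and pulls the ubuntu image on first run):

```shell
# PID 1 inside the container is the command you ran -- here, ps itself --
# with none of the dozens of services a booted Ubuntu host would show
docker run --rm ubuntu ps -ef
```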

You can also have docker containers run systemd if you want, but that's a much more advanced use case.

You need to stop thinking of it as an OS. It's just an easy/lazy way to grab all the dependencies your process may need. Here's a silly analogy:

You're trying to containerize a simple football game. To run the game you need to provide a playing field that meets all the requirements of the rules, your dependencies.

You could go through all the rules one by one and write down what is required, like you need a line here of that length, another there that has to be at least x far away, but no more than y, and so forth. After the tedious task of collecting all your dependencies you give it a test run and it all looks good, but then boom the game suddenly stops. Whoops you forgot a dependency and didn't provide the goals, or added the wrong ones.

Alternatively, you can take the easy route. You know they play regulation football in Ubuntu stadium, so you just take a copy of that. It comes with a lot of overhead: you don't need the stands, concession booths, or parking lots to simply play football without all the faff, but they're included because league football needs them.

Edit: When you do take the base image of Ubuntu stadium and docker run it, the default is set to play a game of football, because that's the most likely thing you want to do there. It doesn't activate the media coverage or the stadium announcer, doesn't sell tickets, and doesn't fill up the parking lot, because none of that is required to play. You could if you wanted to, but you could also tear it all out and the default game would still run.

No, it probably only starts a bash shell in the container. In most cases Docker containers run only a single application, not the entire OS. It is possible to run everything, but that rather eliminates one of the big advantages of Docker.

