A Workshop on Linux Containers: Rebuild Docker from Scratch (github.com)
568 points by mastabadtomm 10 months ago | 72 comments



In the same vein:

Building a container from scratch in Go (Liz Rice) @ Container Camp 2016 - https://www.youtube.com/watch?v=Utf-A4rODH8

What's a container, really? Let's write one in Go from scratch (Liz Rice) @ Golang UK Conference - https://www.youtube.com/watch?v=HPuvDm8IC-4

Cgroups, Namespaces and beyond: What are containers made from (Jerome Petazzoni) @ DockerCon 2015 - https://www.youtube.com/watch?v=sK5i-N34im8

Building Containers in Pure Bash and C (Jessica Frazelle) @ ContainerSummit 2016 - https://containersummit.io/events/nyc-2016/videos/building-c...

First two are basically the same talk, but it doesn't hurt to hear the same ideas more than once.


Docker in ~100 lines of bash: https://github.com/p8952/bocker


I found this introduction to Linux namespaces very good

https://medium.com/@teddyking/linux-namespaces-850489d3ccf


So basically it's just usage of a particular API that the underlying OS provides.


Yup. It's all namespaces and cgroups. The base "docker" command adds a little value with OS repositories, network overlays, etc., but not a lot.
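
To make that concrete, here's a minimal sketch (mine, not from the thread) of one of those namespaces in action: calling unshare(2) via ctypes so a child process gets its own UTS namespace and can change "its" hostname without touching the host's. Assumes Linux and root (or CAP_SYS_ADMIN):

    import ctypes
    import os
    import socket

    CLONE_NEWUTS = 0x04000000  # from <linux/sched.h>
    libc = ctypes.CDLL("libc.so.6", use_errno=True)

    pid = os.fork()
    if pid == 0:
        # Child: split off into a fresh UTS namespace, then "rename the machine".
        if libc.unshare(CLONE_NEWUTS) != 0:
            raise OSError(ctypes.get_errno(), "unshare failed (need root?)")
        socket.sethostname("container")
        print("child sees hostname:", socket.gethostname())
        os._exit(0)

    os.waitpid(pid, 0)
    print("host still sees hostname:", socket.gethostname())

Swap in CLONE_NEWPID, CLONE_NEWNS, CLONE_NEWNET, etc. and you have the isolation half of a container; cgroups are the resource-limiting other half.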

Personally, I think K8S with their cri-o (https://github.com/kubernetes-incubator/cri-o/blob/master/RE...) runtime is going to eventually eat Docker's lunch.

As a company, Docker needs Swarm/Enterprise to take off in order to have a differentiator. That isn't happening.

Kubernetes, on the other hand, just needs cri-o (mostly done) and an image repository / builder to kill off Docker.

See this for how small the gap really is: https://github.com/p8952/bocker/blob/master/README.md

Basically, Docker has almost no moat. They have mostly only brand recognition. Not discounting their efforts, but they are playing checkers and Google is playing chess.


One important thing for me is that Docker also handles other operating systems than Linux through their respective hypervisors.

Docker might well die, but first I'd have to be able to locally build and test containers on Windows and macOS before deploying them to the Kubernetes cluster, without fooling around with VirtualBox, Vagrant and the like.


Well, they have multi-platform support. Windows containers are intrinsically tied up with Docker (the current Windows containers docs start with a Docker EE Basic install).

My personal expectation is that Docker will be bought by Microsoft, probably this year or next ...


>Basically, Docker has almost no moat. They have mostly only brand recognition. Not discounting their efforts, but they are playing checkers and Google is playing chess.

Well how do you build images without docker?


Right. That's the entirety of their moat, plus brand recognition. What would Google have to spend to overcome that?


I think this is how I most commonly recognize someone who is familiar with LXC (with or without Docker) and containerization in general. A firm grasp of the fact that they're just beefy processes, isolated with namespaces and cgroups, is the best, most succinct way to describe Docker (without even mentioning the benefits), but it also requires that the hearer knows what namespaces and cgroups are.

If I had to rank understanding in explanations of docker:

1. "Makes your application really portable"/other explanations that only cover the benefits of docker not how it works

2. "Lightweight VMs"

3. "LXC + some other stuff"

4. "processes + namespaces + cgroups (+/- image management tools, etc)"

100% agreed on the point you made; pretty sure Docker+Swarm and the other orchestration efforts have basically lost the competition. Kubernetes just has the mind share, and even better than that -- it's actually good.

Kubernetes also has multiple competing container runtimes all rushing to fit the CRI (Container Runtime Interface). Just some of the stuff out there:

* cri-containerd - https://github.com/containerd/cri-containerd

* runv (hypervisor based) - https://github.com/hyperhq/runv

* clear containers (hypervisor based) - https://github.com/clearcontainers/runtime

* kata containers (hypervisor based, collaboration of runv + clear container) - https://katacontainers.io/

* frakti (combination of runv & docker, enabling switching @ runtime) - https://github.com/kubernetes/frakti

A bunch of the projects are in their infancy, but the CNCF/kubernetes and the community are doing the right thing investing in lots of them, and letting the good ones bubble to the top. I don't run a production kubernetes cluster but I've found cri-containerd to be really easy to use and haven't had any major problems with it, mostly small configuration things.

The last one, frakti, gets me really excited, because it lets you be flexible about which containers run in more protected environments and which don't. Also really exciting is frakti v2 (https://github.com/kubernetes/frakti/tree/containerd-kata), which is kata + containerd.


Kinda feels like I'm in a foreign country and ran into someone who speaks my language :). Thanks for the additional info, like frakti, for example... didn't know about that.

Explaining Docker is frustrating for me because I was a Unix admin back in the '90s. So, for me, it's pretty easy to see what it is. And it isn't new. There isn't much it does that Solaris zones or BSD jails didn't do, and those predate Docker by years. Namespaces, cgroups, and pivot_root. That's basically all of it. Kudos to Docker for marketing it better.
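
For the curious, a hedged sketch of that last ingredient, pivot_root, in Python with ctypes. It assumes ./rootfs holds an extracted image (say, a `docker export`-ed alpine) and root privileges; glibc has no pivot_root wrapper, so it goes through syscall(2):

    import ctypes
    import os

    libc = ctypes.CDLL("libc.so.6", use_errno=True)
    SYS_PIVOT_ROOT = 155  # x86_64-specific syscall number
    CLONE_NEWNS = 0x00020000
    MS_BIND, MS_REC, MS_PRIVATE = 0x1000, 0x4000, 0x40000

    new_root = os.path.abspath("rootfs").encode()

    # New mount namespace, with / made private so nothing leaks to the host.
    libc.unshare(CLONE_NEWNS)
    libc.mount(b"none", b"/", None, MS_REC | MS_PRIVATE, None)

    # pivot_root wants new_root to be a mount point: bind-mount it onto itself.
    libc.mount(new_root, new_root, None, MS_BIND | MS_REC, None)

    old = new_root + b"/old_root"
    os.makedirs(old, exist_ok=True)
    if libc.syscall(SYS_PIVOT_ROOT, new_root, old) != 0:
        raise OSError(ctypes.get_errno(), "pivot_root failed")
    os.chdir("/")
    # (A real runtime would now unmount /old_root to finish the jail.)
    os.execv("/bin/sh", ["/bin/sh"])  # a shell whose / is ./rootfs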

Explaining it, on the other hand, to a broad audience...


That's sorta true, but the layered-userspace-filesystem support is a pretty big feature that Docker pioneered Linux support for.

Did userspace filesystems/layered filesystem-in-a-box implementations exist before? Sure: from ZFS to squashfs to tar, all the components were around.

But Docker's popularity is due to its integration; not its novelty. By bundling a filesystem-in-a-box abstraction layer with varying degrees of native OS support into something like a mountable artifact/image, hiding the image caching mechanics behind a nice CLI and image-configuration file format (love 'em or hate 'em, Dockerfiles are a phenomenal example of minimalism and convenience: add some meta commands, and everything else is just shell), and integrating all that with the privacy/isolation (namespaces/chroots) and resource management (cgroups/quotas) stuff, Docker made a really, really powerful model of thinking about "containers" as a single concept.

The thought model is Docker's big achievement: other competitors may unseat Docker eventually, but the concept of "container" as a single, unitary thing that's almost like a VM image will stick around, and the more piecemeal understanding of how containers work will be largely unnecessary, and thus save a lot of time for a lot of people.


LXC had most (if not all) of what you're describing, and a long time ago Docker was just a simple wrapper around LXC. A lot of the inspiration for containers came from other operating systems (FreeBSD Jails and Solaris Zones) as well as previous work such as Xen.

I think what made Docker popular was that it was easier for developers to use, with small bits of information like what ports the container wants to listen on and so on. From an administration standpoint, LXC was already more than good enough.


I agree with all of that, but none of it is their intellectual property. It's mostly brand recognition that keeps them afloat.

K8s could easily release a command line clone and unseat docker in fairly short order.

Not diminishing the value of what the Docker folks bundled together and marketed. They did a terrific job. It just isn't very protected.


Yeah, I didn't either until not too long ago, but now I'm pretty excited to use it.

As a person who wasn't a Linux admin back in the '90s and doesn't normally hang out on mailing lists, the first time I saw LXC was in some random article on lwn.net (I don't hang out there, but they have some super high quality articles). I definitely didn't put together how much of a difference it was poised to make, or even know why it was a good idea (people were still getting used to Vagrant everywhere at the time). Docker definitely did the community a service by bringing the hype train, if only so that once the hype subsided, containers would be here to stay.


> As a person who wasn't a Linux admin back in the '90s,

Nobody was a Linux admin in the '90s: we ran HP-UX, IRIX, OSF/1 DEC Unix (and Ultrix), AIX, Solaris and NetBSD. Those were our Linux; we grew up on them the way you grew up on a Linux ISO on your parents' PC.


I ran a small ISP on a pair of Linux (Slackware probably) boxes, a Livingston Portmaster, a bank of Hayes modems, and a T1 in the mid-90s. So some of us were Linux admins then.


Me too. Maintained a small (100 or so) fleet of Slackware Linux desktops for a support org in the early '90s. Rare, but it did exist. Was the install 13 1.44MB floppies? Seems to ring a bell. Lots of waiting for the prompt to switch out the floppy. And a more intimate relationship with "dd", "kermit", etc. than I remember ever encountering again.

Also, whoever wrote x3270. Thank you!


That's crazy: Linux isn't rightly usable even now 20+ years later, and you ran it in mid '90's when it could barely boot a shell reliably. And Slackware no less, which meant dumping tape archives everywhere instead of package management.

Crazy.


We have a 1999 VA Linux box still "in production" (just to see how long it will go, at this point. Nothing mission-critical.)


From outside (I know nothing about zones, jails) it looks like saying Dropbox is not new (in 2007) because rsync was not new.


Look into either. They are pretty much the same thing as Docker. It isn't a superficial similarity.

For example, filed in 2003: https://patents.google.com/patent/US20050021788


I’m about a 1.5

Any good resources you could suggest for learning more about what you describe as the relevant areas?


I want to point out that I'm by no means an expert -- the real experts are the people in the talks that I mentioned, the core contributors to the libraries.

I think a good place to start is those talks (and stuff from any container-centric conferences), along with lots and lots of practice using containers.

In general, I'm pretty sure that if you read up on chrooting, processes on Linux, process isolation on Linux, LXC, and then the relevant standards/tools that underlie Docker, like runc (https://github.com/opencontainers/runc), you'll have a pretty deep understanding.

Also, for day-to-day use, I honestly think you can do just light research on the above topics and start using docker and know way more than the average developer. As you use it more and more, you'll gain more intuition, and when you bump up against certain issues, you'll probably gain some intuition as to where things are going wrong (though honestly the toolchains are pretty stable now).

The goal of a lot of these projects is to be so stable you don't have to worry about it, so I don't feel too guilty about it, in the same way I don't feel too guilty about not ever having cache-line-optimized a program in my life.


That's why some people believe that in 10 years we won't have docker, we won't have kubernetes, but that it will be intuitively integrated into the OS through new system design patterns that still have to appear out of all the crazy experiments we're doing.


That already exists and has existed for several years now: SmartOS. imgadm(1M) and vmadm(1M) are core parts of the OS and do what Docker does, and more. Built not as an experiment, but to power a large scale commercial cloud business. And freeware / open source since before the project went live by virtue of OpenSolaris (now illumos).


That might be true of Docker. Kubernetes on the other hand is a way to deploy and manage distributed applications, which are by definition bigger than any single OS.


Not if the OS starts to intrinsically view itself as a node in a distributed system. Mainframes sort of see SMP and separate nodes connected over a network as two sides of the same coin, just loosely vs. tightly coupled.

https://www.ibm.com/support/knowledgecenter/en/SSGU8G_12.1.0...


Also Plan9:

"Since CPU servers and terminals use the same kernel, users may choose to run programs locally on their terminals or remotely on CPU servers. The organization of Plan 9 hides the details of system connectivity allowing both users and administrators to configure their environment to be as distributed or centralized as they wish. Simple commands support the construction of a locally represented name space spanning many machines and networks. At work, users tend to use their terminals like workstations, running interactive programs locally and reserving the CPU servers for data or compute intensive jobs such as compiling and computing chess endgames. At home or when connected over a slow network, users tend to do most work on the CPU server to minimize traffic on the slow links. The goal of the network organization is to provide the same environment to the user wherever resources are used."

http://doc.cat-v.org/plan_9/4th_edition/papers/net/


Most network protocols are state machines in the kernel. That the end result of a TCP state machine is communication between two computers is more of a lucky coincidence that results from smart state machine design.


Yes, I've given talks on what containers actually are and which technologies they combine. Containers' oldest technology would be mount namespaces from 2002; the youngest are user namespaces from 2013 (on Linux anyway; Solaris and IBM had containers before).

People are always amazed that container products are actually mostly just the glue around what the kernel already provides and that their history goes back almost two decades.


Maybe even further. Mainframes have had similar for decades. Yes, perhaps more like actual VM's, but the namespacing, cgroup stuff, network overlays, etc, are very similar.

My mainframe folks, when talking about docker, universally yawn.


So do the Linux guys. Virtuozzo (later OpenVZ) has been around for almost twenty years.


cgroup namespaces are a little younger and arrived in 4.6, I think: http://man7.org/linux/man-pages/man7/cgroup_namespaces.7.htm...


cgroups v2 probably, cgroups v1 is definitely older.


Nah, I'm talking about the cgroup _namespace_ specifically, not cgroups in general.


Can someone explain why starting a Docker container usually has a 200-300ms startup penalty? It's fine if I want to start a web server, but it's long if it's just a precompiled script that runs on command and then dies.

Is it Docker that is taking 200ms to start the container, or is it just the nature of the OS APIs?


Mostly it's Docker-specific, but all of the things causing this latency tend to be what makes Docker useful in the first place. In particular, networking and mounts.

For starters, if you're just running the typical Docker setup to talk back and forth using a UNIX or TCP socket, all of _that_ networking and HTTP/JSON encoding+decoding will add some overhead. This is even worse if you're going through a proxy layer like the socket which Docker for Mac creates and forwards to the Linux VM. OK, so there's at least a few milliseconds of latency built in for this; the socket-forwarding hop on my Docker for Mac seems to add ~5-15ms, for instance.

Once the daemon receives the request, runc, the command that actually does the work to create a container without all that hoopla, is invoked.

Then, to actually start up a container, quite a few mounts get set up, either for devices (potentially including a tty in the case of `run -t`) or for the union filesystem. Take a look at the output of `mount` in a container sometime: around 30-40 mounts (and all those overlay layers...), and a lot of that gets set up on the fly, because each container has its own unique mount namespace and view of the world.

Then after all of that, you need to set up networking too (`docker run --net none` will skip this), otherwise containers can't talk to the Internet, to each other, or listen on ports of the host. Remember, each has its own network namespace, created from scratch. So, Docker's doing all manner of adding port proxies, modifying iptables, attaching containers to interfaces, and so on. Otherwise, `docker run alpine wget -qO- https://google.com` or `docker run --net=internalnet` wouldn't work out of the box.

The work isn't easily done concurrently since there are so many dependencies - e.g., you need to have a process in order to set a network namespace; in order to run that process, you need to have a root FS prepared to pivot_root into; and so on.

While all of this is happening, the Docker daemon is doing god knows what - it could be pulling images, or running other containers already, supervising processes, etc. While many of those things are concurrent due to the use of goroutines, they do eat up resources, and generally too much work saturates and slows down the daemon. That's not really unique to Docker, though; any program doing all this stuff at once would have that issue.

Anyway, that's why Docker takes way longer to start a process than good old execve() in your shell.
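
If you want to see the split for yourself, here's a rough, hedged way to measure it (assumes Linux with util-linux's unshare and Docker installed; numbers will vary wildly by machine):

    import subprocess
    import time

    # Bare exec, raw namespace creation, and the full Docker path, in that order.
    for cmd in (["true"],
                ["sudo", "unshare", "--pid", "--fork", "true"],
                ["docker", "run", "--rm", "alpine", "true"]):
        start = time.perf_counter()
        subprocess.run(cmd, check=True)
        print(cmd[0], f"{(time.perf_counter() - start) * 1000:.1f} ms")

The middle case shows that creating the namespaces themselves is cheap; nearly all of the 200-300ms lives in the daemon plumbing described above.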


Could be as simple as the init process. Have you tried a slimmed-down guest distro like Alpine and compared startup times?


A container is launching a process in a way that the kernel lets it believe it can be root, gives it whatever illusion of a filesystem and network connectivity you want it to believe, and fools it into thinking it is the only process running on this kernel.

Therefore, it is as lightweight as launching a process?

Is that about right?


Pretty much. Usually you run your own init that starts the desired process. If you, as a process, have ID (PID) 1, then you need to manage orphaned processes (and zombies) and general signal handling (e.g. SIGTERM).

There is a lot of cgroups and capabilities tweaking involved also. You can also use BPF (e.g. seccomp-bpf) to restrict system calls, etc.

In general you i) add the illusion of being king of your own little world, ii) provide the necessary handling that the kernel expects from a king, and iii) restrict resources (e.g. memory, system calls, ...) and facilitate communication with the rest of the world.
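
A minimal, hedged sketch of what i) and ii) mean for PID 1: spawn the real workload, forward SIGTERM to it, and reap whatever orphans the kernel reparents to you. (Roughly what tini does, minus many edge cases.)

    import os
    import signal
    import sys

    child = os.fork()
    if child == 0:
        # e.g. `python init.py nginx -g "daemon off;"`
        os.execvp(sys.argv[1], sys.argv[1:])

    # Forward SIGTERM so `docker stop` reaches the real workload.
    signal.signal(signal.SIGTERM, lambda sig, frame: os.kill(child, signal.SIGTERM))

    while True:
        try:
            pid, status = os.wait()  # reaps the child and any adopted zombies
        except ChildProcessError:
            break                    # nothing left to reap
        if pid == child:
            sys.exit(os.waitstatus_to_exitcode(status))  # Python 3.9+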


There is also now a `--init` flag for docker run and for dockerd that will run a Pretty Reasonable PID1 (tini) automatically.


Or rather, rebuild runc from scratch.

These days, Docker isn't really "about" the running of containers (all the logic for which is encapsulated outside of the Docker project itself, in runc.) Docker—the thing that Docker Inc produces—is the tooling that gets container images fed into a daemonized instance of runc.

So: the Docker Registry daemon (the thing you can push/pull images to/from, such as runs on Docker Hub); the `Dockerfile` format, and the build logic that uses it, and the CI bots that use that build logic; the local daemon that holds a mini-registry that builds and pulls write to; and the tooling to move and create and dump images between all those places.

If there's a tutorial that replicates that stuff, I'd love to read it.


I dunno about that. There are multiple issues with routing and DNS that are controlled by the docker daemon. Ever tried getting ipv6 working correctly? You end up hitting a whole host of issues that lead straight back to dockerd.


Do you realize that "that stuff" is merely a binary package system?


People use Docker because it's easy to use and easy to get support, and it's easy to hire people who have experience with it.

Similarly, you could roll your own Dropbox solution with rsync, but good luck teaching your Mom how to do that when she calls you on the phone and she just wants to know how to sync her photos.

By the way, it's pretty difficult to write a binary package system that is reliable, distributed, easy to use, easy to troubleshoot, well supported, and easy to sell to both the engineers and the execs at a company. Docker has done that. It's not a small thing.


Eh, if it's a binary package system it's a very nice one--perhaps an historically nice one.

Unlike most package systems, most people that develop in docker containers are writing the equivalent of a package manifest (Dockerfile), and they're doing it for things that weren't previously packaged. While there are plenty of exceptions, the standard for deployment of webapps has long been piecemeal deployments; not RPMs or whatnot. Docker's convenience features changed that.

Also, the layering system/virtual filesystems are integrated into Docker such that it yields a ton of convenience (in "extending" others images, speeding up deployments/caching, and making huge/arbitrary changes to the filesystem highly reversible) without making users manually manage most of it. There are times when the abstraction leaks (looking at you, hard limit on number of layers), and all of the component technologies that went into it existed before, but as I've posted elsewhere in the comments here, the big advantage of docker is that it integrates those technologies in a way that provides a simple mental model for reasoning about containers, and a very convenient/beginner-accessible toolset for packaging really complex dependencies. Even the nicest "typical" packaging tools out there (rpm/dpkg are pretty crufty, but stuff like Nix and FPM are getting quite nice) still have much rougher edges.


> Unlike most package systems, most people that develop in docker containers are writing the equivalent of a package manifest (Dockerfile)

Dockerfile is a set of build rules. You need build rules for RPMs and DEBs too (unless you use FPM, which you somewhat praised later, in which case there's no good place to put the rules and you get terrible packages with missing metadata).

Unlike most package systems, Docker was designed by programmers for programmers, so you don't need to learn those yucky sysadmin tools and you can stay oblivious to how the OS actually works (even though you should actually know the OS you're dealing with, so it's a dumb idea on its own).

> and they're [writing Dockerfiles] for things that weren't previously packaged.

Which is progress only because programmers are putting the code into packages at all. Exactly the same could be achieved with RPMs and DEBs, even if you needed something that was already shipped by the distribution (you'd just put your things on the side, just as you are doing now with virtualenv or gems or whatever).

> While there are plenty of exceptions, the standard for deployment of webapps has long been piecemeal deployments; not RPMs or whatnot. Docker's convenience features changed that.

But Docker didn't change that by allowing anything that was previously impossible or even difficult. Quite the contrary: it was easy, it just required actually knowing how the OS works, which is not common knowledge among web programmers. OS packages were ignored by programmers solely because they were sysadmins' tools, and as such they were boring. I see no other reason, given that Docker gives virtually no technical benefit besides heavy, brittle magic in network configuration, so that all your packaged daemons can listen on the very same address 0.0.0.0:8888 and still communicate with each other.

> the big advantage of docker is that it integrates those technologies in a way that provides a simple mental model for reasoning about containers

This mental model, it is what exactly? Because there's virtually no mental model with the packages. A tarball with necessary files, that's all.


> You need to have build rules for RPMs and DEBs.

Yes, but as you said just after that, the build rules in a Dockerfile are much more popular with a wide range of programmers. That might be because the Dockerfile abstractions/API are better and simpler, or because programmers like writing shell scripts but not RPM build rules, or for some other reason.

> you should actually know the OS you're dealing with, so it's a dumb idea on its own ... it was easy, it just required to actually know how the OS works, which is not a common knowledge among web programmers.

I'm really tired of this attitude. Of course you should know the OS you're dealing with. What you need to know about it depends on what you're doing with it. If I want to do kernel work, I don't need to know the best design principles for an ES7 web framework. If I want to make a website, I don't need to know how to write Apache 2 from first principles, and nor do I need to know how to manually chroot/install quotas/set up namespaces and capabilities to build a container system from scratch.

> This mental model, it is what exactly? Because there's virtually no mental model with the packages. A tarball with necessary files, that's all.

If it was just a tarball, it would be less powerful--it's the whole bunch of technically-unrelated things (resource management, networking, capabilities, namespaces, tarball-ish features, layering, dockerfile API, nice CLI with pluggable backends, standardized container interface) all unified under the abstraction of "this is a single unit, just like an RPM package". That concept is powerful exactly because it hides the fundamentals of the specific component technologies from people who don't need to know them--at least not at first.

Saying Docker is just "package maintenance for stupid people who only play with Duplo legos", you're being ignorant of the real needs of people at best, and deliberately elitist at worst. It's like saying "Dropbox is just for people who don't want to learn how file syncing over the network works--real programmers will just use curlftps and SVN".


> I'm really tired of this attitude. [...] What you need to know about [the OS] depends on what you're doing with it.

You want your system deployed, so you should know how to deploy. If you don't know how most things are deployed, you're likely to create a monstrosity that doesn't fit in any sensible way with what the OS can do.

It's like arguing that a web programmer doesn't need to understand the HTTP protocol because he only works with it through half a dozen layers of abstraction (which is how it's currently done).

> If I want to make a website, I don't need to know how to write Apache 2 from first principles, and nor do I need to know how to manually chroot/install quotas/set up namespaces and capabilities to build a container system from scratch.

Of course, because that's what sysadmins do. But you should know how to configure said Apache. Docker hides that away behind heavy magic, which is bound to break apart for non-trivial requirements.



A similar project is 'mocker', a "crappy imitation of Docker, written in 100% Python": https://github.com/tonybaloney/mocker


> I keep hearing statements like "Docker is basically just cgroups", "Docker is just chroot on steroids", which is totally incorrect.

But this project basically proves that the characterization is pretty accurate.


Go back to the top level of this discussion and Ctrl-F for "runc"; then it becomes clearer. Docker is more than we usually think it is, because it makes all the stuff around containers so easy to use.


Agree. I'm working through the rubber-docker lab now. The fact that Docker (a) handles all the details of cgroups, overlays, and namespaces; (b) is super easy to use; and (c) runs cross-platform is impressive.


I like how this includes a very simple, readable C extension for Python that exposes some missing syscalls: https://github.com/Fewbytes/rubber-docker/blob/master/linux....
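
For comparison, a hedged sketch of reaching one such syscall, setns(2), from pure Python via ctypes instead of a C extension (glibc 2.14+ does ship a wrapper for this one; root required):

    import ctypes
    import os

    libc = ctypes.CDLL("libc.so.6", use_errno=True)

    def setns(pid, nstype="net"):
        """Join another process's namespace, like `nsenter -t PID`."""
        fd = os.open(f"/proc/{pid}/ns/{nstype}", os.O_RDONLY)
        try:
            if libc.setns(fd, 0) != 0:
                raise OSError(ctypes.get_errno(), "setns failed (need root?)")
        finally:
            os.close(fd)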


I've been looking for something like this for a while now. I find building things from the bottom up is a great way to learn how things work (naturally).


Also taking things apart is a great way to know how to start a bottom up build.


Agree 100% on this. Genuine ask - Could you please share what you have been building?


I read the OCI runtime/container specs when I was first starting to move all my work projects' dev environments to docker (Lando, to be precise - in case you haven't used it, it's awesome!). That was very illuminating, but there was a lot to it that I didn't understand because of the relatively dense language. Wish I'd had this! Thanks for posting.


Yeah, we really could've done a better job with explaining how to write your own configurations. I tried to make the conversion from images to runtime specified in some manner[1], but really the trade-off between granularity and ease-of-use was tipped to the "extreme granularity" side of the scale.

Though if you're writing low-level container configurations you really should already know enough to write a (simple) container runtime yourself (because the security pitfalls with a slightly misconfigured container can be pretty bad -- even though runc does quite a lot of things to keep you safe that are outside of the spec).

[1]: https://github.com/opencontainers/image-spec/blob/v1.0.0/con...


How would you describe a Linux container in one sentence? I've been working with them for some years already, but I'm still struggling to find a concise description aimed at e.g. fresh CS undergrads.


"A Linux container is just a process that runs in a context where it thinks it exists in isolation, usually with some resource limits applied."

If you wanted the more "fundamental" explanation, rather than a "how to use them" explanation.
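
One hedged way to demo exactly that sentence (assumes Docker on a Linux host; on a Mac the daemon lives in a VM, so this won't work): start a container, then find its workload sitting in the host's ordinary process table.

    import subprocess

    def run(*cmd):
        return subprocess.run(cmd, capture_output=True,
                              text=True, check=True).stdout.strip()

    cid = run("docker", "run", "-d", "--rm", "alpine", "sleep", "300")
    pid = run("docker", "inspect", "-f", "{{.State.Pid}}", cid)

    # From the host's point of view it's just a process running `sleep 300`.
    with open(f"/proc/{pid}/cmdline") as f:
        print(f.read().replace("\0", " "))

    run("docker", "rm", "-f", cid)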


I've tried through a joke:

> This doesn't build.

> Well it works on _my_ machine.

> Fine, then we ship _your_ machine.


How about: "There is no such thing as a 'Linux container.'"

Followed by: "Now let's talk about namespaces and cgroups."


Correct, you grok it! Only Solaris and illumos kernels have true containers (resource limits) optionally applied to zones (virtualized OS instances providing full blown UNIX servers) running at bare metal speed as a bunch of processes in the global zone. FreeBSD jails come close (they were the inspiration for zones), but are more akin to chrooted jails than containers. Nowadays they are conceptually more like zones than they were in the beginning.


There is the idea of "high-level concepts": concepts that build on top of other concepts. So it's probably impossible to explain without some insight into operating systems, programming, multiprocessing, security, packaging, CS history, etc.

Explaining it to undergrads would therefore require a "model", i.e. an abstraction of reality that simplifies what is really there but is still applicable for making predictions about outcomes in the real world.

So I would start by analysing how I would explain Unix user management to a newbie who hasn't learned about filesystems yet. Then transfer some of that thinking to processes instead of files. E.g. this process doing X and that process doing Y shouldn't conflict with each other, but might get into trouble when they both try Z. Therefore they need to be separated in an abstract way. And that's what namespaces are for. Yadda yadda.

Another approach might be taking ideas from explaining virtual memory to newbies. I.e. each process has their own virtual filesystem, their own virtual network, and the underlying operating system will figure out how to make things work out without conflicts (if possible).


"Container" means two related but different things nowadays: (1) an abstraction to group some processes and give them their own pid, fs, and network spaces; and (2) an archive that contains some application(s) and everything that's required to run them (minus the kernel), plus metadata on what it is and what it requires from the host kernel to actually run.


A container has to be (1), but can also be (2).


"A Linux container is a way to run multiple distros on the same kernel at once."

Then define distro to be all of the user-space stuff in a Linux system: The init system, the libraries, the user-visible applications, and the data.

Then get into how it works, which is all namespaces: A namespace for the filesystem, a namespace for the networking stuff, and so on, including a namespace for RAM... which we don't usually think of as a namespace, but it's what the MMU does, and the kernel uses the MMU.

In short, separate what from how, and ensure they understand the what first.


A Linux container is a virtualization system that virtualizes the entire userspace, not the hardware itself (which is what standard VMs do).


Great, something to pass the time tonight. Many thanks!



