
Ignorance admission time: I still have no idea what problem containers are supposed to solve. I understand VMs. I understand chroot. I understand SELinux. Hell, I even understand monads a little bit. But I have no idea what containers do or why I should care. And I've tried.



Containers are just advanced chroots. They do for the network interface, process list, and your local user list what chroot does for your filesystem. In addition, containers often throttle the CPU, memory, block I/O and network I/O consumption of the running application, to provide some QoS for other applications colocated on the same machine.

It is the spot between chroot and VM. It looks like a VM from the inside, provides some degree of resource-usage QoS, and does not require you to run a full operating system the way a VM does.
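To make that concrete, here is a rough, Linux-only Go sketch (needs root or a user namespace, and is nothing like a complete runtime) of the kind of primitives a container engine combines: the child shell gets its own hostname, PID numbering and mount table, and otherwise just runs as a normal process.

  package main

  import (
      "os"
      "os/exec"
      "syscall"
  )

  func main() {
      // Run a shell in fresh UTS, PID and mount namespaces.
      cmd := exec.Command("/bin/sh")
      cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
      cmd.SysProcAttr = &syscall.SysProcAttr{
          Cloneflags: syscall.CLONE_NEWUTS | syscall.CLONE_NEWPID | syscall.CLONE_NEWNS,
      }
      if err := cmd.Run(); err != nil {
          panic(err)
      }
  }

Add cgroups for the resource throttling, a chroot/pivot_root into an image's filesystem, and an image format on top, and you are most of the way to what people call a container.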

Another concept that is now often automatically connected to containers is the distribution mechanism that Docker brought. While provisioning is a topic orthogonal to runtime, it is nice that these two operational concerns are solved at the same time in a convenient way.

rkt did some nice work to allow you to choose the runtime isolation level while sticking to the same provisioning mechanism:

https://coreos.com/rkt/docs/latest/devel/architecture.html#s...


Unfortunately, containers provide about the same security as chroots too. Nothing even close to a true virtual machine, and at not much lower cost.


Chroot does not provide security, just a restricted view of the file system. Containers can provide pretty OK security, but fail against kernel exploits. VMs provide better security, but also fail against VM exploits (of which there are quite regularly some).


Actually, many of the VM exploits are related to QEMU device emulation or paravirtualization drivers, which are closed by the use of Xen stubdoms. Only very few were privilege escalations via another vector, in both KVM and Xen. I have no idea about other hypervisors.


And in turn, most QEMU vulnerabilities are closed by SELinux if your distribution enables it. Libvirt (and thus the virt-manager GUI) automatically confines each QEMU process so that it can only access the resources for that particular VM.


Seccomp, apparmor, and namespacing (especially user!) do add a lot more security than plain old chroots, but still not at the level of a VM.
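For the user-namespace part specifically, here is a hedged little Go example (Linux only, on a kernel that allows unprivileged user namespaces): the child believes it is uid 0, while on the host it is just an ordinary unprivileged process.

  package main

  import (
      "os"
      "os/exec"
      "syscall"
  )

  func main() {
      // Run `id` in a new user namespace; it should report uid=0 inside,
      // while any damage maps back to the unprivileged uid on the host.
      cmd := exec.Command("id")
      cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
      cmd.SysProcAttr = &syscall.SysProcAttr{
          Cloneflags: syscall.CLONE_NEWUSER,
          UidMappings: []syscall.SysProcIDMap{
              {ContainerID: 0, HostID: os.Getuid(), Size: 1},
          },
          GidMappings: []syscall.SysProcIDMap{
              {ContainerID: 0, HostID: os.Getgid(), Size: 1},
          },
      }
      if err := cmd.Run(); err != nil {
          panic(err)
      }
  }

A kernel bug in any of these subsystems still breaks the whole thing, though, which is the "not at the level of a VM" caveat.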


But couldn't containers have been designed that way? One thing I have in mind is one of Windows 10's recent features, which consists of running certain applications using the same hardware-level memory protection mechanism as VMs, so that the application is safe from the OS/kernel, and the OS/kernel is safe from the application (can't find the exact name of this new feature unfortunately).


Containers can't be designed that way as long as the primitives to build them that way (which are mostly part of the Linux kernel) are missing. That's a core part of the article. Containers aren't an entity by themselves, they're a clever and useful combination of an existing set of capabilities.


It is like that... but in Zones or Jails, not in Linux "container toolkit"


No, the Windows 10 feature he's talking about uses Hyper-V internally. It's called, unsurprisingly, Hyper-V containers: https://docs.microsoft.com/en-us/virtualization/windowsconta...


Actually I found it. It's called Windows 10 Virtual Secure mode

https://channel9.msdn.com/Blogs/Seth-Juarez/Windows-10-Virtu...

(or Windows 10 isolated user mode, which seems kind of similar)

https://channel9.msdn.com/Blogs/Seth-Juarez/Mitigating-Crede...


Oh yeah that's another use of Hyper-V, somewhat similar to ARM TrustZone. It's used to implement "Credential Guard".


You can design all you like, but implementation takes work.

Seccomp only landed for Docker in about 1.12


in Linux...

as the article shows, this is not the point for Zones and Jails.


Is there any fundamental difference between containers and shared-kernel virtualization (OpenVZ) that I am missing?


OpenVZ was an early container implementation that required patches to the Linux kernel that never made it into mainline. Parallels acquired the company behind OpenVZ, Virtuozzo, and then worked to mainline much of the functionality into what are now Linux namespaces.


Oh really? I didn't know that namespaces in Linux are the OpenVZ changes. I thought they were a completely new implementation, mostly driven by Google?


They aren't, but share some similarities. OpenVZ can be considered an inspiration for LXC. (Which was mostly implemented by RedHat and not Google.)


Correction. The LXC project was mainly Daniel Lezcano and Serge Hallyn from IBM. Then some cgroup support from Google. And then Canonical hired Serge Hallyn and Stephane Graber to continue work on LXC around 2009, and they have continued to develop it to this day. Docker was based on LXC in 2013.


Very helpful. Thanks.


I'm with you, but I've found a single use case that I'm running with, and potentially a second that I'm becoming sold on. So far, the most useful thing for me is being able to take a small application I've written, package it as a container, and do so in a manner where I know it will run identically on multiple remote machines that I will not have proximity to manage should something go wrong. I can also make a Big Red Button to blow the whole thing away and redownload the container if need be, since I was (correctly) forced to externalize storage and database. I can also push application updates just by having a second Big Red Button marked "update" which performs a docker pull and redeploy. So now, what was a small, single-purpose Rails app can be pushed to a dozen or so remote Mac minis with a very simple GUI to orchestrate docker commands, and less-than-tech-savvy field workers can manage this app pretty simply.

I'm also becoming more sold on the Kubernetes model, which relies on containers. Build your small service, let the system scale it for you. I don't have as much hands-on here yet, but so far it seems pretty great.

Neither of those are the same problems that VMs or chroot are solving, as I see it, but a completely different problem that gets much less press.


Everyone says containers help resource utilization, but I think their killer raison d'être is that they are a common static-binary packaging mechanism. I can ship Java, Go, Python, or whatever, and the download-and-run mechanism is all abstracted away.


Does this mean we're admitting defeat with shared libraries and we're going back to static libraries again?


Disk space is cheap. And we've got multi CPU core servers.

So now we have the issue that you have lots of applications running on the same server, and how do we make sure the right version of some shared lib is on there. And that we won't break another program by updating it.

Containers solve that. No more worrying if that java 8 upgrade will break some old application.

So now every application stack is a static application.


It isn't just about disk space though. It also allows you to quickly make API-compatible vulnerability fixes without a rebuild of your application.


This isn't a virtue. Containers solve problems in automated continuous-deployment environments where rebuilding and deploying your fleet of cattle is one click away. In the best case, no single container is alive for more than O(hours). Static linking solves way more operational problems than the loss of dynamic linking introduces, security or otherwise.


> This isn't a virtue. Containers solve problems in automated continuous-deployment environments where rebuilding and deploying your fleet of cattle is one click away.

This has literally zero to do with containers and everything to do with an automated deployment pipeline.

As a quick FYI: Those are not unique to containers.


> rebuilding and deploying your fleet

...this applies only to software developed and run internally, which is a small fraction of all the software running in the world.


I agree that moving towards static linking, on balance, seems like a reasonable tradeoff at this point, but it is hardly as cut and dried as a lot of people seem to think.

As one very minor point, it turns vulnerability tracking into an accounting exercise, which sounds like a good idea until you take a look at the dexterity with which most engineering firms manage their AWS accounts. (Sure, just get better at it and it won't be a problem. That advice works with everything else, right?)

One's choice of deployment tools may slap a bandaid on some things, but that is not the same thing as solving a problem; that is automated bandaid application.

And odd pronouncements like any given container shouldn't be long lived are... odd. I guess if all you do is serve CRUD queries with them, that's probably OK.

As a final point, I feel like the container advocates are selling an engineer's view of how ops should work. As with most things, neutral to good ideas end up wrapped up with a lot of rookie mistakes, not to mention typical engineer arrogance[1]. Just the same thing you get anywhere amateurs lecture the pros, but the current hype train surrounding docker is enough to let it actually cause problems[2].

My takeaway is still the same as it was when the noise started. Docker has the potential to be a nice bundling of Linux capabilities as an evolution of a very old idea that solves some real problems in some situations, and I look forward to it growing up. In the mean time, I'm bored with this engineering fad; can we get on with the next one already?

[1] One very simple example, because I know someone will ask: Kubernetes logging is a stupid mess that doesn't play well with... well, anything. And to be fair, ops engineers are no better with the arrogance.

[2] Problems like there being not even a single clearly production-ready host platform out of the box. Centos? Not yet. Ubuntu? Best of the bunch, but still hacky and buggy. CoreOS? I thought one of the points was a unified platform for dev and prod.


Linking with static libraries takes more time (the poor programmer has to wait longer on average while it links); also, with shared libraries, when something crashes you can see from the backtrace or from ldd which version of Foo is involved.


Mostly, yes. Notice that Go and Rust (two of the newer languages popular at least on HN) also feature static compilation by default. Turns out that shared libraries are awesome, until the libraries can't provide a consistently backwards compatible ABI.


Go has no versioning, in Rust everything is version 0.1... then one day you update that serialization library from 0.1.4 to 0.1.5 and all hell breaks loose because you didn't notice they changed their data format internally and now your new process can't communicate with the old ones and your integration test missed that because it was running all tests with the new version on your machine. This makes you implement the policy "Only rebuild and ship the full stack" and there you are, scp'ing 1GB of binaries to your server because libleftpad just got updated.


Except that outside of JavaScript nobody on earth makes a libleftpad, and whose binaries are 1GB?


In D it is part of the standard library. ;)

https://dlang.org/library/std/range/pad_left.html




Static libraries can't be replaced/updated post-deployment; you need to rebuild. Shared libraries in a container, on the other hand, can be, which is useful if you're working with dependencies that are updated regularly (in a non-breaking fashion) or with proprietary binary blobs.


> Static libraries can't be replaced/updated post-deployment

And that's great news. Immutable deployment artifacts let us reason about our systems much more coherently.


No, they prevent an entire class of reasoning from needing to take place. It is still possible to reason coherently in the face of mutable systems, and people still "reason" incoherently about immutable ones.


Is rebuilding and redeploying a container really any different from rebuilding and redeploying statically linked binaries?


For a lot of applications: no, it's very similar, and if you have a language that can be easily statically compiled to a binary which is free of external dependencies and independently testable, and you've set up a build-test-deployment pipeline relying on that, then perhaps in your case containers are a solution in search of a problem :-)

But there are more benefits like Jessie touches upon in her blog post, wrt flexibility and patterns you can use with multiple containers sharing some namespaces, etc. And from the perspective of languages that do not compile to a native binary the containers offer a uniform way to package and deploy an application.

When I was at QuizUp and we decided to switch our deployment units to Docker containers, we had been deploying using custom-baked VM images (AMIs). When we first started doing that it was due to our immutable-infrastructure philosophy, but soon it became a relied-upon and necessary abstraction to homogeneously deploy services whether they were written in Python, Java, Scala, Go, or C++.

Using Docker containers allowed us to keep that level of abstraction while reducing overhead significantly, and because the containers are easy to start and run anywhere, we became more infrastructure-agnostic at the same time.


Not everyone has container source code - or it might be impractical. If you run RabbitMQ in your container would you want to build that from source as part of your build process?


"Container source code" is usually something like "run pkg-manager install rabbitmq" though.


It would be nice to have a third option when building binaries: some kind of tar/jar/zip archive with all the dependencies inside. It would give the pros of static and shared libraries without everything else containers imply. The OS could then be smart enough to only load identical libraries once.


That's equivalent to static linking, but with extra runtime overhead. You can already efficiently ship updates to binaries with something like bsdiff or Courgette, so the only reason to bundle shared libraries in an archive is for LGPL license compliance, or for poorly thought out code that wants to dlopen() itself.


Upgrading a library that has been statically linked isn't as nice as a shared lib + afaik the OS doesn't reuse memory for static libs.


A container image is a tarball of the dependencies.


Yes, but containers also provide more stuff that I might not want to deal with.


The OS can be smart enough to load identical libraries once, but it requires them to be the same file. This can be achieved with Docker image layers and sharing the same base layer between images. It could also be achieved with a content-addressable store that deduplicates files across different images. This would be helped by a container packaging system that used the same files across images.

Page sharing can also depend on the storage driver; overlayfs supports page cache sharing and btrfs does not.
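As an illustration of the content-addressable idea (hypothetical paths, not Docker's actual layer store): hash each file and store it once, so byte-identical libraries from different images collapse to a single blob that the disk and page cache only hold once.

  package main

  import (
      "crypto/sha256"
      "fmt"
      "os"
      "path/filepath"
  )

  // storeBlob copies a file into the store under its content hash and returns
  // the store path; identical files from different images map to one blob.
  func storeBlob(storeDir, src string) (string, error) {
      data, err := os.ReadFile(src)
      if err != nil {
          return "", err
      }
      sum := sha256.Sum256(data)
      dst := filepath.Join(storeDir, fmt.Sprintf("%x", sum))
      if _, err := os.Stat(dst); err == nil {
          return dst, nil // already stored by some other image
      }
      return dst, os.WriteFile(dst, data, 0o444)
  }

  func main() {
      // Made-up store and library paths, purely for illustration.
      p, err := storeBlob("/var/lib/blobs", "/usr/lib/libexample.so")
      if err != nil {
          panic(err)
      }
      fmt.Println("stored as", p)
  }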


That's basically what OS X does with bundles.


jars already support this.


yes, I think we should have the same capabilities in a language agnostic way.

Signed jars are a little painful to use (you can't easily bundle them), but that's a minor issue.


This is why something like 25% of containers in the docker registry ship with known vulnerabilities.


Or you could have done the same thing years earlier with AMIs?


But AMIs are full VM images as opposed to container images, aren't they?


Most Docker images also contain a full OS.


Yes, everyone overlooks this and talks about how Docker containers are "app-only images" or something. They're not app-only. They may be using a thin OS like Alpine, but there's still a completely independent userspace layer. The only thing imported from the host is the kernel. If you made VM images in the same way, they'd also be 200M.

The benefit of "containers" is that you don't need to siphon off a dedicated section of RAM to the application.


I'm very new to containers, but I think I'm starting to get the hype a bit. Recently I was working on a couple of personal projects; for one I wanted a Postgres server, and for the other PhantomJS so that I could do some web scraping. Since I like to keep my projects self-contained, I try to avoid installing software onto my Mac. So my usual workflow would be to use Vagrant (sometimes with Ansible) to configure a VM. I do this infrequently enough that I can never remember the syntax, and there's a relatively long feedback loop when trying to debug install commands, permissions, etc. I gave Docker a try out of frustration, but was simply delighted when I discovered that I could just download and start Postgres in a self-contained way. And reset it or remove it trivially. I know there's a lot more to containers than this, but it was an eye-opener for me.


You can do this with Vagrant already. Before Vagrant, people distributed slimmed-down VM images for import and execution. Why is this ascribed as a unique benefit of containers?


Yeah this fits my experience exactly. I suppose I use docker a lot like a package manager (easy to install software and when I remove something I know it will be cleaned up).

Nearly every time I install actual software on my mac (beyond editors & a few other things) I feel like I end up tripping over it later when I find half my work wants version N and the other wants version M


Am also a huge newcomer to this.

Yeah, I think a lot of it is better resource utilization compared to VMs. At the same time, though, I don't think containers are the thing, but just a thing that paves the way for something very powerful: datacenter-level operating systems.

In 2010, Zaharia et al. presented [1], which basically made the argument that increasing scale of deployments and variety of distributed applications means that we need better deployment primitives than just at the machine level. On the topic of virtualization, it observed:

> The largest datacenter operators, including Google, Microsoft, and Yahoo!, do not appear to use virtualization due to concerns about overhead. However, as virtualization overhead goes down, it is natural to ask whether virtualization could simplify scheduling.

But what they didn't know was that Google has been using containers for a long time. [2] They're deployed with Borg, an internal cluster scheduler (probably better known as the predecessor to the open-source Kubernetes), which essentially serves exactly as an operating system for datacenters that Zaharia et al. described. When you think about it that way, a container is better thought of not as a thinner VM, but as a thicker process.

> Because well-designed containers and container images are scoped to a single application, managing containers means managing applications rather than machines.

In the open-source world, we now have projects like Kubernetes and Mesos. They're not mature enough yet, but they're on the way.

[1] https://cs.stanford.edu/~matei/papers/2011/hotcloud_datacent...

[2] http://queue.acm.org/detail.cfm?id=2898444


The big missing "virtualization" technology is the Apache/CGI model. You essentially upload individual script-language (or compiled on the spot) functions that are then executed on the server in the context of the host process directly.

This exploits the fact that one webserver only differs from another by the contents of its response method, and other differences are actually unwanted. You can make this a lot more efficient by simply having everything except the contents of the response method be shared between different customers.

This meant that all the Apache mod_x modules (famously mod_php and mod_perl) could manage websites on behalf of large numbers of customers on extremely limited hardware.

It does provide for a challenging security environment. That can be improved when starting from scratch though.


I think the modern equivalent of what you are describing is basically the AWS Lambda model of "serverless" applications. In the open source world, there are projects like Funktion[1] and IronFunctions[2] for Kubernetes

[1] https://github.com/funktionio/funktion

[2] https://github.com/iron-io/functions


I get that that's what they're saying, but it just isn't. Functions are just a way to start containers on an as-needed basis, then shut them down when not needed.

Mod_php is 3 syscalls and a function call, and can be less if the cache is warm. Despite the claims on that page, there is no comparison in performance.

"Extremely efficient use of resources"

It is utterly baffling that one would use those words to describe spinning up either a container or a VM to run these lines of code (their example), and nothing else:

  p := &Person{Name: "World"}
  json.NewDecoder(os.Stdin).Decode(p)
  fmt.Printf("Hello %v!", p.Name)
Number of syscalls it needs to switch into this code ... I don't know. I'd say between 1e5 and 1e8 or so. Probably needs to start bash (as in exec() bash) a number of times, probably in the 3 digits or so.

So I guess my issue is that functions use $massive_ton_of_resources (obviously the lines of code printed above here need their own private linker loaded in memory, don't you agree ? It's not even used for the statically linked binaries, but it's there anyway. Running init scripts of a linux system from scratch ... yep ... I can see how that's completely necessary), but when they're not called for long enough, that goes to 0, at the cost of needing $even_more_massive_fuckton_of_resources the next time it's called.

Of course, for Amazon this is great. They're not paying for it, and taking a nice margin (apparently about 80%, according to some articles) when other people do pay for it.

And the really sick portion is that if you look at how you're supposed to develop these functions, what does one do ? Well you have this binary running "around" your app, that constantly checks if you've changed the source code. If you have, it kills your app (erasing any internal state it has, so it needs to tolerate that), and then restarts the app for the next request. Euhm ... what was the criticism of mod_perl/mod_php again ? Yes, that it did exactly that.


A container needs 10-100 syscalls, depending how much isolation you want. A single unshare() and exec gets you some benefit. You are out by orders of magnitude.
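A hedged illustration of how little is strictly required (Linux only; needs CAP_SYS_ADMIN or a prior user namespace): a single unshare() before exec already gives the new program a private mount namespace.

  package main

  import (
      "runtime"
      "syscall"
  )

  func main() {
      // Namespace changes apply to the calling OS thread, so pin the goroutine.
      runtime.LockOSThread()

      // One unshare(): this thread now has a private copy of the mount table.
      if err := syscall.Unshare(syscall.CLONE_NEWNS); err != nil {
          panic(err)
      }

      // Replace this process with a shell living in that private mount namespace.
      if err := syscall.Exec("/bin/sh", []string{"sh"}, syscall.Environ()); err != nil {
          panic(err)
      }
  }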


And then of course the system inside the container needs to start up, configure, run init scripts, ... Did you count that in those 100 syscalls ?

Take the example here: https://github.com/kstaken/dockerfile-examples/blob/master/n...

Which does something a lot of these functions will do: get nodejs, use it to run a function. Just the apt-get update and install instructions alone, on my machine, ignoring actually running the function (because it's insignificant), come to close to 1e6 syscalls.


Lightweight application containers do not run init or anything like that! They're just chroots but with isolated networking, PIDs, UIDs, whatever.

For example, on my FreeBSD boxes, I have runit services that are basically this:

exec jail -c path='/j/postgres' … command='/usr/local/bin/postgres'

Pretty much the same as directly running /usr/local/bin/postgres except the `jail` program will chroot and set a jail ID in the process table before exec()'ing postgres. No init scripts, no shells, nothing.


I don't understand the criticism. A FreeBSD jail is more like chroot than like a container. A container, as I understand it, runs its own userland. Otherwise, you can't really isolate programs in it. If that postgres was compiled with a libc different from the one on the host system, or, let's say, required a few libraries that aren't on the host system, would it run?

Does it have its own filesystem that can migrate along with the program? Does it have its own IP that can stay the same if it's on another machine?


You're correct. Containers do contain their own userlands, a fact many gloss over. PgSQL will have to load its containerized version of all libraries instead of using any shared libraries linked by the outside system.

This is often done via a super thin distribution like Alpine Linux to keep image size down, despite the COW functionality touted by Docker that's supposed to make it cheap to share layers.

The difference is that unlike a fully virtualized system, the container does not have to execute a full boot/init process; it executes only the process you request from within the container's image. Of course, one could request a process that starts many subservient services within the container, though that is typically considered bad form.

What people really want is super cheap VMs, but they're fooling themselves into believing they want containers, and pretending that containers are a magic bullet with no tradeoffs. It's scary times.


Even a basic chroot runs its own userland! "Userland" is just files.

In my example, /j/postgres is that filesystem that can migrate anywhere. (What's actually started is /j/postgres/usr/local/bin/postgres.) Yeah, you can just specify the IP address when starting it.


What system? Your link just starts a nodejs binary, no init process. And you also don't seem to realise that a Docker image is built only once. Executing apt happens when building the image (and is then cached in case a rebuild happens later), not when starting the container.


These steps are only run for initial creation of the container image. Running the container itself is only the last step from that file: Executing the node binary.


I am not quite sure what it is that you want. It seems obvious to me that containers should have more overhead than CGI scripts; they also provide a better isolation story. I mean, you already said it:

> [the Apache/CGI model] does provide for a challenging security environment. That can be improved when starting from scratch though.

And the number of lines of code in the example probably doesn't quite matter so much, because that's all it is: an example. I am sure that you can run more lines of code than that.

> Euhm ... what was the criticism of mod_perl/mod_php again? Yes, that it did exactly that.

I mean, that's also basically my point, that Lambda is basically the CGI of the container world. Lambda and CGI scripts really do seem like they are basically the same thing; I still speculate that they will be used to fill similar use cases. I am not really opining on which one is actually better.


You can share resources between VMs (frontswap etc. and deduplication, using network file systems like V9FS instead of partitions) but it complicates security.

It is still safer than containers as one kernel local root bug does not break a VM, but breaks a container. The access to hardware support also allows compartmentalized drivers and hardware.


I will show you some use cases:

- have different versions of libs/apps on the same OS (or run different OS's)
- tinker with linux kernel, etc without breaking your box (remember the 90's?)
- building immutable images packed with dependencies, ready for deploy
- testing distributed software without VMs (because containers are faster to run)
- if you have a big box (say 64gb, eight core or whateva) or multiple big boxes, you can manage the box resources through containerization, which can be useful if you need to run different software. Say every team builds a container image, then you can deploy any image, do HA, Load balancing, etc. Ofc this use case is highly debatable


These comments are helpful. Thanks. Sounds like for a given piece of hardware you might be able to fit 2 or 3 VMs on it, or a lot more containers. But without the security barriers of VMs.

That being the case, why not just use the OS? And processes and shared libraries?


The article touches on the technical details of this briefly, but the underlying point here is that containers effectively do use the OS, and processes. Like Frazelle says in the article: "a 'container' is just a term people use to describe a combination of Linux namespaces and cgroups." If that's nonsense to you, check out some of her talks, they treat those topics in a friendly way. At the most basic level, though, a container is just a process (or process tree) running in an isolated context.

Sharing library code between processes running in containers is more complicated, since it depends on whether and how you've set up filesystem isolation for those processes, but it's possible to do.
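For the cgroups half of that combination, here is a hedged sketch (assumes a cgroup v2 hierarchy mounted at /sys/fs/cgroup with the memory controller enabled, run as root; "demo" is a made-up group name): cap a group at 256 MiB of memory and move the current process into it, which is essentially what a runtime does for CPU, memory and I/O limits.

  package main

  import (
      "fmt"
      "os"
      "path/filepath"
  )

  func main() {
      cg := "/sys/fs/cgroup/demo" // hypothetical group name
      if err := os.MkdirAll(cg, 0o755); err != nil {
          panic(err)
      }
      // Memory limit (256 MiB) for every process placed in this group.
      if err := os.WriteFile(filepath.Join(cg, "memory.max"), []byte("268435456"), 0o644); err != nil {
          panic(err)
      }
      // Move the current process into the group.
      pid := []byte(fmt.Sprintf("%d", os.Getpid()))
      if err := os.WriteFile(filepath.Join(cg, "cgroup.procs"), pid, 0o644); err != nil {
          panic(err)
      }
  }

Every process forked from here on inherits the limit, which is the "process tree running in an isolated context" mentioned above.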


The isolation means that you don't have to worry about containers interfering with each other. It is more about separating and hiding processes than about protecting against hostile attacks.

The other big advantage is containers provide a way to distribute and run applications with all their dependencies except for the kernel. This means not having to worry about incompatible libraries or installing the right software on each machine.


It can be easier to run a jail (or container) and assign it an IP and run standard applications with standard configs than to run a second instance of something in a weird directory listening in a special way.

The other big difference between this and a VM is that timekeeping just works.

You're not necessarily restricted to friendly-only tenants, either. Depending on how you configure it, there can be pretty good isolation between the inside and the outside and the other insides. You lose a layer of isolation, but it's not impossible to escape a virtual machine either.


> That being the case, why not just use the OS? And processes and shared libraries?

That's essentially what a Linux container is: a process (that can fork) with its own shlib. If you have lots of processes that don't need to be isolated from each other and can share the same shlib, then no, you don't need this mechanism.


Okay, so it's a nice self-contained packaging mechanism that obviates dependency hell. Sounds a bit like a giant lexical closure that wraps a whole process. And from which escape is somewhat difficult. Makes sense.


> the same shlib, then no, you don't need this mechanism

And if you want to use a modern development tool chain, you don't really have this choice. They produce statically linked binaries that need, at minimum, their own process and TCP port (if you run a proxy, which when you think about it is pretty wasteful).

There is no good reason (other than ease of tool chain development) for that, and it's probably cost hundreds of millions or even billions of dollars in servers and power, but there you go.

PHP and Java are essentially the only languages with good support for running without containers, and Java isn't even used that way usually.


I think it's important to make the distinction that containers do provide a level of security isolation, but that in most cases it's not as much protection as is provided by VM isolation.

There are companies doing multi-tenant Container setups, with untrusted customers, so it's not an unknown concept for sure.

What I'd say is that the attack surface is much larger than a VM hypervisor's, so there's likely more risk of a container breakout than a VM one.


> There are companies doing multi-tenant Container setups, with untrusted customers, so it's not an unknown concept for sure.

I'm a little shocked to hear this (given everything everybody else has said about container security), but I guess it means the security of containers can be tweaked to be good enough in this environment.

Examples?


How do you make a docker container secure? Run it in a bsd jail :p. But I'm sure that people with the right expertise can do this. For the rest of us Docker is mainly a packaging mechanism which helps alleviate accidents and makes deployment a little more predictable.


I don't understand Docker to be honest. It was a big pain to have unexplainable race conditions when I tried to use it for production apps.

Ended with a spectacular data loss, of my own company's financial data. Luckily I had 7-day old SQL exports.


In my experience they do two things that VMs don't do as well:

1. More efficient use of hardware (including spin up time)
2. Better mechanisms for tying together and sharing resources across boundaries.

But in the end they don't really do anything you couldn't do with a VM. It's just that people realized that VMs are overkill for many use cases.


They make shared folders and individual files a lot easier than VMs, also process monitoring from the "host".


Very much not worth the cost of reduced security and reliability. You also have vastly more complicated failover due to no easy migration.


Increase server utilization by packing multiple non-hostile tenants on it, quickly create test environments, have a volatile env. You can have all of those with VMs although at much higher CPU, RAM usage cost.


With one big limitation: they must all run the same os kernel (so you cannot run say a Windows or FreeBSD container on a Linux host).

In fact, nobody guarantees that say Fedora will run on an Ubuntu-built kernel. Or even on a kernel from a different version of Fedora. So, IMO, anything other than running the exact same OS on host and in container is a hack.


> In fact, nobody guarantees that say Fedora will run on an Ubuntu-built kernel.

"nobody guarantees" just means that you can't externalize the work of trying it and seeing if it works. I don't think that's a huge loss, considering the space of all possible kernels, configuration switches, patches and distro packages is huge.

It's like refusing to use a hammer because nobody can assure you that hammer A was thoroughly tested with nail type B.


No, its like using a nailgun A with nails Y when it's only guaranteed to work with nails X. Or like using a chainsaw A with chain Y when it says you should use X. But hey, at least you are not trying to use nails on a chainsaw... ;-)


No. It's like shooting yourself in the face because your friend survived it.


> nobody guarantees

As long as the ABI is stable and you don't reach out to something that would have moved within /{proc,sys,whatevs}, you're good. [0]

[0]: https://en.wikipedia.org/wiki/Linux_kernel_interfaces


Measure the "much higher" before deciding. Especially after you apply solutions to reduce the memory and disk cost.

I'd say the "much higher" is nowadays a relic of the past.


Same with me. This plays right into the complexity issue.

Even if you understand them, you have to understand the specific configuration (unlike VMs, where you have a very limited set of configurable options, and the isolation guarantees are pretty much clear).


Eliminates the redundancy of maintaining an OS across more than 1 service.


They're VM's but much more efficient and start faster. There's a clever but shockingly naive build system involved. That's pretty much it.

Going beyond this you get orchestration - which you can certainly do with VM's but it's slow; and various hangovers from SOA rebadged and called microservices.

But they're really, really efficient compared to VM's.


> They're VM's

They are definitely not VMs.

> But they're really, really efficient compared to VM's.

I think that the virtualisation CPU overhead is below 1%. Layered file systems are possible with virtual machines as well so disk space usage could be comparable.

What do you mean that they are "really, really efficient" ?


5-10%, realistically, with some very informal testing. Not really a particularly big deal.

Really really efficient relates to how many containers can be run on a given system vs VM's. About 10x as many.



