
That's not what containerization is about as can be easily demonstrated by simply running your container on a kernel that's built without some config option you rely on.
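For instance (assuming the host kernel was built with CONFIG_IKCONFIG_PROC, so /proc/config.gz exists), you can see from inside a container that the kernel configuration in play is the host's, not anything the image brought along:

    # the config visible inside the container is the host kernel's
    docker run --rm alpine sh -c 'zcat /proc/config.gz | grep CONFIG_OVERLAY_FS'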



Wait, what? So this isn’t a problem because you can just make a container for your containers that preserves the original API? Then what do you depend on for that container?

It’s like Office Space, “what would you say it is you do here?”


Containers standardize user space, and let processes think they're listening on ports that are already taken. They do some resource isolation, and provide security guarantees similar to painting the back of your laptop with snake oil.
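A rough sketch of the port part (the image name "myapp" is hypothetical): each container gets its own network namespace, so both can bind 8000 internally while the host maps them to different ports.

    # both containers believe they own port 8000
    docker run -d -p 8000:8000 myapp    # reachable on host port 8000
    docker run -d -p 8001:8000 myapp    # reachable on host port 8001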

They don't help at all with kernel API drift, and barely help with libc drift.


I didn’t say anything about security guarantees. Where did that come from?

You’re saying that using a container — which defines a specific version of an OS — shouldn’t protect against differences between that version and a new OS version? And shouldn’t save any effort whatsoever in reproducing the environment in which I got code working?

The sole purpose is so that I can leave an app configured to talk to port 8000 even though my machine is already using port 8000?

That feels like it falls short of the conventional case for virtualization or containerization.


You asked what they do.

One of the main selling points is security. (They don't do much for security, but that sort of thing doesn't matter much to sales).

The most useful feature is that they reduce the pain associated with having negligently large build and runtime dependency trees.

They also add a few ulimit-style tricks. Think "better than chroot, arguably worse than BSD jails", but with a standardized cross-Linux-distro build system (Dockerfiles).
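Sketch of the "standardized build system" part (assumes a local app.py; the same Dockerfile builds the same way on any distro with Docker installed):

    # Dockerfile
    FROM python:3.12-slim
    COPY app.py /app/app.py
    CMD ["python", "/app/app.py"]

    # build and run, regardless of the host distro
    docker build -t myapp .
    docker run --rm myapp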


>You asked what they do.

I asked for their ostensible justification. A valid answer should not involve inventing a justification I never referred to, implying that I had offered it, and then refuting it, even if other people do proffer that justification.

And the reason I asked was that the first reply eye-rolled at the problem I ran into, and then (seriously, as best I can tell) suggested I should have solved it by nesting a VM within the existing VM to control the dependency chain that led to the problem, even though that was what the first VM exists to solve!

To clarify, here is that (dubious) reply again:

> That's not what containerization is about as can be easily demonstrated by simply running your container on a kernel that's built without some config option you rely on.

That is, surrounding the container with a controlled environment would avoid the problem… but I was already controlling the environment with a VM that exists to solve that problem! Hence my (sardonic) wondering what the first one is accomplishing.

> The most useful feature is they reduce the pain associated having negligently-large build and runtime dependency trees.

No, the most useful feature is locking down a machine spec that Just Works. When I have to go and debug things about that machine that suddenly fail, for reasons Docker (not the machine) introduced, I lost the most useful feature.

Sating my desire to have my app point at port 8000 and no other … is way, way down the list.


Justification? For what? Seems like you're the one who feels entitled to something; it's you who needs justification.

> and then (seriously, as best I can tell) suggested I should have solved it by nesting a VM within the existing VM

You were the first to suggest anything like that. My answer did not offer any suggestion for solution at all, read it again.

> No, the most useful feature is locking down a machine spec that Just Works.

That's not a feature Docker ever had or intended to have. Seems you have misunderstood something about what containers actually are. They're very explicitly not "virtual machines lite", and it sounds like you want an actual virtual machine instead.


There are things that containers actually do (I discussed those above, and won't repeat them), and things they ostensibly do but do not actually do, such as security sandboxing, or providing a fixed, "just works" environment for your binaries, like a VM would.

As far as I know, the people making the incorrect claims/assumptions are distinct from the people that built commonly-used containerization software.

If you want to understand what containers are, start with the mental model that it is a chroot with /dev, /sys and /proc mounted, and with processes running with uid=0. Their sandboxing isn't quite that bad, but they are closer to chroot environments (or BSD jails) than to a VM. Next, package a few things by writing Dockerfiles, then deploy them to a Raspberry Pi or something.
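Very roughly, that mental model looks like this (not how Docker is actually implemented), assuming ./rootfs already holds an extracted root filesystem and you're running as root:

    # bind the pseudo-filesystems into the rootfs, then enter it
    mount --bind /dev  ./rootfs/dev
    mount --bind /sys  ./rootfs/sys
    mount -t proc proc ./rootfs/proc
    chroot ./rootfs /bin/sh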

If you still care, then follow up by reading the Kubernetes tutorials.

Edit: This message is directed to SilasX.


> shouldn’t protect against differences between that version and a new OS version

What does "should" in this context even mean? They don't, because they're not meant to do that. They can be used to shield you from some differences, and for many use cases that subset is all that matters, but they don't and can't shield you from all of them.


I was asking to clarify what your very confusing sentence meant. Because it seemed to be saying that, if I define a VM to use OS 1.2.3, because I know my app will work with that version, then my app on that VM should cease to work when 1.3.0 is released. And that seems dubious.


You seem to confuse containers with virtual machines. Those are very different concepts that solve different problems.

You can't rely on containers alone if your goal is to ensure that your app will work on future operating systems. That's not something containers do. They only let you provide your own user space to run in a quasi-isolated environment, no more than that. You're still using the host's kernel, which can still screw you up in a multitude of ways (which is exactly what I said in my earlier comment), and you're still limited by the API guarantees that the tools you use to make containers are providing you.
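Easy to check for yourself (assuming Docker and the alpine image are available): the kernel version reported inside a container is the host's.

    uname -r                            # on the host
    docker run --rm alpine uname -r     # same version inside the container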


This can definitely happen in both directions (kernel too old, app too old).

For example, Apache can no longer run with SSL enabled in the Docker environments on current Synology NASes.



