A couple of observations from someone not-so-familiar with containers:

If the consensus is that containers are, for the most part, just a way to ship and manage packages together with their dependencies, so as to decouple software from library and host-OS versions, then I'm missing a discussion of the container runtime itself being a dependency. Docker, for example, has a quarterly release cadence, I believe. So where the goal was to become independent of OS and library versions, you're now dependent on Docker versions, aren't you? If your goal as an IT manager is to reduce long-term maintenance cost, and to have the result of an internally developed project keep running on Docker without a deep dive into the project long after it has been completed, you may find that you still cannot run older Docker images, because the host OS/kernel and Docker have evolved in the meantime. In that case, the dependency isolation Docker provides may prove insufficient for this use case.
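One partial mitigation is to treat the runtime itself as a pinned, recorded dependency. A minimal sketch, assuming a Debian/Ubuntu host with docker-ce installed from Docker's apt repository (the package names and the `runtime-version.txt` file are assumptions for illustration):

```shell
#!/bin/sh
# Record the daemon version the image was built and validated against.
docker version --format '{{.Server.Version}}' > runtime-version.txt

# Pin the runtime packages so unattended upgrades don't silently move them.
sudo apt-mark hold docker-ce docker-ce-cli containerd.io

# Later, before running an archived image, compare against the recorded version.
current=$(docker version --format '{{.Server.Version}}')
expected=$(cat runtime-version.txt)
if [ "$current" != "$expected" ]; then
  echo "warning: runtime is $current, image was validated on $expected" >&2
fi
```

This doesn't make old images run forever, but it at least turns "which Docker did this last work on?" from archaeology into a recorded fact.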

Another point: if your goal is to leverage the Docker ecosystem to ultimately save ops costs, managing Docker image landscapes with e.g. Kubernetes (or, to a lesser degree, Mesos) may prove extremely costly after all. These setups can turn out to be very complex, absolutely require expert knowledge of container tech across your ops staff, and are themselves evolving quickly at the same time.
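To make the complexity concrete: even a trivial internal service already involves several Kubernetes API objects. A minimal sketch, assuming a working cluster and kubectl context (the `hello` names and `nginx:1.25` image are placeholders):

```shell
# Deployment + Service: the smallest useful unit, before you even touch
# Ingress, ConfigMaps, Secrets, RBAC, or network policies.
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello
spec:
  replicas: 2
  selector:
    matchLabels: {app: hello}
  template:
    metadata:
      labels: {app: hello}
    spec:
      containers:
      - name: hello
        image: nginx:1.25
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: hello
spec:
  selector: {app: hello}
  ports:
  - port: 80
EOF
```

Each additional concern (TLS, persistent storage, access control) adds more object kinds, and each kind has its own versioned API that the ops staff has to track.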

Another problem and weak point of Docker might be identity management for internally used apps: containers don't isolate Unix/Linux user/group IDs and permissions, but they do take away the resolution mechanisms for them, such as (in the simplest case) /etc/passwd and /etc/group, or PAM/LDAP. Hence you routinely need complex replacements for those, which adds to the previous point.
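The simplest end of that spectrum can be sketched with stock docker CLI flags (assuming an image that ships a shell; `alpine` here is just an example):

```shell
# 1) Run as the invoking host user, so UIDs on bind-mounted files line up.
#    Inside the container this UID typically has no passwd entry, so name
#    resolution fails even though permissions work.
docker run --rm -u "$(id -u):$(id -g)" alpine id

# 2) Bind-mount the host's user database read-only so names resolve again.
#    This only papers over the flat-file case; PAM/LDAP setups need real
#    replacements (sssd sidecars, baked-in nss configs, etc.).
docker run --rm -u "$(id -u):$(id -g)" \
  -v /etc/passwd:/etc/passwd:ro \
  -v /etc/group:/etc/group:ro \
  alpine whoami
```

Anything beyond flat files quickly lands you back at the previous point: more moving parts that someone on the ops staff has to understand and maintain.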
