Hacker News

It seems that isolation is frequently the cause. E.g.:

* Better developer environment. Actually, I'm not sure anymore. It totally makes sense for testing (all the CI/CD stuff), and - thanks to the packaging aspect - it's easy to set up external dependencies (like databases), but I've never been able to grasp how actual development is better with Docker. Developers tinker with stuff; containers and images are all about isolation and immutability, and those stand in one's way.

* PID1. Obviously, isolation is the cause here. With `--pid=host` it's gone, but no one does that, probably because of the nearly complete lack of UID/GID management and the security drawbacks that follow. I guess it has roots in the "all hosts are the same" idea: UID/GID have to be a shared resource, and they're harder to manage than just spawning things into a new PID namespace so processes won't mess with each other.

* Networking. Yes, as was pointed out, it makes sense due to port conflicts, but it's usually an inferior, over-complicated version of moving port numbers into environment variables. Instead of binding your httpd to [::]:80 and setting up port mapping, bind it to [::]:${LISTEN_PORT:-80}. Same result, but - IMHO - much more straightforward. Sure, there are (somewhat unusual) cases where a separate network namespace is a necessity (or just a good thing), but I don't think they're common.
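The env-var alternative above can be sketched in plain shell (the image name and port-mapping line in the comments are only illustrative):

```shell
# With port mapping, the Docker daemon rewrites traffic for you:
#   docker run -p 8080:80 some-httpd    # host 8080 -> container 80
# With an environment variable, the process itself binds where told,
# falling back to 80 when LISTEN_PORT is unset:
LISTEN_PORT="${LISTEN_PORT:-80}"
echo "binding to [::]:${LISTEN_PORT}"
```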

So, I think, the question is also: is there a need (and why) for isolation the way Docker does it? Doesn't Docker's way unnecessarily complicate things?




Developer environment/experience is vastly better in my opinion.

All of our dev environments are Docker images. Setting up a machine for a developer means installing source control, an IDE, and Docker, then pulling the latest dev image - and they're done. Pre-Docker it was several pages of documentation and tracking down various coworkers to make sure you had installed and configured things correctly. Yes, scripts helped, but people always forgot to update something in the script and didn't notice until someone next needed to install the dev environment. The immutability forces people to actually update the Dockerfiles with the new dependency/tool/config, as that is the only way to do it.
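A dev image along those lines might look like this minimal sketch (the base image, packages, and `toolchain.sh` script are all hypothetical placeholders):

```dockerfile
# Hypothetical dev image: everything a developer needs is baked in,
# so machine setup reduces to "docker pull".
FROM ubuntu:24.04
RUN apt-get update && apt-get install -y --no-install-recommends \
        build-essential git curl \
    && rm -rf /var/lib/apt/lists/*
# Project-specific tools live in a script versioned next to this file,
# so forgetting to update it here breaks the image build immediately.
COPY toolchain.sh /tmp/toolchain.sh
RUN sh /tmp/toolchain.sh && rm /tmp/toolchain.sh
WORKDIR /workspace
```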


Developers tinker with code, but most of the time you don't tinker with the output of that code - hot-patch your binaries or whatever. Same with systems: you build a container from a Dockerfile and maybe a Makefile; you don't then go and change a few things inside it, you change the source. We are just pushing the immutability boundary further out and getting more reproducible environments as we do it.


It depends on the project, I guess. Sometimes, it's not that easy.

For scripting languages that have no compile step, the code is what gets executed. So with Docker you either have to rebuild the container (extra delays, and quite noticeable ones) or maintain a separate Dockerfile.dev and bind-mount the code into the container, a la Vagrant.
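The bind-mount dev loop looks something like this sketch (the image is real, but the mount target and `app.py` entry point are assumed placeholders):

```shell
# Mount the working tree over the image's code so edits take effect
# on the next run, with no image rebuild in between.
docker run --rm -it \
    -v "$(pwd)":/app \
    -w /app \
    python:3.12-slim \
    python app.py
```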

Even for compiled stuff, it can be a nuisance with that "Sending build context to Docker daemon" phase - like when you have a fair chunk of artwork assets next to the code. And the advantage of keeping intermediate compiler results around is also either lost (adding extra build time) or requires extra tricks to make things smooth and nice.
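A `.dockerignore` file trims that build context; the paths below are only an example of what such a project might exclude:

```
# .dockerignore: keep heavyweight, build-irrelevant files out of the
# context sent to the Docker daemon.
assets/raw/
*.psd
*.blend
.git/
```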

And either way, it also means extra work to make your debugger toolset jump over the isolation boundaries so you can dig into a live process's guts. One's probably going to end up abandoning PID namespace isolation.

Those consequences are quite rarely mentioned when the immutability aspects of Docker are advertised. It's usually sold as "you'll have a reproducible environment" (yay! great!) but never as "you may lose that heartwarming experience of having a new build ready to test by the time you've switched from the editor to the terminal/browser/whatever window".


You can debug from the host, or from another container using `--pid=container:<id>`, which puts you in the process namespace of a running container.
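For example (the container name "web" and the toolbox image are illustrative; `SYS_PTRACE` is the capability ptrace-based tools like gdb and strace need):

```shell
# Join the PID namespace of a running container named "web" and poke
# at its processes from a throwaway toolbox container.
docker run --rm -it \
    --pid=container:web \
    --cap-add=SYS_PTRACE \
    alpine sh
# Inside this shell, `ps` lists web's processes, including its PID 1.
```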

Build time is important; if you can use build-layer caching, it helps a lot, but how to structure it depends on your project. I don't use a Dockerfile.dev myself, but I do sometimes mount the code into the container to build and run it directly. More blog posts and examples of how to do these things would definitely help, as there is a lot of room for improvement.
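The usual layer-caching trick is to order Dockerfile steps from least- to most-frequently changing, e.g. for a hypothetical Node project:

```dockerfile
FROM node:22-slim
WORKDIR /app
# Dependency manifests change rarely: copying them alone first lets
# Docker reuse the cached `npm ci` layer across ordinary code edits.
COPY package.json package-lock.json ./
RUN npm ci
# Source changes only invalidate the layers from here on down.
COPY . .
CMD ["node", "server.js"]
```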



