
There's nothing inherent about Puppet that means it has to manage multi-service, "whole OS"-like installations. It can just as easily be put to the task of a Dockerfile: installing dependencies and deployables for a single application. Its robust ability to manage user accounts, packages, scheduled jobs (e.g. for alerting, though you would have to install at least a second service for this: crond) and the like makes it vastly superior to Dockerfile shell scripts for complex tasks.
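As a sketch of what that looks like (resource titles, package names, and paths here are hypothetical), a single-application manifest can declare the user, the dependencies, and the scheduled job in one place:

```puppet
# Hypothetical manifest for one application: its service account,
# its runtime dependencies, and a scheduled alerting job.
user { 'appuser':
  ensure => present,
  shell  => '/usr/sbin/nologin',
}

package { ['curl', 'libpq5']:
  ensure => installed,
}

cron { 'health-alert':
  ensure  => present,
  command => '/opt/app/bin/alert-check',
  minute  => '*/5',
  user    => 'appuser',
  require => User['appuser'],
}
```

Each resource states desired state rather than a command to run, so re-applying the manifest is safe: Puppet only acts on resources that differ from the declared state.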

Think of Puppet more as a way of reducing the total number of crazy shell commands in your Dockerfiles, rather than hiding the craziness in layers and hoping it all composes properly. If you do use lots of layers, Puppet can make your life much easier, since it is better at detecting previous layers' changes and working around them. Think redundant package install commands: even the no-op "already installed!" command takes time, and if you're installing hundreds of packages (many people are, for better or worse), that can eat up build time.
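The redundant-install cost is easy to see in Dockerfile form (package names here are illustrative):

```dockerfile
# Each RUN produces its own layer. If an earlier layer already
# installed curl, the second install below is a no-op, but it still
# costs a full package-manager invocation on every uncached rebuild.
FROM debian:bookworm-slim
RUN apt-get update && apt-get install -y curl ca-certificates
RUN apt-get update && apt-get install -y curl libpq5
```

A declarative tool that inspects the image's actual state before acting can skip the already-satisfied packages instead of re-invoking the package manager layer by layer.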

Puppet isn't just a VM provisioner; it can also be used as a replacement for large parts of your Dockerfile, or as a better autoconf to set up the environment and dependencies your application runs in.

Edit: syntax.

The point about layer complexity is a great one I hadn't even considered. Your "config" step is no longer a mishmash of dozens of COPY/RUN/etc. directives (resulting in N new intermediate image layers); it results in a single atomic layer where you run the Puppet bootstrap.
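Concretely, assuming the manifests live under a local manifests/ directory (paths hypothetical), the whole config step can collapse to one layer:

```dockerfile
FROM debian:bookworm-slim
RUN apt-get update && apt-get install -y puppet
COPY manifests/ /etc/puppet/manifests/
# One RUN directive, one image layer: puppet apply compares current
# state to declared state and only changes what differs.
RUN puppet apply /etc/puppet/manifests/app.pp
```

Everything the manifest does — users, packages, files, cron entries — lands in that single `puppet apply` layer instead of one layer per directive.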

Obviously you could accomplish this with shell scripts as well, constraining your config step to one Docker RUN directive, but I prefer the declarative-state approach to the imperative one in this case.
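The shell-script route works, but every "ensure" the declarative tool gives you for free becomes a hand-written guard. A minimal sketch of that imperative boilerplate (paths and config line are hypothetical):

```shell
#!/bin/sh
set -eu

ensure_dir() {
  # mkdir -p is one of the few shell commands that is idempotent
  # out of the box; most others need explicit guards like below.
  [ -d "$1" ] || mkdir -p "$1"
}

ensure_line() {
  # Append a config line only if it is not already present.
  grep -qxF "$2" "$1" 2>/dev/null || printf '%s\n' "$2" >> "$1"
}

ensure_dir /tmp/app/conf
ensure_line /tmp/app/conf/app.conf 'listen_port=8080'
ensure_line /tmp/app/conf/app.conf 'listen_port=8080'  # second run: no-op

wc -l < /tmp/app/conf/app.conf  # prints 1: the guard prevented a duplicate
```

Multiply that guard pattern by every user, package, and file your image needs and the appeal of declaring state once becomes clear.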
