
1. This is debatable, but probably too long a discussion to have over HN.

2. Once you're "running a script on the target that loads the correct variables", you're doing configuration management. There needs to be a mechanism for retrieving the metadata from somewhere, usually centralized. You'd be better off delivering this through a CM tool to build and maintain the file.
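For illustration, that script tends to grow into something like this minimal sketch - the metadata URL, template path, and variable names here are all hypothetical:

    #!/bin/sh
    # Pull per-host metadata from a central store, then render the config.
    # URL, paths, and variable names are made up for illustration.
    HOST=$(hostname -f)
    curl -fsS "http://metadata.internal/v1/hosts/$HOST/vars" -o /tmp/vars.sh
    . /tmp/vars.sh    # expects KEY=value lines, e.g. DB_HOST=..., POOL_SIZE=...
    sed -e "s/@DB_HOST@/$DB_HOST/" -e "s/@POOL_SIZE@/$POOL_SIZE/" \
        /usr/share/myapp/myapp.conf.template > /etc/myapp/myapp.conf

At that point you've written the fetch-and-render core of a CM tool, minus all the parts that make one safe (validation, dry runs, reporting).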

3. This doesn't work. Not only do you have the conflicts to deal with (and a post-configure script is a hacky way to deal with them), but now if you need to change a single cron job, you have two unsatisfying options. I can make a new meta-package for that one cron job and overwrite the versions delivered by the five other packages, which will break any config validation you're doing (as one would hope you're doing). Or I can update all of the packages that provide that cron job, which works, but now you have to come up with a way to notify your packages that just in this case they shouldn't restart services and the like just because the package was deployed.

On top of that, there's an even worse issue, which is: how do you know when the last cron job is removed? If I have five packages that all create that file, either removing the first one removes it, which breaks the other four, or I have to come up with post-remove script logic that tries to programmatically determine whether this is the last reference to that file, and remove it only then (a sketch of that hack follows below). If I did the latter, my meta-cron-job package update would break this model as well, and I'd have to remember to remove the file specifically as part of uninstalling those packages.
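Here's roughly what that last-reference hack looks like as a postrm fragment - the marker directory and package names are hypothetical, and the fragility is the point:

    # postrm for pkg-a, one of several packages shipping the shared cron
    # file. Each package drops a marker in %post and removes it here; the
    # cron file is only deleted along with the last marker.
    rm -f /var/lib/shared-cron/refs/pkg-a
    if [ -z "$(ls -A /var/lib/shared-cron/refs 2>/dev/null)" ]; then
        rm -f /etc/cron.d/shared-job
    fi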

4. I guess this works, but now you're dealing with chroot'ed environments, which means deploying not just the specific stack you want, but all of the necessary libraries, and as you say, your original package manager idea doesn't really deal with this.

And tomcat gets a lot more complicated too when you're trying to manage shared XML resource files. In fact, the whole package-manager notion really requires the "X.d" style approach to loading files (illustrated below).
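By "X.d" style I mean a layout like this (illustrative paths), where each package owns its own drop-in file and there's no shared file to fight over:

    /etc/cron.d/pkg-a-cleanup       # owned entirely by pkg-a
    /etc/cron.d/pkg-b-report        # owned entirely by pkg-b
    /etc/httpd/conf.d/pkg-a.conf    # apache loads everything in conf.d

A single shared file like tomcat's server.xml has no equivalent drop-in directory, which is exactly where it gets painful.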

But all of these challenges are why package managers are the worst solution for deploying configuration files. Package managers are great for deploying static software, shared libraries, and the like. I'll even concede that dropping code on a machine is fine with a package manager. But they're not designed to deal with dynamic objects like config files and system configurations.

In fairness, there's a whole third class of objects that both package managers and configuration management tools currently handle terribly, and that's things that represent internal data structures - database schemas, kernel structures, object stores, etc.

I've built a couple of configuration management tools and work for a company that has a few more, so this is something I've spent a lot of years working with. Package management as configuration distribution is attractive for its simplicity, but it falls apart beyond the simple use cases. Model-driven consistency is vastly superior.




3. If you consider base OS packages to be hacky... That's how most packages deal with potentially overwriting hand-edited configs: 'deliver foo.conf.new; [ ! -e foo.conf ] && mv foo.conf.new foo.conf'

But you have a good point! Dupe files are hard to manage. Some package managers refuse to deal with them; others have complicated heuristics. The best solution would be to just deliver the files and let a configuration management tool sort out what needs to be done based on rulesets. This can still be accomplished with packages as the deploy tool and a configuration management tool doing the post-install work, instead of the %post section (sketched below).
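A sketch of that split, with hypothetical paths and a hypothetical validation command - packages own the fragments, the CM run owns the assembly:

    # Each package ships only its own fragment:
    #   /etc/myapp/conf.d/10-pkg-a.conf
    #   /etc/myapp/conf.d/20-pkg-b.conf
    # Assembly step run by the CM agent (not %post):
    cat /etc/myapp/conf.d/*.conf > /etc/myapp/myapp.conf.new
    # --check-config stands in for whatever validation the app offers
    myapp --check-config /etc/myapp/myapp.conf.new \
        && mv /etc/myapp/myapp.conf.new /etc/myapp/myapp.conf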

4. You already deal with chroot environments using lxc/docker/etc.; they're just slightly fancier. But even with docker's union fs you still have to install your deps if they don't match the host OS's. Unless, of course, you custom-package all the deps so they can be installed alongside the OS ones. Nothing is going to handle that for you; there is no magic pill. Both solutions suck.

Most configuration management eventually becomes a clusterfuck as it grows and gets more dynamic and complex. In this sense, delivering a static config to a host in a package is simpler and more dependable. I can't tell you how annoying it is to test and re-deploy CM changes to a thousand hosts, all with different configurations, only to find out that 15 of them each have their own unique failure, and then have to go to each one to debug it. On the other hand, you could break out those configs into specific changes and manage them in VCS. Or even pre-generate all the configs from a central host, verify they are as expected, and package and deliver them (sketched below). I have done both, and both have their own special (read: painful) problems.
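The pre-generation route looks something like this on the central host - render-config and the --check-config flag are stand-ins for whatever you use, and fpm is just one convenient way to wrap a directory into a package:

    # Render, validate, and package one config bundle per host.
    while read -r host; do
        mkdir -p "build/$host/etc/myapp"
        ./render-config "$host" > "build/$host/etc/myapp/myapp.conf"
        myapp --check-config "build/$host/etc/myapp/myapp.conf" || exit 1
        fpm -s dir -t rpm -n "myapp-config-$host" -v 1.0 \
            -C "build/$host" etc
    done < hosts.txt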

For reference, the sites I worked at that delivered configuration via package management spanned several thousand hosts on different platforms, owned by different business units and with vastly different application requirements. But you have to adjust how you manage it all to deal with the specific issues. Edit: Much of it involves doing more manual configuration on the backend so you can 'just ship' a working config on the frontend. Sounds backwards, but (along with a configuration management tool!) it works out.



