
It's tough to give a definitive answer because every company is different, but I work at a very small, strange shop, 5 devs + 1 manager, maintaining 10-15 custom websites (5 of which are on a single VM), and I have been deploying our new apps in Docker (no K8s). I use each container as a "miniature VM" which runs the entire web app (except the database), blasphemous I know. Compared to putting multiple apps on one box, the Docker method adds some minor complexity, but keeps apps isolated. That was my biggest requirement: to prevent devs from cross-pollinating applications, which happened constantly when everything was on a single server. It was much simpler than setting up Puppet on a bunch of legacy machines. I also considered putting each new app on its own VM, but went with Docker because a lot of our apps hardly get any traffic, and spinning up a VM for each would have wasted quite a bit of resources (all our servers are in-house).

The pros to Docker so far:

- Dependencies: the Dockerfile gives a list of explicit system dependencies for each app. This can be done in other ways with package files or config management, but it wasn't being done before, and this is an easy catch-all to force it for any type of environment.

- Logical grouping: the app environment (Dockerfile + docker-compose.yml) lives alongside the codebase in a single git repo.

- Deployment: deploy to any box with `git clone myapp && docker-compose up` for testing/dev instances or migrations.

- Development: we mount the codebase from a host directory into each container, with git hooks to update the codebase, which works well for us (we have no CI).

- Plus it's fun!
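To make the "environment lives alongside the codebase" idea concrete, here's a minimal sketch of what such a docker-compose.yml could look like. The service name, ports, paths, and environment variable are all invented for illustration; the author's actual files aren't shown:

```yaml
# docker-compose.yml — committed in the app's git repo next to the code.
# One service per app ("miniature VM" style); the database stays outside.
version: "3"
services:
  web:
    build: .                  # image built from the Dockerfile in this repo
    ports:
      - "8080:8080"           # publish the app on the host
    volumes:
      - ./src:/app/src        # bind-mount the codebase so a git pull on the
                              # host is visible inside the running container
    environment:
      - DB_HOST=db.internal   # hypothetical external database host
```

With a file like this in the repo, `git clone myapp && docker-compose up` really is the whole deployment story for a dev/test instance, since the build context and runtime wiring travel with the code.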

Cons:

- Operational complexity: dev/ops teams probably won't want to learn a new tool. I set up a Rancher instance to provide a GUI, which makes things a bit easier to swallow; it has a drop-in shell, log viewer, performance metrics, etc.

- Network complexity: we never needed reverse proxies before; now we do.

- Clustering/orchestration: we don't cluster our containers, but the more we add, the more I think we might want to, which would add a whole new layer of complexity to the mix and seems unnecessary for such a small shop.

- Security?: lots of unknowns; the lack of persistence can be bad for forensics, etc.

- Newness: documentation isn't great, versions change fast, and online resources may be outdated.
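On the reverse-proxy point: once several containers share one host, something has to map each site's hostname onto the right published port. A minimal nginx sketch of that idea, with hostnames and ports invented for illustration (nothing here is from the author's actual setup):

```nginx
# /etc/nginx/conf.d/apps.conf — route each site's hostname to its container.
# Assumes app1's container publishes host port 8081 and app2's publishes 8082.
server {
    listen 80;
    server_name app1.example.com;
    location / {
        proxy_pass http://127.0.0.1:8081;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

server {
    listen 80;
    server_name app2.example.com;
    location / {
        proxy_pass http://127.0.0.1:8082;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

This is the layer that simply didn't exist when each site had its own box or shared one web server directly.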

Like you, I'm sometimes unsure if this is the right choice. Maybe a monolithic server or traditional VMs + Puppet would be easier, simpler, better? In the end, I think Docker just fit with the way I conceptualized my problem so I went for it. You may never get that "definitely good enough" feeling, but if it fits your workflow and keeps your pipeline organized and manageable, then I say go for it.

Very interesting! I'm a solo guy, but I sort of followed the same path you did. And when I had to go down the Kubernetes road because managing multiple Docker containers over multiple boxes became too complicated, I just went back to one website = one VM... giving me time to learn all the k8s stuff, which will probably be useful soon, just not right now.

That's interesting to me, the Rancher bit: I went the route of writing down all my routine docker-compose invocations in a Makefile, and I gave that to the devs with built-in documentation (a list of targets plus example workflows), but I see how Rancher could standardize that.
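A sketch of what a Makefile like that might look like; the target names and the self-documenting `help` trick are hypothetical, since the actual invocations depend on the project:

```make
# Makefile — wraps routine docker-compose invocations so devs don't have
# to remember them. `make help` lists the targets.

.PHONY: help up down logs shell

help:    ## list available targets
	@grep -E '^[a-z]+:.*##' $(MAKEFILE_LIST) | sed 's/:.*##/ -/'

up:      ## build and start the app in the background
	docker-compose up -d --build

down:    ## stop and remove the containers
	docker-compose down

logs:    ## tail the app's logs
	docker-compose logs -f

shell:   ## open a shell in the running web container
	docker-compose exec web sh
```

The nice part of this approach is that the documentation lives in the same file devs already run, so it can't drift far from the actual commands.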
