
Ask HN: Has any small dev team successfully deployed a complex app with docker? - eblanshey
I've been pondering the implications of using Docker / Kubernetes to deploy a fairly complex application to production. We are a small team of a few developers, and although I personally deal with deployment using scripts, I'm by no means a devops expert or sysadmin.

Whenever I look into Kubernetes and everything that goes with it (maintenance, monitoring, logging, etc.), I feel like it requires a full-time devops engineer to create and manage all of it. Our team will soon undergo a major rewrite of our application, and we need to decide whether to use Docker or continue using deployment scripts with Ansible.

Have any devs here successfully learned Docker and Kubernetes, deployed them in production, and not regretted the decision later? What benefits did you obtain? Any tips for a dev?
======
n42
like another commenter said, start small. if your application(s) follow the
twelve-factor app methodology[1], containerizing them should not be
challenging. that is the first step.
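
as a rough sketch of that first step (the base image, file names, and entrypoint here are hypothetical, not from any real project), a Dockerfile for a twelve-factor service can stay very small:

```dockerfile
# hypothetical twelve-factor service: stateless, logs to stdout, config via env
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
# no config files baked into the image; everything comes from the environment at run time
CMD ["python", "app.py"]
```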

we are in the middle of a somewhat long rollout of Kubernetes by one engineer
(me). by far the most time has been spent on making our applications work as
twelve-factor applications, but a lot of that work happened before I even
touched a Dockerfile.
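
most of that twelve-factor work was mundane: moving config out of checked-in files and into the environment (factor III). a hedged sketch in python — the variable names are made up, the defaults are dev-only:

```python
import os

def config_from_env():
    """Twelve-factor style: read config from the environment, not from files.

    The variable names here are hypothetical examples; the fallbacks are
    only sensible for local development, never for production.
    """
    return {
        "database_url": os.environ.get("DATABASE_URL", "postgres://localhost:5432/dev"),
        "redis_url": os.environ.get("REDIS_URL", "redis://localhost:6379/0"),
        "log_level": os.environ.get("LOG_LEVEL", "info"),
    }
```

the payoff is that the same image runs unchanged in dev, staging, and production; only the environment differs.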

we're at the point where our development team is currently using Kubernetes
locally as their development environment (but not yet in production). while at
times I have had moments of self-doubt and questioned our decision and
approach, there have been many moments where the benefits have been clear as
day. the engineers are happy with the flexibility and consistency that Docker
containers have brought to development, but it has come with more operational
complexity.

ultimately, you need to decide what your target scale is for your engineering
organization in 3, 6, 12, 24 months. we are in the beginning stages of a
rapid growth phase for our engineering team, and developing complicated cross-
cutting concerns for our products was becoming cost-prohibitive simply because
our development environment was too complicated to set up, maintain, and
debug. containerizing it temporarily relieved that pain and bought us time to
focus on scaling up those changes to production when appropriate.

[1]: [https://12factor.net/](https://12factor.net/)

------
imauld
> I feel like it requires a full-time devops engineer to create and manage all
> of it.

It does.

Kubernetes is a great piece of tech, but it is pretty complicated and adds a
fair amount of overhead on top of whatever operational concerns your
application already has. If you don't have anyone who knows how to build and
manage a cluster, going to production with it would be extremely risky, IMO.

I would recommend trying GKE, Google's managed k8s service, in staging/dev
before even considering it as a serious path forward. If you are married to
AWS or just don't want to use GCP, then kops would be your best bet. I have
friends working with EKS, AWS's managed k8s service, and it doesn't sound
anywhere near as ready as GKE or as flexible as doing it yourself; frankly, it
sounds like a real pain. I haven't used k8s on Azure, but I have heard that
it's pretty good.

I also don't generally recommend deploying a new application as decomposed
services. Unless you have done this many times, it will probably save you a
lot of effort to just build it as a monolith and deploy it to standard cloud
VMs or on-prem servers. Also be aware that Docker, and by extension k8s, is
not the best way to run stateful applications. It can be done, but it is
definitely more work to get a k8s-based DB working the same way as a non-k8s
DB in terms of data retention. I imagine a complex application will need some
kind of data store, so even if you go with k8s you may still end up with
non-k8s instances for your data.

k8s is great, but its overhead can easily outweigh its benefits if you don't
have someone who can manage it. Start simple if you can and work from there.

------
atmosx
> I've been pondering the implications of using Docker / Kubernetes to deploy
> a fairly complex application to production.

These systems are managed by dedicated _teams_ most of the time. Kubernetes
has many moving parts, and debugging issues can be a nightmare.

Using docker is the way of the future. Docker merges development and
production environments, facilitates CI/CD, simplifies deployments, rollbacks,
etc., and even enforces _best practices_ by separating layers (persistent vs.
ephemeral), so on and so forth.

You could deploy your application through Ansible's docker module on EC2
instances, droplets, or what-have-you.
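
As a sketch of that route (the image name, host group, and variable here are hypothetical), a single Ansible play using the `docker_container` module could do the deployment:

```yaml
# hypothetical play: run one app container on each host in the "web" group
- hosts: web
  tasks:
    - name: Run app container
      community.docker.docker_container:
        name: myapp
        image: registry.example.com/myapp:1.2.3
        state: started
        restart_policy: always
        published_ports:
          - "80:8080"
        env:
          DATABASE_URL: "{{ database_url }}"
```

Rolling out a new version is then just bumping the image tag and re-running the playbook.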

So the more subtle question is _why do you need an orchestrator?_

Container orchestrators solve the problem of _density_. Say, my stack is made
of services running in 128MB of RAM. I want to scale them quickly in and out
on-demand.

If you don't have a density problem, e.g. your application will need 2GB of
RAM anyway, I would say go with docker & EC2 autoscaling. Much easier to
handle, you won't have to debug weird network/logging issues and all the
problems that orchestrators bring along.

If you choose to go with an orchestrator, then for a small team I would advise
taking a look at Docker Swarm. Swarm comes with service discovery, load
balancing, and secrets & config handling built in. That's a big win for
smaller teams. The learning curve is rather small if you're already using
docker. What you will have to handle if you go with swarm is:

    
    
        - Cluster initialisation (you might choose not to automate this part, but you'd better automate the rest)
        - Node-level autoscaling
        - Container autoscaling
        - Dynamic routing (traefik or nginx + confd will solve this for you)
        - Security (same goes for k8s or any other orchestrator; security requires an eye for detail & experience)
    

There are other minor issues (e.g. Swarm's internal load balancer won't
forward the real IP of the request to the internal service, which can be a
PITA in some cases; there are workarounds, mind you), but all orchestrators
have minor issues and limitations.
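
To make the swarm part concrete: a stack file is just a compose file with a `deploy` section, handed to `docker stack deploy`. A sketch, with made-up image and secret names:

```yaml
# hypothetical stack: deploy with `docker stack deploy -c stack.yml myapp`
version: "3.7"
services:
  web:
    image: registry.example.com/web:1.2.3
    ports:
      - "80:8080"
    deploy:
      replicas: 3            # swarm schedules these across the cluster's nodes
      update_config:
        parallelism: 1       # rolling update, one task at a time
    secrets:
      - db_password          # mounted at /run/secrets/db_password in the container
secrets:
  db_password:
    external: true           # created beforehand with `docker secret create`
```

Scaling in and out is then `docker service scale myapp_web=N`.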

Another word of caution about orchestrators: most teams don't need
orchestration and don't have use cases that simpler setups cannot solve.
Simple is smart, simple is genius. Keep it simple, until you can't keep _that_
simple anymore.

Oh, don't even think about adding a persistence layer inside the
orchestrator! A service that uses Redis for caching, for example, could be
deployed as a _stack_ in a swarm cluster. But if you need persistence that
goes beyond the lifecycle of the container, keep that data out :-)
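
For example, a cache-only Redis is fine inside the stack, because losing the container loses nothing you can't rebuild (the tuning values here are made up):

```yaml
# ephemeral cache inside the swarm stack: no volume, safe to lose on reschedule
services:
  cache:
    image: redis:7-alpine
    command: ["redis-server", "--maxmemory", "64mb", "--maxmemory-policy", "allkeys-lru"]
    deploy:
      replicas: 1
```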

Good luck!

------
dylanhassinger
start small, don't prematurely optimize

