Hacker News

At a previous job, our build pipeline

* Built the app into a self-contained .jar (it was a JVM shop)

* Put the app into an Ubuntu Docker image. This step was arguably unnecessary, but just as Maven is used to isolate JVM dependencies ("it works on my machine"), the purpose of the Docker image was to isolate dependencies on the OS environment.

* Baked the Docker image into an AWS AMI that had only Docker on it, and whose sole purpose was to run the Docker image.

* Combined the AMI with an appropriately sized EC2 instance type.

* Spun up the new EC2s and flipped the AWS ELBs to point at them, blue-green style.
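The Docker step in the pipeline above could look something like this minimal Dockerfile (the base image, JRE version, and paths are illustrative assumptions, not details from the original pipeline):

```dockerfile
# Illustrative sketch: wrap the self-contained jar in an Ubuntu image
FROM ubuntu:22.04
RUN apt-get update \
    && apt-get install -y --no-install-recommends openjdk-17-jre-headless \
    && rm -rf /var/lib/apt/lists/*
COPY app.jar /opt/app/app.jar
ENTRYPOINT ["java", "-jar", "/opt/app/app.jar"]
```

Everything the app needs from the OS lives in the image, so the AMI underneath only has to know how to run a container.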

The beauty of this was the stupidly simple process and the complete isolation of all the apps. No shared cluster running apps with diverse CPU and memory requirements side by side. No K8s complexity. Still had all the horizontal scaling benefits, etc.




I feel like if you have an AMI you don't need Docker, and if you have Docker you don't need a new AMI each time.

For me it's about knowing "what is running." If I can get a binary, Docker image, or AMI that tells me exactly what is running, that's all I really need to care about. For Docker without Fargate, k8s, Nomad, etc., it's probably best to simply have an Ansible or Salt config that pulls down the binary/Docker image and updates a systemd service (or similar) on server group "a", then does the same thing to server group "b".
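A hedged sketch of that pull-and-restart step as an Ansible play (the group names, image name, and unit name are all made up for illustration):

```yaml
# Illustrative Ansible sketch: pull the release image and bounce the
# service on one server group; rerun with group_b once group_a is healthy.
- hosts: group_a
  become: true
  vars:
    release_tag: "1.2.3"          # hypothetical version
  tasks:
    - name: Pull the release image
      community.docker.docker_image:
        name: "registry.example.com/myapp:{{ release_tag }}"
        source: pull
    - name: Restart the systemd unit that runs the container
      ansible.builtin.systemd:
        name: myapp
        state: restarted
```

Running the same play against group "b" after group "a" looks healthy gives you a crude but perfectly serviceable rolling deploy.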

Occasionally you create a new base AMI with updates and expected scripts/agents/etc. and replace the existing instances.

[edit: typo]


Yeah, the Docker step isn't strictly speaking necessary for the deployment; it's more so that devs on diverse OSes can isolate dependencies on the OS. At JVM shops, the same .jar unfortunately doesn't reliably and reproducibly run the same way across machines. The JVM can only isolate dependencies within itself -- if your app requires, e.g., certain /etc/hosts values, that's a dependency that can and probably should be captured in your Docker setup, which isolates it and makes it explicit.
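One wrinkle worth noting: Docker manages a container's /etc/hosts at start time, so entries appended via RUN in a Dockerfile don't survive into the running container. The usual way to make such host entries explicit is at run time, e.g. with `--add-host` or a compose `extra_hosts` stanza (service name, image, and addresses below are hypothetical):

```yaml
# docker-compose sketch: declare the /etc/hosts dependency explicitly
services:
  app:
    image: myapp:latest                    # hypothetical image name
    extra_hosts:
      - "legacy-db.internal:10.0.0.12"     # becomes an /etc/hosts entry at start
```

Either way, the dependency is written down in version control instead of living silently on someone's machine.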

As for the AMIs, the benefit of always making new ones and storing them (tagged with the git commit hash), versus mutating a running EC2, is that it makes rollback incredibly simple -- if a release goes wrong, just spin up new EC2s from the previous AMI and voilà.
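Once AMIs carry commit tags, picking the rollback target is trivial. A hedged sketch of just the lookup logic (the data shapes are assumptions; a real version would get this list from an EC2 describe-images call):

```python
# Sketch: pick the AMI to roll back to, given AMIs tagged with a commit
# hash and a build timestamp. The hardcoded list stands in for what a
# real EC2 API query would return.
from dataclasses import dataclass

@dataclass
class Ami:
    ami_id: str
    commit: str
    built_at: int  # epoch seconds

def rollback_target(amis, bad_commit):
    """Return the newest AMI built before the bad release."""
    bad = next(a for a in amis if a.commit == bad_commit)
    older = [a for a in amis if a.built_at < bad.built_at]
    return max(older, key=lambda a: a.built_at)

amis = [
    Ami("ami-001", "a1b2c3", 100),
    Ami("ami-002", "d4e5f6", 200),
    Ami("ami-003", "0badbad", 300),  # the release that went wrong
]
print(rollback_target(amis, "0badbad").ami_id)  # → ami-002
```

Spin up fresh EC2s from that AMI and flip the load balancer back, same as any other deploy.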

In general we prefer immutability of all things. State is the devil, especially when it comes to "what's running in prod?"


Yes. I do k8s all day, and this solution wins every time for simplicity where possible. I get it -- service discovery, health checks, zero-downtime deployments, the rest of the k8s goodies -- doing all that in a generic way is hard!

So just orchestrate your business logic. There's no shame in that. And you might well come away with a cleaner system than if you tried to wedge everything into Kubernetes or some other orchestrator.

I've come full circle -- started with Mesos, moved to Kubernetes, dabbled with Nomad/Consul. I love what Kubernetes offers, but avoid the complexity if you can. If you don't NEED to pay the price, just don't.


It’s just funny to use three separate encapsulating technologies inside each other... when do we add another layer?


When it provides benefits which either can't be solved by or clearly don't belong to any other layer.



