

Ask HN: Do you use AWS autoscaling with fully baked AMIs? - renaudg

Is anyone using autoscaling on AWS with fully baked AMIs and blue/green deployments (as opposed to base images that fetch the latest app code from git or S3 at boot time, then get explicitly deployed to for updates)?

This is what this blog post describes as "phoenix" rather than "snowflake" servers: http://blog.woorank.com/2013/10/phoenix-servers-packerio/

I'm using Packer and Ansible to build a ready-to-go AMI in a few minutes from a base OS image.

But then, what tools do you use to automate the process of getting this AMI live? That entails: creating a new Launch Configuration with the AMI, pointing a standby autoscaling group at the new Launch Configuration, ramping up the number of instances in it, routing live traffic to it (likely by updating a Route53 alias record to point at the right ELB), and finally scaling down the previously live group. Rinse and repeat.

I'm about to roll my own boto-based script to do that, but this has to be a common pattern: is there anything out there already?
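For what it's worth, the rotation steps listed in the question can be sketched as plain functions that build the boto3 call parameters. This is a hedged outline, not a finished deploy tool: the app name, colors, and record names are made-up placeholders, and the actual AWS calls are only shown in comments.

```python
def next_color(live_color):
    # The standby group is whichever color is not currently live.
    return "green" if live_color == "blue" else "blue"

def launch_config_params(app, color, ami_id, instance_type="t3.medium"):
    # Parameters for autoscaling.create_launch_configuration(**params).
    # Launch configurations are immutable, so each AMI gets a fresh one.
    return {
        "LaunchConfigurationName": f"{app}-{color}-{ami_id}",
        "ImageId": ami_id,
        "InstanceType": instance_type,
    }

def asg_update_params(app, color, lc_name, desired):
    # Parameters for autoscaling.update_auto_scaling_group(**params):
    # point the standby group at the new launch config and scale it up.
    return {
        "AutoScalingGroupName": f"{app}-{color}",
        "LaunchConfigurationName": lc_name,
        "MinSize": desired,
        "DesiredCapacity": desired,
    }

def route53_change(zone_id, record, elb_dns, elb_zone_id):
    # Parameters for route53.change_resource_record_sets(**params):
    # repoint the alias record at the standby group's ELB to cut over.
    return {
        "HostedZoneId": zone_id,
        "ChangeBatch": {
            "Changes": [{
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": record,
                    "Type": "A",
                    "AliasTarget": {
                        "HostedZoneId": elb_zone_id,
                        "DNSName": elb_dns,
                        "EvaluateTargetHealth": True,
                    },
                },
            }],
        },
    }

# Rotation outline (boto3 clients and waiting logic elided):
# 1. standby = next_color(live)
# 2. asg.create_launch_configuration(**launch_config_params(...))
# 3. asg.update_auto_scaling_group(**asg_update_params(...))
# 4. wait for the new instances to pass ELB health checks
# 5. r53.change_resource_record_sets(**route53_change(...))
# 6. scale the previously live group down
```

The real script would also need polling between steps 3 and 5 (instances take a while to come InService), which is where most of the work in a hand-rolled version goes.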
======
lukeck
We use Cloudformation to define everything in our stack. The AutoScalingGroup
resource has an attribute called UpdatePolicy that describes how instances
with the new AMI are deployed into the existing autoscaling group.

As long as there is enough difference between the current number of instances
in service and the maximum size of the ASG, existing instances won't be
terminated until the new instances are passing healthchecks and the load
balancer is sending them traffic. If the maximum has been reached, enough
existing instances will be killed to provide room for the next batch of new
instances.
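The UpdatePolicy attribute described above sits on the AutoScalingGroup resource and looks roughly like this (a fragment only; the resource name, sizes, and pause time are illustrative, not taken from the comment):

```yaml
WebServerGroup:
  Type: AWS::AutoScaling::AutoScalingGroup
  Properties:
    LaunchConfigurationName: !Ref WebServerLaunchConfig
    MinSize: "2"
    MaxSize: "6"
  UpdatePolicy:
    AutoScalingRollingUpdate:
      MinInstancesInService: 2   # old instances kept serving during the roll
      MaxBatchSize: 2            # how many instances to replace per batch
      PauseTime: PT5M            # wait between batches (ISO 8601 duration)
```

With MaxSize high enough, CloudFormation adds the new-AMI instances alongside the old ones; once the headroom is gone, it terminates a batch of old instances to make room, as described above.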

------
sdfjkl
I've built just this for a client recently. There's one Ansible playbook that
can be used to build a template for baking the AMI, and that same playbook can
also be used to update running instances on the fly, so you can update code
without having to replace instances (which I strongly believe is better, and
it's certainly faster than doing the instance cycling dance).

Then there's a second playbook to bake the template into an AMI, update the
launch configs and tell the autoscaling groups to use the new launch configs.
For the latter two I had to write Ansible plugins (which the client might let
me open source, once documented).
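The "update the launch configs" step that such a plugin performs is a little awkward because launch configurations are immutable: you clone the old one with the new AMI ID and repoint the ASG. A hedged sketch of that cloning, as a pure helper over the dict shape boto3's describe_launch_configurations returns (the versioned naming scheme is an assumption):

```python
def clone_launch_config(old_lc, new_ami, version):
    """Build create_launch_configuration params from an existing launch
    configuration description, swapping in a new AMI ID."""
    # Carry over the settings worth keeping; skip empty/absent ones.
    keep = ("InstanceType", "KeyName", "SecurityGroups", "UserData",
            "IamInstanceProfile")
    params = {k: old_lc[k] for k in keep if old_lc.get(k)}
    # Assumed convention: names like "web-v3" -> "web-v4".
    base = old_lc["LaunchConfigurationName"].rsplit("-v", 1)[0]
    params["LaunchConfigurationName"] = f"{base}-v{version}"
    params["ImageId"] = new_ami
    return params
```

The ASG side is then a single update_auto_scaling_group call pointing at the new name, after which the old launch configuration can be deleted.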

------
sthulbourn
The BBC use pre-baked AMIs exclusively for our deployments; we have a
concept of a "bakery" which creates a fully formed AMI that we then use in our
launch config.

The idea is that when you do a deploy, the launch config is updated to use
the new AMI ID. The old instances are terminated and the new ones pop up in
their place. You can play with the scaling options to set an update pause
time. We can get zero downtime with this.

------
ryanfitz
I bake AMIs using both Packer and Ansible and then do blue/green deployments
using Asgard from Netflix. Asgard streamlines the whole process of creating a
new autoscaling group, ramping up new instances and then cutting off live
traffic to the older instances. It also exposes a simple HTTP API to fully
automate deploys.

I have been doing continuous deployments baking AMIs with Asgard for a couple
of years now and have had great results.

------
pjungwir
If you're comfortable with Chef, OpsWorks makes this pretty easy. There are
several lifecycle stages, including one for deploying your latest app code.
OpsWorks also lets you autoscale based on load or time of day. There is a
Capistrano plugin to launch new deployments if that's what you're used to.
It's not as well documented as a lot of AWS, but it's a great tool.

