
Ask HN: Do you bake AMIs for AWS deployments? - pas256
I am curious how people are doing their staging and production deployments on AWS. Do you bake everything into the AMI and do nothing at boot? Do you boot a vanilla AMI and do all configuration during boot? Something in the middle? If you don't fully bake, is it because it is too hard to manage?
======
benblack
I make complete AMIs with packer, configure them entirely using environment
variables in userdata, configuration data in etcd, and shell scripts, and run
all services in docker containers, which I also build using packer. With all
services in containers, AMIs are almost never rebuilt and there is no need for
configuration management/mutating infrastructure.
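A boot flow along those lines might look like the sketch below: a user-data script that reads configuration from environment variables (with defaults) and assembles the `docker run` command for the service. All names, defaults, and the etcd endpoint are illustrative assumptions, not benblack's actual setup; the command is echoed rather than executed so the sketch is runnable anywhere.

```shell
#!/bin/sh
# Hypothetical user-data/boot script. Configuration arrives as
# environment variables set in EC2 user data; defaults are for testing.
APP_IMAGE="${APP_IMAGE:-registry.example.com/myapp:latest}"
APP_PORT="${APP_PORT:-8080}"
ETCD_ENDPOINT="${ETCD_ENDPOINT:-http://127.0.0.1:2379}"

# Compose the command that would launch the service container.
# (Echoed rather than executed so the script runs without Docker present.)
CMD="docker run -d -p ${APP_PORT}:${APP_PORT} -e ETCD_ENDPOINT=${ETCD_ENDPOINT} ${APP_IMAGE}"
echo "$CMD"
```

Because all service-specific detail lives in the environment and the container image, the AMI itself stays generic and rarely needs rebuilding.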

Building containers with packer is easier than switching to Dockerfiles for
existing builds, but does not support fast, incremental build and deploy or
tagging. Even without those features, I see no advantages in traditional CM
other than the convenience of familiarity and legacy.

~~~
shykes
What does a typical packer config file look like for building a docker
container? I think of it as a useless abstraction on top of docker, but that's
probably because I only generate containers and don't need the other packer
targets. I only ever need one AMI, GCE image or vbox: the boot2docker base.
And even that is exported from a container :)

~~~
mitchellh
Since building Docker support into Packer, I've heard a few beneficial points
from companies I've helped integrate it, as well as from individual users. I'm
not here to convince you, just to share what I feel are some valid use cases.
Portability is not the only reason, but it is one.

* Software installation/configuration knowledge remains solely in existing Chef/Puppet/etc. code-bases. Dockerfiles can add another format for software to be "installed" (or "packaged" if you prefer). Packer + Docker allows you to use your existing expertise, CI process, etc in order to create these containers.

* Common "image" configuration format: again Dockerfiles represent a Docker-specific way of building images. This is all well and good, but it is still very common to have multiple types of images (AMIs, Docker containers, VirtualBox, etc.). In a world where Docker isn't used everywhere, it is a burden to maintain multiple methods of building images. Packer provides a single way to do it that is flexible to multiple platforms. And even if an org decides to transition completely to Docker, Packer helps get them there. Perhaps they want to switch to Dockerfiles after that, but there is still point #1 above.

* Portability: Packer represents a low-risk way to adopt Docker containers. Dockerfiles are somewhat of an "all-in" approach to Docker. If you don't like Docker, or Docker isn't good for a specific use case (yet, or ever, doesn't matter), then Dockerfiles have to be translated over to another format. As I'm sure you know, big IT is all about minimizing risk when adopting new technologies (in fact, risk is a top reason NOT to adopt new technologies, and one we have to fight!). Packer represents a way to say "yes, Docker is new, but Packer provides a pretty low-risk way to get into it. Let's first build vSphere images, like you're used to, and see how those transition to Docker containers. If you don't like it, we still built automation to build vSphere VMs!"

* Extensibility: Packer is very plugin-friendly. You can hook into almost anything. This allows some nice plugins to exist to help augment the process for building images, whether they be containers or not. If Dockerfiles don't support a command to do something, then Packer plugins can very easily do that for you. Maybe it doesn't make sense for this certain feature to be a core feature of Dockerfiles, OR Packer. Either way, it doesn't matter, because the org can just build a plugin for themselves and use it internally. No harm done.

* Process friendliness: In addition to the portability above, centralizing on Packer for image creation means fewer processes to adhere to. Docker has one process for building containers; Aminator has another. Every new process means a new special-snowflake CI handler to run it, new education for employees, and new maintenance. By using Packer, you can use the same CI runners/parsers/steps (Bamboo, Jenkins, etc.) to build any sort of image.

And to answer your question on "what does a Packer config file look like" here
is a basic, but fairly typical config file:

    
    
        {
          "builders": [{
            "type": "docker",
            "image": "ubuntu",
            "export_path": "image.tar"
          }],
    
          "provisioners": [{
            "type": "shell",
            "scripts": ["base.sh", "nginx.sh"]
          }]
        }
    

Pretty simple. Easily human and machine editable/readable. Also, if you
happened to want to build an AMI too, it is not much different. And finally,
if you use Chef/Puppet/etc, you can just drop that in there, and it works just
like you'd expect. Packer even installs Puppet/Chef for you (if you want)!
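To make the "building an AMI too is not much different" point concrete, here is a hedged sketch of the same template with an `amazon-ebs` builder added alongside the `docker` one; the region, source AMI, and credentials handling are placeholders, not a recommended production config:

```json
{
  "builders": [
    {
      "type": "docker",
      "image": "ubuntu",
      "export_path": "image.tar"
    },
    {
      "type": "amazon-ebs",
      "region": "us-east-1",
      "source_ami": "ami-xxxxxxxx",
      "instance_type": "t2.micro",
      "ssh_username": "ubuntu",
      "ami_name": "nginx-base {{timestamp}}"
    }
  ],

  "provisioners": [
    {
      "type": "shell",
      "scripts": ["base.sh", "nginx.sh"]
    }
  ]
}
```

The same provisioners run against both builders, so one template produces a Docker image and an AMI from identical provisioning steps.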

I hope that clears things up. Packer has been helping with adoption of Docker
for many people I've helped! I think it's clear from my work on Vagrant and
Packer (and some future stuff) that the one thing I try to avoid is lock-in of
any sort. I focus on human process rather than technical specifics. You can
argue that Packer itself is a lock-in format, but the point is that it's a
single, shared format for many platforms. Its agenda is to be as helpful to as
many projects as possible that need images, and not to discriminate in any
way.

And to address the grandparent (I'll comment directly on that too): with
regards to speed, we're working on what we're calling "snapshotting"
functionality in Packer now. With this, Packer will snapshot containers at
various points in the build process, just like `docker build`. So when you run
a `packer build`, it'll start only from the point it needs to, rather than
from scratch. A cool thing is that this feature will extend to all the other
builders, too, so if you're building a VirtualBox VM from an ISO, for example,
it won't reinstall the OS if it doesn't have to. Cool stuff.

~~~
pas256
So, does this mean I am asking the wrong question? Are AMIs less relevant now
that we have containers?

My sense was Docker was the future, but AMIs are the present. Perhaps that is
wrong?

------
dkoch
I create an AMI with a bare minimum OS. Then I use a configuration management
tool to install all software packages, libraries and configurations. My new
favorite is Ansible (ansibleworks.com) but Chef and Puppet are others.

Updates are easier this way versus having to rebake images.

~~~
pas256
Easier in what sense? Baking, deploying, or managing the AMIs?

Also, are you using AutoScaling Groups with this methodology?

~~~
dkoch
I think "more flexible" is a better way to put it -- it can do more than a
static baked image, because you can also use it to manage a running system
dynamically post-boot.

Ansible can deploy all of your dependencies, keep them updated, push out
configuration changes, and deploy your main application code.

I don't use AutoScaling, but no reason it couldn't be used with Ansible. The
docs have a bit of detail on how to do it.
[http://docs.ansible.com/guide_aws.html](http://docs.ansible.com/guide_aws.html)

~~~
pas256
I wrote the EC2 inventory plugin, so yes, I am a big fan of Ansible. I,
however, now use Ansible almost exclusively to build AMIs. Do your application
code deploys require downtime, or do you have a technique to keep the service
online while making changes?

I highly recommend you start using ASGs. It is only a matter of time before
things go bad if you don't.

------
geetarista
I use Ansible for all configuration management. Boxes that belong to ASGs use
Ansible to create a pre-baked AMI, while the rest are just handled with
Ansible on a case-by-case basis.

~~~
pas256
Are you using the Aminator or Packer Ansible provisioners, or do you have
another technique for building those AMIs?

~~~
geetarista
Right now I'm just using an Ansible playbook to provision the box, create the
AMI, and destroy the box.
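A playbook for that launch/provision/bake/destroy flow might be sketched like this; the AMI ID, key name, and role names are illustrative placeholders, not geetarista's actual playbook:

```yaml
# Hypothetical sketch: launch a temp instance, provision it,
# snapshot it to an AMI, then terminate it.
- hosts: localhost
  connection: local
  tasks:
    - name: Launch a temporary build instance
      ec2:
        image: ami-xxxxxxxx
        instance_type: m1.small
        key_name: build-key
        wait: yes
      register: build

    - name: Add the instance to an in-memory group for provisioning
      add_host:
        name: "{{ build.instances[0].public_ip }}"
        groups: bake

- hosts: bake
  roles:
    - common
    - app

- hosts: localhost
  connection: local
  tasks:
    - name: Create the AMI from the provisioned instance
      ec2_ami:
        instance_id: "{{ build.instances[0].id }}"
        name: "app-{{ ansible_date_time.epoch }}"
        wait: yes

    - name: Terminate the build instance
      ec2:
        instance_ids: "{{ build.instances[0].id }}"
        state: absent
```

The `register`ed instance facts carry across plays on localhost, which is what lets the final play find and clean up the build box.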

In the near future I plan on moving to Packer since it's the perfect tool for
the job.

