Ask HN: Do you bake AMIs for AWS deployments?
14 points by pas256 1467 days ago | 15 comments
I am curious how people are doing their staging and production deployments on AWS. Do you bake everything into the AMI and do nothing at boot? Do you boot a vanilla AMI and do all configuration during boot? Something in the middle? If you don't fully bake, is it because it is too hard to manage?



I make complete AMIs with packer, configure them entirely using environment variables in userdata, configuration data in etcd, and shell scripts, and run all services in docker containers, which I also build using packer. With all services in containers, AMIs are almost never rebuilt and there is no need for configuration management/mutating infrastructure.
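As a rough sketch of that kind of boot flow (not the actual scripts; the variable names, etcd endpoint, and image name below are placeholders):

    #!/bin/bash
    # Rough userdata sketch: environment comes in via userdata,
    # runtime config comes from etcd, and the service runs in a
    # pre-built Docker container. All names below are placeholders.
    export APP_ENV=production
    ETCD=http://10.0.0.10:4001

    # Read one config value from etcd's HTTP keys API (v2-style keyspace assumed)
    DB_HOST=$(curl -s "$ETCD/v2/keys/myapp/db_host" \
      | sed 's/.*"value":"\([^"]*\)".*/\1/')

    # Start the container that was built separately (e.g. with packer)
    docker run -d \
      -e APP_ENV="$APP_ENV" \
      -e DB_HOST="$DB_HOST" \
      -p 80:8080 \
      myorg/myapp:latest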

Building containers with packer is easier than switching to Dockerfiles for existing builds, but does not support fast, incremental build and deploy or tagging. Even without those features, I see no advantages in traditional CM other than the convenience of familiarity and legacy.


In addition to my other comment in this thread (in response to shykes), I want to respond to this:

> but does not support fast, incremental build and deploy or tagging

Deploy/tagging is actually already a PR to Packer and will be merged shortly. Incremental builds are under development; they'll hopefully make it into the next version of Packer, and if not, the one after.

And the incremental builds will work for AWS, VirtualBox, etc. as well.


What does a typical packer config file look like for building a docker container? I think of it as a useless abstraction on top of docker, but that's probably because I only generate containers and don't need the other packer targets. I only ever need one AMI, GCE image or vbox: the boot2docker base. And even that is exported from a container :)


Since building Docker support into Packer, I've heard a few beneficial points from some of the companies I've helped integrate it, as well as from some users. I'm not here to convince you, just to share what I feel are some valid use cases. Portability is not the only reason, but it is one.

* Software installation/configuration knowledge remains solely in existing Chef/Puppet/etc. code-bases. Dockerfiles can add another format for software to be "installed" (or "packaged" if you prefer). Packer + Docker allows you to use your existing expertise, CI process, etc in order to create these containers.

* Common "image" configuration format: again Dockerfiles represent a Docker-specific way of building images. This is all well and good, but it is still very common to have multiple types of images (AMIs, Docker containers, VirtualBox, etc.). In a world where Docker isn't used everywhere, it is a burden to maintain multiple methods of building images. Packer provides a single way to do it that is flexible to multiple platforms. And even if an org decides to transition completely to Docker, Packer helps get them there. Perhaps they want to switch to Dockerfiles after that, but there is still point #1 above.

* Portability: Packer represents a low-risk way to adopt Docker containers. Dockerfiles are somewhat of an "all-in" approach to Docker. If you don't like Docker, or Docker isn't good for this specific use case (yet, or ever, doesn't matter), then Dockerfiles have to be translated over to another format. As I'm sure you know, big IT is all about minimizing risk when adopting new technologies (in fact, risk is one of the top objections to adopting new technologies that we have to fight!). Packer represents a way to say "yes, Docker is new, but Packer provides a pretty low-risk way to get into it. Let's first build vSphere images, like you're used to, and see how those transition to Docker containers. If you don't like it, we still built automation to build vSphere VMs!"

* Extensibility: Packer is very plugin-friendly. You can hook into almost anything. This allows some nice plugins to exist to help augment the process for building images, whether they be containers or not. If Dockerfiles don't support a command to do something, then Packer plugins can very easily do that for you. Maybe it doesn't make sense for this certain feature to be a core feature of Dockerfiles, OR Packer. Either way, it doesn't matter, because the org can just build a plugin for themselves and use it internally. No harm done.

* Process friendliness: In addition to the portability above, centralizing on Packer for image creation means one to N fewer processes to adhere to. Docker has a different process for building containers. Aminator has a different process. Every new process is a new special-snowflake CI handler to run it, new education for employees, new maintenance. By using Packer, you can use the same CI runners/parsers/steps (Bamboo, Jenkins, etc.) to build any sort of image.

And to answer your question of what a Packer config file looks like, here is a basic but fairly typical config file:

    {
      "builders": {{
        "type": "docker",
        "image": "ubuntu",
        "export_path": "image.tar"
      }],

      "provisioners": [{
        "type": "shell",
        "scripts": ["base.sh", "nginx.sh"]
      }]
    }
Pretty simple. Easily human and machine editable/readable. Also, if you happened to want to build an AMI too, it is not much different. And finally, if you use Chef/Puppet/etc, you can just drop that in there, and it works just like you'd expect. Packer even installs Puppet/Chef for you (if you want)!
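For example, adding an AMI build to the same template looks roughly like this sketch (the region, source AMI, SSH user, and instance type below are placeholders):

    {
      "builders": [{
        "type": "docker",
        "image": "ubuntu",
        "export_path": "image.tar"
      }, {
        "type": "amazon-ebs",
        "region": "us-east-1",
        "source_ami": "ami-xxxxxxxx",
        "instance_type": "m1.small",
        "ssh_username": "ubuntu",
        "ami_name": "myapp {{timestamp}}"
      }],

      "provisioners": [{
        "type": "shell",
        "scripts": ["base.sh", "nginx.sh"]
      }]
    }
The same shell provisioners run against both builders, so one template produces both the container and the AMI.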

I hope that clears things up. Packer has been helping with adoption of Docker for many people I've helped! I think it's clear from my work on Vagrant and Packer (and some future stuff) that the one thing I try to avoid is lock-in of any sort. I focus on human process rather than technical specifics. You can argue that Packer itself is a lock-in format, but the point is that it's a single, shared format for many targets. Its agenda is to be as helpful as possible to projects that need images, and not to discriminate in any way.

And to address the grandparent (I'll comment directly on that too): with regard to speed, we're working on what we're calling "snapshotting" functionality in Packer now. With this, Packer will snapshot containers at various points in the build process, just like `docker build`. So when you run a `packer build`, it'll start only from the point it needs to, rather than from scratch. A cool thing is that this feature will extend to all the other builders, too, so if you're building a VirtualBox VM from an ISO, for example, it won't reinstall the OS if it doesn't have to. Cool stuff.


So, does this mean I am asking the wrong question? Are AMIs less relevant now that we have containers?

My sense was Docker was the future, but AMIs are the present. Perhaps that is wrong?


Interesting process.

We tried to incorporate Packer into the Docker release process but found it just took way too long.

Best of luck, I hope it works out for you.


Cool. This allows you to use ASGs too.

I am hearing more people using Docker on AWS, even though the Docker guys don't recommend production use yet.

Without fast, incremental build and deploy, how do you deploy new application code?


Using Docker in production is about stability of API, not technology.

Meaning, for those companies/projects with fast, iterative cycles, who want to take advantage of Docker but understand it's an investment in time over the long run, it's a great fit today.

For those companies/projects with slower cycles, who want a solution that fits their needs out of the box and is backed by some sort of support, 1.0 is the target.


I create an AMI with a bare-minimum OS. Then I use a configuration management tool to install all software packages, libraries, and configuration. My current favorite is Ansible (ansibleworks.com), but Chef and Puppet are other options.

Updates are easier this way than having to rebake images.


Easier in what sense? Baking, deploying, or managing the AMIs?

Also, are you using AutoScaling Groups with this methodology?


I think "more flexible" is a better word -- it can do more than a baked static image, because you can also use it to dynamically manage a running system post-boot.

Ansible can deploy all of your dependencies, keep them updated, push out configuration changes, and deploy your main application code.
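A minimal sketch of that kind of playbook (the hosts, package, paths, and repo below are placeholders, not my actual setup):

    # Sketch only; hosts, packages, paths, and the repo are placeholders.
    - hosts: webservers
      sudo: yes
      tasks:
        - name: install dependencies
          apt: name=nginx state=present update_cache=yes

        - name: push out configuration changes
          template: src=nginx.conf.j2 dest=/etc/nginx/nginx.conf
          notify: restart nginx

        - name: deploy the application code
          git: repo=https://github.com/example/myapp.git dest=/srv/myapp version=master

      handlers:
        - name: restart nginx
          service: name=nginx state=restarted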

I don't use AutoScaling, but no reason it couldn't be used with Ansible. The docs have a bit of detail on how to do it. http://docs.ansible.com/guide_aws.html


I wrote the EC2 inventory plugin, so yes, I am a big fan of Ansible. I however, now use Ansible almost exclusively to build AMIs. Do your application code deploys require downtime, or do you have a technique to keep the service online while making changes?

I highly recommend you start using ASGs. It is only a matter of time before things go bad if you don't.


I use Ansible for all configuration management. Boxes that belong to ASGs use Ansible to create a pre-baked AMI, while the rest are just handled with Ansible on a case-by-case basis.


Are you using the Aminator or Packer Ansible provisioners, or do you have another technique for building those AMIs?


Right now I'm just using an Ansible playbook to provision the box, create the AMI, and destroy the box.
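Something in this shape, as a sketch only (the AMI ID, key name, and naming are placeholders, the real configuration tasks are elided, and AWS credentials/region are assumed to come from the environment):

    # Sketch of the launch -> configure -> image -> destroy flow.
    - hosts: localhost
      connection: local
      gather_facts: no
      tasks:
        - name: launch a temporary build instance
          ec2: image=ami-xxxxxxxx instance_type=m1.small key_name=build group=default wait=yes
          register: build

        # ... configuration tasks/plays run against the new instance here ...

        - name: register an AMI from the configured instance
          ec2_ami: instance_id={{ build.instances[0].id }} name=myapp-base wait=yes

        - name: destroy the build instance
          ec2:
            state: absent
            instance_ids: "{{ build.instance_ids }}"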

In the near future I plan on moving to Packer since it's the perfect tool for the job.




