
AWS CodeDeploy - helper
http://aws.amazon.com/codedeploy/
======
STRML
While this looks nice, part of me can't help but be annoyed by yet _another_
deployment option on AWS. We now have CloudFormation, Elastic Beanstalk (which
can take many forms, including Docker), CodeDeploy, and OpsWorks.

I can imagine how, for a new user, it's utterly baffling which of these
options is best for the long term, with the least friction. I use OpsWorks
quite a bit but have found it very challenging, and the feedback cycle when
attempting to develop new cookbooks is excruciatingly slow.

All I want, personally, is a system that uses a set of interchangeable scripts
that represent dependencies, so my server configs can live in version control.
It doesn't even need to run on multiple OSs (which seems to be a central tenet
of Chef). It just needs to deploy/rollback with zero downtime, and ideally
autoscale as quickly as possible. Is this it? Is there any way to know without
spending weeks fleshing out how it works?

~~~
kxo
How about an Amazon-managed Docker (not shit ElasticBeanstalk)
infrastructure/config management system?

If they handled things like EBS integration, orchestration, ELB integration,
and all of the usual stuff, I'd be sold in a second.

~~~
STRML
From my understanding, after talking to them at Web Summit, something like
this is in the works. And they're aware of how incredibly complicated the AWS
console is becoming. I don't know how or when they plan to address it,
however.

~~~
mentat
They promise lots of things (like cross-region VPC peering) person to person
at Summits...

~~~
STRML
Yes, I am still waiting for something prettier than a software VPN or Openswan.
Their docs show 5 ways to connect VPCs, all of them complex, all of them with
significant downsides. One would think this sort of thing would be easier.

------
ryanfitz
I've only briefly read over the documentation, but this service seems not to
follow the deployment best practices that AWS and others such as Netflix have
been talking about for years: specifically, the pattern of pre-baking an AMI
with the current version of the app you are deploying, and any other needed
software, completely installed, then having an autoscaling group boot that AMI
in a few seconds and start working. This greatly helps with scaling up, doing
rolling upgrades, and making rollbacks very easy.

The CodeDeploy service seems to operate by you manually launching a base EC2
instance with the CodeDeploy agent; the agent then checks out your git code on
the live instance, runs any provisioning steps, and, if things break, somehow
rolls back all that work, still on the live instance.
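For reference, the agent drives those provisioning steps from an appspec.yml
at the root of the revision. A minimal sketch (the destination path and script
names below are made up) looks like:

```yaml
version: 0.0
os: linux
files:
  - source: /
    destination: /srv/app        # where the checked-out revision lands
hooks:
  ApplicationStop:               # runs against the *previous* revision
    - location: scripts/stop.sh
      timeout: 60
  BeforeInstall:
    - location: scripts/install_deps.sh
  ApplicationStart:
    - location: scripts/start.sh
  ValidateService:               # failure here marks the deployment failed
    - location: scripts/validate.sh
      timeout: 120
```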

I'm sure this is still a big improvement for companies who are manually SSHing
into servers and running deployments by hand, but as someone who pre-bakes
AMIs and does rolling upgrades with autoscaling groups, this service seems
like a step backwards.

~~~
fmotlik
I've been working on the CodeDeploy integration here at Codeship and have been
working with the service for a bit (as a preface to my thoughts).

While immutable infrastructure is, in our opinion (and I've written about this
extensively), the way to go in the future, updating systems in place is still
the primary way to deploy systems and will be for a while. By providing a
centralized system to upload new releases and manage the deployment (how many
instances get the new release, and in what timeframe), you can take away some
of the security problems of opening up ports for access, and the potential
deployment errors when an SSH connection dies.

Especially when deploying into a large infrastructure, connecting into each
instance to update it becomes painful. That's where an agent-based service
like CodeDeploy is really powerful: it removes the single point of failure
that is the machine/network you deploy from.

Together with Elastic Beanstalk, OpsWorks, and CloudFormation, they now really
start to cover all the deployment workflows.

Definitely a great service that will, in my opinion, become very important to
many teams. You can also read more about our specific integration on our blog:
[http://blog.codeship.com/aws-codedeploy-codeship/](http://blog.codeship.com/aws-codedeploy-codeship/)

~~~
mwarkentin
Have you written anywhere about how you guys deal with operational monitoring
(e.g. Boundary, New Relic, etc.) when you're spinning up brand new instances
all the time?

~~~
fmotlik
A bit: [http://blog.codeship.com/lxc-memory-limit/](http://blog.codeship.com/lxc-memory-limit/)

We use Librato for monitoring our build server infrastructure and mostly only
look at max/min values for metrics that could mean trouble. Generally we're
able to separate the data of different instances by instance ID, so we can
look into them individually.

We use New Relic for our Rails application on Heroku and pump Heroku data into
Librato as well (we love data and metrics).

And of course you can always send me an email to flo@codeship.com with
questions.

------
kabell
BTW, we now integrate with this from CircleCI:
[https://news.ycombinator.com/item?id=8597439](https://news.ycombinator.com/item?id=8597439)

There's some discussion in that post of how it compares to pre-baking, etc. Of
course there are trade-offs either way. CodeDeploy does require that you are
careful with your lifecycle scripts to make deployments as atomic as possible.
At least they provide a good selection of default lifecycle events for you to
take advantage of.
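As a concrete (hypothetical) illustration of the atomicity point, a lifecycle
script can stage each revision into a versioned directory and flip a symlink
with a single rename, so a half-copied release never serves traffic. All the
paths and the directory layout below are made up:

```shell
#!/usr/bin/env bash
# Hypothetical ApplicationStart-style hook: activate a release by
# atomically swapping a "current" symlink. Assumed layout:
#   $RELEASES/<version>/  - unpacked revisions
#   $CURRENT              - symlink the app server actually serves from
set -euo pipefail

RELEASES="${RELEASES:-/srv/app/releases}"
CURRENT="${CURRENT:-/srv/app/current}"

activate_release() {
  local version="$1"
  # Build the new symlink next to the old one, then rename over it.
  # rename(2) is atomic, so readers see either the old release or the
  # new one -- never a half-switched state.
  ln -sfn "$RELEASES/$version" "$CURRENT.tmp"
  mv -T "$CURRENT.tmp" "$CURRENT"
}
```

A failed ValidateService hook can then roll back by calling the same function
with the previous version name.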

------
bkeroack
Here's an open source tool that does something very similar (and you aren't
vendor-locked into AWS):

[https://bitbucket.org/scorebig/elita](https://bitbucket.org/scorebig/elita)

------
caiob
As a beginner I have a hard time understanding all these services that Amazon
provides. I know I should probably be using them, but I don't know which one.

~~~
Stoo
What is it you're trying to do? It might be worth taking a look at the storage
provided by S3 and how you'd go about hosting a static website (if that's
something you might need). Or set up a free micro EC2 instance; that will give
you a Linux box which you can play around with.

------
marbemac
How does this compare to Deis? Does it serve the same use case, albeit locked
into AWS?

Discussion on Deis from yesterday:
[https://news.ycombinator.com/item?id=8591209](https://news.ycombinator.com/item?id=8591209)

------
deverton
This looks a lot like Marathon [1] though without some of the resource
abstractions that Mesos [2] provides underneath.

1\.
[https://mesosphere.github.io/marathon/](https://mesosphere.github.io/marathon/)
2\. [https://mesos.apache.org/](https://mesos.apache.org/)

------
djhworld
Is this going to integrate with Docker? It would make a great orchestration
platform.

~~~
grosskur
That was my initial thought, too. But it looks like the Docker-related news
they hinted at is still coming:

[https://twitter.com/jeffbarr/status/529493907839533056](https://twitter.com/jeffbarr/status/529493907839533056)

------
samstokes
This could be a big deal in terms of raising the bar for deployment practices.

Right now "nobody ever got fired for" setting up deployment via rsync and some
ad-hoc shell scripts. That works for a single host, although it's not great
for reproducibility. But as soon as you go to multiple hosts you need some
degree of orchestration, monitoring, and integration with your load balancer
to avoid downtime.
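For context, the single-host baseline being described is roughly a script like
this (the host, paths, and service name are all hypothetical, and the commands
are printed rather than executed so the sketch stays side-effect free):

```shell
#!/usr/bin/env bash
# The classic ad-hoc deploy: rsync the tree over, restart the service.
# Host, paths, and service name are made up for illustration.
set -euo pipefail

HOST="deploy@app01.example.com"
SRC="./build/"
DEST="/srv/app"

run() { echo "+ $*"; }   # swap the body for "$@" to actually execute

run rsync -az --delete "$SRC" "$HOST:$DEST"
run ssh "$HOST" "sudo service app restart"
```

It works, until you have two hosts and need to drain one from the load
balancer before restarting it.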

CodeDeploy offers those benefits, so if it turns out to be even slightly good,
it could become the "nobody ever got fired for" choice, for any non-trivial
app running on AWS.

~~~
rev_bird
If the job is "get an app onto a bunch of boxes and load balance the healthy
ones," I feel like AWS has already been doing that for a long time – deploy
your code to a box, create an AMI from the box, and use it as a launch
configuration for an auto scaling group. New code=new box=new AMI, and then
you don't have to worry about the mechanics of moving code to a bunch of boxes
at the same time.
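The bake-and-rotate flow described above maps onto a few AWS CLI calls; a
sketch, where every ID and name is invented and the commands are echoed
instead of executed:

```shell
#!/usr/bin/env bash
# Sketch of the pre-baked-AMI rotation: bake an image, point a launch
# configuration at it, roll the autoscaling group. IDs and names are
# hypothetical; commands are printed so the sketch has no side effects.
set -euo pipefail

run() { echo "+ $*"; }   # swap the body for "$@" to actually execute

# 1. Bake an AMI from an instance that already has the new code on it.
run aws ec2 create-image --instance-id i-0abc1234 --name app-v42

# 2. Create a launch configuration that boots that AMI.
run aws autoscaling create-launch-configuration \
    --launch-configuration-name app-v42 \
    --image-id ami-0def5678 --instance-type m3.medium

# 3. Point the autoscaling group at the new launch configuration; old
#    instances are then replaced to finish the rolling upgrade.
run aws autoscaling update-auto-scaling-group \
    --auto-scaling-group-name app-asg \
    --launch-configuration-name app-v42
```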

This seems like a tiny step forward for orgs who are deploying code to boxes
that they never take down, but for the orgs that have been doing it the AWS-
prescribed (immutable) way, I'm having trouble seeing how this is useful at
all.

~~~
samstokes
I think immutable infrastructure is probably the way forward, but it's not yet
easy enough to be the default for lazy people.

 _tiny step forward for orgs who are deploying code to boxes that they never
take down_

That's exactly why this is a clever move - it's a better way to do what you
already know how to do. This should get more teams using responsible
deployment practices. If you have to first learn a whole new mindset about
infrastructure, most people just won't bother, and will keep on rsyncing.

------
bmajz
Interesting. Between Elastic Beanstalk, OpsWorks, and now CodeDeploy it seems
like AWS is taking over every production developer workflow from the hobbyist
on up.

~~~
general_failure
I find none of them good enough for my workflow. I think Heroku nailed it with
their deployment approach for my case.

Beanstalk worker tiers are a nightmare. You need to place all your workers in
separate repos, for example, for the git aws.push-style workflow to work.

~~~
bmajz
Good point. Plus Heroku just gives you so many more options than the 3-4 that
Elastic Beanstalk provides, although Docker integration potentially changes
the calculus significantly. Interesting that it's taking EB so long to catch
up to Heroku on that front.

------
demircancelebi
I am having a hard time understanding how CodeDeploy will change my current
deployment workflow (which basically consists of git aws.push). Can someone
here enlighten me?

------
jihip
I really think Heroku nailed the deployment model. I just push my code, and
then the code is deployed. It would be nice to have staging baked in.

------
danielhunt
Region Unsupported

CodeDeploy is not available in EU (Ireland). Please select another region.

Supported Regions: US East (N. Virginia), US West (Oregon)

