
Ask HN: Startups on EC2, what does your setup look like? - rpwilcox
EC2 is every startup's favorite web host: pretty much the default place to go.

My question is: what does your EC2 setup look like? Just one instance, and an AMI to spin up new instances when required? Do you use RightScale or Scalarium? A PaaS (Heroku, Nodejitsu)?

How is your database set up? MySQL/Postgres on the same instance or a separate one? Is it replicated (and if so, how)? SimpleDB? DynamoDB? Another NoSQL store?

How do you deploy? Capistrano? Git-style push deploy?

Do you use anything for devops (Chef/Puppet), or do you just set up and update the AMI when you need new things?

Have you had any pain points with your current setup (reliability with Amazon EC2 us-east, for example)?

How hard was it to set up? How did you learn all these things? Or would you like to jump from your current hosting to Amazon EC2 but don't really know how?
======
mryan
Conveniently, AWS has just published a case study on Fashiolista's setup,
which will save me a bit of typing:
<http://aws.amazon.com/solutions/case-studies/fashiolista/>

We run PostgreSQL and use its built-in streaming replication.

Deploys are handled with Fabric - this includes AWS API actions (e.g. removing
instances from the ELB while updating them) as well as pushing code.
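
As a rough sketch of that deploy shape (the hook functions here are placeholders, not Fashiolista's actual Fabric tasks), the orchestration can be separated from the AWS calls; in a real Fabric task, the deregister/register hooks would wrap boto's `deregister_instances`/`register_instances` ELB calls and the deploy hook would push code over SSH with `run()`/`put()`:

```python
# Hedged sketch of a rolling deploy that takes each instance out of the
# load balancer before updating it. The hooks are injected so the
# orchestration logic can be shown (and exercised) without AWS
# credentials; they are assumptions, not the poster's actual code.

def rolling_deploy(instance_ids, deploy, deregister, register):
    """Update instances one at a time, keeping the rest behind the ELB."""
    events = []
    for instance_id in instance_ids:
        deregister(instance_id)         # take it out of rotation
        events.append(("deregister", instance_id))
        deploy(instance_id)             # push the new code to it
        events.append(("deploy", instance_id))
        register(instance_id)           # put it back behind the ELB
        events.append(("register", instance_id))
    return events
```

Updating one instance at a time is the simplest variant; batching is a straightforward extension if serial deploys get too slow.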

We use Puppet as our config management tool, in combination with AMIs. If you
just use a vanilla AMI and do all of your configuration on boot, autoscaling
takes a long time, so we use Puppet to configure instances, then make AMIs of
those. We also run Puppet on boot to do some runtime configuration.

This is automated, I'm planning to put the code on github once I have cleaned
it up a bit.

We operate in eu-west-1, so have thankfully been relatively unaffected by the
problems in us-east. Typical pain points are lack of flexibility in ELBs and
variable performance on EBS - nothing that can't be worked around.

Setting up was relatively straightforward - AWS is well-documented and easy to
experiment with. We did not have Puppet in place before moving to AWS - that
is one thing that would have streamlined the process greatly.

I'm currently working on a book about AWS sysadmin/devops topics - some parts
of the Fashiolista infrastructure will be used to demonstrate concepts in the
book, but I'm always on the lookout for interesting architectures to write
about. If you are doing something interesting on AWS and think it would make a
good case study, I would love to hear about it.

ETA: Oh, and CloudFormation. Lots and lots of CloudFormation. I can't stress
how useful it is. Our infrastructure configuration lives in the same github
repo as our code and Puppet config, and is deployed using the same Fabric
process. This makes the sysadmin in me very, very happy.
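
To give a flavour of what lives in that repo, a minimal CloudFormation template declaring one instance behind an ELB might look like the fragment below (this is an illustrative example, not Fashiolista's actual stack; the AMI ID is a placeholder):

```json
{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Description": "Illustrative stack: one ELB in front of one instance",
  "Resources": {
    "WebInstance": {
      "Type": "AWS::EC2::Instance",
      "Properties": {
        "ImageId": "ami-12345678",
        "InstanceType": "m1.small"
      }
    },
    "WebELB": {
      "Type": "AWS::ElasticLoadBalancing::LoadBalancer",
      "Properties": {
        "AvailabilityZones": ["eu-west-1a"],
        "Instances": [{"Ref": "WebInstance"}],
        "Listeners": [{
          "LoadBalancerPort": "80",
          "InstancePort": "80",
          "Protocol": "HTTP"
        }]
      }
    }
  }
}
```

Because it is just JSON, it diffs and reviews like any other code in the repo, which is a big part of the appeal.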

------
joshstrange
Currently I am using Elastic Beanstalk because it means I don't have to worry
about machines or scaling. Using the configuration
(<http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/customize-containers-ec2.html>)
files I am able to install any extra packages I need or run commands. Once I
am more confident that our EC2 config will not change, I will create an AMI to
launch new instances from, so that I don't have to wait for the server to
install the packages I need.
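
For anyone who hasn't seen those configuration files, a small illustrative example (not my actual file) dropped into the app's `.ebextensions` directory installs extra yum packages and runs a command at deploy time:

```yaml
# .ebextensions/01-extras.config -- illustrative example; the package
# names and command are assumptions, following the "customize
# containers" documentation linked above.
packages:
  yum:
    git: []
    ImageMagick: []
commands:
  01_set_timezone:
    command: "ln -sf /usr/share/zoneinfo/UTC /etc/localtime"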

I run two environments for my app (develop and production). My production
branch is master; I work on a "develop" branch and push those changes to EB to
test before merging them into master.

~~~
13rules
How are you updating your master branch? Are you just doing a 'git aws.push'?

I've read that this causes a little bit of downtime, and AWS's recommendation
for zero downtime is to create a new environment and then switch the CNAME to
point to the new environment once it is ready... Seems like a bit of an ordeal
for pushing out changes.

Thoughts?

------
ryanfitz
Recently, I switched to pre-baking AMIs and then launching those images with
auto scaling groups. This has simplified and sped up our deploys substantially
(and reduced our costs); we can go from code commit to deployed on production
in about 90 seconds.

Netflix has talked about a similar approach. The basic process: keep volumes
created from your existing AMIs mounted on your build box. When you kick off a
build, it checks out your code, compiles and installs it onto one of those
volumes, and runs Puppet in a chroot on the volume to do any needed
configuration. Finally, you unmount the volume and create a fully bootable AMI
from it. I scripted this up in Python, and the baking process takes around 60
seconds.
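
The sequence can be sketched as a small orchestration function (the shell commands, paths, and device names below are illustrative assumptions, not my actual script; the command runner is injected so the flow can be exercised without a build box):

```python
# Hedged sketch of the AMI pre-bake flow described above. Every command
# string here is a placeholder; a real script would also handle errors
# and would finish by snapshotting the volume and registering the AMI
# through the EC2 API (e.g. via boto).

def bake_ami(run, volume_dev="/dev/xvdf", mount_point="/mnt/bake"):
    """Run the bake steps in order via `run`; return the command list."""
    steps = [
        # 1. Mount a volume cloned from the current base AMI.
        "mount %s %s" % (volume_dev, mount_point),
        # 2. Check out / install the application onto the volume.
        "git -C %s/srv/app pull" % mount_point,
        # 3. Run Puppet inside a chroot for instance-independent config.
        "chroot %s puppet apply /etc/puppet/manifests/site.pp" % mount_point,
        # 4. Unmount; the volume is then snapshotted and registered.
        "umount %s" % mount_point,
    ]
    for cmd in steps:
        run(cmd)
    return steps
```

Because nothing happens at instance boot beyond runtime configuration, launch time stays close to the raw EC2 boot time.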

------
citizenkeys
Storing static files on a micro EC2 instance saves me money versus storing the
images on S3. S3 charges for GET requests and PUT requests, so if you're
serving a lot of static files, like images, you'll go through that free tier
very quickly and start racking up charges. But if you store those images on
your EC2 instance and serve them with Apache/httpd/whatever, you don't pay for
any of the requests, just the bandwidth. And if you have low traffic and
optimize your images, it's easy to stay within that free tier and pay
absolutely nothing.
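
To make the tradeoff concrete, a back-of-the-envelope calculation of the request charges looks like this (the per-request prices are assumed placeholders in the rough ballpark of S3's historical pricing; check the current price list before relying on them):

```python
# Back-of-the-envelope S3 *request* cost. Bandwidth is charged either
# way, so only the request fees differ between S3 and self-serving.
# The prices below are ASSUMPTIONS, not current S3 pricing.

PRICE_PER_10K_GET = 0.004   # USD per 10,000 GET requests (assumed)
PRICE_PER_1K_PUT = 0.005    # USD per 1,000 PUT requests (assumed)

def monthly_request_cost(gets, puts):
    """Request charges only, in USD, for one month of traffic."""
    return (gets / 10000.0) * PRICE_PER_10K_GET \
         + (puts / 1000.0) * PRICE_PER_1K_PUT

# e.g. 5M image GETs and 10k uploads in a month:
cost = monthly_request_cost(5_000_000, 10_000)
```

At these assumed rates that works out to a couple of dollars a month, which is small in absolute terms but nonzero, whereas requests served by your own Apache on an already-paid-for instance cost nothing extra.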

------
dinkumthinkum
Careful, EC2 is not all chocolates and strawberries. It depends on what you
are doing. If you are not using the micro-instances and you are not really
using "elastic" capabilities, you could spend much, much more on Amazon than
you would anywhere else. Also, if you love I/O performance ... well ... Amazon
may not be right for your projects. :)

------
tim800
I'm just getting started also. I really like the spot pricing. For most
instance sizes, the spot price has never even come close to what you'd
normally pay on demand.

------
jbobes
Like this <http://cloudiff.com/demo> :)

~~~
jbobes
So can yours..

