

Ask HN: AWS or dedicated server? - bkrausz

So there seem to be some major trade-offs between AWS and dedicated servers, the most obvious of which is that AWS seems much more difficult to configure, while being easier and cheaper to scale.

Considering I only have experience setting up dedicated servers, I was wondering if someone with experience setting up both could comment on whether the difficulties of using AWS outweigh the benefits.
======
mdasen
AWS, no hesitation.

AWS isn't that hard to configure. ElasticFox puts a nice GUI on it, and while
it will take a short while to get used to the AWS way of doing things, you're
better off.

With AWS, you get S3's spray-files-everywhere storage, EC2 provides lots of
RAM and CPU muscle, EBS provides RAID-level reliable persistent storage for
EC2 that can be backed up to multiple data centers with a single API call, and
CloudFront even serves your static files from 12 different locations around
the world, keeping your latency very small. If you need more servers, no
problem: just wait a few minutes for them to boot. If you need more bandwidth,
it's automatic. If you need more storage, S3 is effectively infinite and EBS
can always give you more (you can even stripe the drives so that you have
terabyte after terabyte of storage as a single volume).

Dedicated servers have little upside. You're relying on physical hardware in a
very acute fashion. While AWS runs on real hardware, there's an abstraction
level which helps a lot. Let's say you're small and want a single box. That
box fails: you call your host and get a new one in a couple of hours, you
restore from backups for maybe another couple of hours, and you're back
online. Of course, many people don't test their disaster recovery scenarios
that well and run into unexpected problems. With AWS, you simply boot another
machine off that image and you're good. Worst case, your EBS volume gets
trashed and you say, "hey, S3, rebuild that EBS drive". Easy by comparison.

Real boxes are a pain. You have to deal with RAID, backups, how fast your
company can provision new boxes, bleh! AWS (or even Slicehost and Linode)
isolates you from a lot of that mess. There's a reason virtualization is the
hot new topic.

AWS isn't that hard to use. It's definitely different, but it makes so many
other things so much less painful. If you want some of the benefits of AWS
with a "simple as dedicated" feel, try Slicehost. You can get instances with
as much as 15.5GB of RAM, and they just give you the instance with your choice
of Linux on it. From there, you can install Apache, MySQL, and whatever else.
And you get benefits like cheap and easy backups: they just store an image of
the machine. Then, if you need more capacity, you can boot one of those images
as a new instance, and now you have another server. If one of their servers
fails, they can easily migrate your instance to another box. RAID10 is already
set up. Easy.

If you're worried about AWS' management being a little different, don't worry
too much. It's not that bad once you start using it - just a tad hard to
imagine without trying it. If you're still worried, Slicehost will give you
instances that will work like you're used to dedicated hosting working, but
with many of the advantages of AWS.

~~~
raghus
I use and recommend Slicehost.

I'll add just one caveat: SH offers backups only for slices up to 2GB. For the
4/8/15G slices, there is no backup option. I am not sure why, though.

~~~
davidw
All other things being equal (which I don't know for a fact), Linode is
significantly cheaper than Slicehost, considering the differences in memory
usage between x86 (32 bit) and x86_64:

<http://journal.dedasys.com/2008/11/24/slicehost-vs-linode>

------
tdavis
I've been setting up dedicated machines for years and looked into switching to
AWS for TicketStumbler. I determined that it was actually considerably more
expensive to obtain the same amount of resources (i.e. cpu/ram) on AWS because
the pricing scheme doesn't lend itself well to having many always-on images.

In the end I chose to run Xen on top of dedicated hardware, which has
essentially bought us the best of both worlds: simple scaling and low costs.
Granted, it would probably take me a couple hours to start up a dozen more VMs
(I'd need to requisition new hardware) as opposed to a few minutes and S3 is
still cheap for mass storage, but neither of these points had any relevance to
our situation.

As mentioned by others, it all comes down to what your project needs.

~~~
mdasen
This is a very serious question as you clearly know what you're talking about
from experience: how do you find it cheaper to run dedicated hardware? The
reason I ask is because I've priced out 4-core servers with 16GB of RAM at
SoftLayer and ThePlanet and they come out to around $700/mo with 2 drives and
RAID 1. Amazon charges $750 for an Extra-Large instance (15GB RAM).

You may not want to delve too deeply into what you're paying for, but it just
seems like AWS is charging rates similar to ThePlanet and SoftLayer, which are
the two dedicated hosts that seem to have the most credibility in the
community. Even if you were provisioning your own 1.7GB instances on a larger
dedicated box, you would still only fit about 8 or 9 of them in 16GB of RAM
(leaving room for Xen and such), which would make it the same price as AWS.
The only thing I can see is that the included bandwidth could save some money.
Maybe I'm just not good at finding dedicated server deals.

~~~
apinstein
We've been running these numbers ourselves lately as well.

We find AWS much more expensive.

For instance, we bought a Dell 2950 / 2x quad-core / 12GB RAM / 4x500GB RAID
5 w/hot spare for ~$4500. We have it in a colo where it costs about $150 a
month for the space + bandwidth.

This is about equivalent to the $750/mo extra-large instance. There will also
be additional AWS fees for transfer, storage, etc, but we'll go with $750 for
simplicity.

That's a $600/mo premium, or $7200 a year. So I pay for the hardware within 8
months and after that it's $600 a month savings.

There is a lot of value in being able to provision extra servers quickly, use
CloudFront, etc., but it comes at a high price, IMO.
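In Python, the back-of-the-envelope math (using the figures above; the $750 is the simplification already noted, with AWS transfer/storage fees ignored):

```python
# Break-even: colo'd Dell 2950 vs. a $750/mo AWS extra-large instance.
hardware_upfront = 4500   # Dell 2950, 2x quad-core, 12GB RAM, 4x500GB RAID 5
colo_monthly = 150        # colo space + bandwidth
aws_monthly = 750         # extra-large instance, simplified as above

monthly_premium = aws_monthly - colo_monthly         # $600/mo premium for AWS
payback_months = hardware_upfront / monthly_premium  # hardware paid off in 7.5 months
yearly_premium = monthly_premium * 12                # $7200/yr after that

print(monthly_premium, payback_months, yearly_premium)  # 600 7.5 7200
```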

------
jawngee
Hosting on EC2 is stupid; it's way more expensive than Linode or Slicehost. I
accidentally left one of their extra-large instances running for a month, and
it cost almost $600. You can get real metal for those prices.

That said, if you have no problems setting up dedicated servers, then you
won't have any problems with EC2. Use RightScale's free interface to manage
instances; the rest is what you already know.

~~~
anotherjesse
Hosting on EC2 might not be cost effective when you only need one server, but
once you start needing multiple servers the advantages quickly add up.

I'm currently a customer of Amazon, SoftLayer, Serverbeach, and a few cheap
VPSes elsewhere. I find that each has its advantages.

Need cheap bandwidth? Nothing beats Serverbeach (YouTube ran their CDN on
Serverbeach boxes until they went to Google). If you need a small number of
dedicated boxes, SoftLayer's support is worth the extra money (I usually get
SoftLayer boxes for about $50 over Serverbeach prices, and SoftLayer includes
a private network in that cost).

EC2 is great if you are going to be scaling up/down or have interesting
synergies with S3. (filesystem snapshotting, free bandwidth, ...)

Small VPSes are great when you are building to prove an idea - a couple bucks
a month while you have no traffic.

I also find EC2 preferable for one-time tasks: spin up 10 instances to do a
massive amount of computation or to do load testing.

~~~
timf
Do you have any experience with 10tb.com (or heard anything about them)? Their
listed bandwidth prices are better than Serverbeach's.

------
aristus
I've done AWS, dedicated, and colo. Each one has its own tradeoffs.

AWS is daunting at first -- but then so is Debian. Once you figure out the
keys thing and your base image it's fairly easy (also see ElasticFox,
S3Browser). You might as well learn it, even if you stick with dedicated
hosting for other reasons.

Things not to sneeze at:

* elastic: start up a dozen servers in a few minutes
* free access to S3 storage
* crazy awesome pay-as-you-go bandwidth (250mbps)
* expensive for the CPU/RAM you get
* virtual disks can be slow on random seeks
* sorry, no cPanel (though see RightScale)
* poor locality of servers

~~~
mdasen
So, just to comment on the disk reads: the local, non-persistent disks don't
offer great speeds (you're right). However, that shouldn't matter too much
now. All of your application code should fit in memory once it's been read off
the disk and so you shouldn't be hitting the local disks much after boot.
Static files should be on S3 and databases should be stored on EBS.

EBS has great performance. RightScale noted that they got over 70MB/s with
sysbench and over 1000 I/O operations per second. If you want more
performance, you can even stripe across EBS volumes.

EBS really helped EC2's viability a ton. EC2 users now have access to cheap,
reliable, and fast storage.

~~~
aristus
I'll have to try it out, thanks. A quick search doesn't turn up any bonnie
tests or similar, so I'll do them and put them online.

(edit) this suggests that EBS is slower than the virtual disk for
open/seek/write/flush: [http://bizo-dev.blogspot.com/2008/11/disk-write-performance-...](http://bizo-dev.blogspot.com/2008/11/disk-write-performance-on-amazon-ec2.html)

------
bjclark
I'd say that if it isn't obvious why you would need AWS, then you don't need
AWS and should go with a standard dedicated server provider.

Configuration should be the least of the reasons to make the decision. Many
other factors matter far more than configuration.

------
delano
As always, the right solution depends on what you're building.

"Dedicated server" sounds like you're asking about one or maybe a few
machines. If that's the case, you're better off with dedicated hardware from a
vendor you're familiar with.

AWS is an entirely different way to build an application infrastructure. You
don't keep state on any one instance because you plan for redundancy. Rather
than have 1 or 2 front-end machines, you're running 1 or 2 load balancers in
front of N front-end machines. If something goes awry with a front-end
instance, you take it out of the loop and start another to add to the loop
without downtime. It's the kind of power that up to now only companies with
large IT budgets have enjoyed.

------
jeremyw
I built a 140-machine farm at Softlayer, ~6 months ago. Here are some
observations (that may not mean much south of 10 boxes.)

a) AWS feature advantages (mostly instascale, in our case) fade with the high
cost of every additional dedicated box.

b) It's nice to virtualize the map of services to boxen, but at some level of
scale, each box has a single task and you want the ability to run it _flat
out_. If so, you have to decide if Xen overhead is worth labor somewhere else,
and alternately, if Xen source compatibility holds you back from new kernel
features. (Pick your VM technology.)

c) We still wanted instascale with our own software distribution, so in
something less than a week I hand-tooled a pxe-based provisioner that
initialized from a live exemplar (gentoo, whee). It took some work to find the
right propeller heads at Softlayer, but eventually we understood each other
and the bootp listeners got turned off for our subnets. "Insta" became 2-hour
hardware activation, which was ok for us. You might consider puppet in the
same way (except for Gentoo's long from-scratch build time.)

d) Virtually all provider admin is automated and this still works and makes
sense at scale. The SOAP API that backs it is not quite fully baked, but is
very useful. Paired with box-level IPMI pokes, you can replicate AWS control
over hardware.

e) Whatever AWS provides, at scale you still have a custom setup at some level
of abstraction, so putting in place exactly the right hardware saves labor.

f) Substantial discounts can be had.

------
aquaphile
AWS -- we run our entire insurance company, and its multiple applications,
using the EC2, EBS, S3, SQS, and FPS services provided by AWS. I think the
only AWS service we haven't used to date is Mechanical Turk. We highly recommend AWS:
we started with dedicated hardware years ago, migrated to an excellent virtual
host, and then finally moved to AWS last year when they implemented EBS and
provided the SLA. Once you have multiple servers, it makes increasing
financial sense to use AWS.

------
eelco
You should definitely try out AWS. It's a bit of work, but the docs are good
and I think it's a useful experience to at least know a bit about how it
actually works.

Since you pay for AWS by the hour (and bandwidth), you can more easily switch
to a dedicated server from AWS than the other way around.

If your application doesn't need to deal with peaks of traffic or potentially
scale up fast, going with a (couple of) dedicated servers is probably a more
cost effective option.

If you do need to deal with peaks or fast scaling, also check out
<http://scalr.net/>

------
VonGuard
Two more things that AWS has that no other hosting company offers: queueing
and billing. The Amazon Simple Queue Service handles all those messages you
pass from virtual machine to virtual machine. Companies far larger than Amazon
have spent years working on message queues for proper scaling (RabbitMQ,
AMQP), and Amazon just has one there that works for you right off the bat. It
makes scaling a lot easier.

Also, they have a bill-pay system for charging credit cards for access to your
systems. Just another boring bit of code you don't have to write yourself,
included free with the AWS service.
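None of the following is the SQS API; it's just a local sketch of the producer/consumer pattern SQS implements, using Python's standard library. With SQS, the queue lives in AWS rather than in-process, so producers and consumers can be on different machines:

```python
import queue
import threading

work = queue.Queue()   # stand-in for the hosted queue
results = []

def worker():
    # A "consumer machine": pull messages until the sentinel arrives.
    while True:
        msg = work.get()
        if msg is None:  # sentinel meaning no more work
            break
        results.append(msg.upper())  # stand-in for real processing
        work.task_done()

t = threading.Thread(target=worker)
t.start()

# The "producer machine" enqueues jobs for the worker.
for msg in ["resize image 1", "resize image 2"]:
    work.put(msg)
work.put(None)
t.join()

print(results)  # ['RESIZE IMAGE 1', 'RESIZE IMAGE 2']
```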

~~~
alexisr
Yes, but see here for an example of a smarter solution: running RabbitMQ on
AWS to get the best of both worlds. Link to the AWS blog:
[http://aws.typepad.com/aws/2008/12/running-everything-on-aws...](http://aws.typepad.com/aws/2008/12/running-everything-on-aws-soocialcom.html)

------
inovica
What exactly do you want to do? Surely you need to ask that question before
deciding which route to go (or at least let us know and we'll try to help). We
use AWS extensively for scaling up and down and it is AMAZING for this. We
couldn't do what we now do without it -- well, not without a huge amount of
investment anyway. It enables us to sell our products at a decent price point.
If you just want to host websites, then a dedicated server might be your
thing. How much bandwidth are you going to consume? Sometimes a dedicated
account will give you a better deal on bandwidth.

------
lsc
I would say "do both". AWS is awesome if your site is running slow 'cause you
are out of capacity, or you otherwise need a box right now or for only a
short period of time: spin up another instance and be done with it. But for
the boxes you leave on all the time, you are probably better off buying and
co-locating your own server. Usually the capital cost difference is made up in
only a few months.

The times when a Xen host makes long-term sense are when you want a box that
is smaller than optimal. Right now, I buy dual quad-core Opterons w/ 32GB RAM
and 2x1TB disks... assuming I am OK with moderate-speed, low-power Opterons,
it costs about $3K up front. Hosting, say, another $150/month. That's a whole
lot of EC2 instances. At those prices, well, AWS is pretty expensive over the
long term.

But yeah. AWS is awesome for the servers you don't need on all the time, or
servers you don't have time to setup (or your whole ball of wax if your
margins are such that paying more for computers won't break your business
model.)

------
bmelton
Base hosting for an AWS small image (if that's still the correct terminology
-- it equates to about a 1.8GHz Xeon with 512MB RAM or so) is $72.50 a month
in machine time. That's to keep the machine running only, not counting
bandwidth. Their bandwidth pricing is confusing to me, so I can't really speak
to that, and I've only been dealing with myself and the machines so far (no
users), so I can't speak to how that works out at all.
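Assuming the small-instance rate is $0.10/hour (an assumption; check current pricing), the machine-time figure falls out directly:

```python
# Machine-time cost of one always-on EC2 small instance, assuming a
# $0.10/hour rate. Bandwidth and storage are billed separately.
hourly_rate = 0.10
hours = 24 * 30               # a 30-day month
monthly = hourly_rate * hours # ~$72, in the ballpark of the $72.50 above

print(round(monthly, 2))
```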

That said, $75 a month or so can get you a small dedicated server in some
places that includes a fixed amount of bandwidth, or more predictably priced
hosting at slicehost or somewhere similar.

If your application doesn't need to scale, then AWS probably doesn't make
sense. If it does, then it does.

As an AWS noob myself, the only confusion I had was with the very initial
setup (using the provided keys to authenticate and whatnot) -- and with the
initial server configuration. The major differences you'll need to be aware of
are as follows:

\- The AMI (basically just a virtual machine image) is static. You can't save
files to it and expect them to exist after a reboot. That took a second to
get my head around, after configuring Apache, rebooting, and wondering where
it all went.

\- Set up your base OS, then save the AMI. It was confusing to me figuring out
exactly what needed to go where, and remapping my server between 'fixed' and
'dynamic' content and making sure that they were in appropriate places. This
includes your web server configurations, disk mounts, /var/ directories, etc.
User generated data, SQL data, and (probably) your website data will be stored
on either an elastic block or to an S3 bucket. The important thing to note
here is that you configure your OS to look how you want it to be every time
you wipe it clean. Perhaps you put your web application on it, perhaps you
don't. I could see using AMIs as a sort of version control for your apps, but
I don't know your use case.

\- The elastic IPs threw me. Don't release them on production instances. lol.
Effectively, it maps an IP address to your machine virtually, which means it
can be moved around. Your DNS points at the EIP which can be a single apache
instance, or later, a load balancer -- all configurable within a couple
minutes.

Other than that, it took me less than $10 worth of AWS resources to configure
a couple servers, deploy my app and get it configured to how it would be in
the real world if I were to migrate, so you should definitely check it out.
There's no major upfront commitment like there is with dedicated hosting, so
there's really no excuse not to familiarize yourself with it.

Also, you definitely want the elasticfox plugin if you're going to do anything
with it. I'd point you at the following resources, which got me up and running
within a few hours.

\- ElasticFox Plugin -
[http://developer.amazonwebservices.com/connect/entry.jspa?ex...](http://developer.amazonwebservices.com/connect/entry.jspa?externalID=609)

\- ElasticFox Owner's Manual (PDF) -
[http://ec2-downloads.s3.amazonaws.com/elasticfox-owners-manu...](http://ec2-downloads.s3.amazonaws.com/elasticfox-owners-manual.pdf)

\- Configuring MySQL to use ElasticBlock storage -
[http://developer.amazonwebservices.com/connect/entry.jspa?ex...](http://developer.amazonwebservices.com/connect/entry.jspa?externalID=1663)

~~~
mdasen
Actually, a small AWS image comes with 1.7GB of RAM. That's a big difference.

In terms of the processor metric, that's harder to gauge. Right now, Amazon
says one EC2 compute unit is roughly equivalent to a 1.0-1.2GHz 2007 Xeon or
Opteron (or a 1.7GHz 2006 Xeon, which was their original documentation).

Think of it this way: Amazon is putting you on a beefy server with some other
people. I'd guess these servers are 4-core boxes running at around 2GHz+ with
16GB of RAM. So, with the Extra-Large instance, you're basically getting the
whole box: 15GB of RAM and 4 cores with 2 compute units per core (roughly
2-2.4GHz per core). With the Large instance, you're getting half of the server
(2x 2GHz Xeon processors), and with the Small instance you're probably getting
one core at half speed.

And that's really as much as most people will need, especially since I'm
guessing there's a bit of bursting ability to the CPU capacity.

Hope that helps make Amazon's CPU situation a little more understandable.
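The fractions-of-a-box reading above can be written down directly. The instance specs here are Amazon's published numbers for the three sizes; the host-box guess remains just that, a guess:

```python
# Published EC2 instance specs (RAM in GB, EC2 compute units), read as
# fractions of a hypothetical ~16GB, 4-core host box.
instances = {
    "small":       {"ram_gb": 1.7,  "compute_units": 1},
    "large":       {"ram_gb": 7.5,  "compute_units": 4},
    "extra-large": {"ram_gb": 15.0, "compute_units": 8},
}

# An extra-large looks like roughly the whole guessed box, a large is
# about half of it, and a small is about an eighth.
ratio = instances["extra-large"]["compute_units"] // instances["small"]["compute_units"]
print(ratio)  # 8
```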

~~~
bmelton
Ah yes. Thanks for the clarification on the numbers; yours sound more accurate
(and more generous), and do a fair job of making AWS even more competitive
than I'd thought it was.

------
chris123
From the AWS blog: "CloudFront Management Tool Roundup":
[http://aws.typepad.com/aws/2009/01/cloudfront-management-too...](http://aws.typepad.com/aws/2009/01/cloudfront-management-tools.html)

