
The cloud vs. dedicated servers - fallenhitokiri
http://www.screamingatmyscreen.com/2012/12/the-cloud-vs-dedicated-servers/
======
jeremyjh
I think if you are spending more than $100 a month on VMs you should seriously
consider co-locating if you have the skills to support it.

For my side projects, personal websites and general-purpose "whatever" I'm
using an inexpensive colo provider (Colo@). For $50 I get 10 Mbit/s @ 95%
(basically, burstable to 100 Mbps for up to 5% of the month). That's about 3 TB
of data transfer, which alone would cost hundreds of dollars at EC2. Of course,
it is also way more data than most people would use.
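
Back-of-the-envelope, for anyone who wants to sanity-check the 3 TB figure (a
quick sketch assuming a 30-day month):

```python
# Sanity check: a 10 Mbit/s committed rate, sustained for a 30-day month.
mbps = 10
seconds = 30 * 24 * 3600                 # 2,592,000 s
total_bits = mbps * 1_000_000 * seconds
terabytes = total_bits / 8 / 1e12        # bits -> bytes -> TB
print(f"{terabytes:.2f} TB")             # ~3.24 TB
```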

I bought the server used on eBay for $365. It's a dual Xeon L5420 (8 hardware
cores) with 24 GB of RAM. I presently run seven or eight VMs under KVM on it.
The images are pretty portable, and I back a couple of them up regularly to S3;
I could recover to an EC2 instance if I lost the box.
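
The backup itself can be as simple as pushing the disk image to S3. A minimal
sketch with boto3 (the bucket name and image path here are made up):

```python
import boto3

s3 = boto3.client("s3")

def backup_vm_image(image_path: str, bucket: str, key: str) -> None:
    # upload_file transparently uses multipart upload for large images
    s3.upload_file(image_path, bucket, key)

# Hypothetical paths; a qcow2 image is easy to reattach under KVM or
# convert for EC2 later.
backup_vm_image("/var/lib/libvirt/images/web01.qcow2",
                "my-vm-backups", "web01/web01.qcow2")
```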

I monitor this with an EC2 micro instance and have not had any network outages
there in 6 months. If I wanted to run a production site there I would need at
least a second machine for redundancy; that would be another $30-40 a month.
I'd probably also replicate in real time to a small EC2 instance, so that would
cost a little (though incoming bandwidth to EC2 is free) - I don't do that now
as I don't have real "production" data.
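
The monitoring side needs nothing fancy; a cron-driven check from the micro
instance along these lines would do (the hostname is a placeholder):

```python
import socket

def is_up(host: str, port: int = 22, timeout: float = 5.0) -> bool:
    # Treat a successful TCP connect as "the box is reachable".
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if not is_up("colo-box.example.com"):
    print("ALERT: colo box unreachable")  # swap in email/SMS here
```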

Not everyone should do this, but if you like servers you should consider it.
Another advantage here is that I own the server. If I get into a billing
dispute or other issue with my provider, they can take me off the network but
they cannot hold my server hostage. They also cannot log in to the box - any
attempt at social engineering is pretty well doomed.

On the other hand, on the two occasions I've needed remote hands and the one
time I needed a KVM, they responded in less than fifteen minutes. The level of
support you can get with the right provider is mind-blowing.

~~~
ericcholis
I completely agree, having co-located a few projects in my day as well as used
dedicated hardware through Rackspace.

One of your key points is what turns some people towards more managed
services: "skills to support it". I'd add time to that factor as well.

If I had the support staff, I would co-locate in a second. Heck, one of our
previous locations was next door to a Level 3 co-location facility. It was
pretty nice to be able to walk 10 feet and access your hardware.

~~~
rhizome
Practically none of the AWS/Heroku-type startups most publicized on HN are
spending money hiring those skills.

~~~
Xylakant
There's a distinction to be made between AWS and Heroku. If you're hosting on
AWS you still need someone who's able to maintain a server. With Heroku, you
don't. So the advantage of AWS over colo is mainly scalability and the reduced
need for expensive hardware. Depending on your app's load behavior, spikes can
require that you keep 10 times as much hardware as you'd need on average;
that's where AWS really shines. But in the end, the instances you buy at AWS
are virtual servers that need a real admin.
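
To put toy numbers on the spike argument (every figure here is hypothetical):

```python
hours = 30 * 24
rate = 0.50                 # hypothetical $/server-hour
baseline, peak = 2, 20      # the "10x" case
spike_fraction = 0.05       # spikes cover ~5% of the month

provisioned_for_peak = peak * hours * rate
elastic = (baseline * hours
           + (peak - baseline) * hours * spike_fraction) * rate
print(f"always-on: ${provisioned_for_peak:.0f}/mo, elastic: ${elastic:.0f}/mo")
# always-on: $7200/mo, elastic: $1044/mo
```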

~~~
rhizome
It's a distinction without much of a difference when most are deploying via
Rubber, Vagrant, Chef/Puppet/cfengine, etc. to maintain a policy of
programmer-deployers. Of course this isn't a rule, but it's prevalent.

~~~
Xylakant
No, not true. Server maintenance is a job that requires tracking which services
are deployed on a server and which ones need security updates, knowing how to
correctly configure a firewall, and lots of other stuff. Deployment via
Rubber/Puppet/Chef etc. only changes how you get the needed packages and
configuration onto the server. It doesn't magically tell you _what_
configuration you need on a system.

Nitpick: Vagrant is not a deployment system. It's an awesome tool, but it
falls back to puppet/chef for the actual configuration.

~~~
rhizome
I know what server maintenance is.

~~~
Xylakant
Then I don't get your point. One of the major advantages of Heroku over raw
AWS is that you don't need to do the server maintenance - it's all done for
you. And yet you say that the distinction blurs when people use Puppet - which
is not true.

------
gtaylor
I don't intend to sound harsh, but comparisons like these are absolutely
useless. It's simply incorrect to make blanket statements on the pros and cons
for each service without some context. The benefits and drawbacks are going to
change depending on the characteristics, purpose, and needs of the
application. This post makes a "one-size-fits-most" generalization, which
makes it almost entirely useless.

What kind of application are we trying to deploy? What is your budget? What is
the traffic level? Is performance a top priority? How many sysadmins do you
have at your disposal, and how many are you willing to add? What kind of
sensitive data are we storing/transmitting?

The answers to these questions drive the selection process, and end up
altering the importance of each pro and con the author mentioned. Depending on
your application, some pros and cons are eliminated, and new ones added.

Please please PLEASE, for the love of all things good, don't use an article
like this as the sole basis for selecting providers. Think about what you
need, ask questions, and craft your search to your purpose. Don't go pick
method X because other people say it's great (for their purpose).

~~~
fallenhitokiri
Author here. You are right that I should have given a better overview of what
exactly I am doing (the introduction and requirements are a bit short).

Some of the pros and cons can change; others, like availability of features
and support, don't. But you are right that there is no "one solution fits
everyone" plan.

~~~
gtaylor
> Some of the pros and cons can change; others, like availability of features
> and support, don't.

But they do change. Which features are important depends on your application.
The amount of support you need, and from whom, changes as well.

What I'd love to see is a series of articles that helps walk people through
the platform selection process from the perspective of a few sample
applications/organizations. I feel that would be really constructive.

------
gmac
This issue is near the top of my list at the moment.

I currently spend $100/month on 4 Linodes (3 x 512MB, 1 x 1GB). I love Linode
-- efficient support, and their London datacentre has been utterly rock-solid
for me for several years -- but I'm beginning to think that, for me, it's the
worst of both worlds.

On the one hand, I could move all 4 servers to a dedicated Hetzner box (EX6 or
EX6S) running Xen, for a small setup fee and similar monthly cost, and get 4
or 8GB ECC memory _on each one_. This has a slightly higher sysadmin burden (5
servers to administer instead of 4, slightly higher risk of disk failure), but
not that much. And the move is relatively painless, because I can directly
transfer the disk images with dd over SSH.
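
The transfer itself is the classic dd-over-SSH pipe; here it is wrapped in a
small Python driver as a sketch (device paths and hostname are placeholders):

```python
import subprocess

SRC = "/dev/xvda"                          # source block device
DEST = "root@new-box.example.com"          # target dedicated server
DEST_PATH = "/var/lib/xen/images/guest1.img"

# Equivalent shell: dd if=/dev/xvda bs=4M | ssh root@... "dd of=... bs=4M"
reader = subprocess.Popen(["dd", f"if={SRC}", "bs=4M"],
                          stdout=subprocess.PIPE)
subprocess.run(["ssh", DEST, f"dd of={DEST_PATH} bs=4M"],
               stdin=reader.stdout, check=True)
reader.stdout.close()
```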

On the other, I could move the services to Heroku, probably pay a bit more,
and essentially stop doing any sysadmin. This is superficially attractive...
but moving a load of old things to Heroku isn't straightforward, and that
probably rules this option out.

~~~
larrys
"I could move all 4 servers to a dedicated Hetzner box "

How do you deal with hardware failure when you have a dedicated box at
Hetzner? Specifically, if/when something fails, are there spares, etc.?

~~~
mootothemax
_How do you deal with hardware failure when you have a dedicated box at
Hetzner? Specifically, if/when something fails, are there spares, etc.?_

Just to play devil's advocate, it's not unknown for large VPS providers to
have major issues. This is something you need to consider _regardless_ of
whether you're using dedicated or VPS.

~~~
larrys
"This is something you need to consider regardless"

I'm not talking, though, about the separate issue of proper backup procedures.
I mean: if you have a server racked at Hetzner (or elsewhere), what is your
"plan B" when there is a hardware failure?

In the case of a server that I just racked somewhere, I will purchase a supply
of the parts most likely to fail (fan, hard drive, board, power supply, etc.)
so that they are available and can be replaced quickly. I know that some
providers take care of this for you, so if the hardware (your hardware) fails
you are back up and running quickly.

In the case of a VPS, by contrast, it can generally be assumed (it would be
nice if there were a way to verify this, but I'm guessing there isn't) that
they have taken care of and planned for hardware issues and spares and have a
strategy. Of course, if they haven't, you have a big problem. At least with
your own hardware you can plan accordingly and ensure a better outcome.

There are other issues as well. If the colo facility is close to you, do you
keep the spare parts yourself or do you leave them on site? The answer depends
on many factors, such as whether there is security where the parts are kept
and who has access to them. If not, it's better to keep the spares yourself
and drive over with them, even in the middle of the night.

~~~
TillE
You're talking about colo issues, not dedicated servers as originally
mentioned (the Hetzner EX6 package). It's simpler and sometimes even cheaper
to just rent the hardware and let the hosting company deal with all those
issues, while of course taking care of redundancy yourself.

------
stephengillie
As I'm looking at setting up a blog, website, and company, my inner nerd keeps
nagging me: "You could build it and host it all yourself". But I know I don't
need to.

I nearly majored in economics, and I've worked in a datacenter, so I know it's
simply more efficient to depend on hosted services. Yet I still want to set up
the whole stack. For me, it's a question of letting go and trusting the
services that others host and others use. And it's forgoing the pride of
"doing it all myself".

There simply isn't enough time to build _everything_ from scratch -- if you
build your own servers, you're sourcing HDDs and motherboards and power
supplies and other components. If you make motherboards, you're sourcing
copper and other raw materials. No single human is so tall as to pull copper
ore from the ground, pull silicon from sand, and move vertical enough to self-
produce a tablet or PC. Currently this takes several thousand humans.

------
alexkus
Don't forget hybrid solutions. I've done things in the past with:

a) co-location for the main DB servers (this lets you be very specific about
hardware choices: for RAID cards and SSDs, not just the preferred manufacturer
but the exact model) and for backup machines (we needed higher-density HDDs
than the hosting provider's choice of dedicated servers could supply)

b) some unmanaged dedicated servers for the core machines that don't have
specific hardware requirements (HTTP servers, memcached, Varnish). It's also
easier to slowly ramp up the number of these month on month.

c) virtual boxes spun up when required to handle spikes in load and then
canned when things go quiet again (a rough sketch of this follows below)

Even better if your hosting provider offers all three and can arrange a
private VPN between the sets of hosts so you don't get billed for your
'internal' bandwidth.
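
For (c), spinning capacity up and down can be a few lines against the
provider's API. A hedged boto3/EC2 sketch (the AMI ID, instance type and tag
are all placeholders):

```python
import boto3

ec2 = boto3.resource("ec2")

def add_spike_capacity(count: int = 2):
    # Launch tagged, disposable web-tier boxes during a traffic spike.
    return ec2.create_instances(
        ImageId="ami-0123456789abcdef0",    # hypothetical web-tier image
        InstanceType="t3.medium",
        MinCount=count, MaxCount=count,
        TagSpecifications=[{
            "ResourceType": "instance",
            "Tags": [{"Key": "role", "Value": "spike-web"}],
        }],
    )

def drop_spike_capacity():
    # "Can" the extra boxes once traffic quiets down again.
    ec2.instances.filter(
        Filters=[{"Name": "tag:role", "Values": ["spike-web"]}]).terminate()
```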

------
staunch
...and these kinds of issues, which I've faced myself many times, are why I'm
building Uptano. "Cloud" vs. "dedicated" vs. "co-located" are issues created
by the artificial separation of a few good ideas.

There's no reason you shouldn't be able to have dedicated-hardware
performance, instant deployability, and on-demand usage-based billing at costs
close to, or better than, co-locating it yourself, as I'm working to prove
with Uptano (<https://uptano.com>).

I really think server hosting is going to look very different in a few years.
We've not come very far in the past 5 years.

~~~
moe
In fairness, bare-metal clouds have existed for a while, e.g.
baremetalcloud.com, stormondemand.com and a few others.

That said, your offering strikes a nice balance in terms of price/performance.
What I'm missing is bigger profiles (64 GB RAM, please?) and information on
what CPUs and hardware you are using (blades?). "Compute units" are a terrible
metric; give me a model number so I can look it up on cpubenchmark.net.

~~~
staunch
Thanks for the feedback. Bigger hardware profiles are definitely coming (some
exciting profiles as well). CPUs vary a bit, but I added Passmark numbers.
Clarified that the servers are 1U rack-mount machines.

------
itsgettingcold
In my experience, Linode is the best roll-your-own, you're-on-your-own cloud
provider. Obviously they are aimed at the savvy, but the service is reliable,
cheap and easy to estimate costs for, simple to configure and expand, and has
pretty good documentation. Plus, it doesn't have the learning curve or
linguistic peculiarities of Amazon.

Regarding Rackspace, I've had good experiences with them when working at mid-
size and larger companies. Unfortunately, I've had the opposite experience
when functioning as a freelancer, working with startups, or as an entrepreneur
myself. Rackspace didn't even respond to sales inquiries. Initially I figured
this was a strangely repeated fluke, but other small companies and
entrepreneurs I've spoken to have reported exactly the same thing: they send
an inquiry to Rackspace or ask to speak with a sales engineer, and they get no
response. Nothing, zip, nada. I find that very strange, and I suspect RS no
longer wants to deal with the growing pains and frequent support requests of
startups, but it certainly makes the decision to stick with Linode or EC2 much
easier.

I don't have much experience with dedicated anymore, but I have repeatedly
heard good things about ServInt and SingleHop, and also good things about
Firehost as a managed cloud provider. I would love to hear others' opinions
and experiences with any of the aforementioned companies, though.

~~~
taligent
I really do not understand why people keep recommending Linode on here. Apart
from their woeful and disgraceful security policies, some of their data
centres, e.g. Fremont, are very unreliable.

I would recommend <http://www.webhostingtalk.com>, as you will find much
better options there for your specific needs.

~~~
thaumaturgy
I defended you the last time this came up; this time I think you're being
wholly unfair. A single incident -- severe though it was -- does _not_ make
"woeful and disgraceful security policies". And the _only_ data center that
they have that has _occasional_ issues, as far as I know, is Fremont, and it's
worth pointing out that Fremont has had _less downtime than AWS_ this year.

I use their Dallas and Newark data centers currently. I have had _zero_
downtime this year, which puts Linode at the head of the pack in terms of
reliability.

So if you don't understand why people keep recommending Linode, it's because:

1. The prices are fair;

2. The service is as reliable as anything else out there, and in some cases,
far more reliable;

3. The performance is good;

4. The support is blow-you-out-of-the-water fantastic;

5. The software (their management console) is pretty good;

6. There are very, very few complaints overall, _other than their handling of
the Bitcoin incident_.

I agree that they should have handled that incident differently, and that they
still haven't taken proper care of it. However, you're being otherwise
dishonest in your portrayal of Linode.

~~~
tedchs
Linode also offers native IPv6 support, and they will route you a /64 on
request.

------
devonbleak
Good comparison between the four. Rackspace has come a long way since we
evaluated them a few years ago (they wanted something like 24 hours to bring
up a new instance/server for us back then, so we ended up going with AWS).

Generally speaking, our biggest challenges with AWS have been storage (making
TBs of web content securely available to various autoscaling clusters) and
network I/O (especially across VPC/public internet boundaries).

We've actually found that AWS' pricing beats the costs of hosting internally,
especially once you look beyond raw server cost and factor in
power/cooling/manual labor/datacenter space/etc. And there are lots of
different options for monitoring your usage to avoid surprises (we're looking
into programmatic usage reports and New Relic for that, though we've been
there a couple years now so we have a good idea what our bills are going to
run each month).

As far as CDNs go, we get way better pricing from Level3 and Akamai than we
could from CloudFront or Rackspace, but our traffic patterns are more 95th-
percentile-friendly than most.

------
bdcravens
The issue with these comparisons is that they tend to be about VMs and storage
only. A modern application requires a lot of moving pieces. Setting up and
managing, say, a queue service has costs associated with it, which is where
something like SQS becomes a serious value-add.

------
jimwalsh
I totally dislike comparisons of dedicated hosting versus cloud, especially
when they don't factor in any of the costs of the support contract, hardware
replacements, etc. involved in supporting physical hardware.

He also mentions that there is no way to see what your next bill will be in
AWS. They offer an 'Account Activity' link that shows you the charges accrued
so far in the current month. That can be helpful when testing things.

I hope people who are new to setting up and supporting infrastructure do not
use comparisons like this to make the decision for them. There are far too
many variables not discussed in this article for it to be very valuable to
anyone.

~~~
fallenhitokiri
> He also mentions that there is no way to see what your next bill will be in
> AWS. They offer an 'Account Activity' link that shows you the charges
> accrued so far in the current month. That can be helpful when testing things.

Good to know, thank you. I was not sure about this feature, and when I asked
one of the sales engineers at the AWS event, they told me that this is just
not possible, especially if you want some detail.

------
halayli
A good balance I've found is to have a dedicated server with a standby AMI in
the cloud, and to switch over using DNS.
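
A minimal sketch of the switchover, assuming Route 53 and boto3 (the zone ID,
record name and IP are hypothetical):

```python
import boto3

route53 = boto3.client("route53")

def failover_to_standby(zone_id: str, name: str, standby_ip: str) -> None:
    # Repoint the A record at the cloud standby; keep the TTL short so
    # resolvers pick up the change quickly.
    route53.change_resource_record_sets(
        HostedZoneId=zone_id,
        ChangeBatch={"Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": name,
                "Type": "A",
                "TTL": 60,
                "ResourceRecords": [{"Value": standby_ip}],
            },
        }]},
    )

failover_to_standby("Z0HYPOTHETICAL", "www.example.com.", "203.0.113.10")
```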

What you pay for in the cloud is convenience and not performance.

------
ericcholis
I've got a con for the Rackspace list that in some ways conflicts with one of
its pros. Pricing is simple because there are so few choices for instance
performance. I would love to have more choice in instance performance beyond
memory-based tiers; I'd kill for a c1.medium analogue on Rackspace.

With that being said, I'm a loyal Rackspace customer and love their cloud
offering.

------
otterley
There's a very simple formula for figuring out whether self-hosting or cloud
hosting makes more sense.

Add a month's worth of colocation fees, capital depreciation and associated
labor costs. If it's less than your monthly cloud hosting bill, then it's time
to self-host.

And if you run your own firm and haven't figured out how to calculate capital
depreciation yet, it's time to learn. :)
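
A worked example of that test, with straight-line depreciation (every figure
here is hypothetical):

```python
colo_fee = 150.0           # $/month for rack space and bandwidth
hardware_cost = 7200.0     # purchase price of the servers
lifetime_months = 36       # depreciation schedule
labor = 400.0              # $/month of sysadmin time attributable

depreciation = hardware_cost / lifetime_months   # straight-line: $200/mo
self_host = colo_fee + depreciation + labor      # $750/mo

cloud_bill = 900.0         # current monthly cloud spend
print("self-host" if self_host < cloud_bill else "stay in the cloud")
```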

------
cincinnatus
This is a pretty thin 'comparison', with dedicated servers given only a
cursory mention and no analysis, and VPS not covered at all.

------
aioprisan
How much cheaper is Rackspace vs. Amazon CloudFront? In our experience, Amazon
also has more nodes for its CDN to push files to, and our CDN cost with 100k+
views a month is still under $3/month with either solution.

~~~
gtaylor
You can always still use CloudFront from Rackspace. It isn't bound to Amazon
EC2 VMs in any way. You get much better transfer speeds when you're
interacting from within EC2, but even outside it's not too bad.

------
papsosouid
> Of course there would be the point where I would need help from people who
> are specialized in database design / sharding / partitioning, etc - likely
> earlier than going the cloud hosting route

Where does this misconception come from? It is the exact opposite of reality.
With the "cloud" route, you are limited to absurdly inadequate servers, which
is a large part of what drove the "NoSQL" fad: you need to shard if you are on
EC2 because they offered nothing with reasonable I/O. Even now they have an
SSD option, but it is a single crappy SSD with barely any RAM. With the
dedicated route, you can run a server with 512 GB of RAM and a 24-SSD array
and not have to worry about sharding until you are in the top 50 sites on the
web.

~~~
fallenhitokiri
Author here. From what I understand, AWS tries to address the performance
problem with RDS - the marketing statement being "this is built to solve this
problem". Do you have any experience with RDS? Is it still the same problem?

It's true that the problem does not exist if you kill it with hardware in the
first place. But the road from evaluating an idea, to gaining traction, to
hiring people and buying hardware is still a long one.

~~~
papsosouid
Unfortunately, RDS is EBS-backed, so it does nothing to solve the issue. And
you are stuck with Oracle on top of that.

~~~
fallenhitokiri
This basically rules out the last provider I knew of with a hosted DB option.
Looks like DBs of a certain size are still a "do it yourself" area :/ Thanks
for the info.

