

The pros and cons of cloud hosting - prateekdayal
http://devblog.supportbee.com/2011/12/30/pros-and-cons-of-cloud-hosting/

======
ridruejo
Though it may seem silly, it is not always about cost and performance. In many
cases, EC2 allows you much more flexibility than a traditional dedicated
server. One example: for EBS-backed servers, it is possible to clone the
entire server with just one API call. This allows you to test upgrades,
performance enhancements, etc. without disturbing the production server
configuration. And you will be doing so on an _exact_ replica of the machine,
minimizing the bugs or issues introduced when a staging or test system has had
changes applied. Another one: it is simple to resize your server as needed.
You can start with a micro instance while developing and then scale to bigger
instance types as needed once you are in production. With a dedicated server,
it is much more complex to migrate your setup. We take advantage of these and
other features at BitNami Cloud Hosting (<http://bitnami.org/cloud>) and have
had a lot of success so far.
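As a sketch of that one-call clone (my own illustration, not BitNami's code: the boto3 `create_image` call is real, but the wrapper function and the IDs are hypothetical):

```python
def clone_server(ec2, instance_id, name):
    """Snapshot an EBS-backed instance into an AMI: a launchable exact replica.

    `ec2` is a boto3 EC2 client, e.g. boto3.client("ec2"); passing it in
    keeps this sketch testable without AWS credentials.
    """
    resp = ec2.create_image(
        InstanceId=instance_id,
        Name=name,
        NoReboot=True,  # snapshot without disturbing the running server
    )
    return resp["ImageId"]  # launch this AMI to get your staging replica

# Usage (hypothetical IDs):
#   import boto3
#   ami = clone_server(boto3.client("ec2"), "i-0123456789abcdef0", "staging-clone")
```

Resizing is similarly simple: stop the instance, change its instance type, and start it again.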

Finally, hosting on Amazon is not only about EC2; it is about the whole
ecosystem. You can take advantage of many other services, such as their
offerings for MySQL (RDS), memcached (ElastiCache), CDN (CloudFront),
monitoring (CloudWatch), etc. Like any other technology, they have their
shortcomings, but they can save a significant amount of time and effort vs.
doing it yourself, and they are way ahead of anybody else in the space
(especially traditional hosting companies).

As a side note, EC2 costs can be significantly reduced with reserved instances
if you are willing to commit to 1 or 3 year terms.

~~~
mark_l_watson
I totally agree with you: AWS is a platform with a range of services and much
flexibility. Depending on requirements, sometimes using a PaaS like Heroku
(and plugins) makes sense, sometimes AWS, sometimes naked hosted servers.

I think that very high-level PaaS providers like Heroku, DotCloud, and
CloudFoundry are the future, at least for what I would like to do. If I am
working for customers, spending effort on plain AWS or hosted servers is OK if
that is what they want, but for my own projects I prefer to spend a little
more money and save a lot of my own time.

~~~
prib
And I really agree with you too.

Adding to that, AWS does have its own PaaS: Elastic Beanstalk. We've been
using it for several months now without a single hiccup. It's Java only, but
it just works: load-balancing out-of-the-box, auto-scales beautifully and
replaces dead instances in a couple of minutes. You can easily launch new
environments for testing new features and app updates are a breeze. And you're
still close to all the AWS services like S3 and CloudFront which adds a lot of
value to the package.

This sounds almost like a commercial but, yes, for a team used to managing our
own servers, this was one of the best moves we've made. We still have a server
hosted at Hetzner (mentioned by the OP), but have moved pretty much everything
to the cloud. It's good to know that updating MongoDB is now just a matter of
sending an e-mail to the guys at MongoHQ (which also runs on AWS) and they do
it in a few seconds.

It may not work the same for everyone, but in our experience it does pay off
even if we're burning a few more dollars every month, as we're not spending
long unexpected hours on server management anymore. And for a small team
trying to focus on product development, that's gold!

------
cullenking
There's a sweet spot where virtualized solutions make sense, but it's really
easy to find reasons why owning and colocating your own metal is more
economical. I have a 1/3rd private rack and a 100mbit uncapped unlimited
dedicated port, for $570 a month. I have five servers in there, which I have
cobbled together for pretty cheap (for the most part). I would be paying $3000
a month for the equivalent cloud solution....

For anything requiring real IO performance or tons of memory, stick with your
own hardware.

~~~
marquis
We've experienced location-wide outages due to natural disasters in hosting
centers. Putting all our servers in one place is no longer an option, so
we're looking at a combo of colo and cloud. We're scaling up to 5 locations in
the next few months; thanks to all the great replication tools we have at our
disposal now, it's a fairly simple task, much simpler than it would have been
5 years ago.

~~~
jared314
Which replication tools are you using?

------
ekidd
There are actually 3 major alternatives here, and the article ignores the
third:

1) Run on dedicated hardware.

2) Run on EC2, or another "Infrastructure as a Service" provider.

3) Run on Heroku, or another "Platform as a Service" provider.

For smaller companies, it really comes down to a few questions: Who's worrying
about your database backups? Who handles security patches? What happens when a
critical machine fails on Christmas week?

Many smaller companies will be happiest with option (3), because somebody else
worries about backups, security, and machine failure for you. Sure, it's
expensive. But it's a lot nicer than calling your senior programmer back from
vacation because of a catastrophic RAID failure.

Option (1) certainly _looks_ cheaper on paper. But many small companies are
skimping on something critical, and they'll get burnt within the next 5 years.

~~~
bau5
Oh, please. That's just your list, which you haven't provided any evidence
for. PaaS is still tiny in comparison to the overall market for commercial
hosting.

------
duggan
It's better, in my opinion, to think of various cloud providers as just
another endpoint in the evolving infrastructural API layer available to people
and companies.

The breadth of options becoming available is fantastic; it's not that long ago
that hosting options were:

    
    
      * sharing a single physical machine with a group of unknown other customers
      * renting one or more single physical machines with preselected hardware/OS
      * Purchasing your own hardware and co-locating it in a data center
      * Building a data center
    

Right now, PaaS providers are taking advantage of all this newly available,
ephemeral, programmable computing power to build abstracted services, allowing
other developers/companies to take advantage of pooled expertise and
resources.

I think, if I were building the infrastructure for a company today (which I am
helping with, for a lot of companies), I'd definitely eat the additional cost
(can be offset quite a bit by reserved instances) and idiosyncrasies (instance
degradation, unexpected performance characteristics) of Amazon.

I spent years toiling over hardware quirks, flaky SCSI adapters, power
outages, and failing or aging machines. If you rent boxes, you're relying on SLAs
(if you can get them) and sometimes insane costs for an onsite engineer to fix
something you broke (I mucked up a firewall config once on leased dedicated
hardware. Don't do that. Ouch.)

There are similar outages with cloud providers, but depending on which rung of
the abstraction ladder you're on (IaaS, PaaS, etc.), you might be in a much better
position to redeploy your infrastructure elsewhere if it's a real disaster.

The bigger your product/service gets, the more expensive your downtime is (and
the more you're spending on engineers to make sure it doesn't go down. Oh, and
your hardware has quirks, and your engineers know them - if they leave, they
take that knowledge with them).

Of course, there are situations where you'll want to minimize financial
outlay, run something that's not a "web app", don't mind getting your hands
dirty, are willing to risk hardware failures, etc. Hopefully PaaS providers will
continue to bridge the gap for most people.

~~~
ericabiz
Your analysis of dedicated servers is somewhat dated. For instance, most
higher-end dedicated servers these days come with IPMI. You can use it as a
KVM in case you muck something up. It has its own IP address (on a different
subnet) and browser-based console access. For the servers we rent, IPMI is a
requirement.

Of course, you can also build and colo your own servers with IPMI--pretty much
every brand supports it.

------
pbrumm
That is quite a bit of hardware for $51 a month. Anyone know of servers in the
US that are priced even close?

~~~
amfr
They tend to be out of stock, but VolumeDrive comes pretty close:
<http://volumedrive.com/vdrive/?a=dedicated> The other thing you have to
consider is the 150 euro setup fee on the Hetzner server, though that becomes
less of a factor if you keep the server for an extended period of time.

~~~
phillco
Damn, their VPS packages cost less than I'm paying for shared hosting at
Dreamhost. Unfortunately, a cursory search shows mixed reviews. Have you used
them personally?

~~~
amfr
Sorry for the late reply, but yes, I have used them. They are OK, you really
do get what you pay for. I've had some trouble with their default
Debian/Ubuntu images being messed up and my VPS has been suspiciously slow at
times. Their support is prompt but they aren't willing to do much (which is
fair as it is unmanaged).

------
j15e
Why do people always compare EC2 _On-Demand Instances_ cost to classic hosting
cost?

The actual cost of a planned _Reserved Instance_ on EC2 is much lower and is
a much more realistic scenario for hosting: it costs about $27/month for a
3-year reserved small instance ($425/36 months + $14.64/month), not $60/month.

No one who knows they will use an instance 100% of the time should opt for an
On-Demand instance, because yeah, it costs a lot.

<https://aws.amazon.com/ec2/#pricing>
<http://calculator.s3.amazonaws.com/calc5.html>
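For what it's worth, the arithmetic behind that $27 figure (using the poster's 2011 prices, which are long out of date) checks out:

```python
# 3-year reserved small instance, per the parent comment's 2011 pricing:
# $425 one-time reservation fee amortized over 36 months, plus roughly
# $14.64/month in hourly usage charges.
upfront = 425.0
months = 36
usage_per_month = 14.64

effective_monthly = upfront / months + usage_per_month
print(round(effective_monthly, 2))  # 26.45 -> about $27/month, not $60
```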

~~~
abhaga
You need to compare costs on the comparable commitment time-frame. You are
assuming a 3 year commitment with Amazon. Classic hosting has a monthly
commitment.

~~~
j15e
I think most dedicated hosting demands a setup fee of $50+ or a 1-year
commitment.

And even without taking into account the setup fee and the commitment, at
$27/month you get a lot of power for your buck compared to the cheapest
dedicated server available, at maybe something like $60/month.

The cheapest dedicated box I could quickly find is at $59/m, and even with a
2-year commitment you get "only" a 25% reduction, which puts your price at ~$45/m:

<http://iweb.com/dedicated/clearance>
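A quick check of that number (the parent's figures, not current pricing):

```python
list_price = 59.0   # cheapest dedicated box found, $/month
discount = 0.25     # the "only" 25% reduction for a 2-year commitment

print(list_price * (1 - discount))  # 44.25 -> roughly the ~$45/m quoted
```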

~~~
ericabiz
Yes, but that dedicated box (even an older/cheaper one) is going to beat the
pants off S3's I/O. Apples and oranges.

~~~
gtaylor
You don't always -NEED- fast I/O, btw.

------
pardner
While the article is a useful comparison of straight EC2 vs dedicated servers,
it doesn't touch on the cloud PaaS options such as Heroku that eliminate so
many complications of installing and tuning and maintaining your frameworks.
IMO comparing "cloud vs dedicated" without reference to PaaS options is akin
to comparing "combustion engines to bicycles" without mentioning motorcycles.

~~~
prateekdayal
Thanks for the comment. I thought about it but wanted to keep the post short.

I agree that PaaS is a different story altogether. It frees you up from
doing most of those tasks. However, a vanilla EC2 instance and a dedicated
server are almost the same in that respect, but quite different pricing- and
performance-wise.

~~~
michaelleland
We use PaaS because it allows us to focus our energy on the product that the
customer sees, rather than on the backend stuff that, done right, never
affects the customer. When our small team grows, we can afford to concentrate
on our own hardware and make cost-saving decisions. But we are very light on
system administration experience, and our scarcest commodity is time, not
money.

------
tbod
Great discussion. It would be an interesting poll to see exactly what hardware
setups HN users run. When looking at options for our bootstrapped startup, I
was very surprised by the costs of Amazon compared to dedicated boxes, or
even a couple of dedicated VPS instances. It appears many of the people I have
spoken to who use AWS do not even automatically spin up instances when they
need them, and so have to react manually and then bring them up. And another
startup that did automatically scale up instances got hit with massive bills
when someone found a loophole and distributed manga art via their servers.

------
schuon
There's one other drawback that made us switch from AWS to dedicated hosting
(among others, such as those mentioned in the post): latency.

At least in Europe, Amazon has only one datacenter, out in Ireland. We could
save some 20% in load time by moving to Germany, where our customers are!

------
marquis
This brings up a question I've also wondered about when looking at moving to
EC2: it simply costs more to run when you have a few servers that use a lot of
bandwidth. At what scale does it become more efficient?

~~~
ericHosick
EC2 instances are not reliable: they degrade all the time. The solution,
recommended by Amazon, is to have redundancy. So, plan on having at least two
of every type of server or using their services like RDS.

Really, it is a rabbit hole that can lead to thousands of dollars each month
very quickly. EC2 becomes efficient when talking about hundreds of thousands
of customers if not more.

You can end up with an impressively robust system, but at a large upfront
cost: for startups probably not worth the ROI.

~~~
bretthoerner
> for startups probably not worth the ROI.

I definitely agree. That said, I think people seem to forget something.

Amazon says: "your instance may go down and if it does we may just terminate
it, reclaiming your ephemeral disk. All data not on EBS will be lost."

Every other host (at least implicitly) says: "your instance/machine may go
down, we will attempt to recover your data. The ability and speed depends
largely on why the machine went down and whether we run RAID/etc on the
machine."

I've waited _weeks_ for a disk repair from very well known/regarded non-Amazon
hosts. Redundancy is needed everywhere, not just on Amazon. Bare metal still
dies, controllers flake out, networks go down.

~~~
sunsu
This is why I'm sticking with EC2 for now as well. Though one of my instances
might go down or become unreliable, all I have to do to fix it is Stop and
Start it!

No messing with support tickets or waiting (sometimes hours) for someone to
walk out to the data center to troubleshoot it. It's just fixed.

