
The cost of cloud hosting vs. colocation - squanderingtime
http://chrischandler.name/the-real-cost-of-cloud-hosting
======
DanBlake
The cloud is a vehicle for hourly billing and instant provisioning.

If you are not actively using either of those features, you should look into
dedicated or colo. Now.

On every other front besides billing/provisioning, it will lose to dedicated:
speed, price, performance, server specs, and control.

The cloud should be used to handle unexpected workloads or random jobs only. If
you are running your 50x database cluster on EC2 for 100% of the month, you
are doing it wrong.

~~~
andrewvc
Eh, the other thing is that dealing with hardware failures is easier in the
cloud, since you have instant provisioning. At small scale this is worth a lot.

~~~
dangrossman
There's a good selection of dedicated hosting providers that provision
physical servers in an hour or less. Softlayer has diagnosed and replaced
failed hardware in my servers in under an hour a few times. If you take care
of your own backups and configuration automation, then the advantage tips back
to dedicated servers at small scale.

------
MartinCron
The metaphor that I used when describing this to my boss was that sometimes
you need a traditional hotel, sometimes you need an extended-stay hotel,
sometimes you want to rent an apartment, and sometimes you want to buy a
house. It all depends on where you want to be and for how long.

Right now, we're in the extended-stay hotel phase. It doesn't mean that people
who buy their own homes or stay in traditional hotels are doing it wrong.

------
tomkarlo
You're ignoring the financial aspect of when that money needs to be paid. With
AWS it's billed gradually over the lifetime of the servers (and if you have
too many, you can easily reduce overhead with relatively little lost value.)
Buy those servers, and you either have to pay up front or commit to a lease
that may have breakage costs.

Additionally, if your service is growing at a material rate, there are
inefficiencies around when you choose to turn on extra hardware. With
colocation, you're probably going to do groups of machines at once (say once a
quarter) and attempt to predict how many you'll need (naturally erring to the
high side.) With cloud, you can provision new machines at any time _as
needed_.

It's great to do a set piece calculation and say colocation is cheaper, but
you're ignoring the realities of doing business - that plans change regularly.
That flexibility is one of the primary benefits of using cloud services.

Ask most CEOs if they'd rather pay 300K now, or 400K over a year or two with a
lot of optionality/flexibility, and I suspect they'll take the latter.
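
As a rough sketch of that intuition (the discount rates below are made up; only the 300K/400K figures come from the example above), discounting the spread-out cloud payments at a startup's cost of capital can make the 400K option the cheaper one in present-value terms:

    def present_value(monthly_payment, months, annual_discount_rate):
        # Discount each future monthly payment back to today.
        r = annual_discount_rate / 12.0
        return sum(monthly_payment / (1 + r) ** m for m in range(1, months + 1))

    upfront_hardware = 300_000            # pay-it-all-now option
    cloud_monthly = 400_000 / 24.0        # ~400K spread over two years
    for rate in (0.10, 0.30, 0.50):       # cost of capital matters a lot to a startup
        pv = present_value(cloud_monthly, 24, rate)
        print(f"discount rate {rate:.0%}: cloud PV ~ ${pv:,.0f} vs ${upfront_hardware:,} up front")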

~~~
jacques_chester
Put another way -- profits and cashflow are different beasts. In the short
term, cash is king; which is why you can make more by charging frequently.

~~~
tomkarlo
Cashflow is generally fungible - there are ways to make either strategy work
such that you defer cash payments (this is why companies mostly actually lease
colocated hardware.) But one leaves you more flexibility than the other, and
that has real value in the real world.

------
mikeklaas
A couple of problems with this analysis:

Using reserved instances would push the EC2 figure down 30-40% (and like the
dedicated option, provide further cost savings in years 2 and 3)

You can't assume that the marginal increased technical management cost is
zero. If that's true, you're employing people that aren't doing anything
productive with their time. A dedicated cluster of this size would likely
consume 50-100% of one employee's time, which adds at least another $50k to
that side of the ledger.
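
Back-of-the-envelope, with placeholder totals (not the article's actual figures), those two adjustments move the comparison quite a bit:

    ec2_on_demand_annual = 400_000   # placeholder EC2 on-demand bill, not the article's number
    ri_discount = 0.35               # midpoint of the 30-40% reserved-instance saving
    dedicated_annual = 300_000       # placeholder dedicated/colo figure
    extra_admin_cost = 50_000        # marginal staff time charged to the dedicated side

    print(f"EC2 with reserved instances: ${ec2_on_demand_annual * (1 - ri_discount):,.0f}")
    print(f"Dedicated plus admin time:   ${dedicated_annual + extra_admin_cost:,.0f}")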

~~~
DanBlake
Going dedicated has not increased the workload for us over the cloud. If
anything, it has reduced it significantly.

(We have over 150 servers)

~~~
jetsnoc
DanBlake, I share your sentiment. We went the real-servers route, and if you
use programmatic system administration (Puppet, Chef) you can bring systems
online just as quickly, or reprovision them into different roles very quickly.
We call it our own private cloud!

Edit: Note we are using an automated Debian preseed with Puppet installing
from an internal repository to bring the systems online with minimal sysadmin
interaction.
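
For the curious, the general flow is roughly this (a hypothetical sketch, not our actual tooling): tell the machine's BMC to PXE-boot, power-cycle it, and let the preseed + Puppet run take it from there.

    import subprocess

    def reprovision(bmc_host, bmc_user, bmc_password):
        """Kick a box back through the automated install (hypothetical helper)."""
        base = ["ipmitool", "-I", "lanplus", "-H", bmc_host, "-U", bmc_user, "-P", bmc_password]
        # One-shot PXE boot: the next boot hits the network installer, which pulls
        # the Debian preseed; its late_command installs Puppet from the internal repo.
        subprocess.check_call(base + ["chassis", "bootdev", "pxe"])
        # Power-cycle; after the install, Puppet assigns the node its role.
        subprocess.check_call(base + ["power", "cycle"])

    reprovision("10.0.0.21", "admin", "secret")   # placeholder BMC address/credentials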

~~~
nagnatron
Is there some place I could read more about how something like that works?

------
mmt
Hear hear.

I've generally found that hardware has about a 10-month payback against AWS,
though that's with an estimate of my own (sysadmin) time cost to build out the
datacenter included, not just the cash cost.
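
The arithmetic behind a payback figure like that is just capex divided by monthly savings; with made-up numbers (not my actual ones):

    hardware_capex = 120_000     # servers + network gear, up front (hypothetical)
    monthly_aws_bill = 18_000    # what equivalent capacity would cost on AWS (hypothetical)
    monthly_colo_opex = 6_000    # rack, power, bandwidth, remote hands (hypothetical)

    payback_months = hardware_capex / (monthly_aws_bill - monthly_colo_opex)
    print(f"payback in about {payback_months:.0f} months")   # -> 10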

I'm glad to see someone else is coming to similar conclusions.

Anecdotally, I've found that the big cost is up front, which is why the change
is daunting for many companies. What I find less comprehensible is
the desire to "move to the cloud" from an existing full-stack infrastructure,
as if replacing aging server hardware costs more than paying Amazon.

~~~
spade
You might be able to lease the hardware to reduce the large initial cash
outflow.

~~~
mmt
That's a big "might," and it adds significantly to the cost.

If I'm buying all brand-name Dell or HP, I expect it would be relatively easy
to get a low interest rate without much hassle. However, if I'm also buying
different brands of network hardware and, say, rolling my own high-performance
storage[1], it's another matter.

Regardless, even leasing can cause "sticker shock," and, perhaps more
importantly, still requires all the up-front time cost.

[1] Such as SuperMicro enclosures with commodity disks, rather than all brand-
name Dell, which can easily double or triple the price, even with a steep
discount.

------
alecco
That ballpark calculation only works on a particular subset of systems.

For example, on per-hour services it's nice to bring up big environments
replicated for Dev and QA. Or to do incremental updates with full A/B live
testing, with the old system eventually replaced.

Also, AWS has many services that take a lot of effort and planning to
replicate if you build from the ground up, like load balancers and monitoring. In
fact, it would be smart to play with an AWS system before buying a whole stack
of BigIP/Cisco/EMC/IBM.

EBS is terrible, EC2 is dog slow and the relational MySQL thingie probably is
unreliable. But everything else has proven to be very stable for a long time
with very heavy users.

I'm upset at Amazon's terrible communication, but it's still the best option
for starting anything bigger than a php webhosting plan.

~~~
ericd
_EBS is terrible, EC2 is dog slow and the relational MySQL thingie probably is
unreliable._

Those sound like the main pillars... what's left?

~~~
alecco
Elastic Load Balancing, S3, SimpleDB, CloudFront, SQS, CloudWatch, and DevPay.

You can combine things, like automatically starting up EC2 instances from
monitoring/balancer.

------
chrismiller
As you note in the article, $105/Mbps for bandwidth is extremely pricey.

With a 4-rack commit, most colo providers would just throw in the 54 Mbps of
bandwidth for free. That being the case, you would save a further ~$100k a
year.

------
druiid
Well, I would say two things in response to this... if anything, the costs
totaled up here are pretty inflated (read: vastly so).

First, you generally aren't going to be paying (or paying much) for reasonable
connection speeds at colocation facilities. As someone noted already, many
times it's included with a large enough contract. I know in my case we're
essentially paying $2.5k/month for a full cabinet and a connection...
dirt-cheap, and not in a rinky-dink colo facility either.

As for the hardware, you have to figure that spinning up 50 application
servers at Amazon is nowhere near the same as running 50 on your own hardware.
Even if you virtualize (as the EC2 backend does, of course), you're not
sharing the hardware with anyone else. You don't need to worry about noisy
neighbors or I/O issues if you've purchased the right hardware. Essentially
I'd go out on a limb and say you could at least halve those hardware numbers.

In my experience moving from physical to virtualized systems, even under
high-load situations 90% of your 'load' issues are not going to come from
processors but from memory limitations, so yeah... I expect hardware costs
will be lower than calculated... much lower.

~~~
squanderingtime
From a "what you get" perspective I totally agree with you. I simplified the
comparison to core count to have an easier basis on which to set up the rest of the
analysis.

I would not be the least bit surprised if I halved those hardware numbers and
it kept up just fine; mostly for the reasons you pointed out.

------
snorkel
To me the IT people who bought into cloud hosting are the same type that would
gladly buy a vacation time share: you end up paying too much to rent a slice
of a resource that is never really yours, when you could've used that same
money to outright own something much better.

When people tell me how great their vacation time share they "own" is, I ask
how often they actually use it (3 weeks per year) and how much they are paying
(a lot). Then I point out that they could've booked first-class air travel and
a top-floor penthouse at a 5-star hotel at a luxury destination for 3 weeks
each year for far less money than their time share costs and fees. I'm glad
someone is clearly pointing out that you can make the same case with cloud
hosting vs. colo hosting.

Why is owning physical hardware so scary all of a sudden? Dear Lord, is it
_really_ that difficult to rack a physical box and replace the hard drive once
in a while? AWS marketing deserves a Gold Medal for Industry-wide
Brainwashing. The cloud is a time share!

~~~
mnutt
I don't understand your analogy. One case involves making yearly payments
regardless of use, while the other involves only paying when you use it. But
from here it looks like cloud is to colo as hotel is to timeshare (or
purchasing property).

As a small startup I'm happy to host in the cloud because it removes a lot of
the up-front risk of buying physical hardware. Just as I get a hotel when I go
on vacation because I don't want to purchase a beach house outright. But at
scale, the economics change and it most likely goes the other way.

------
bestes
I had Dell's 24x7x365 4 hour, on-site service contract for a rack of 16
identical 1U servers. "4 hours" was for them to _respond_ to my call (phone or
email, I can't remember). Their response was not showing up on-site with the
part, but asking me to run a long series of tests, including bringing the
machine down, swapping out parts, resetting the BIOS, etc. Once I did all
that, they ordered the part, then scheduled a delivery. It was 3-4 days,
minimum, to get something fixed.

For the $1,909 price per server: I followed the link to Dell's site and tried
to configure the R515 for myself. I added the second processor and left it set
to the worse one, added the cheapest 16GB memory option, the redundant power
supply and rails. That was $1,999.

A single, 250GB HD seems a bit "lite" for a server, even for one that does
mostly processing. I didn't look into the networking or anything. I'm guessing
at least another $1k to make it reasonable, probably more like $2k.

~~~
thwarted
_A single, 250GB HD seems a bit "lite" for a server, even for one that does
mostly processing._

I've found that 250GB is way overkill for a machine that does mostly
processing. Consider web servers, which need a copy of the code (which
hopefully isn't over a gig), and the operating system install for a server
should fit in under 10GB. The bulk of the data they'll be processing will be
in remote services like databases. If you have a decent logging infrastructure
setup, where you regularly ship all logs off the machine, you need enough
storage space for the logs for the rotation period -- if you ship them off
immediately (say with scribe), you need enough local storage for the period
your aggregator is unavailable (which can be minimized by having even scribe
log aggregation load balanced). If these machines are in clusters and you can
survive with some of them being out, you don't even need RAID on them. It's
kind of unfortunate that the smallest drive you can buy leaves it 80%+ empty,
because I have a feeling that drives that were smaller and not optimized for
speed and capacity, but still use modern technology, might be more reliable.
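
As a quick sizing sketch of that point (the rates are assumptions, not measurements), the local disk only has to hold the OS, the code, and the logs for the rotation period:

    os_and_code_gb = 12       # OS install plus a copy of the application code
    log_gb_per_day = 5        # assumed per-host log volume
    rotation_days = 14        # how long logs sit locally before they're shipped/rotated

    needed_gb = os_and_code_gb + log_gb_per_day * rotation_days
    print(f"~{needed_gb} GB needed; even a 250 GB drive stays mostly empty")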

We've taken to putting smallish SSDs in our load balancers and other machines
that don't need a lot of local storage, since having fewer moving parts (now
the only moving parts are the fans) is a power-usage and reliability win.

------
dialtone
How exactly do all these cost comparisons factor in the ability to quickly
deploy hardware in 4 different regions of the world without having people on
site to swap broken hardware or reboot an instance that is not available
for whatever reason?

------
justincormack
OK, we just had a big Amazon outage, and someone is comparing no-redundancy
costs vs. Amazon. You need to double your dedicated hardware costs if you
don't want to go down when the data centre goes down. Your Amazon costs
include a whole lot more redundancy if you architect well.

~~~
TillE
What if EC2 is your failover solution? That seems like an ideal use case,
considering the billing model. Keep one instance online at all times to clone
the data, then spin up a few more to replace your servers when they die.

Or just build what you need in two datacenters with load balancing between
them, and accept that operating at half capacity for a while is a lot better
than being down entirely.
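
A minimal sketch of the "spin up replacements" half of that, using boto3 (a modern AWS client; the AMI, instance type, and key name are placeholders a monitoring check would supply):

    import boto3

    def launch_replacements(count):
        """Launch EC2 instances to stand in for failed dedicated servers."""
        ec2 = boto3.client("ec2", region_name="us-east-1")
        resp = ec2.run_instances(
            ImageId="ami-00000000",    # image cloned from the always-on standby instance
            InstanceType="m5.large",   # placeholder size
            MinCount=count,
            MaxCount=count,
            KeyName="ops-key",         # placeholder key pair
        )
        return [i["InstanceId"] for i in resp["Instances"]]

    # e.g. two app servers in the colo rack just died:
    # print(launch_replacements(2))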

------
jwr
This is very theoretical and won't hold up in the real world. You also need to
factor in the costs of hot standby equipment (can you provision a server with
68GB RAM within minutes?) and service (whom do you pay, and how much, for (a)
servicing your hardware and (b) being on standby to fix issues within single
hours?).

You need to train the people who will maintain the physical servers. You need
to factor in the probability of their mistakes (in my experience from a
supercomputing center, most problems were caused by people touching equipment).

And of course when you get mentioned on CNN you need to be able to handle the
traffic peak.

Running a real-world operation really isn't as simple as adding numbers from
a colocation price table.

------
adulau
Nothing is black or white. From my past experience, when you start to have
services that are I/O intensive, colocation is a good option, especially since
you can easily control or tweak the underlying hardware. On the other hand, if
you start with a small-scale service and you don't have a large distributed
datastore, "cloud hosting" is often simpler and more cost effective.

~~~
phlux
This is something I have been wondering about; it would be good to be able to
have space in a cloud DC where you can have a rack or ten of your own
equipment as well, so you can leverage both.

I don't know if it is offered at all by any of the cloud vendors, but it would
be good to be able to install specialty machines/disk into the facility but
still leverage all other aspects of the cloud provider.

~~~
MartinCron
You can mix "cloud" + "non-cloud" stuff at SoftLayer. That's part of my
startup's growth/emergency scalability plan.

------
ChuckMcM
An enjoyable post. I would add a couple of data points to the mix.

Getting a gigabit link from a competent IP-transit provider will be on the
order of $3-5K/month. That's 1,000 Mbps, 24/7, not limited by how many bytes
you push through it.

A switch and router for your rack stacks will be on the order of $15K (that's
a couple of 48-port GbE switches and a Cisco router, or equivalent).

You _really_ need to understand the depreciation costs. As your equipment ages
you will need to replace it (if only to keep on supported platforms). $100K +
$30K for servers + $15K for networking gear is $145K of gear. If you squeeze
all you can out of it and only replace it in 5 years then you can do a 5 year
straight line depreciation so add about $30K/year to your costs for
depreciating the old gear.
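
Spelled out, that's just straight-line depreciation over the replacement cycle:

    gear_cost = 100_000 + 30_000 + 15_000   # servers plus networking gear, per the figures above
    replacement_years = 5
    print(f"${gear_cost / replacement_years:,.0f} per year")   # -> $29,000, i.e. about $30K/year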

On the storage array, if you want 10TB of RAID-protected storage with the
MD1220 you need 24 600GB SAS drives [1], which comes in at $23K each (not
$12K). (Oh, and you have two of those, so you again have $10K/yr of depreciation.)

Oh, and you probably want a service contract, something like onsite in 4 hrs
or, if you're a bit more laid back, in 24 hrs. That will add another $150K/year
(but I'm sure you can get the sales guy to knock off a bunch, as it's probably
a list price vs. 'what I can get it for' kind of deal).

Another real-world bit that will bite you is that while you can "fit" all this
gear in a 40U rack, you can't put enough power into that rack at a colo
facility to run it. The servers are 750W machines, so let's say you put
120V/30A circuits into your rack; you can really only draw about 25A before
people complain, so you have about 3KW/circuit available. A 'normal' colo
facility will offer you 2 per rack. So with 750W servers you can run 8
machines per rack. You'll probably not run them that hard and can get away
with maybe 12 per rack. But with 54 total servers that is going to be 5 racks
minimum and maybe 6 (remember your switch and router will take power too).
Either way you're looking at 24 - 30 'circuits' for this space and those are
probably about $500/month each, so another $12-15K/month in 'power+cooling'
charges.
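
Working that power budget through (same assumptions as above):

    usable_watts_per_circuit = 120 * 25          # 120V circuit, ~25A usable before complaints
    circuits_per_rack = 2                        # what a 'normal' colo facility offers
    rack_power_budget = usable_watts_per_circuit * circuits_per_rack   # ~6 kW

    server_watts = 750
    servers_per_rack_flat_out = rack_power_budget // server_watts      # 8 at full draw
    servers_per_rack_realistic = 12              # if you don't run them that hard

    total_servers = 54
    racks = -(-total_servers // servers_per_rack_realistic)            # ceil() -> 5 racks minimum
    print(f"{servers_per_rack_flat_out} servers/rack worst case; "
          f"{racks}+ racks for {total_servers} servers (plus switch and router power)")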

You pretty much have to add in either the cost of a tech or half the cost of
one of your operations employees to run this setup. Ideally you have two
people at half time so that you can structure vacations for them. So put it
down as one full time sysadmin and one full time tech, implemented as anywhere
between 2 and 4 people. Don't forget to include the cost of their office
space, their health plans, and their laptops :-).

Did you include travel time and travel expense? So most things can be 'lights
out' but many exceptions to that rule exist. If you can drive to the data
center from home then you're better off than if you have to fly there and
check into a motel.

All that being said, it's an important exercise to run through and figure out
the costs since it is your own money that you are spending. And AWS does get
some economies from being able to fractionalize things like sysadmin
resources.

[1]
[http://configure.us.dell.com/dellstore/config.aspx?oc=bvcwmk...](http://configure.us.dell.com/dellstore/config.aspx?oc=bvcwmk2&c=us&l=en&s=bsd&cs=04&model_id=powervault-
md1220)

~~~
enko
> You really need to understand the depreciation costs

Huh? He's already paid for the gear. Depreciation isn't further payment due,
it's an asset write-off which is actually welcome since it slowly turns the
initial capital outlay into a tax deduction.

~~~
sudhirc
This is not about the depreciation cost alone. Normally vendors tie the
support cycle to the depreciation cycle, so you cannot get new parts or
support unless you pay them heavily.

~~~
enko
Sorry dude I don't know what you are talking about. The post talks about
buying your own gear; the parent to my reply seems to misunderstand what
depreciation is and goes on about some non-existent recurring cost.

Are you talking about renting gear from vendors? This seems again to be a
misunderstanding; no one is renting anything. Support contracts are normally
separate; certainly they are still available at normal cost within 5 years,
the normal depreciation timescale.

It seems this is a really misunderstood topic. Maybe someone can do an
"understanding depreciation for startup founders" post or something.

~~~
ChuckMcM
See my response to your response :-)

------
3pt14159
"Note: If you're going to disagree with one of my assumptions it's this last
one. I am perfectly aware that a uniform duty cycle is unheard of when it
comes to web applications..."

Well there you go. I know a startup paying a good $12k a month on the Amazon
Cloud, and their multiple for daily peak hour to valley hour is greater than
10. So given your assumption, backed up by anecdotal evidence (of which I have
my own), sure, colocating is cheaper.

~~~
squanderingtime
That's why I made sure to document that. I have some clients that have an even
higher disparity between their peaks and valleys that makes cloud hosting very
viable. Auction type sites are a great example. As the time to an auction
ending approaches zero everyone starts mashing refresh, but if nothing is
going on the activity is almost zero.

I just wanted to clarify it isn't a one-size-fits-all.

------
mtw
I don't get it: what about renting dedicated servers at a company like iWeb or
SoftLayer? Much cheaper than colocation for startups, and you don't have to
worry about networking and hardware. It's also much cheaper than cloud hosting.

(of course, it doesn't give you all the fancy features of cloud hosting such
as flexible pricing and automated provisioning)

~~~
lsc
are dedicated servers that much cheaper than 'cloud'? last time I looked they
were fairly comparable.

genuinely interested; I'm considering getting into the 'instant provisioned
dedicated server' market myself; there's no technical reason why dedicated
servers need to take more than, oh, about sixty seconds longer than a virtual
server to set up.

~~~
benologist
Dedicateds give huge bang for your buck - we have a pair at Hivelocity.net.

One's got dual xeons w/ 12gb of ram and 4x500gb in raid 10 for like $320 a
month, which works out to 42 cents an hour.

The other's a single xeon w/ 8gb of ram and 2x500gb raid 1 for $160 which
works out to 22 cents an hour.

Both come with 10tb of bandwidth a month and are exclusively used by us,
nobody is messing with our disk io or anything else.

I don't know how they compare with AWS' compute units etc but if you have an
_ongoing_ need for the hardware and you're going to be paying those hourly
fees (and all the others) every day of every month then I suspect it's going
to be a _lot_ cheaper than AWS.

~~~
lsb
For 17GB of RAM, two 3.25GHz Xeons, and 400GB of local storage, it's 18c/hr for
a spot instance. Make your max payment $1/hr and you're set.

<http://aws.amazon.com/ec2/#pricing>

~~~
ericd
I'm not seeing this? The only thing with 17 gigs that I see is the High Mem
xtra large, and that's 0.50/hour.

The xeons he's talking about are quad core, each, for a total of 8, vs. 2
cores of 3.25 ECU each for the high mem xtras.

~~~
vetman35
"SPOT INSTANCE" is $0.2460 per hour right now. "ON DEMAND" is $0.50 per hour.
Login to your AWS Management Console to see the current SPOT INSTANCE price.

~~~
ericd
Ah gotcha, thanks - assumed he meant that the price was listed on the page.

The point about 8 vs. 2 cores still stands, though? Add to that that spot
pricing isn't really comparable to guaranteed pricing (on-demand is more
comparable, since dedicated servers are on demand; you can cancel and get your
month prorated).

------
squanderingtime
I spent some time trying to put together some numbers. From the research it
looks like cost pressure against traditional colocation options has forced the
pricing to come down. It still looks like owning your own hardware is a viable
option, and a good one if you have the people.

~~~
mtw
Owning your own hardware is a distraction, unless you've become too big or your
business involves hosting (such as the WordPress business model).

~~~
ssmoot
Distraction to whom?

The developers (everyone) in a two-person startup?

Sure.

In a mature business with 50 employees? In an organization of that size do you
really want your development team driving IT decisions (and budget)?

~~~
mtw
As demonstrated by the frequent data center outages, it's extremely difficult
to have 100% uptime, unless you have a crack team of sysadmins and operators
who are better than Google or Amazon engineers at server management and data
center management. Investing in hardware means investing precious time and
engineering talent in issues, when you could invest them instead in sales,
marketing, product design, or any other critical aspect of your business.

Of course, if you find that hosting takes a large % of your costs, and if you
are sure you can do it yourself at lower cost, then it's time to have your
own hardware.

~~~
ssmoot
You're over-simplifying.

Amazon provides power, servers, bandwidth and their network.

You still need administrators and ops people. If you colo, your data-center
will provide power and bandwidth. You just need your cabinet network and the
servers.

On top of that, Amazon costs more and exposes you to problems like the EBS
outage, problems you simply don't have with colocation because you haven't
gone and developed a (probably necessarily) complex provisioning system to
manage it all.

The clear implication of your statements is that the act of purchasing,
provisioning, and maintaining hardware is a primary driver of your
IT/operations workload, when in reality that's a minority concern at best for
most. It borders on misinformation.

~~~
mtw
The network or electricity or software will go down one day, and then you will
begin to think about disaster recovery plans and get another colocated space
in the eastern or western US.

A simpler option is to rent dedicated servers at 2 different hosting
companies, say SoftLayer (US) and OVH (France), and design a fall-back
mechanism. It will cost you less than Amazon or owning your own hardware.

------
speleding
I agree with the calculation in the article for applications that have "shared
state". However, if you are doing something that is a bit more amenable to the
cloud, like serving assets or processing email then the calculation is very
different.

The real killer in the calculation is the assumption of a uniform duty cycle.
Any service that can break away from that looks very different. If you serve
static video files, for example, you cannot hope to match the geo-distributed
service AWS CloudFront offers unless you are huge.
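
Toy numbers (not the article's) show why that assumption is the crux: a peaky load only pays for a fraction of the instance-hours that a sized-for-peak fleet burns:

    hours_per_month = 730
    hourly_rate = 0.50                 # assumed per-instance price
    peak_instances, peak_hours_per_day = 50, 4
    baseline_instances = 5

    # Fixed capacity must be sized for the peak and runs all month.
    fixed_hours = peak_instances * hours_per_month
    # Elastic capacity pays for the baseline plus the peak window only.
    elastic_hours = (baseline_instances * hours_per_month
                     + (peak_instances - baseline_instances) * peak_hours_per_day * 30)

    print(f"sized for peak: ${fixed_hours * hourly_rate:,.0f}/month")
    print(f"pay per hour:   ${elastic_hours * hourly_rate:,.0f}/month")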

A smart mix of colo and cloud is the way to go for medium sized businesses.

------
dave1619
Just wondering, how many users / how much usage can a setup like the one in the article
handle? (For example, how many concurrent users for a web app that's a social
networking site with mostly news feeds, comments and some pics?)

~~~
squanderingtime
To be honest it's impossible to tell without actually measuring it. The #1
rule of scaling a system is to measure the variables you're trying to optimize,
make a change, and then take measurements again. So without knowing the kind
of software that's running on this we have to leave everything in terms of a
theoretical workload.

If you aren't doing graph operations, then 50 dedicated servers can do a
pretty impressive workload. Add in graph-ops and all bets are off :-). The
same could apply for user-generated content (e.g. YouTube, Facebook, Vimeo).

Stackoverflow is another great example of a company that vertically owns their
stack. Check out:

<http://blog.stackoverflow.com/category/server/>

[http://meta.stackoverflow.com/questions/52353/how-many-
serve...](http://meta.stackoverflow.com/questions/52353/how-many-servers-does-
stack-overflow-use)

------
Killah911
I did the cost calculations for a very, very small business, and Rackspace's
cloud hosting was the more cost-effective option for us. I'm sure at some point
that will no longer be the case, but we'll calculate that when we get to it.
Even then, I'm always willing to pay a premium to let someone else manage
hardware headaches.

------
nowarninglabel
Uh, no hardware failure costs? Hardware fails, and handling that costs both
time and money. To not account for failure within a year is either delusional
or assumes some extremely high-quality hardware, which is going to cost a lot
more than the linked-to Dell server.

~~~
druiid
There's uhh... a thing called a warranty, which generally is sold with every-
single-device-sold-ever that is enterprise grade. This is also where they make
their money. If a hard drive dies you call Dell and they ship you a new one.
If a hard drive in a $40k+ NetApp dies, you call NetApp and they tell you
their rep is waiting for you at the data center and to hurry up. If your Cisco
switch dies you call Cisco and they ship you a... you get the idea.

~~~
netpenthe
As someone said above, if your Dell hard drive fails, they _respond_ to you in
4 hours, then they ask you to run a series of tests (blah blah blah), and then
you might get a replacement that day or in a few days.

This is likely to cost at least 4 hours of a sysadmin's time, maybe closer to
10?

------
jerhewet
[insert sound of loud cheering from those of us that have a clue]

~~~
netpenthe
One problem we recently faced: we run Xen on remote servers.

We found out that our 3-year-old storage hardware had some firmware conflict
with Xen (or something like that).

It took 2 weeks to diagnose, plus a plane trip out to fix, and even then we
weren't sure we got it.

After this, we moved to Rackspace... there is no way that a sysadmin running
one rack of equipment is in the same position as Rackspace to diagnose and fix
these types of issues.

I imagine if Rackspace/Amazon had this type of problem, they would: 1. have
24-hour manned data centers, 2. have the input of a team of 20 engineers
working on the problem, 3. have a Dell/IBM/HP engineer at the data center
within the hour, and 4. have _lots_ of spares, with no calling in new hardware.

Having been a (part time) sys admin for years, the worst problems are those
that are hardware related and are difficult to diagnose.

(The other advantages of the cloud, e.g. scalability, are well documented. But
I think my ideal setup would be dedicated, non-virtualized databases with
cloud front ends.)

~~~
squanderingtime
I've been on the other end of this spectrum too. Working with a very high
profile company (that I unfortunately can't name) we were paying what I'm
going to call an _astronomical_ rate for managed cloud operations. Dedicated
data center admins, systems administrators, the works.

We would have a problem with something like IO throughput on database
instances. We would open a ticket, wait a little while, get a response. If we
claimed it was hardware related (because we couldn't tell from our host's
perspective) we got the response "it's not the hardware, everything seems
fine." This would go on for _days_. Then eventually, after we had to prove the
numbers were erratic or unresponsive we would eventually get a more helpful
response. Maybe.

It's a _very_ cold splash of water in the face when you realize that your
hosting company, cloud provider, whatever is _not_ in business to hold your
hand. If you need more than their minimum level of support or require human
interaction you will be sadly disappointed. These companies maintain their
margins by automating hardware provisioning, homogenizing infrastructure, and
making it as turn-key as possible. Which is all fine and good _until_ you need
something their infrastructure doesn't provide for. Like switch bandwidth.
Like larger instances. Like all your VMs in a local rack.

You will have hardware problems in the cloud too and they will not be obvious.
You will need the same degree of monitoring software you have anywhere else in
any other environment.

~~~
mmt
_You will have hardware problems in the cloud too and they will not be
obvious_

I'm often hearing "in the cloud, one doesn't have to worry about that
hardware" (or network). My usual retort is that one certainly does have to
worry, since the same problems exist, just that one can't do anything about
them when (or before) they occur, unlike with owned hardware.

------
noonespecial
Don't forget taxes. In most locales you will pay tax for owning those servers.
Depreciating them as fast as possible will help, but owning stuff as a business
is hard, expensive work from a tax perspective.

------
sukuriant
What about power and cooling costs?

~~~
mrinterweb
That's covered by the colocation service costs.

~~~
sukuriant
Ah. I did not know that. Cool.

------
phlux
This is great information, and in light of the AWS outage of last week I
would recommend a hybrid model. Just as you recommend AVPC be used for
dev/testing, you could potentially achieve an ideal hybrid by deploying some
smaller % of the dedicated hardware and having AWS fulfill elastic capacity
needs.

Obviously this is determined by your application specifics, but there are a
lot of deployment methods to consider.

The biggest issue I take with your assessment, however, is the networking gear
cost: 10% is far too low.

You can get fair LB capabilities from low-cost vendors like Coyote, but your
switch infrastructure will likely cost much more than $10K unless you're doing
some bare-bones setup with 1U stacks and no redundancy.

Further, I would expand this model and add stand-by and failover hardware in
the calc.

In which case I would round up to 60 servers and have a tertiary DB box as
well.

Finally, I would add a support/contingency budget of 15% for emergency gear
replacements.

In the case of staff, there is a strong likelihood that you would need more
staff for dedicated setups than you would with hosted setups:

You need your staff to have more specialized skills in DB, routing, sysadmin
work, etc. You also need to consider that you'll have more support costs for
round-the-clock and on-call coverage. While you have these costs with AWS,
they are lessened, as with AWS your staff are fundamentally in a reactive
state only; there is no proactive PM on hardware with AWS, you simply respond
when outages occur and wait to regain access to your affected systems.

Staff on call would be required to be able to delve much deeper to root cause
any outages you have in a dedicated environment, travel to the site and
physically mitigate any gear failures.

Overall though, this is a fantastic perspective that everyone should have in
Excel and type in their own numbers.

~~~
chopsueyar
In YouTube's early rapid growth phase, they were putting servers online
without switches.

------
ditojim
How come common sense still tells me the cloud is cheaper? I don't need to
write a book to make the argument, either.

~~~
ditojim
OK, let me rephrase: for the vast majority of businesses using the cloud, and
vast majority means small businesses since there are way more small businesses
than big businesses, the cloud is much, much cheaper.

The 'cloud' has enabled my business to grow. Without it (despite the fact that
our business is delivering customers to the cloud), I would not be able to
grow and scale my business efficiently.

The author is talking about hosting an application in the cloud, which does
not nearly encompass all the use cases for the cloud, and thus invalidates the
argument.

~~~
ditojim
Disagree? Tell me why when you downvote, then. I'd love to hear your side.

