
The Five Stages of Hosting - stilist
http://blog.pinboard.in/2012/01/the_five_stages_of_hosting/
======
LogicX
This is an amazing article. I intended to write such a blog article myself.

One aspect I intended to cover (and will now do so here) is that of cost. I
get very frustrated by cash-strapped startups which from day one are expecting
to be 'web-scale' and need to turn up machines at the drop of a hat. Let's get
real. I HOPE you have to worry about that... much, much later.

I'm quite pleased you started your article mentioning solutions like heroku.
Unless you have some special needs not met by a PaaS, this is where you should
start. You should be writing code, not managing servers (this coming from an
operations guy who has been managing servers for 12 years, and worked at a
webhost). Once you scale far enough that it's worth hiring someone with the
knowledge required to maintain your application stack, OS updates, security,
etc. -- THEN move on. Not a moment before.

Cloud servers are realistically not the price/performance/low-maintenance
solution for MOST startups. You should get a VPS (Linode) or dedicated server
(from a reputable company which can offer quick SLAs on replacing parts - like
2 hours at voxel.net). Dedicated servers are cheaper than you think. I pay
voxel $180/mo for an 8GB quad core 1TB box on 100mbit/sec. It outperforms
servers costing twice as much in EC2 - and that's not counting bandwidth or
storage. Concerned about reliability? Buy TWO - In different datacenters. --
You're STILL saving money, and you have the exact same level of maintenance
overhead as AWS (OS, Updates, full application stack); while reaping the
performance benefits of bare-metal.

You do NOT want the headaches of colocation. You cannot pay your staff enough
to stay local to the server 24/7, and the cost of keeping extra parts on hand
eats up any savings over dedicated.

Your startup is not Google, so I won't get in to having your own datacenter.
(Well done pointing out that they're not getting advice from your blog)

~~~
metajack
I had some poor experiences with dedicated servers at well known providers.
Essentially if the hardware fails, they'd rather fix it than get you into
something new so you can be back up and running. My site ended up offline for
several hours, several times, so that some tech could run hardware
diagnostics, etc.

EC2 may be expensive, but if anything goes wrong, I can boot another instance
in seconds and abandon the old one. They fix it on their own time.

Sure, I could buy more servers so that one being down doesn't affect me, but
that's twice as expensive. On EC2 I don't have any penalty for abandoning an
instance for any reason. I've actually moved data centers in EC2 when the
performance profiles were better on the other side for the same size
instances. It was a temporary difference, but migration was pretty much
painless.

Your voxel suggestion may have a 2 hour SLA for replacing parts, but what
happens when they don't know what is wrong?

That said, I agree that a person should try for the PaaS and evaluate all the
options fairly. I tried Linode, several dedicated hosts, and then finally
moved to EC2 on a previous project.

~~~
devs1010
I agree, I love EC2 personally. Their mini server plan (or whatever it's
called) is great for keeping a production version of a project running while
it's still in an initial phase (very low user base), and then you can scale
easily without having to learn anything new, since you can run things
basically the same as you do locally (I run Ubuntu locally and mainly develop
for Tomcat, so it makes it really easy to keep things the same on there). I
looked at Heroku, but as soon as I realized it would be rather expensive just
to get regular database access (so I could run straight SQL if I needed to,
etc.) it was out.

~~~
reedlaw
The micro instance seems incredibly slow for me running Rails. It runs out of
memory while doing a git index-pack.

~~~
amishforkfight
The micro instance seems to be routinely throttled, for random reasons. My VM
was choked for days despite only pulling in ~200 visitors per day. Upgrading
to the Small instance has removed all throttling.

I'm running a standard LAMP stack on it, nothing crazy.

------
there
as someone that has done colocation, dedicated hosting, and VPS, i'm a huge
fan of dedicated hosting.

colocation was expensive and the hardware problems were all mine. i was pretty
much tied to my local datacenter because i didn't want to ship a server around
(which would be at least a day of downtime). pricing can be hard to compare
because of power/space/bandwidth. if the equipment i colocated didn't have
IPMI support, it could sometimes take up to a half hour to have a datacenter
tech be able to put a remote console online when there were problems. at the
end of it, i had a bunch of servers that were worthless on the resale market
due to their age.

VPSes were never a serious option for the reasons stated in this article. it's
impossible to track down performance problems when a dozen other VPS customers
on the same server are taxing the CPUs and disks. i do use one that i pay
$10/month for just to run a network monitor for some off-network perspective.
they can be useful for single-task servers that don't need a lot of processing
power like dns servers.

with dedicated servers, though, you can sign up on a website and within a few
hours have a complete server with modern CPUs, disks, and lots of memory
assembled, tested, and connected to the internet with a remote console waiting
for an o/s installation. when hardware goes bad, the server provider has lots
of spare parts waiting around to be swapped in for free. and the best part of
all, when you're ready to upgrade or move to a different provider, you just
cancel the account and let the provider worry about what to do with the old
hardware. i have a handful of these on various providers costing between
$140-$190 a month for something like a core i5 ~2ghz with 8gb of ram, 2 big
sata drives, and 100mbit ethernet with more than enough transfer every month.

~~~
statictype
_i have a handful of these on various providers costing between $140-$190 a
month for something like a core i5 ~2ghz with 8gb of ram, 2 big sata drives,
and 100mbit ethernet with more than enough transfer every month._

This seems ridiculously cheap compared to something with comparable RAM/CPU on
EC2.

Is there a catch?

~~~
charliesome
No catch, you've just discovered how ridiculously expensive EC2 is.

~~~
statictype
Then why do so many startups (and even established businesses like reddit and
Netflix) use EC2?

~~~
LogicX
Reddit and Netflix have entirely different models at their 'web-scale'.

They actually DO need to be able to scale servers up and down based on demand:
usage, time of day, growth patterns, etc.

For them, the extra EC2 cost is negated by being able to spin up an extra 400
instances in the evenings, and turn them back down after everyone goes to
sleep.

Under the dedicated server model they'd ALWAYS need to have an extra 400
dedicateds to handle that peak load (and in fact would need to have far more
to handle additional spikes, projected growth, etc.) Those dedicated servers
would sit idle, costing them money most of the time.

This is the difference between 'web-scale' and your typical startup's usage
pattern.

~~~
vidarh
> They actually DO need to be able to scale servers up and down based on
> demand: usage, time of day, growth patterns, etc.

Cost wise, your best scenario is usually going to be dedicated or colo + EC2
or similar for overflow / peak.

If you do just dedicated hosting you need to leave enough spare capacity that
you feel comfortable handling the spikes for whatever the worst case
provisioning time your host has.

It's still cheaper than EC2.

But if you do dedicated + ability to spin up EC2 to take peaks, you can go
much closer to the wire with your dedicated hardware, and increase the cost
gap to EC2 massively. You don't need as much spare capacity to handle peaks
any more. You don't need as much spare capacity to handle failover.

It's rare for it to pay off to spin up EC2 instances to handle inter-day
"normal" load changes, though - most sites don't have differences that are
pronounced enough over short enough intervals for it to be worth it. If you do
the dedicated + EC2 model, your EC2 instances need to be up no more than 6-8
hours or so on average per day before it becomes cheaper to buy more dedicated
capacity.

------
rkalla
TIP: I see _a lot_ of people calculating off-the-cuff AWS prices for
comparable hardware somewhere else and declaring how expensive it is.

Don't forget that 3yr reserved pricing is 48% cheaper than the on-demand
costs, so once you know what your hardware reqs are on EC2, you can purchase
some reserved instances and more or less cut your costs in half. Pricing out
any hardware configuration on EC2 using the on-demand pricing is tear-
inducing.
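As a sketch of that math (the upfront fee and hourly rates below are made-up placeholders, not actual AWS prices), amortizing the one-time reservation fee over the term shows how the effective hourly rate drops:

```python
# Effective hourly cost of a 3-year reserved instance vs. on-demand.
# All dollar figures are illustrative assumptions, not AWS price quotes.

HOURS_3YR = 3 * 365 * 24  # 26,280 hours in the reservation term

def effective_hourly(upfront, hourly, term_hours=HOURS_3YR):
    """Amortize the one-time reservation fee across the term's hours."""
    return upfront / term_hours + hourly

on_demand = 0.34  # assumed on-demand rate
reserved = effective_hourly(upfront=1200.0, hourly=0.12)
print(f"reserved ~ ${reserved:.3f}/hr, ~{1 - reserved / on_demand:.0%} below on-demand")
```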

For Day 1 release, probably not an option. But at the 6-month mark you
probably have a much better idea of what hardware your startup needs and can
adjust accordingly.

2 CENTS: For the folks that need something better than a Micro and less
expensive than a Large, don't forget about the Medium instances.

They aren't in the primary section but down in the "High CPU" section; they
are an excellent fit for work that isn't quite big enough for a Large.

------
seltzered_
The hostel

You decide to run a site on 'shared' hosting (e.g. dreamhost, 1&1, godaddy
(shudder), etc.) because it looks really cheap and they list so many
"unlimited" things.

The good:

It's surprisingly pretty cozy for your startup wordpress blog about fish,
although at that point you start wondering why you didn't just get a
wordpress.com account instead, but you justify it by saying you were able to
put your custom design theme with lasercats this way. You're not sure why your
friends are saying the site feels slow.

The bad:

Oh, you like to program huh? Sorry our python version is 2 years behind.
What's this ruby stuff you speak of? You're hosting user-submitted content on
your bunk bed? Get out! Or upgrade to our overpriced "VPS" solution! You give
up on the site and throw money away.

~~~
experiment0
The thing is, the price difference between this and a vps is too dramatic. For
example, I can get a Lunarpages basic hosting, with "Unlimited" Bandwidth and
"Unlimited" Storage for $4.99/mo with support for rails apps and python (not
django however). Not to mention a free domain name.

However the cheapest linode vps is $19.99/mo.

I would LOVE to have a vps to play around on, I would have so much use for it
and yet I'm a student living in London and all my money goes towards the cost
of living. It makes much more sense for me to go with the cheaper shared
hosting as I just can't afford anything more.

As an aside, I found <http://virpus.com/> the other day and they are selling
vps's starting at $3/mo. Does anyone have any experience at all with them?

~~~
stevelosh
I've heard good things about <http://prgmr.com/> and they have 5/6/8/12
-dollar tiers.

~~~
listic
If the amount of traffic they offer feels adequate, good for you. Otherwise,
practically anyone else will offer more traffic these days.

------
kmfrk
An important addition to the article is that, as you descend the list, you run
out of people to blame.

~~~
lsc
"people to blame" only helps if you work for someone else. When you are the
boss? more "people to blame" means more people that can cause serious problems
if they can't do their job.

Having other people that can do something for you so you don't have to is a
good thing. Having someone else to blame, if you work for yourself, is the
downside to outsourcing, not the upside.

Outsourcing is great; but make sure you can always move to another provider if
you have trouble with your current provider. Your boss might let you off the
hook if a provider screws it up, but your customers won't.

------
mattbee
I wrote a similar article trying to segregate hosting stages (though beware
I'm a hosting company promoting a product) -
[http://blog.bytemark.co.uk/2011/11/03/the-cloud-is-your-
inst...](http://blog.bytemark.co.uk/2011/11/03/the-cloud-is-your-install-
script) . tl;dr version is that I think the ultimate flexibility is in your
application's install script. If you can deploy to one host, or several, and
collapse or split out caching layers, databases etc. depending on resources
available, you have a truly portable application that's ready to scale and/or
move ISPs.
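A minimal sketch of that idea (role names and the echoed install commands are hypothetical placeholders): one script, driven by a roles list, provisions everything onto a single box or splits the layers across several hosts:

```python
# Role-driven install sketch: the same script provisions one all-in-one
# host or several specialized hosts, depending on which roles you pass.
# Role names and commands are hypothetical placeholders.
import subprocess
import sys

SETUP = {
    "cache": ["echo", "install varnish"],
    "web":   ["echo", "install app server"],
    "db":    ["echo", "install postgres"],
}

def provision(roles):
    """Run the setup step for each requested role on this host."""
    for role in roles:
        subprocess.run(SETUP[role], check=True)

if __name__ == "__main__":
    # one host, all roles -- or pass a subset as you split layers out
    provision(sys.argv[1:] or list(SETUP))
```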

------
larrys
An option was left out which is to have your own T1 running into your own
facility (that is not a data center). We did this for years ('96 to 2004) and
had much better uptime than with colocation, with diverse paths and biometric
security. We didn't need a generator either. Just an array of industrial
batteries hooked up to a power inverter with a line conditioner could keep the
equipment running for 24 hours if utility power went down. (This is much
cheaper than anything you would buy commercially from APC or Triplite).
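A back-of-envelope check on that kind of setup (the battery bank size, load, and inverter efficiency below are all assumed figures, not the original installation's):

```python
# Back-of-envelope runtime for a battery bank feeding an inverter.
# All figures are assumptions for illustration.

def runtime_hours(battery_wh, load_w, inverter_eff=0.85):
    """Approximate hours of runtime for a given battery bank and load."""
    return battery_wh * inverter_eff / load_w

# e.g. an assumed 9600 Wh bank (four 12 V, 200 Ah batteries) at a 300 W load
print(f"~{runtime_hours(9600, 300):.1f} hours")
```

With numbers in that ballpark, a 24-hour runtime claim is plausible for a modest load.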

~~~
hga
Well, that does trade off risks to your T1 getting knocked out by a backhoe,
perhaps power failures between you and the other end of it (e.g. if I bought a
business DSL connection from AT&T to run a server at my apartment, I could
lose it in a general power outage when the DSLAM's battery ran out).

But I can well believe it's better than many co-lo experiences, although all
of mine have been positive. Right now I'm helping a friend with one: due to it
housing Protected Health Information we pretty much have to run it on our own
servers, which I built and he put into a co-lo that he's worked with before
for this sort of thing.

~~~
larrys
"trade off risks to your T1 getting knocked out by a backhoe"

Well luckily no backhoe but I did keep, at my own expense, Westell DS1 NIU's
around because I had a situation where one went bad and it took the Verizon
tech time to go and find one. So I bought a few so he would have the parts
around. I also made sure when they strung the fiber from the connection point
to our office (several hundred feet through other offices in the ceiling) that
it was in orange conduit as opposed to just strung through the building (they
were just going to run it like phone wire). I also had them bring an extra
fiber through, as well as a fishing wire in case it was ever needed for
anything in the future.

------
buro9
No one has mentioned mixed hosting, whereby you pick from each category.

Something like Varnish you want a lot of RAM for, dedicated suits that well.

Web servers tend to be numerous and are just computational power (stitch all
this gubbins together and return a string); their number varies according to
demand, and they suit virtual servers really well.

Now databases, these really need good disks, lots of RAM, decent CPUs. They
are best dedicated or colocated. When things go wrong with a database server
you _really_ want to be able to rule out the invisible magic of other hosts,
and the voodoo of being a virtual machine.

The best thing I can hope for whilst I scale is to find a provider that will
sell me dedicated and public cloud instances that can live on the same VLAN
and still be reasonably priced.

I'm currently still totally with Linode, but with 9 instances and an
overheated database server, I know I'm getting close to the limits of what I can do
there without re-focusing on splitting up the app when I could be adding new
features.

------
_delirium
This is an excellent write-up; I like the metaphors, and it doesn't feel too
much like it's trying to force a preordained conclusion.

One option I have trouble fitting in, though, is the "run your own server
locally". This might be #5, except it's often seen as actually a lower-class
option than #4, rather than a step up: before you go all out with a colocated
server, how about just a machine with Apache sitting in the office hooked up
to your office's business-SDSL line?

~~~
notatoad
it's probably not included because keeping your production server outside of a
datacenter is kind of a silly idea. you save some money, but you don't have
the cooling, the power conditioning, the network redundancy, or the security
of a proper facility.

~~~
_delirium
I suppose it depends on your use case, but I don't see those as necessarily
worse failure modes than other options. For example, Reddit has in the past
been down for >20 hours at a time with its cloud solution. With a local
solution, you would want offsite backup for the catastrophic "building burned
down" failure mode, but most local failures can be solved in that kind of
timeframe with the "drive to Fry's, buy new server, and restore from backup"
recovery plan.

Now if you can't afford _any_ downtime, a local server is probably not a good
idea, but then neither are many of the cloud alternatives. Also depends on
size, of course; one or two local servers is a more reasonable proposition
than 35 servers randomly thrown under desks. (Though the "so uh, does anyone
remember which room 'thor' is in?" moment _used_ to be a classic startup rite
of passage.)

~~~
18pfsmt
You've raised what I consider to be the elephant in the room, and you can see
all the responses quick to tell you why you can't do that. It simply doesn't
fit with what most people consider acceptable, which is that one must be a
user of everything internet-related.

Most of us have at least ~5Mbps/ 2Mbps (up/down) connections at home that are
always on with minimal latency. The gaming service OnLive shows that these
connections are adequate for most people's needs. Home/ office connections
will only get better, and I think the idea of running one's own server(s) for
things like family and/ or small offices will make more sense than some 3rd
party corporation's "cloud." PIM, email, IM, pictures, videos etc. simply
don't need to be subject to the whims or TOS of a company or outside one's own
control.

This is what I do currently, and it certainly is a kludge at the moment, but I
believe it can be improved to the point of appliance-usage eventually.

~~~
falien
This is against many consumer ISP ToS and they can/will arbitrarily start
blocking traffic on certain default ports depending on how draconian they are
about it. I agree that hosting on a home server is an option (even a good one)
for some people and usecases, but you are still subject to the whims of
someone else, and in this case you often have no recourse as you are violating
the ToS.

------
kijin
You forgot another popular option that a lot of people use as an alternative
to the "monastery" stage: shared hosting (shudder).

~~~
josephcooney
Agree. I was surprised they overlooked this one. Perhaps they couldn't think
of an appropriate metaphor. Prison? Orphanage?

~~~
Lazare
I'd suggest slum. It's a place to sleep, but you have no security and no
property rights. It's generally unpleasant, and the whole thing could burn
down at any moment. And you have (practically) no recourse if you don't like
what you're getting. Even the best hosts have laughable SLAs (if they have one
at all), and no meaningful performance guarantees.

~~~
lsc
Show me any reasonably priced service that has an SLA worth reading.

Nearly all retail providers have some sort of "if we have downtime, if you
complain, we will refund you for the time you were down" SLA. The pennies you
get don't compensate you for the time it took to complain, much less the lost
business from being down. Look at all the companies advertising a 100% SLA; I
mean, even 99.9% is unrealistic in most cases without some very expensive
hardware or application-layer redundancy.
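The gap between those nines and real money is easy to see once you convert an SLA percentage into the downtime it actually permits; a quick sketch:

```python
# Convert an uptime SLA percentage into allowed downtime per 30-day month.

def allowed_downtime_minutes(sla_percent, days=30):
    """Minutes of downtime a given uptime percentage permits."""
    total_minutes = days * 24 * 60
    return total_minutes * (1 - sla_percent / 100)

for sla in (99.0, 99.9, 99.99):
    print(f"{sla}% uptime allows {allowed_downtime_minutes(sla):.1f} min/month down")
```

99.9% works out to roughly three quarters of an hour per month; a "100%" SLA permits zero, which is why the refund clause is doing all the work.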

------
A1kmm
One thing no one has mentioned is the tax implications of the different
choices for startups - something which can be as important as the technical
side of things.

The OECD model tax convention, which has been adopted by many pairs of
countries for their double taxation avoidance agreements (DTAAs) uses the term
"permanent establishment" - "a fixed place of business through which the
business of the an enterprise is wholly or partly carried on". If a business
derives income attributable to a fixed place of business in another country,
they will likely need to deal with the tax department of that country. At
best, this will mean considerable expenditure (by a bootstrapped startup's
standards) on legal compliance, and at worst, could mean considerably more tax
is paid.

For hosting stages 1-2 in the article, no fixed place can be attributed to the
business purchasing the hosting (CPU resources are shared and no fixed CPU
is assigned to the business). Disk space may temporarily be associated with a
business, but this is under the control of the service provider - it is disk
space and not physical disk sectors which are being rented, and the provider
is free to move data.

However, stages 3-5 mean that a business has a very definite physical location
(i.e. place of business) in another country.

These types of non-technical issues can often outweigh the technical ones in
terms of business priority.

~~~
hessenwolf
Have you seen this be an issue in practice? Our place of business is our
registered office.

~~~
A1kmm
It is a place of business if it meets the definition, and there can be more
than one.

The definition is somewhat ambiguous, but it is likely that if there is an
actual physical place (even if it is just a server) used to offer goods or
services for sale to the public, there is a good chance that those sales are
attributable to that server, and will be taxed in the country that server is
located in.

This might not be a problem for an established corporation with the sales
volume to justify the tax accountant expenditure and payment overhead needed
to comply with tax law in multiple countries, but for a bootstrapped startup
it can be an important consideration.

------
moe
Aww, this is a gem. Beautifully written and none of the bias and factual
errors that you commonly see in articles on this subject.

~~~
mseebach
Plus the metaphor pretty much holds up. That is rare in this kind of post.

------
neilmiddleton
The article isn't completely accurate in that it states that you need to build
your application in a certain way in order to host it on Heroku, which isn't
true. I just don't see Heroku in the same category as something like GAE.

Heroku places no requirements on your code that you wouldn't find through
general best practice when building scalable applications. A lot of people
will cite the read-only filesystem as a special requirement (which requires S3
or similar), but this is a common requirement with clustered systems. Yes you
might have a local SAN that you can use as a local filesystem but the point is
the same.

Of the multiple applications I've deployed to Heroku, I don't think any would
fail to run on a 'regular' VPS as is. There's no Heroku-specific code in there,
period. In fact, if I have changed my approach to better suit hosting on
Heroku, it's generally been changing it to a better approach that would suit
all types of hosting.

------
cwilson
I'd like to add that for different parts of your application, or website, it's
ok to use different services.

For example, a large majority of tech startups have a WordPress blog that is
totally separate from their actual web application. In many cases it also
drives the marketing front-end of their website.

So while my main application may be on Heroku or AWS, I like to fire up a free
PHPFog (I don't work for them, you could use DotCloud, or likely another
alternative as well) account, and have my WordPress install set up there. It's
insanely easy to set up (it's all Git based), and the free account will get you
a long way.

It's also nice to know that the WordPress install lives on an entirely
different server, so if you get slammed with great press, your entire stack
isn't feeling the heat. There are security fears here as well, so I like
having it separate.

~~~
dlevine
Or, if your Wordpress site gets hacked, your entire site isn't p0wned. I have
a friend who had his WP site hacked last week. Fortunately, he hosts his
application on a separate server, so there were no (major) security issues.

------
apaprocki
No one seems to be commenting about the Stately Manor :) Datacenters come with
their own set of interesting problems that are just as much fun to solve as
programming puzzles. Thermal imaging goggles to design airflow... Designing
extra redundancy into power systems while adding alternative energy to reduce
power costs. In this category, rack space and power consumption are almost as
important characteristics as RAM/GHz.

------
crcsmnky
The best part? There's no one "right way"; each option has its own ups and
downs. Too few authors acknowledge this these days -- most think they have all
the answers to everyone's problems.

------
dangrossman
I've been stuck between #3 and #4 for years. I could really, really use more
than the 8GB of RAM my largest (rented) servers have. Not being able to build
high-RAM servers limits the sites I can build and the features I can offer on
the ones I already operate.

For example, I built something like Mixpanel 2 years ago, but I never launched
it because in load testing it really didn't take a very large client's worth
of data to exceed the 4GB of RAM I could afford for a server, and hitting disk
would make reporting far too slow. Buying a new server for each client (who
may decide to cancel the next day) was not something I wanted to commit to.
<http://i.imgur.com/DAOEA.png>

I've ended up at Softlayer after trying a number of hosting companies over the
past 10 years. They want $25/mo/GB for RAM. It's almost like they want you to
pay the one-time cost of the hardware _every month_... and then some!

Yet, supporting all this on my own, I can't really colocate -- I don't have
the expertise or the money to get it off the ground, nor can I be available 24
hours a day to drive to a data center and fix things when something breaks. I
have no employees. I have 60k users to support by myself.

It seems like I'll be stuck here for a long time.

~~~
mrud
Hetzner offers servers with 16GB of memory at 49€/month (~$64), or 1and1
starting at $129.99. I don't know the current state of dedicated server
hosting in the USA, but nowadays you can get a lot of hardware very cheaply.

------
Lazare
This may be off topic, but... At the moment we're mostly using option two
(Linode VPS), and it's working pretty well. I've been repeatedly tempted by
option four (colo), but I'm kind of daunted by the task of getting started.
I've been burnt by option three (renting a dedicated server) before - it
seems like the worst of both worlds.

Can anyone recommend a good guide to getting started with colo? Obvious
questions include:

Where do you go to buy a cheap server? Can you just have it shipped direct to
the data centre, or do you need to configure it yourself? How does that even
work for startups in a different country to the data centre?

Is there anything I should look for in a data centre to make it easier? Do any
offer out-of-band consoles? What sort of costs are we talking about? Is there
a break-even point beyond which you really should colo instead of using a VPS?

Is there a detailed tutorial anywhere on "getting your first colo'ed server up
and running without bricking the stupid thing and needing to spend thousands
of dollars getting a data centre technician to fix it"?

And so on. :) It seems like there are a LOT of resources to hold your hand as
you get up to speed with Linode-type services, but colo is dark magic.

~~~
ricardobeat
Maybe you should test the waters with <http://macminicolo.net/>. I haven't
used them (yet) but it's relatively cheap, and they will handle lots of things
for you (including buying the mini). A dual-core mini can do a _lot_ of work.

~~~
commandar
Neat idea, but their bandwidth overage charges are brutal. $0.80/GB?

------
yanowitz
This is a great list. I've noticed that I always hate whatever option I'm on
that's short of "the condo." And even if you have a condo (or a subdivision of
blades), there are still the inevitable screwups that leave you cursing your
colo. But we live in an imperfect world. We just have to spend money
engineering for it.

------
nchuhoai
Love the article, just one note: I think #1 is not necessarily as constrained
as it may sound. Especially with Heroku, I think it is very easy to upgrade
later on for most setups, without too much special code or many conventions.
The major one would probably be the read-only file system, but I'm sure using
Amazon S3 is not that bad.

------
hkarthik
Great article and this thread is great for checking out some new hosts for
dedicated servers. Pricing has gotten pretty good in the last year so I think
it's a good time to upgrade from PaaS to Dedicated hosting again.

Has anyone had experience with a hybrid dedicated/cloud model? I'd love to
stick our Postgres servers on dedicated hardware but then be able to spin web
servers as cloud servers when needed for traffic spikes.

------
RyanMcGreal
Going along with the metaphor, shared hosting is like crashing on your
friend's couch while you get your life in order and look for a place to live.

------
tzury

        the homeless  
        -- (*.tumblr, *.posterous and *.wordpress.com)
     
        the billboard space 
        -- stackoverflow, facebook, twitter

------
latch
I don't currently host anything with them, and I do find the prices a little
much (especially RAM, what are they thinking?!), but for dedicated hosting, I
think Softlayer has to be mentioned. Their network connectivity and multiple
data centers (including Europe and Asia) make them, in my mind, the best
alternative to AWS.

------
jstsch
As a small web shop, we went the other way. From 3 (dedicated hosting) to 2
(VPS). It's quite nice to know that when a piece of metal breaks you just get
a new machine on the fly.

You do need a good host though. Quick e-mail replies and pro-active management
of server load is absolutely vital. Happy to have found it (in NL) :)

~~~
gcp
Feel free to mention the name of your hoster.

------
ck2
I think the secret is that dedicated server costs can only come down so far,
but VPS prices should get better and better for even better hardware as time
goes by.

$50/mo can get you a 4-core raid10 xen VPS which is plenty powerful and almost
as flexible as dedicated.

------
larrys
I'm thinking of getting another rack at the colo place that I use now. Anyone
looking for space on that rack who doesn't have high bandwidth or power
needs, feel free to contact me and I will see if this makes sense for both of
us.

------
halayli
One missing advantage in #3 is the ability to launch instances across
continents.

------
swah
I'm Brazilian, and EC2 just opened in São Paulo (then I realized it's kinda
expensive).

I'm not sure if 40ms vs 160ms latency is so important for many kinds of web
sites.

I'm considering a Linode VPS in Dallas.

What are other Brazilians using?

~~~
abalashov
What exactly do you want to do?

------
reedlaw
Where would you go to find a "condo"? There are no links in the article. What
is the pricing/reliability/support like having your own server in someone
else's data center?

~~~
cagenut
What this (very good) post calls the "condo" tier is what the hosting industry
calls "co-lo" or "co-location". Search that term and you'll get more options
than you can probably wade through. The "rackspace" option (biggest name for
good quality mid-tier sized setups) is <http://www.equinix.com/>

Huffington Post, Gawker, BuzzFeed, CafeMom and AdMeld all co-lo with
<http://www.datagram.com/>, most of them are within a few feet of each other.

~~~
kornholi
Equinix is a little on the expensive side... Just a little bit.

------
jiggy2011
They forgot my preferred hosting method from my pot smoking college days.

A second-hand Pentium II in the corner of the lounge, running Slackware with
ports 80 and 21 forwarded to it from a cheap Belkin router off your domestic
DSL connection.

Good: Basically free (assuming the power bill is in your landlord's name);
host whatever the hell you like.

Bad: Someone might spill the bong water over the power strip and ruin your
uptime.

------
rafael-minuesa
Regarding the DataCenter, yes, correct, you will need some divine help ...

------
thomasfl
The power of strong metaphors strikes again.

------
aneth
Why is it that a well-written site can't scale quite large on Heroku? (Or
similar - I use Heroku, so I'm biased.) Perhaps I'm naive, but I feel one can
go from Heroku to stage 5 if you truly have a blowout.

According to their website (<http://success.heroku.com/>) some pretty large
websites run there, including Urban Dictionary and Rapportive.

Sure, it may cost more, but not more than a full time sysadmin and you are
buying efficiency and flexibility. You can buy a lot at Heroku for
$10,000/month (the minimal cost of a full-time deployment / sysadmin /
dbadmin), including, I'd imagine, some rather hands-on support.

This article seems to downplay the great advances that have been made in
"cloud" deployment. IMO, a cloud service like heroku beats the pants off of
self-operated virtual servers and debatably some of the higher "stages."

~~~
nl
_Why is it that a well written site can't scale quite large on heroku?_

Is anyone claiming that? The linked blog post doesn't make that claim.

~~~
aneth
It doesn't directly claim that, however calling these "The Five Stages of
Hosting" implies that an application will likely progress through these stages
as it grows, and that application platforms will be quickly outgrown when any
sort of scale is reached. That's a pretty clear implication from the title and
I'm calling for counter-evidence, because I don't think it's accurate.

I also host on Heroku applications that I hope will grow, and if that's a bad
choice, I'd like to know others opinions.

~~~
nchuhoai
I think you pointed it out, it's the cost. I love Heroku too, however, I can
definitely see how Heroku can be much more expensive in the long run.
37signals just recently showed how their recent RAM addition cost just a tiny
fraction of what they would have paid at Amazon.

------
longemergency
Pinboard needs to filter some of the bookmarks that are coming into the site.
In the recent stream I constantly see links to depraved sexual acts, including
physical and sexual abuse.

Often these links are fan fiction but sometimes they are not.

Who is reading this stuff and why? What kind of behaviour does this inspire?
And why do all Pinboard subscribers need to be exposed to this?

