
Thoughts on Colocation - anu_gupta
https://blog.pinboard.in/2013/08/thoughts_on_colocation/
======
dmourati
My thoughts on colocation? When choosing a data center, look at your
neighbors. Mine were Dropbox, Netflix, Splunk, and Etsy. Best tools for
finding a good data center: www.peeringdb.com, www.datacentermap.com.

~~~
jetsnoc
SV5? ;) We might be neighbors too.

~~~
dmourati
Yup. I've moved on but my cage is there, close to Dropbox. Nice digs and nice
choice on your part. We were one of the first 10 or so in the building. Pretty
full now. How's the new wing looking?

------
notaddicted
For "finding datacenters", _traceroute_ is useful. You can just pick some
websites and see where they host. For example, is seems that pinboard.in is
hosted at [http://he.net/colocation.html](http://he.net/colocation.html),
Fremont 2, or at least that is where I am being routed.
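
A minimal sketch of that workflow (the IP below is a documentation placeholder -- substitute whatever the last hop of your own trace actually shows):

```shell
# Trace the route to a site you're curious about; the hostnames of the
# last few hops usually reveal the colo or transit provider.
traceroute pinboard.in

# Then ask whois who owns the final hop's address block.
# (203.0.113.1 is a placeholder address -- use your real last hop.)
whois 203.0.113.1 | grep -iE 'orgname|netname'
```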

~~~
dmourati
HE's FMT2 is a hellhole of a "data center" if ever there was one.

~~~
theatrus2
As someone who has equipment there, I agree. It's not a datacenter. However,
the pricing is not bad, so it's a real "get what you pay for" situation.

~~~
timr
He _did_ say that he believes it's better to host in two cheap places than one
expensive one....

Aside: back in the early days, a lot of YC startups hosted at HE. I think it
was one of the first JTV colos, actually.

~~~
dmourati
Disagree. Twice the headache. Put all your eggs in one basket then watch that
basket. Get bigger, repeat. I'd go from 2 to 3 faster than 1 to 2.

~~~
timr
I don't think he was making an argument for ease of maintenance, but rather,
reliability per dollar. Even if you've got two crappy datacenters, you're at
least redundantly crappy.

~~~
Nrsolis
That only works if you value sysadmin time at $0/hr.

~~~
zorlem
You need a capable sysadmin anyways. It makes sense to pay her/him to design
and scale your "platform" properly, especially if you're making any money from
it.

------
virtualwhys
I started out dedicated and switched to colo about 8 years ago.

Not that I really need colo -- I could easily go cloud, back to dedicated, or
even dirt cheap with some shared hosting provider -- but I've always liked
having my own customized setup in colo.

Here's why:

1) $50 per 1U (power included)

2) /27 IP range

3) 250GB monthly bandwidth

4) 24/7 DC tech support

Pretty basic setup: a couple of Dell R610s with a gigabit switch and a Cisco
ASA. VMware ESXi runs on the bare metal, so I basically have my own VPS
environment.

It's a nice break from coding to learn a bit about the systems admin side of
the fence, I like it ;-)

Of course, the times they are a changing, at some point I'm going to need to
go Cloud. For now pretty content though...

~~~
nixisfun
Would you mind sharing your provider?

~~~
virtualwhys
Sure, although if you have a 99.99% uptime SLA, I'd look elsewhere.

The provider is SagoNetworks, based out of Tampa; they have a brand-new-ish
facility in Atlanta.

Started with them around 2003 and haven't looked elsewhere; the DC guys bailed
me out in the early days while I was learning the ropes -- tech support is
generally very good.

A couple of times a year the shit hits the fan -- the core router goes down or
some other mini-disaster -- and you get nailed with a few hours of downtime.
Give them a ring, they say it's blah blah, we're working on it.

Otherwise, everything hums along, no news is good news ;-)

------
hkarthik
At what price point does it make sense to go through this clearly painful
process to colocate?

I feel like there's a logical progression from PaaS/VPS => Managed dedicated
hosting => colo facility.

But I would love to know when the costs for these jumps make sense.

~~~
showerst
That depends heavily on your company's skills and where you live. Colo is much
cheaper than good (read: useful) managed hosting, but you either have to have
the money for a staff of 3+ local sysadmins so that someone can always be on
call, or have the skills to admin the boxes and swap hardware yourself (plus a
backup person if you ever want to go on vacation).

I'm assuming that you're in a position where you can't tolerate a few hours of
downtime in the middle of the night if something breaks.

I feel like for most companies with a tech staff smaller than about 8-10,
you're probably going to have all programmers/DBAs who may have some server
skills, but not the competencies to go chase down complex hardware issues at
2am. Past a certain size, real sysadmins will have plenty of work to do during
the 99.99% of the time when you're not down, so it's natural to switch over to
a colo situation.

There are also tax and accounting pro/cons to buying the hardware.

~~~
mjn
If you have more than one or two servers, you don't necessarily need to have
someone turn up at the colo at 2am. Like with The Cloud, the physical servers
just become a resource to be managed. If the discount for colo is enough, for
the price of N cloud servers you can have, say, N+2 in a colo (maybe even N x
2). Then you don't have to worry about fixing anything that breaks right away,
since you've got some spares that you can fix at your leisure.

~~~
showerst
That's definitely true, although I'd still be worried about system problems
further up the chain. (Load balancers, firewalls, etc). Hopefully even with
managed hosting you're still not at the mercy of any one hardware/software
failure.

Part of it just boils down to risk tolerance I think, and how comfy you are
with sysadmin skills.

I'm responsible for a medium-sized (< 10 servers) web site that we use managed
hosting for. Their advice/service is invaluable to me, partially because I
trust them enough that I can just hire web programmers without worrying about
whether their Linux/networking skills are super advanced. The host has also
helped us debug some fairly deep problems that we wouldn't have a chance at
in-house, like hypervisor config issues and even a processor 'errata', AKA a
processor hardware bug.

It's also nice to have an ops team to ping questions off of, since we're not
nearly large enough to hire a dedicated sysadmin otherwise (and they'd be
without sysadmin work 50% of the time anyway).

NB: I'm treating 'sysadmin' like it's just one skill set to administer routers
and server hardware, configure firewalls and networking, optimize database
boxes, etc. This is probably not totally true, but fits for purposes of
discussion. YMMV.

------
otterley
A few notes on power utilization:

My experience is that the 1-amp-per-rack-unit rule of thumb applies only if
every server in the cabinet is under full load (CPU and all spinning disks).
This is almost never the case, however: our average utilization is around 8
amps per phase on a 3-phase circuit. (Load limits are _per phase_ , not
total.)

You can also get 30-amp circuits in most data centers if you're concerned
about it.

Keep in mind, too, that your load is usually balanced across 2 PDUs (assuming
you're buying systems with redundant PSUs, which you should) and on 208-230V
power, which is more efficient than 120V because the same wattage is drawn at
lower current, so less is lost to resistance. If you configure your systems
correctly, the load will be shared across both PDUs under normal conditions.
That said, you'll still need to ensure you don't overload the remaining
circuit if redundancy is lost.

In summary, don't worry too much about overloading a 30A circuit; there are
plenty of full cabinets in a DC for a reason.
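
As a back-of-the-envelope check (every number below is an assumed example, not a quoted spec):

```python
# Sketch: does a cabinet's average draw fit one circuit, even after
# losing a redundant PDU? All figures are assumptions, not measurements.

CIRCUIT_AMPS = 30      # breaker rating of each circuit
DERATE = 0.8           # typical continuous-load derating -> 24A usable
VOLTS = 208            # single leg of a 3-phase feed

usable_amps = CIRCUIT_AMPS * DERATE

# Say 20 x 1U servers averaging ~150W each under real (not peak) load.
servers, avg_watts = 20, 150
total_amps = servers * avg_watts / VOLTS

print(f"draw: {total_amps:.1f}A of {usable_amps:.0f}A usable")

# With dual PDUs each circuit normally carries about half of this,
# but the survivor must carry all of it if one side fails:
assert total_amps <= usable_amps, "failover would overload one circuit"
```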

------
clamprecht
Mark Maunder wrote a good blog post on colocation -
[http://markmaunder.com/2011/10/31/clouded-
vision/](http://markmaunder.com/2011/10/31/clouded-vision/)

As for the California earthquake risk, one option is to colocate your servers
in Dallas or somewhere with no earthquake risk. The Maunder article talks
about why colocation doesn't tie you geographically to that place.

------
thinkcomp
I recently did the math with the help of a friend who uses Amazon AWS. I have
three primary servers in a data center and by running them myself versus
relying upon Amazon, I have saved my company about $5,000 per year, each year,
over six years.

~~~
aculver
Any estimate how many additional hours you spend maintaining these systems vs.
what you would if you hosted on AWS?

~~~
imbriaco
If it's just three servers, I'd be astonished if any meaningful amount of time
was spent maintaining the hardware after the initial installation. You have to
maintain the operating system in either case, so this is a clear win for
dedicated machines.

Everybody forgets the E part of EC2. If your workload is not elastic, or is
very small, it's incredibly likely that EC2 is not the most cost effective
solution.

~~~
mh-
also, seemingly everyone neglects to calculate these comparisons based on
EC2's reserved _(heavy utilization)_ pricing.

this brings the hard costs much closer. (but doesn't close the gap entirely.)
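
A toy comparison along those lines -- every price here is a made-up placeholder, not a real AWS or colo quote:

```python
# Three-year cost sketch: owned server in colo vs EC2 on-demand vs
# EC2 heavy-utilization reserved. All figures are assumptions.
years = 3
hours = 24 * 365 * years

server_capex = 4000.0    # one-time hardware purchase
colo_monthly = 100.0     # rack space + power + transit
colo_total = server_capex + colo_monthly * 12 * years

ondemand_hourly = 0.50
ec2_ondemand = ondemand_hourly * hours

reserved_upfront = 3000.0
reserved_hourly = 0.20
ec2_reserved = reserved_upfront + reserved_hourly * hours

for name, cost in [("colo", colo_total),
                   ("EC2 on-demand", ec2_ondemand),
                   ("EC2 reserved", ec2_reserved)]:
    print(f"{name:>14}: ${cost:,.0f} over {years} years")
```

With these placeholder numbers, reserved pricing closes most of the on-demand gap but still lands above owning the box.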

~~~
imbriaco
Indeed, and you can almost certainly buy a server for the cost of the
reservation.

Don't get me wrong, EC2 is an amazingly useful platform and it has been a game
changer for our industry in a lot of ways. Improved hosting economics isn't
generally one of them.

------
ChuckMcM
Something I find annoying is that data centers often figure that once they get
you in, you will be either unable or unwilling to move, so the 'renewal' can
be a lot larger than the original contract. Equinix tried to take us from
$108/kVA to $450/kVA on the renewal. I was like, "Really? You don't think
we'll move out with a >4x increase in the cost of staying?"

~~~
lsc
This is true of all commercial leases: if they think it's hard for you to
move, well, "market rate has gone up."

That's the thing, though: generally speaking, it costs a /lot/ to fill a
space. 10% of the total value of the lease is not unusual. So they try to get
you in with a low rate, then jack it up later.

Personally, I think a setup fee, then some sort of guaranteed low rate ongoing
(maybe tied to the local price of electricity?) would make more sense. It's
less costly for all involved if you sit at the same datacenter forever; it's
just that the way that these things are sold, the owner of the data center
wants to 'capture value' by raising your rates if it's hard to move.

The owner of the datacenter also wants to start you out with a low rate, for
the same reason; if they can get you hooked so that it's hard to move, well
hey, they might break even the first year, but after year two or three, they
are looking pretty good.

Personally? I find this really frustrating. All parties, when working to
maximize their own value, actually waste a whole lot of value for all
involved. It's an example of capitalism destroying some pretty significant
value.

You see this in almost all cases where switching service providers is
expensive (and, I think, it's one of the reasons why "the cloud" is so
appealing.)

Personally, I really like the idea of a 'condo' model for the data-center, and
for renting in general. Yeah, you'd end up paying $100K+ for your rack, but it
wouldn't take very long to make that up, and you'd end up owning part of the
organization that controls the whole datacenter.

I think the big problem with the condo model is that in regular condos,
usually you don't have individuals that own 100s of units, while that's not at
all unusual in a data center.

------
chx
Hrm, why don't people just rent dedicated servers? For less than $100 a month
these days you can get a 32GB server with SSDs. If your server dies, the
provider needs to replace it... Seems like all win to me.

~~~
xb95
Control over network configuration and hardware is one of the big reasons. I
hate getting bitten because my three new servers are halfway across the
facility and stuck with somebody else who is maxing out the uplink between our
shared top of rack switch and the core...

Edited: Also, your "$100/month" quote is pretty fanciful. Who is handing out
32GB with SSDs for that price? I'd really love to know, because the best I've
been able to get for a database quality machine (RAID-10 with BBU, SSDs, and
lots of RAM) is closer to 10x that.

~~~
bryans
I think the $100 price point might have been slightly hyperbolic, but you can
get 32GB of memory with 320GB of SSD space from DigitalOcean for $320 per
month.

~~~
e12e
While Digital Ocean is great, it isn't really a fit if you're considering
running on your own bare metal server (especially for things like an actually
loaded database server).

------
NDizzle
Bay Area colocation has been expensive since they started billing based on
power, rather than physical space. I had a good thing going in the 90s to
~2005ish with a long term contract I signed giving me a full cabinet on a
100mbit unmetered line for ~$600/mo. When that contract expired it promptly
jumped to ~$1500/mo.

I know that's not much space compared to the big boys, but it fit our needs at
the time pretty well.

If you're looking for colocation stuff these days, and you're not huge and
need to be in San Jose, I'd look at some of the spaces in Sacramento.

Other great spots around the country: Dallas, Chicago, Atlanta, and in
Virginia around MAE-East.

~~~
otterley
Sacramento is a great choice. We're super happy with RagingWire.

------
dcc1
"The place of choice is an awful forum called WebHosting Talk."

I do not get that quote.

I have been a WHT member for years and in that time found dozens of excellent
server and colocation providers.

~~~
virtualwhys
Agreed, but you do have to sift through a lot of noise; i.e., there's no
shortage of (maybe shady) providers pushing deal-of-the-month offers.

Then again, outside of WHT, where else can you find the colo provider gems?
Certainly the best rates will be found there in my experience; if you're
lucky, provider tech support will be excellent as well.

