
How to colocate your first server at a datacenter - kvmosx
http://blog.definedcode.com/colocation
======
detournement
I am so glad I live in a world where I will never have to do this again for
anything but large infrastructure projects. I've colocated servers at major
POPs like 111 8th Avenue in Manhattan, Equinix in Secaucus, and One Wilshire
in LA.

For smaller projects, the convenience of the cloud is absolutely worth the
price. For a larger build - say, over $10k a month in infrastructure cost -
the cloud starts to make less sense economically, but 'colocating your first
server' is not a rite of passage anymore - it's unnecessary and a huge waste
of time.

All of the functionality/services you have to provision yourself in colo -
redundancy, backup, remote hands, environmental monitoring, hardware
maintenance - are just not worth figuring out until there is substantial cost
savings to realize.

~~~
hosay123
> redundancy, backup, remote hands, environmental monitoring, hardware
> maintenance are just not worth figuring out until there is substantial cost
> savings to realize.

So you don't need to monitor temperature sensors any more with a VM, but most
of the above are still costs with cloud - flaky RAM, redundancy, backups,
monitoring, etc. There are also the things you previously didn't have to worry
about - crappy resource isolation turning your scratch disks into 2kb/sec
joys, total ineffectiveness of the CPU cache, managing a now-essential network
fabric to tie pieces of your app together where previously it all fit on 2
master/slave machines, etc.

Of course, if your application isn't simply some stock PHP/MySQL app, and you
want to really "embrace cloud", then the time you saved fighting a subset of
hardware problems is replaced by a fixed development cost integrating with
someone else's higher-tier APIs (S3, Dynamo, etc.) that you can then never
escape even if you wanted to.

I've never seen any realistic numbers comparing the use of traditional hosting
facilities, say, providing managed servers, to the new generation VM stuff.
Any material I've seen has been sponsored crap involving some multinational.

My own experience is similar to yours - hosting your own hardware is a pain in
the ass. However, there is a middle ground: many colos will happily provide
managed hardware, and perf-per-pence this still tends to be far cheaper than
the equivalent in VMs. Increasingly they're coming with similar APIs to
order/replace machines.

~~~
IgorPartola
From my limited experience, the cloud is always more expensive if you know
your exact usage requirements. If, for example, you know that six octo-core 16
GB RAM, 512 GB SSD-in-RAID1 servers would fit your needs from now until 10
years from now, you will do better to just rent them from SoftLayer, Hetzner,
etc.

However, if you anticipate growth, or need to be able to spin up a test
server, then shut it down a day later, etc., then you are better off paying a
premium for the cloud. Sure, there are economies of scale at play here: AWS
has so many servers, they are not paying a person to log into every one of
them every so often to run updates, etc. However, make no mistake: everything
you would have to do with a server, Amazon has to do too. In fact, they have
to do much more to keep all of them running at once. That cost will be passed
on to you.

Even with all of that, it's cheaper if you want to be able to spin something
up, then shut it down. Another great example is the additional services
provided by the likes of AWS: you can get things like load balancers, cache
servers, database servers, orchestration services, etc. You can do all of this
yourself, but at some point it's cheaper to just pay for something like ELB
than to learn how to do it yourself and spend the hours to set it up. Human
time is more expensive than that.

Lastly, if you just need a really small machine, there is no beating the
cloud. You simply cannot get a dedicated machine for $5/month, and you likely
never will.

~~~
maaarghk
You can get damn close!

[http://www.kimsufi.com/uk/](http://www.kimsufi.com/uk/)

I found OVH's dedicated server offerings to be so cheap that there was no
point in using shared hardware for the flexibility. Then again, I'm not
running my entire business on these boxes... but I don't think I'd have a
major issue if I wanted to!

~~~
yellowapple
Huh; that's actually a better deal in many cases than my existing virtual
servers. Thanks for that recommendation; I'll keep my eye on that.

~~~
maaarghk
The day I got my first ovh recommendation was a good day for my inner
accountant.

------
jlawer
Couple of bits of advice to anyone doing this:

Firstly, get your own IPs from your local RIR. Have your co-lo provider
publish your routes, but they will be YOUR IPs. If your co-lo provider sucks,
you can move and keep your IP space. (This is vital for email, but I recommend
it for everyone.)

Secondly, buy an Out-of-Band Management card with your server (iDRAC for Dell,
iLO for HP, etc.). These cost fairly little and will save you hours of access
/ remote hands. They will pay for themselves; you can even boot an ISO from
your laptop over the internet. Get your co-lo provider to give you an extra
uplink for this and give it a separate IP (use one of the provider's range).
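To make the out-of-band point concrete, here's a minimal sketch of driving a
BMC remotely with ipmitool over that dedicated uplink. The host and user are
placeholders; only standard `chassis power` subcommands are used, and
authentication details are deliberately left out:

```python
import shlex

def ipmi_cmd(host, user, action):
    """Build an ipmitool command line for a remote BMC over the lanplus
    interface. Password/credential handling is omitted here - in practice
    you'd add -P or an environment-based credential."""
    allowed = {
        "status": "chassis power status",
        "on": "chassis power on",
        "cycle": "chassis power cycle",
    }
    # shlex.quote guards against shell metacharacters in host/user
    return (f"ipmitool -I lanplus -H {shlex.quote(host)} "
            f"-U {shlex.quote(user)} {allowed[action]}")

# Example: check power state of a server via its management IP
print(ipmi_cmd("10.0.0.50", "admin", "status"))
```

The same pattern covers remote power-cycling a wedged box without a remote-hands
ticket, which is where these cards pay for themselves.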

Thirdly, consider Mission Critical support on the servers from a solid vendor
(in Australia I consider the enterprise vendors to be Dell, HP, IBM and Acer,
and of those I will only use Dell or HP). 4-hour response means you don't need
as much spare hardware, and you can have things fixed FAST. I have only lost 2
disks in a rack of servers over 4 years. Both had a replacement in place
within 4 hours (once at 1am).

Fourthly, look at a good virtualization solution. We initially went with oVirt
(the open-source version of Red Hat Enterprise Virtualization) but ended up
migrating to VMware. VMware Essentials Plus costs us $15K for 3 years at
extortionate Australian prices and is worth every cent. It provides backup
(vSphere Data Protection), failover, Virtual SAN, live migration and a heap of
useful features that save huge amounts of time.

Finally, if you're going to grow, consider getting a rack (or a half / third
of a rack). This will likely give you unescorted access to the data centre,
and is often not that much more than a few RU of servers (depending on the DC
and racking availability).

~~~
devicenull
Do _not_ put your IPMI controller on a public IP without any sort of access
controls in place. These controllers are pretty terrible security wise, and
it's not a good move.

~~~
justincormack
Indeed. See [http://fish2.com/ipmi/river.pdf](http://fish2.com/ipmi/river.pdf)

------
xeroxmalf
Also don't forget to check out the rented dedicated server market too. It
provides a good middle ground in cost/power/performance to the extremes of
cloud providers and colocation.

~~~
neurotixz
Yes, always compare with rented dedicated, and only go with colo if it makes
sense. I did this analysis last year and rented won big time. Never looked
back since. Main points in favor of rented dedicated servers:

- Much cheaper in my business case (5-10 TB/month per server, only a few
high-CPU requirements); most dedicated offers come with bandwidth included
- No need to take care of the hardware in case of failure (opening a ticket
versus managing the whole process)
- Easier to switch machines if needs change
- Scales faster (as quickly as 2-3 hours with decent automation tools and
virtualization)
- No switch, firewall, etc. (IPMI should always be behind a firewall, never
open on the Internet)
- The upfront cost to buy a server was high, and the monthly cost of dedicated
was actually lower; due to the lack of competition in the colo sector in my
area (Montreal, Canada), prices were (and still are) rising quickly, and it
was hard to predict the long-term costs

~~~
turnip1979
Maybe it is a Canada thing. I looked at some colos in the Toronto area a few
years back and found rental prices were about the same as bringing your own
hardware. Didn't make sense. I was just looking for a home for some server
hardware I have in my garage. Oh well.

~~~
neurotixz
Almost all the independent colo centers in Montreal have been bought out by
big players with the corporate world in their sights, not the small 1-2
servers market. So the prices skyrocketed.

The opposite is true for dedicated, since they have to compete with US-based
players, and OVH got into the East Coast market big time. Prices are dropping.

------
err4nt
Thanks for the article, I've seen that term floating around and never knew how
to get started. I've been itching to get into the server hosting business as a
side thing ever since renting my own KVM VPS.

I was blown away by the fact that I can sit there and watch it reboot over
screen sharing from my iPad. I treat it as a cloud desktop (runs the latest
vanilla Ubuntu) and so of course it was easy to get apache and PHP and Ruby
and a whole web server environment up. I do all my work on it, as well as my
play. I use Plex to stream myself media, and OwnCloud and other tools to
replace Dropbox and even deploy sites.

I want to sell people on the idea that it's easy to have a cloud desktop you
can access from anywhere, that can also be a web server (not selling web
servers that can also have a desktop). I want to sell people on the idea that
with freely available software, we can each have a private cloud with just our
data.

I'm not quite sure how to get started, and I'm not trying to make a killing
with profit, I just don't see people trying to make it simpler for the average
Joe to have a cloud desktop and not need to pay to use shared cloud services
which then become huge targets for data breaches.

~~~
Tenhundfeld
Not trying to dissuade you or imply this is exactly the same thing, but I
wanted to make sure you're aware of Amazon WorkSpaces
[[http://aws.amazon.com/workspaces/](http://aws.amazon.com/workspaces/)], "a
fully managed desktop computing service in the cloud."

------
EdSharkey
I colocate my little cluster of servers with Opus:Interactive in Portland, OR.
It's neat to look at the homebrew artists with their motley crew of machines
all colocated together in racks in the corner, followed by many homogeneous
racks filled with boxes from big name companies.

I don't prefer building powerful hardware. I prefer reliable and cheap to
build, maxing out the 4U of my rack. I like to think of my servers as "life
support for an internet-connected hard drive". My CPUs are fanless Intel
Atoms with 2GB of RAM, and I get Mini-ITX motherboards that can be powered by
a brick DC power supply. Ultimately, I'll move to flash drives so I'll have no
moving parts in my server, but I'm waiting for the price to come down and for
reliability to match spinning media.

~~~
techsupporter
I'd like to know more about how you do that. I just looked at Opus' site and
the 4U they advertise in Hillsboro is pretty good when compared to what I pay
for 1U of hobbyist colo in Seattle. Do you put multiple machines inside the 4U
with a switch in the same space? Can I be so bold as to ask for pictures? What
kind of physical access do you get? I have a 1U server-grade machine but
moving to a more flexible space would be nice, even if it does mean a train
trip to go see my boxen.

~~~
turnip1979
I've been interested in hobbyist colo for a while but the cheapest I've found
is over $100 a month. Is this in the ballpark of what you are paying or are
there better deals to be had?

~~~
techsupporter
The rate I'm paying no longer exists and the company I'm with got merged into
another one so my point of reference isn't so good. :)

That said, there are a couple of companies on Webhostingtalk that have good
prices for hobbyist colo if you look in their colocation forums and search for
"Seattle." Opus is nice ($129 for 4U and 3A of power with 400GB of transit)
and there is a company in Seattle--their name escapes me but I've seen them on
WHT--that is $35 for 1U and 1A with 500GB of transit.

So, yes, I think you can do better especially if you just want space for a
medium-usage 1U.

~~~
EdSharkey
Opus is nice, I like to support them, always very friendly and understanding
with my noobishness. When I first signed up, I didn't understand how to mount
my case even. There were these brass grommets that were in the toolchest off
in the corner nobody told me about that needed to be snapped into the rack
uprights, and that's what you screw your case into. The tech was totally nice
about it when I asked and he even helped me lift the case into place while I
screwed it in since I was by myself.

------
WettowelReactor
While the article stresses the importance of understanding your power
requirements on the server side, you also need to be very cognizant of the
power restrictions in your colo contract. Oftentimes power draw will be your
biggest limiting factor, and it is very easy to buy a rack and fill it with
equipment that surpasses your max allotted power draw. Depending on your host,
your only solution may be to buy additional racks to spread out your load,
even though you don't need the physical space.

Another power gotcha comes with the redundant circuits provided. For example,
if you are allocated 15 amps total, that usually means total across both
circuits, not 15 amps per circuit.
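A quick back-of-envelope sketch of that gotcha. The 80% derate is a common
rule of thumb for continuous breaker loading, used here as an illustrative
assumption rather than a spec for any particular facility:

```python
def power_budget(allocated_total_amps, derate=0.8):
    """Return (per-feed draw in normal operation, safe continuous total).

    "15A total" on redundant A/B feeds usually means 15A across BOTH
    circuits combined, not per circuit. The derate reflects the common
    practice of not running breakers at their full nominal rating.
    """
    # Under normal operation the load is split across both feeds...
    per_feed_normal = allocated_total_amps / 2
    # ...but if one feed fails, the survivor carries everything, so the
    # whole allocation (derated) is the real continuous budget.
    safe_continuous = allocated_total_amps * derate
    return per_feed_normal, safe_continuous

print(power_budget(15))  # (7.5, 12.0): ~7.5A per feed, 12A safe total
```

In other words, a "15A" redundant allocation leaves you roughly 12A of usable
continuous draw, split across two feeds, not 30A.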

------
timthorn
The article talks about 1U being a good form factor, but one thing to bear in
mind is that cooling fans tend to work harder in a 1U chassis than a 2U, hence
drawing more power.

~~~
jlawer
I would also add that in most DCs you're never going to have the power
capacity to have racks filled with 1RU servers, so 2RU servers are typically
not much more expensive to rack than their slimmer brethren.

------
tacticus
And no mention of IPv6 at all. At the very least it gives you much easier
management of private networks (yay, real IP addresses), and there is more and
more traffic coming over it (think mobile phones, etc.).

------
csbrooks
What are the advantages of doing this over just paying for computing power on
AWS or some other cloud service? Feels like it would be a very small niche at
this point.

~~~
abc123xyz
cost! Cost!! COST!!!

I colocate 5x 4U servers with 24 and 36 drive bays, 128GB RAM, and SSDs
squeezed in for the OS, for a total usable space (multiple RAID 5 volumes, one
per 6 disks) of 375 TB for my project.

Power and 2-3 Gbit of bandwidth is included, as well as remote hands to check
up on a server (for example, IPMI sometimes does not work) or replace drives
(paying extra for enterprise-grade drives will save you a lot of hassle in the
long term!)

for .... €1500 / month

now go and calculate the cost of that on AWS
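For the curious, here's a rough back-of-envelope version of that calculation.
The storage and egress rates below are illustrative list-price assumptions
(not quotes); check current pricing before relying on any of this:

```python
# Illustrative AWS-style rates - ASSUMPTIONS, not current prices.
S3_PER_GB_MONTH = 0.023   # assumed object-storage rate, $/GB-month
EGRESS_PER_GB = 0.09      # assumed internet-egress rate, $/GB

def monthly_aws_estimate(usable_tb, egress_mbit_s):
    """Rough monthly storage + egress cost at the assumed rates."""
    storage = usable_tb * 1000 * S3_PER_GB_MONTH
    # sustained Mbit/s -> GB transferred in a 30-day month
    egress_gb = egress_mbit_s / 8 / 1000 * 3600 * 24 * 30
    return storage + egress_gb * EGRESS_PER_GB

# 375 TB usable, ~2200 Mbit/s sustained outgoing (the figure the same
# commenter gives below), vs roughly EUR 1500/month for the colo setup:
print(round(monthly_aws_estimate(375, 2200)))  # ~72777 USD/month
```

Even at these coarse assumptions the egress alone dwarfs the colo bill, which
is the commenter's order-of-magnitude point.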

~~~
davb
I absolutely agree _in the long term_, but can you say a little about how
much those servers cost initially?

I tend to favour dedicated servers which I own over VPS but for small
businesses with extremely constrained budgets ("prove we can make money before
we invest in hardware") and startups, flexible virtual servers can be a great
way to ramp up in the beginning.

~~~
abc123xyz
About $10,000 per server. The equipment (very heavy) was shipped from the US,
so there were import duties, but it worked out a little bit cheaper than
buying in the EU in the end, and I had other reasons to shop in the US.

Anyway, it's a one-off cost; the accountant can do all sorts of magic with
this.

I've been in business for 7 years and hope to remain around for as long.

I have already saved money compared to renting dedicated servers before. AWS
etc. were never an option; the sums simply never work out.

AWS might be great at first when you are starting out, but the costs can
cripple a business, especially if you don't have other people's venture
capital to burn.

edit: when bandwidth costs are factored in, the difference is an order of
magnitude between my current setup and Amazon; at the time of this post I am
using 2200 Mbit outgoing, 800 incoming.

edit2: my only regret is not colocating earlier; I have spent well into the
upper 6 figures over all the years :( on renting. AWS etc. weren't around when
I was starting off either.

------
lawncheer
Another tip: be cognizant of the difference between sustained amperage draw
and spikes; when a server starts up, it can spike to 1.5+ amps. I've seen many
circuits trip because, while the draw should have been in an acceptable range,
spinning up a number of servers at the same time was too much.
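A sketch of that failure mode. The 1.5+ amp inrush figure comes from the
comment above; the breaker rating and derate are illustrative assumptions:

```python
def startup_ok(n_servers, spike_amps=1.5, breaker_amps=15, derate=0.8):
    """True if n_servers can power on at once without tripping the breaker.

    spike_amps is the per-server inrush draw at power-on (the 1.5+ amp
    figure mentioned above); the derate reflects not running a breaker at
    its full nominal rating. All numbers here are illustrative.
    """
    return n_servers * spike_amps <= breaker_amps * derate

# Ten servers idling at ~0.6A each draw only ~6A sustained - comfortably
# within budget - yet powering all ten on at once spikes to 15A:
print(startup_ok(8), startup_ok(10))  # True False - stagger the power-on
```

The usual fix is staggered power-on (most BMCs and PDUs support a boot delay)
so the inrush spikes never overlap.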

------
preinheimer
I blogged about my own experience with this here:
[http://blog.preinheimer.com/index.php?/archives/413-Buying-a-Zoo-server.html](http://blog.preinheimer.com/index.php?/archives/413-Buying-a-Zoo-server.html)

------
dsplatonov
Nice article, thanks, but does it apply to all types of datacenters?

~~~
caw
I can't see a type of datacenter it doesn't apply to. You pay for transit,
power draw, and to a lesser extent rack units. If you get a rack or a half
rack with your expected power draw and want to spill past that, they'll be
charging you extra because that's space they can't sell to someone else. Some
datacenters also provide cages, so your hardware is physically separated from
other people. That'll cost extra too.

The only thing I didn't see a mention of is DC power, whereas the
out-of-the-box power supplies on most OEM equipment are for AC. Most server
supplies nowadays should be able to handle 240V, 208V, and 120V AC on the same
unit. When you go DC, you want to consider buying a separate AC power supply
for setting up the server in your office (unless you drop-ship it to the
colo).

Make sure you get a very efficient power supply too, because even if you buy
the most power-miserly server on the planet, an inefficient power supply will
increase the draw significantly. You also want to right-size the power supply,
because drawing too little power lowers the efficiency (there's an efficiency
curve available for most PSUs that are rated).
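The right-sizing point can be sketched numerically. The efficiency figures
below are an invented illustrative curve, not data for any specific unit
(real PSUs publish theirs, e.g. as part of 80 Plus ratings):

```python
# Assumed efficiency curve: load fraction -> efficiency. Illustrative only;
# real PSUs peak around 50% load and fall off sharply at the low end.
CURVE = {0.1: 0.70, 0.2: 0.82, 0.5: 0.92, 1.0: 0.87}

def wall_watts(dc_load_w, psu_rating_w):
    """AC draw at the wall for a given DC load, using the nearest point
    on the assumed efficiency curve."""
    frac = dc_load_w / psu_rating_w
    nearest = min(CURVE, key=lambda f: abs(f - frac))
    return dc_load_w / CURVE[nearest]

# A 60W server: right-sized 120W PSU vs an oversized 600W PSU.
print(round(wall_watts(60, 120), 1))  # 65.2W - near the 50% sweet spot
print(round(wall_watts(60, 600), 1))  # 85.7W - 10% load, poor efficiency
```

Same server, same load, roughly 20W more at the wall just from an oversized
supply - which adds up across a rack on a metered power contract.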

------
tempodox
How is it a “colocation” if it is your “first server”? In the age of on-line
dictionaries no less?

~~~
jqueryin
A colocation center is a type of data center where equipment, space, and
bandwidth are available for rental to retail customers. Colocation facilities
provide space, power, cooling, and physical security for the server, storage,
and networking equipment of other firms—and connect them to a variety of
telecommunications and network service providers—with a minimum of cost and
complexity.

Colocation by no means indicates you have multiple servers that need housing
nowadays.

