
Stack Exchange’s Colocation Move: Lessons Learned - revorad
http://blog.serverfault.com/2013/02/21/stack-exchanges-colocation-move-8-lessons-learned/
======
jedberg
I can definitely relate to the last part:

> It might make us weirdoes, but when cabling looks this neat it is a sexy and
> sleek piece of art.

I always tell people that server cabling is as much art as it is science. When I
was early in my career, I had some great mentors in this respect, and now when
I see a well cabled rack, it really speaks to me.

Here is the reddit server rack just before we tore it down:
<http://imgur.com/DlaX4>

~~~
barredo
And here's an entire subreddit <http://www.reddit.com/r/cableporn/top/>

~~~
jedberg
Oh no, I didn't know about this reddit and now I do. Welp, there goes my
productivity for the day!

------
ChuckMcM
At Blekko we ended up moving 600 servers from one co-location facility over to
a neighboring town about 5 miles away. It is a _lot_ of work, and it's a lot of
coordination. When I was at Google I thought they were just being arrogant by
designing their own racks, building them, and shipping them in pre-built
"chunks" to the data center. I was very wrong about that; it saves a huge
amount of time.

I look forward to a time when a colo facility says something like "We can
lease you 50 OpenCompute 2.0 slots for $3,000 a month" and know that I can
just populate the hardware, plug my switches into the structured wiring
solution and be done.

Not sure if colos will last that long, though :-)

------
hhw
On lesson 2, power is always the biggest concern when it comes to colo. It may
not necessarily be that the facility is attempting to nickel-and-dime you
(although that is often also the case); they need to cap power density at a
certain level to ensure there is enough power left for the rest of the
facility and that they can provide adequate cooling. Pricing is also highly
dependent on how much available space there is in a facility: the best deals
are to be had when a facility is brand new and needs to fill the space,
whereas a facility that's almost full may stick to full list price and be
content to have you walk if you don't want to pay those rates.

On lesson 6, when the hosting provider does own the building, chances are
there will be fewer bandwidth options available within the building unless the
provider specifically offers bandwidth-neutral services. That may not be a
concern if you're happy just using the provider's own bandwidth mix, but if
you're pushing a significant amount of traffic, or need higher-quality
transit, you are much better off shopping around for different bandwidth
options.

~~~
KyleBrandt
On 2: Many of the providers we looked at did say it was cooling. They would
give higher power to single racks, but then they would make us take up more
floor space. So really, floor space is often how they express their cooling
capacity.
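
To make that trade concrete, here is a rough sketch of how a per-square-foot
cooling budget turns a dense rack into extra floor space. Every number in it
is an assumption for illustration; none of them come from this thread.

    # Illustrative sketch: how a facility's cooling budget maps rack power to
    # floor space. All numbers are made-up assumptions, not from this thread.
    cooling_budget_w_per_sqft = 150   # assumed cooling budget per square foot
    rack_footprint_sqft = 25          # assumed footprint per cabinet, aisles included
    rack_power_w = 10_000             # the draw you want in a single cabinet

    required_sqft = rack_power_w / cooling_budget_w_per_sqft   # ~67 sq ft
    footprints = required_sqft / rack_footprint_sqft           # ~2.7 cabinet footprints
    print(f"{required_sqft:.0f} sq ft, about {footprints:.1f} cabinet footprints")

Under those assumptions, one 10 kW cabinet "costs" nearly three cabinets'
worth of floor, which is exactly the higher-power-means-more-space trade the
providers were describing.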

On 6: That is a very good point. We ended up going to the Google building,
which has a lot of transit options. Transit is definitely a factor.

------
Corrado
I think the biggest lesson learned here is to not co-locate in the first
place. Almost everything they encountered could have been avoided by moving to
a "cloud" provider (AWS, Rackspace, etc.).

While I get a kick out of a well-wired rack, I think it's a waste of time to
do work you shouldn't be doing in the first place. And how much time did it
take to do all the experimentation with different rack combinations?
Purchasing the components, installing them, and then sending them back when
they didn't fit. Those things don't matter and don't move your company
forward, unless you're in the rack management business (like Rackspace or
AWS).

~~~
kmontrose
You give up a lot moving to a cloud solution.

<http://blog.serverfault.com/2011/03/23/performance-tuning-intel-nics/>
is my go-to "we couldn't do things like this in the cloud" example.

Not to say the cloud is never the right choice; it's just never made sense for
us.

~~~
runarb
On the other hand, many are glad they don’t have to handle the performance
tuning of NICs themselves…

~~~
druiid
But... I also highly doubt most cloud providers are bothering to tune NICs.
They are probably just throwing KVM/Xen up on some standard hardware (whatever
is cheapest with the highest memory density) and opening the machine up for
access. There are going to be parts they tune as a default stack, but that
far? I somehow doubt it.

------
grogers
Is it really worth getting colo space for just one rack of servers? To me this
seems to be well below the threshold where owning your own hardware makes
sense.

~~~
borlak
I worked at a place where the hardware in one rack easily exceeded $150k. We
started saving money vs. a cloud solution in about 6 months. You can pack a
lot of firepower in a virtualized environment.
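
As a rough sanity check on that break-even claim: the $150k capex and the
~6-month figure come from the comment above, but the monthly colo and cloud
costs below are purely illustrative assumptions.

    # Break-even sketch. Only the $150k hardware figure and the ~6-month claim
    # come from the comment; the monthly costs are illustrative assumptions.
    hardware_capex = 150_000   # one-time server purchase (from the comment)
    colo_monthly = 5_000       # assumed: rack space, power, remote hands
    cloud_monthly = 30_000     # assumed: equivalent instances, storage, egress

    monthly_savings = cloud_monthly - colo_monthly
    breakeven_months = hardware_capex / monthly_savings
    print(f"Break-even after about {breakeven_months:.0f} months")   # ~6 months

The general point is that once the monthly cloud bill for equivalent capacity
is a multiple of the colo bill, the hardware pays for itself quickly.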

~~~
InclinedPlane
When you own the hardware you can also get a tax break on hardware
depreciation.

------
jrochkind1
> Our biggest error with this was not making one person ultimately responsible
> for the physical design. Choices need to be made and not everyone’s ideas
> can be reconciled with each other and the constraints of reality. For a
> holistic design, eventually someone has to reconcile reality with what
> everyone wants or you end up with a bunch of individually well thought out
> pieces that don’t fit together (as well as a bit of frustration.)

That doesn't just apply to server room design. It applies to just about ANY
design. Including software.

