
Ask HN: Cloud is too expensive, how practical is U.S. based colocation? - andrewstuart
I want to run a very fast server and the cloud is simply too expensive for both bandwidth and machine costs.

So to minimise cost it looks like I need to buy my own machine and colocate it.

And colocation in Australia, where I live, is ridiculously expensive.

So I wonder how practical it would be to colocate machines that I own in the U.S., with the big question being: how can I get someone to install the hardware?
======
wahern
Colocation without being physically nearby can be costly and a logistical
nightmare. So-called "remote hands" can cost upwards of $100/hour, though
initial installation might be included. Nobody wants to be in the position of
helping someone fix their broken hardware (or what they think is broken
hardware) as it's an absolute time sink and a sure way for the customer
relationship to sour when things go sideways.

If you're doing this remotely, you need to send at least one backup machine,
which likely means paying 2x. When things can't be fixed over IPMI or the BMC
(presuming you trust keeping those connected to a network, internal or
otherwise), just cross-ship a replacement for the broken one. But be sure to
keep at least one backup on site, as logistical problems can quickly crop
up--and that's assuming you're prepared to respond immediately and remain
engaged.

Regarding "very fast server", note that the biggest expense to colocation
isn't space but power. There are lots of smaller colocation providers who
would be happy to sell 4U of space rather cheaply, but you won't be able to
power 4 x Xeon E5 or E7 machines. You'd be lucky to power just one.

I've been running several Xeon E3-1230s (v2 and v3) which top out at about 65W
or less--65W for the v3 and 55W for the v2, IIRC. That's total--the peak power
draw under load at the power outlet is less than the published TDP for the CPU
alone!

I recently built a Supermicro EPYC 3201. AMD publishes 30W as the EPYC 3201's
TDP, but peak draw under load at the outlet is 45W. The EPYC 3201 compiled
Ubuntu's GCC package 11 minutes slower than my E3-1230v3 (219m vs 208m). The
performance isn't surprising, but I'm a little disappointed at the 45W power
draw. The E3s spoiled me. I'm still eager to switch to the EPYC simply because
of the Spectre fiasco. It's why I got the EPYC 3201, which is 8 cores without
SMT, instead of the 3251 with 8 cores and 16 threads.[1] Plus, without load
the EPYC box only draws about 20W, IIRC, which is nice even if irrelevant to
my immediate costs.

These types of machines are probably not as powerful as you had in mind. But a
colo provider may not let you plug in multiple servers whose combined draw
could exceed your power budget, even if you promise not to run all of them
under load simultaneously. Once you have multiple machines, power becomes a
significant constraint.
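
To put rough numbers on that, here's a sketch of a power-budget check. The
circuit size (a 120V/15A feed derated to 80%) and the small per-server
allowance are illustrative assumptions, not quotes from any provider; the
per-box draws are the at-the-outlet peaks mentioned above.

```python
# Rough power-budget check for a shared colo cabinet.
# Assumed values (hypothetical): a 120 V, 15 A circuit derated to 80%.
CIRCUIT_WATTS = 120 * 15 * 0.8  # 1440 W usable on a full circuit

# Peak draw at the outlet under load, in watts (figures from above).
servers = {
    "E3-1230v3": 65,
    "E3-1230v2": 55,
    "EPYC 3201": 45,
}

total = sum(servers.values())
print(f"total peak draw: {total} W of {CIRCUIT_WATTS:.0f} W on a full circuit")

# Many small colo deals cap you far below a full circuit, e.g. a couple
# of amps per customer (hypothetical allowance):
cap_watts = 120 * 2
print(f"fits a 2 A allowance: {total <= cap_watts}")
```

The point is that three modest machines fit easily, but a single 4U of
dual-socket Xeon E5/E7 boxes would not.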

[1] The TDP for the 3251 is 35W. The 3201 clocks memory at 2133MHz, but the
3251 clocks memory at 2666MHz. I think the extra 5W comes more from the
increased memory speed than the SMT. The DIMMs themselves would also draw more
power, so I would expect peak power draw at the outlet for the 3251 to be
greater than 50W (45W + (5W TDP difference)), closer to 55W or even higher.
(My E3-1230v3s clock memory at 1600MHz, which makes me think DIMMs are
actually a significant power draw. I think the v2s use 1333MHz, which may
explain the ~10W lower power draw--Intel also changed the voltage regulator
design between v2 and v3.)
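
The footnote's extrapolation, written out as arithmetic (all figures
approximate; the 5W DIMM overhead is a guess, as stated above):

```python
# Extrapolating the EPYC 3251's peak outlet draw from the 3201 measurement.
epyc3201_outlet = 45   # measured peak at the outlet, W
tdp_delta = 35 - 30    # 3251 TDP minus 3201 TDP, W
dimm_overhead = 5      # assumed extra draw from 2666 MHz DIMMs, W (a guess)

estimate_low = epyc3201_outlet + tdp_delta                  # 50 W
estimate_high = epyc3201_outlet + tdp_delta + dimm_overhead  # 55 W
print(f"3251 peak outlet draw estimate: {estimate_low}-{estimate_high} W")
```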

------
simplecto
If you are this cost-sensitive and still demand bang for the buck, then you
really need to look at dedicated server rental. Hetzner is great (not in the
US), and there are some budget providers in Miami. You might also look at
webhostingtalk.com

------
billconan
The cheapest option I found was this, for $400/month. I want to see if there
is anything cheaper.

[http://he.net/Colocation-in-Fremont-CA/?l=Colocation_in_Frem...](http://he.net/Colocation-in-Fremont-CA/?l=Colocation_in_Fremont&a=26570270233&n=g&pos=1t2&p=&t=&m=b&k=colocation%20fremont&gclid=Cj0KCQjwyerpBRD9ARIsAH-ITn8zYOVMcYDNjUbinkS35pedUXg1mReWlSMXHQK8p2x8CB6mpDf_YKsaAl2XEALw_wcB)

Linode seems to use this colo too.

At this point colo is more expensive for me, so I'm using Google Cloud.

~~~
wahern
Historically HE was rather expensive, as they focused on selling bulk
colocation to people who partitioned their cabinets and resold downstream.
Those resellers are the providers you normally need to find, but the easier
they are to find, the more expensive they tend to be. For years I colocated
with prgmr.com, but it seems they've stopped offering colocation--managing a
bunch of single-machine colocation customers can be a headache. I've used 2 or
3 others before prgmr.com but have forgotten which ones.

The $400 deal looks incredibly good. They don't specify how much power you
get, though. (Maybe it's mentioned elsewhere?) Power is usually the most
expensive part of colocation, excluding "remote hands" hourly rates.
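
A back-of-the-envelope way to see why power dominates, assuming an
illustrative metered rate of $0.15/kWh (colos often bill per amp or with a
markup, so treat this as a floor, not a quote; the wattages are the figures
from earlier in the thread):

```python
# Monthly energy cost from measured wall draw, at an assumed rate.
RATE_PER_KWH = 0.15  # illustrative $/kWh, not any provider's price

def monthly_cost(watts: float, hours: float = 24 * 30) -> float:
    """Cost of running a constant draw for a 30-day month."""
    kwh = watts * hours / 1000
    return kwh * RATE_PER_KWH

for label, w in [("EPYC 3201 idle", 20),
                 ("EPYC 3201 load", 45),
                 ("E3-1230v3 load", 65)]:
    print(f"{label}: ${monthly_cost(w):.2f}/month")
```

Even at a bare utility rate the power bill for a handful of low-TDP boxes is
small; the colo's markup on that power, plus space and remote hands, is where
the real cost lives.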

------
verdverm
Is the cost more than paying someone to maintain the physical system? Does
downtime due to hardware issues cost you revenue and customers?

