
Building Servers for Fun and Prof... OK, Maybe Just for Fun - bussetta
http://www.codinghorror.com/blog/2012/10/building-servers-for-fun-and-prof-ok-maybe-just-for-fun.html
======
abtinf
That article is not just comparing apples to oranges; with that default AWS
setup, he is comparing apples to a herd of donkeys.

The correct comparison is his server vs a single EC2 High-Memory Double Extra
Large instance with a 3 year heavy-utilization reservation. This instance
costs $3100 upfront plus $0.14/hour. The total 3 year cost for this server on
AWS would be 3100 + (.14 * 24 * 365 * 3) = 6779.2, or about $188.31 per month.

Sure, it's more expensive. But AWS provides an insane amount of value on top
of the server, like instantly being able to provision additional capacity. I
wouldn't be at all surprised if, on a fully loaded cost basis, it is extremely
competitive with building his server. Heck, the employee salary expense of
building your own server will easily drive its cost well beyond the $3100
upfront Amazon fee.

I love building hardware too (never had a computer I didn't build except for
laptops). But my mind boggles at AWS's value proposition.

~~~
codinghorror
Wait, so you can basically buy a server at AWS EC2 for a term of 3 years?
That's what this heavy utilization reservation sounds an awful lot like.

~~~
twelve45
Yeah, for the closest possible comparison in your scenario, you want a 3-year
reservation with heavy utilization.

When you reserve an instance, you are committing to a higher upfront payment
in order to get a lower hourly rate for the reservation period. The
low/medium/high utilization level is sort of a knob that lets you further
control this upfront vs. hourly tradeoff. With high utilization you have the
highest upfront cost, but also the lowest hourly rate. If you plan to run the
server 24x7, this will also give you the lowest total cost.
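
To make the tradeoff concrete, here's a minimal sketch in Python that
reproduces abtinf's arithmetic above; only the "high" tier numbers come from
that comment, and the other tiers' prices are made-up placeholders, not
Amazon's actual rates:

    # 3-year reserved-instance cost model. Only the "high" numbers
    # come from abtinf's comment; the rest are placeholders to
    # illustrate the upfront-vs-hourly knob.
    HOURS_PER_YEAR = 24 * 365

    def total_cost(upfront, hourly, years=3):
        """Upfront fee plus hourly charges for the full term."""
        return upfront + hourly * HOURS_PER_YEAR * years

    tiers = {
        "high (abtinf's numbers)": (3100, 0.14),
        "medium (placeholder)": (2000, 0.20),
        "low (placeholder)": (1000, 0.28),
    }
    for name, (upfront, hourly) in tiers.items():
        t = total_cost(upfront, hourly)
        print(f"{name}: ${t:.2f} total, ${t / 36:.2f}/month")
    # high tier -> $6779.20 total, ~$188.31/month, matching abtinf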

------
tobias3
I like to do this myself, but only for home servers. I would never do it for
a "serious" application. There are several reasons, the most important one
being that this is a perfect example where outsourcing the work is cheaper
(economies of scale), more reliable (parts are tested better), and less risky
(companies that do this at larger scale have better risk management). If one
of the SSDs dies, he has to drive there and swap it out, instead of one
person simply doing that for the whole data center. This is simply
inefficient.

And of course Amazon hosting is more expensive. It is more flexible; you can
spin up instances at your whim. You pay for that. It would be better to
compare it with standard dedicated server hosting.

~~~
peterwwillis
Have you worked with "professional support" in a colo before?

When a hard drive dies under vendor support, one guy from the vendor drives
out there to replace it. Half the time they don't test it after it's replaced
unless you request it. Sometimes the parts are duds. Sometimes they bring the
wrong part, or it doesn't fix whatever was broken on the server.

If it's the datacenter's remote hands, it can be hours, and on rare occasions
days, before somebody starts working on your issue or even answers the phone
or e-mail. The same issues come up, and you have to supply your own spare
hardware for the remote hands to use.

It varies a lot based on the datacenter, the vendor(s), and other factors.
There's no guarantee that outsourcing will be reliable. You have to find the
best one possible and build a good relationship.

~~~
codinghorror
Having done this twice before, I _strongly_, STRONGLY recommend picking a
colocation center that's less than an hour's drive from someone who works for
the company. Or you.

Just in case. It does come up from time to time, and if the facility is many
hours away (or, worst of all, an airplane ride) you will face the hours/days
repair windows you noted.

~~~
peterwwillis
The shop I worked with, we had support on the machines, so Dell would send a
tech out to replace a part for the servers that had valid support contracts.
We still needed someone on the phone to remote-in to test the fix, of course.

For more complicated problems, our main colo was about 45 mins away from
almost all the SAs. We used remote hands for colos that were an airplane ride
away, and had varying (read: sometimes really shitty) results.

------
rdw
Where's he colocating those servers? Last time I dabbled with colocation, the
bandwidth costs per server were by far the dominant cost, and I found it
difficult to get a good deal at all for small quantities of servers.

Looking at he.net at the moment, I see they have a deal for $1/Mbps.
Presumably someone like Jeff Atwood can get twice as good a deal as that, so
he'd pay around $500/month for bandwidth for those servers. Going by the cheat
sheet ([https://blog.cloudvertical.com/2012/10/aws-cost-cheat-
sheet-...](https://blog.cloudvertical.com/2012/10/aws-cost-cheat-sheet-2/)),
that is within a factor of 2 of a yearly-reserved h1.4xlarge ($2263/month at
~47% savings, about $1199). It's almost equal to the three-year reserved
machine, where you pay roughly 30% ($2263 * 0.30 = $678).

Edit: He probably only needs 1 Gbps for all 4 machines, driving his bandwidth
costs down by a factor of 4, but we could start to take power/space/cooling
costs into account at that point.
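
For anyone double-checking the arithmetic, here it is as a quick sketch (the
halved commit rate is my assumption, taken from the paragraph above, not a
quoted price):

    # rdw's bandwidth-vs-reserved-instance comparison.
    def monthly_bandwidth(mbps, dollars_per_mbps):
        """Monthly cost for a bandwidth commit at a per-Mbps rate."""
        return mbps * dollars_per_mbps

    # 1 Gbps at half of he.net's advertised $1/Mbps deal:
    print(monthly_bandwidth(1000, 0.50))   # -> $500/month

    on_demand = 2263                       # h1.4xlarge, per the cheat sheet
    print(on_demand * (1 - 0.47))          # 1-year reserved, ~47% savings -> ~$1199
    print(on_demand * 0.30)                # 3-year reserved, pay ~30% -> ~$679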

So is he getting a better deal than that? I'd love to know where.

~~~
codinghorror
A lot of colocation centers offer "unlimited" bandwidth, though I strongly
suspect that may depend on whether you're running a porn site / megaupload
clone or not. :)

Fortunately bandwidth is one thing that has gotten substantially cheaper over
time:

[http://www.codinghorror.com/blog/2007/02/the-economics-of-
ba...](http://www.codinghorror.com/blog/2007/02/the-economics-of-
bandwidth.html)

~~~
rdw
Dangit, Jeff, you didn't answer my question. Share your secret colocation
deals with me! :)

Power is a big cost as well. To quote the prgmr colo: "Please note;
power, generally speaking, is a bigger deal than rack units. I'm more likely
to let you slide on an extra rack unit than on extra watts; watts cost me real
money."

~~~
jaequery
Check out fdcservers.net; their prices are almost mind-boggling for unlimited
bandwidth. They now even offer unlimited 10 Gbps in some areas.

------
mdgrech23
Hardware is cheap, programmers are expensive. System admins even more so. For
this reason I feel like cloud hosting providers are the way to go for
bootstrappers or startups. Once you get big you can do what FB did and build
your own datacenter. However, the enjoyment and power that come from having
full control over your entire environment should never be overlooked.

~~~
Muzza
> System admins even more so.

Really? Honest question.

~~~
thaumaturgy
_Good_ sysadmins, yes. Keep in mind that the ideal sysadmin is someone who
knows the guts of BSD or Linux (typically, not both) inside and out; can
troubleshoot the strangest of problems very quickly; is constantly measuring
and improving performance; and can hack together shell scripts or Python or
even C code as needed.

Linux and BSD are hellaciously complex and prone to very strange behavior once
you start taking them into high-performance land.

If you're OK settling for "can read syslog and look stuff up on the web",
those guys are cheap.

~~~
sbov
Note that this doesn't differentiate between colocation and something like EC2,
though - both benefit from this kind of good sysadmin.

~~~
thaumaturgy
Hmm. I have to think about that a bit. On the face of it, EC2 and colocation
require completely different skill sets, and those skill sets don't overlap
very much.

Being a good Linux sysadmin might be fundamentally harder than being a good
EC2 sysadmin; I'm not honestly familiar enough with EC2 to know.

I would point out though that Amazon has a vested interest in making EC2 less
hard, so I would be surprised if the general opinion was that EC2
administration was just as hard as Linux administration.

~~~
BryantD
EC2 instances are running some OS, whether it's Linux or BSD or Windows or
whatever. The only thing that doesn't overlap is hardware tuning and
maintenance, and even there you should be using the same tools to figure out
if you've got a hardware bottleneck on your EC2 instance as you'd use to
evaluate performance on a standalone box.

------
thaumaturgy
maciej from pinboard has written some useful stuff on this too:

\- The five stages of hosting:
<http://blog.pinboard.in/2012/01/the_five_stages_of_hosting/>

\- Building servers: <http://blog.pinboard.in/2012/05/a_cloud_of_my_own/>

\- Going colo: <http://blog.pinboard.in/2012/06/going_colo/>

A bunch of people are probably going to respond to Jeff's article by saying
things like, "But VPS hosting means you get other people to deal with problems
for you", but in reality all that means is that you're at maciej's "monastery"
or "dorm room" stage of hosting, and your needs haven't yet driven you to get
"the apartment".

~~~
wiredfool
I love maciej's footnote:

"""* What is it with these aggro facility names? Rather than Hurricane
Electric or Raging Wire, I would much prefer to host with "Calm Multihomed
Oasis" or "Granite Mountain" or " Cooling Breezes Pure Sine Wave Mount
Bitmore". """

------
JoeCortopassi
_"...you don't need the redundancy, geographical backup, and flexibility that
comes with cloud virtualization"_

This is perhaps the single most glossed over topic in the entire article. If I
am a 1-5 person shop, maintaining a web app, virtualized hosting pays huge
dividends in that I _don't even notice_ if a hard drive or motherboard takes a
dump. There are additional costs that come with the benefit of being
abstracted away from hardware failure or geographic problems (building fire,
power outage, etc.), and that's something that every business has to evaluate for
itself.

~~~
codinghorror
Well, you'd always colocate enough servers so that you can lose at least one
machine without caring, e.g. HAProxy in front of 2 web tier machines on the
back end. HAProxy will fail over to just one server, no problem.

(And yes, you can use heartbeat so you have two cheap physical HAProxy
machines, too. This gets into sub-blade territory, where the 1U server is
internally two or more complete low-ish power servers with independent power
supplies, etc.)
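
For the curious, a minimal sketch of what that two-server failover looks like
in haproxy.cfg (the backend name and addresses are hypothetical):

    # Minimal HAProxy sketch: round-robin across two web machines;
    # health checks ("check") drop a dead server so traffic fails
    # over to the survivor. Addresses are hypothetical.
    defaults
        mode http
        timeout connect 5s
        timeout client  30s
        timeout server  30s

    frontend www
        bind *:80
        default_backend web_tier

    backend web_tier
        balance roundrobin
        server web1 10.0.0.11:80 check
        server web2 10.0.0.12:80 check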

That's the whole premise behind FOSS: you don't need to worry about all the
licenses, and the hardware is so cheap it is effectively free (and getting
freer by the second), so you throw a lot of cheap hardware at the problem.

But agreed, "lot" in this case means at least two so one can fail and you
don't need to care.

~~~
JoeCortopassi
I guess my bigger point though, would be that when using something like AWS, I
don't have to think/spend time on a lot of implementation details. When using
colocation services/hardware/failover, I'm just adding a bunch of little
things to my daily tasks and responsibilities. Sometimes this is a big deal
(like in a bootstrapped two man team) and other times it's not.

Ultimately, I think it comes down to priority instead of possibility. If your
company lives and dies on having reliable servers, you should probably roll
your own. But if servers are 'just' a technical detail of your overall
business model, then a cloud solution can be well worth the additional cost.

~~~
BryantD
When you're using something like AWS, you absolutely need to think about
implementation details. As we've learned a few times now, sometimes AWS has
datacenter-wide outages. You need to stripe across multiple availability
zones, keep off-site backups, etc.

So yeah, you gotta think about it. A lot of the time, public cloud is the
correct solution; however, you should have a solid understanding of what you
need to do to run reliably in that cloud, how to build redundancy in the cloud
you pick, when you might need to move to a different solution, and how to make
those processes easier.

~~~
JoeCortopassi
I don't think I said that you don't have to worry about implementation details
_at all_ , just that a cloud based solution like Amazon is often many orders
of magnitude simpler than building/maintaining/repairing/replacing/updating
physical machines at a co-location center.

~~~
BryantD
We could argue about how many orders of magnitude, but I agree. It's
absolutely easier in some ways. I'm just saying that it's easy to fall into
the trap of thinking that EC2 (or whoever) is abstracted away from hardware
failure/geographic problems when it definitely isn't.

[http://arstechnica.com/business/2011/09/google-devops-and-
di...](http://arstechnica.com/business/2011/09/google-devops-and-disaster-
porn/) is a bit overdramatized but the final section is a great summary of
real stuff you need to think about on EC2, or any other provider. Again, I
know you're not minimizing these issues, but some people certainly do.

------
jaequery
I've known this secret for a while. I went from paying $4k+/month spanning
10+ servers in Amazon EC2 to just 2 dedicated servers at < $1k/month in an
unlimited 1-gig colo. The performance difference I see is huge; I never
realized how slow EC2 actually is. Their issue is definitely slow IO, which I
think only their SSD instances can fix (and those would cost me $8k+).

I now have a wicked setup - XenServer cloud, SAN, all highly available - don't
have to worry about bandwidth overcharges, and it's much, much faster.

~~~
codinghorror
Particularly now that we have 6 Gbps SATA and cheap(ish) 512 GB SSDs, the I/O
differential can be _enormous_.

I thought about putting four 512 GB SSDs in a RAID 10, which would give me
striping performance levels without losing the mirror, but that seemed like a
bit of overkill. These servers have 4 front drive bays, only 2 of which are
filled with the SSD mirror, so we could decide to drop in 2 more drives and
rebuild the array if we need even more I/O perf.
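
As a back-of-the-envelope illustration of that mirror-vs-RAID-10 tradeoff
(simple capacity math, not a benchmark):

    # RAID 10 mirrors drives in pairs, then stripes across the pairs:
    # usable space is half the raw total, and I/O is spread over
    # (drives / 2) mirror pairs.
    def raid10_usable_gb(drives, size_gb):
        assert drives % 2 == 0, "RAID 10 needs an even drive count"
        return (drives // 2) * size_gb

    print(raid10_usable_gb(2, 512))  # today's mirror: 512 GB, 1 pair
    print(raid10_usable_gb(4, 512))  # four drives: 1024 GB, striped
                                     # across 2 pairs for more I/O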

------
vmind
I much prefer the middle ground of dedicated servers over the hassle of
colocation and hardware management. (A 16 GB RAM quad-core with 2 TB RAID 1
from OVH for £65 a month is a good affordable level, and it's very quick and
easy to spin up new ones.)

~~~
girasquid
If bang for your buck is a concern, you might be interested in this offering
from Hetzner: <http://www.hetzner.de/en/hosting/produkte_rootserver/ex4s> \-
you get double the RAM and a bit more disk space for a little bit less. I only
have a month's worth of experience to talk about it, but other than the fact
that it's in Germany (timezones) it's been great.

~~~
vmind
Yep, I have used Hetzner servers too for some things and would use them
again, but our customers are 90% in the US, and OVH offers a Canadian
location (and we may need some EU servers in the future).

~~~
girasquid
Fair enough. I'm always curious about why people choose one provider over
another - thanks for filling me in on your reasoning. :)

------
gprasanth
Network connectivity is one very big problem. What good is a blazing fast
server on a 2 Mbps (disruptive!) link? This is the very reason a lot of people
would hate hosting their own servers. Cloud = _connectivity_ + computing
power. Yay for connectivity!

~~~
codinghorror
Of course it depends on what you're doing.

Running a porn site or image/video sharing service? You're going to go through
a ton of bandwidth and that might end up being your bottleneck way more than
performance.

~~~
dholowiski
> Of course it depends on what you're doing.

This sums up every comment here. Every web app is a unique snowflake, and
what works for you (EC2 vs. dedicated, SSD, RAM, bandwidth) is just as
unique.

Much more important than building your own server is fully understanding your
requirements and deploying the appropriate solution. But it is still wicked
fun to build a server.

------
randomchars

        > But not the kind that go in your home. No, that'd be as nuts as the now-discontinued Windows Home Server product.
    

What's wrong with home servers? I've been wanting to build one for quite some
time.

~~~
burningion
In lieu of building a home server, I highly recommend the Synology line of NAS
machines. They've got everything you could possibly want out of a home server,
with a much faster setup: automatic backups, a built-in torrent server, a Plex
media server, and remote file access, all in one box.

Seriously a great product and value if you're looking for a home server.

~~~
evandena
I don't want to come across as a comment crapper, but I've been recommending
FreeNAS for years now. ZFS is perfect for home NAS purposes.

~~~
burningion
You're not comment crapping at all, you're offering a great alternative to
those who are time rich / cash poor, or who like configuring and want to learn
about building servers. For anyone who's cash rich and time poor, the NAS
products from Synology are great.

------
stephengillie
What happens when an HDD starts to fail? Do you have to pay & trust colo
personnel to replace it, or do you have to pay to have your server shipped
back to you?

I could see putting my own server into a colo if it were like a storage locker
- carry/cart your server into the building, mount it in the rack you've
rented, plug into the provided power and network connections, and lock the
security door.

~~~
patio11
Typically you'd pay for the first option. The term of art is "remote hands"
service.

There are options where you can install servers in cages. (Some
industries/companies require very strict physical access control.) This is
more for show than because it meaningfully increases your security against the
attack "your colocation provider is secretly The Adversary."

~~~
peterwwillis
If you buy the hardware from a vendor and get their 24/7 support, they'll send
a tech to your cage to replace the drive. It's a little more expensive than
the machine he built, but it comes with the knowledge that someone will go fix
it when it's broken. Worth the money, and still ridiculously cheaper than AWS.

------
alberth
Why is Jeff building a server when he can get the EXACT same server at Hetzner
[1] for just 79 euro per month?

That translates to literally the same 3-year cost as building the server.

[1] [http://www.hetzner.de/en/hosting/produktmatrix/rootserver-
pr...](http://www.hetzner.de/en/hosting/produktmatrix/rootserver-
produktmatrix-ex)

------
timc3
Happy rackmount home server user here, running ESXi with a load of dev
instances, staging servers, databases, and OpenIndiana with ZFS serving a
decent amount of storage to my editing workstations; it also looks after our
family videos and photos.

I still use our company servers that are co-located, but having things at home
on the same network where you are developing is very compelling for me.

It's very quiet and cool (latest i7 processor).

But if you are hosting your own production boxes: I usually buy from HP or
Dell and know that I rarely need to worry about those machines. I would hardly
ever build a box to put into production unless it was something rather more
specialized.

------
halayli
I always say that there's got to be an EC2-like solution, but with real
hardware.

Provisioning physical machines at the same convenience level as EC2 would be
awesome.

I ended up doing what Jeff did. I bought my own server and hosted it at HE for
$75/month. It's a Xeon 5650 with 48 GB RAM + 1 TB disk for $2k. Assuming the
machine lasts 3 years, that's about $131/month, which is way cheaper than the
closest thing SoftLayer offers
(<https://www.softlayer.com/Sales/orderServer/41/2087/>).

Most of the time, EC2 is really about convenience, not cost.
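
His amortization, spelled out (numbers straight from the comment above):

    # $2k server amortized over 3 years, plus $75/month colo at HE.
    server = 2000
    colo_monthly = 75
    months = 36
    print(server / months + colo_monthly)  # -> ~$130.56/month, i.e. "$131"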

~~~
BryantD
Dell's Crowbar project: <https://github.com/dellcloudedge/crowbar>

Ubuntu Orchestra: <https://launchpad.net/orchestra>

Or build your own around cobbler & puppet/chef, but you're sort of reinventing
the wheel at that point. Still, sometimes that's fun.

~~~
donavanm
Also Razor, from EMC/Puppet. IMO a large-scale cobbler config takes a bunch of
work to get right. The provisioning and host management space is just starting
to catch on to APIs and SOA, so you're going to spend lots of time rolling
your own APIs, libs, services, and integration.

------
dholowiski
I love building servers; it is amazingly geeky fun. And no doubt colocating
your own servers will give you the best bang for your buck, with renting a
dedicated server coming in second. But you do have to be careful about
redundancy. If you can't tolerate a day or two of downtime, you'll need 2 or
more servers, because with a dedicated server it's all up to you (or to
someone you hire/pay) when something fails.

Sadly I live a 5 hour drive from the nearest co-location facility, so I'm
forced to rent a dedicated server.

~~~
codinghorror
Yeah I would never advise colocating just one server. Two at minimum. We'll
have five when all is said and done (still building the other four here).

------
darkarmani
The core of this article is that you get more performance for your money by
building your own servers and racking them. I think we can all agree there.

The problem with "hardware is cheap and programmers are expensive" is that
your hardware will fail when you least expect it, leaving programmers sitting
idle. Hardware is cheap, so have someone else assemble it and rack it.

If you don't need it up all of the time, this is a great way to get a lot of
performance -- IO and memory in particular.

------
duggan
Here is an incredibly lengthy argument which tries to be more balanced:
[http://rossduggan.ie/blog/infrastructure/cloud-vs-metal-
infr...](http://rossduggan.ie/blog/infrastructure/cloud-vs-metal-
infrastructure/)

Jeff is glossing over a lot.

If you're already massively invested in hardware, in terms of people,
processes, and equipment, then you could argue that cloud architecture is less
valuable in the general case; otherwise, it's usually no contest.

------
btgeekboy
I like the part where he builds new servers on a rug. What's the over/under
for how long until parts start failing?

~~~
codinghorror
I have built many dozens of computers over the last 20 years and I've never
ever seen any static-related problems. I don't wear a static wrist strap, but
I do always touch something metal on the case before touching the internals.

~~~
ersii
I've been saying this as well, but I realised I don't actually know what
problems static electricity could or would cause, long-term or short-term.

I've stopped saying that I've never seen any static-related problems, and I'm
using the silly bracelet connected to something metal these days.

~~~
tlb
Static does cause damage other than immediate catastrophic failure. It can
degrade silicon in subtle ways, resulting in random errors or crashes or even
slowness due to retrying transactions. Wearing the wrist strap is cheap
insurance against mysterious badness.

~~~
codinghorror
Yeah, but it's also a little bit voodoo. Unless we can point to specific
instances of things going wrong, it's kind of imagineering a problem where one
does not actually exist.

That said, I always advise touching something metal on the exterior case
before touching anything in a computer, and that's how I have always done it.

~~~
marshray
Static damage to a chip may not result in an immediate failure. It may
manifest as lessened performance, unreliability, or a shortened lifespan. I've
seen this demonstrated in training videos for the electronics industry. (I
think EEVblog has a static demo up on YouTube).

The semiconductor industry (i.e., the companies like Intel that made the parts
on your circuit boards) spends many millions, if not billions, of dollars a
year on static-protection mitigations. They're pretty smart folks and also
very cost-conscious. They would not be spending all this money if it were pure
voodoo.

However, parts mounted on a circuit board are much more resilient than loose
chips. I, too, use the "touching metal" method but am very careful about it.

------
papsosouid
Did nobody notice that his AWS figures are totally wrong? AWS is expensive as
hell, but even so, $1400/month for 3 instances immediately looked incredibly
wrong to me.

>The instance types included in the Web Application customer sample are 2
small (for the front end), and 1 large (for the database).

Nope, the instance types are 2 small for web, 2 small for app, and 2 large for
DB. That's fully double what he's claiming. And he's ignoring the 4 300 GB EBS
volumes that are in that $1400/month, as well as the load balancer and 120 GB
of bandwidth. And that is entirely on-demand instances; if you are comparing
to a colo setup, you should be using the much cheaper reserved instances.
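
To see why the miscount matters, here's a toy itemization of the template's
structure. Every rate below is a made-up placeholder (not a 2012 AWS price),
so only the shape of the sum is meaningful:

    # Toy itemization of the "Web Application" calculator template.
    # All rates are placeholders, NOT actual AWS prices; the point is
    # that the ~$1400/month covers 6 instances plus EBS, a load
    # balancer, and bandwidth -- not 3 bare instances.
    HOURS = 24 * 30  # roughly one month

    small, large = 0.08, 0.32       # placeholder hourly rates
    compute = (2 + 2) * small * HOURS + 2 * large * HOURS
    ebs = 4 * 300 * 0.10            # 4 x 300 GB volumes, placeholder $/GB-month
    elb = 0.025 * HOURS             # load balancer, placeholder hourly rate
    bandwidth = 120 * 0.12          # 120 GB out, placeholder $/GB
    print(compute + ebs + elb + bandwidth)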

~~~
codinghorror
We're just comparing the Amazon Web Application template as provided in their
calculator, with no changes.

You're right that it is x2 for each EC2 server, which I didn't notice until
later, but that doesn't change the economics or performance story very much.

~~~
papsosouid
Yes, we're comparing the web application template as provided. Except you
compared less than half of it instead of all of it. I think "I claimed that
AWS is twice as expensive as it actually is" certainly changes the economics
quite a bit. Your 3 servers are not 6 servers and a load balancer. Reliability
counts too, not just performance. Especially since most web apps are going to
be 90% idle on the AWS hardware.

