

AOL Retires 9,500 Servers, Saves $5 Million  - 1SockChuck
http://www.datacenterknowledge.com/archives/2012/03/29/aol-retires-9500-servers-in-uptime-roundup/

======
ChuckMcM
"Decommissioning a single 1U rack server can result in $500 per year in energy
savings, an additional $500 in operating system licenses, and $1,500 in
hardware maintenance costs, according to Uptime."

Well that seems a bit optimistic. Let's say your 1U server pulled 450 watts
(way over the top; it's typically less than 375 unless you're doing the
supercomputer GPU thing). That is 3.9 MWh per year, which at 11 cents/kWh is
$433.62. Let's say you put them in REALLY CRAPPY data centers with a PUE of 2.5
(modern ones are 1.2 - 1.5, Facebook's OCP is like 1.1); even at 2.5 that is
$1,084/year in 'energy' costs. License costs of, uh, $0; well, if you use a
RHEL support contract it's about $125/machine depending, and we'll assume you
lose both disks and a power supply _every fucking year_, so your hardware
replacement costs are another $500? So rather than the $2,500/year in expense
Uptime claims, I would put it under $2K worst case, and likely closer to $1.5K.
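In code, since that makes the assumptions easier to poke at (the wattage,
rate, PUE, and maintenance figures are my guesses above, not Uptime's):

    # Back-of-the-envelope yearly cost of one 1U server: energy grossed
    # up by PUE, plus a support license and worst-case hardware swaps.
    HOURS_PER_YEAR = 24 * 365  # 8,760

    def annual_server_cost(watts=450, dollars_per_kwh=0.11, pue=2.5,
                           license_per_year=125, hardware_per_year=500):
        energy_kwh = watts / 1000 * HOURS_PER_YEAR        # ~3,942 kWh
        energy_cost = energy_kwh * dollars_per_kwh * pue  # ~$1,084 at PUE 2.5
        return energy_cost + license_per_year + hardware_per_year

    print(annual_server_cost())                    # ~$1,709, "under $2K worst case"
    print(annual_server_cost(watts=375, pue=1.5))  # ~$1,167 in a modern facility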

So inflated numbers aside, I don't doubt for a moment that it is a Good Thing
that AOL is killing off the oldest 25% of their gear. If it was on a 4-year
depreciation cycle, then they could replace it with roughly 1/3 as many
Sandy Bridge machines, and maybe 1/5 as many with Ivy Bridge gear.

As a technologist I am sometimes sad that it isn't worth the power to run a 5
year old server, but that is the world we live in.

~~~
commandar
I'm sure AOL is factoring in power used for cooling and labor for maintenance
in those numbers.

~~~
ChuckMcM
I am sure they are. That is the rationale for the 'PUE' adjustment.

A typical data center charges a power 'rate' for your actual power (either
metered or fixed) and then an 'environment' or 'rent' charge which covers the
cost of the power they buy to keep your stuff cool. That ratio, the total
power used inclusive of the datacenter vs. the power you use in your machines,
is the "PUE factor". Old-school data centers, which were built on a model
similar to the mainframe 'machine room' that larger corporations had, had
very expensive and clunky cooling, which meant that while you paid maybe 11
cents per kWh of power to compute, the datacenter was charging you 25 cents
per kWh to remove that heat, so the ratio would be (25 + 11) / 11, or 3.27!
Facilities built in the 90's had ratios in the 2.2 - 2.7 range, those built in
the oughts are usually 1.8 - 2.3, and those built in the 10's are closer to
1.3 - 1.8. Dedicated facilities like the ones Facebook, Amazon, and Google
build get even better ratios.
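In other words (the cents/kWh rates are the illustrative numbers above, and
the era midpoints are my own picks from those ranges):

    # PUE in billing terms: total power bought (compute plus cooling
    # overhead) divided by compute power alone.
    def pue(compute_rate, cooling_rate):
        return (compute_rate + cooling_rate) / compute_rate

    print(round(pue(11, 25), 2))  # 3.27, the old machine-room case

    # What you effectively pay per kWh of compute at each era's PUE:
    for era, p in [("1990s", 2.5), ("oughts", 2.0), ("2010s", 1.5)]:
        print(era, round(11 * p, 1), "effective cents/kWh")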

Another thing that these places will offer is called 'smart hands' or 'remote
hands' or 'tech on demand': for $50 - $150/hr someone will go out and swap a
part that you've drop-shipped to the facility. Assuming you've got power strips
that you can remotely power-cycle and IPMI boards for doing 'boot from BIOS'
type work, you generally need no staff at all on-site, so your grossed-up
power/cooling/rent charge is all you end up paying. This makes it easy for ops
guys like me to compare options, which range from a low of about $150/kW-month
to $250/kW-month (some 'retail' co-location facilities go as high as
$600/kW-month, but that is not a bulk deal for someone like AOL or a search
engine like Blekko). At the low end of $150/kW-month, that is 150 x 12 =
$1,800 per kW-year, so a 1U server pulling < 500 watts costs around $750/year
in all-up data center 'recurring' costs. At $250/kW-month it's still only
$1,500 a year. My estimate of $1,084/year earlier is pretty achievable for
anyone putting 9,500 machines into a data center.
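Annualizing those rates for a single box (the wattages here are back-solved
guesses so the outputs line up with the figures above):

    # Yearly colo cost for one server at a given $/kW-month rate.
    def colo_dollars_per_year(watts, rate_per_kw_month):
        return watts / 1000 * rate_per_kw_month * 12

    print(colo_dollars_per_year(420, 150))  # ~$756, bulk low end
    print(colo_dollars_per_year(500, 250))  # $1,500, bulk high end
    print(colo_dollars_per_year(500, 600))  # $3,600, 'retail' worst case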

[edit: remove asterisks]

~~~
eru
Interesting. Do you have pointers to some background material? I'd like to see
what made cooling more efficient. Thanks!

~~~
ChuckMcM
Well, there are the papers Google presented at their data center summit, and
the whole Open Compute Project touches on this as well.

Simply put, the evolution has followed this path:

Machine Room -> air temp is 68 degrees F, 24/7/365

Modified Machine Room -> rows of machines are lined up alternately facing
toward and away from each other, and cold air is preferentially directed up
from the floor in the 'cold' aisles.

Tolerance Limits -> Google and others establish that 'commodity' machines work
just fine at an ambient temperature of 80 - 90 degrees F, so they cut back on
the level of cooling and substitute external air when it's cooler than 75
degrees outside (sketched in code below).

Full containment -> various systems to provide cooling just to the active
hardware: places like SwitchNap in Vegas build structures around the rows, and
third parties put plastic enclosures around rows to contain cold air or to
force all hot air out through a plenum.

Most of the 'win' has been reducing the temperature differential between the
data center and ambient air, and reducing the volume of air that has to be
cooled.

Once you do that, alternate air-cooling methods (like evaporative cooling) can
be used rather than compressor chillers.
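That tolerance-limits step is simple enough to sketch as a toy control loop.
The 75 degree economizer threshold and the 80 - 90 degree inlet tolerance are
the figures above; the control logic itself is made up for illustration:

    ECONOMIZER_MAX_F = 75   # outside air is usable below this
    INLET_MAX_F = 90        # commodity gear tolerates up to ~90 F ambient

    def cooling_mode(outside_temp_f, inlet_temp_f):
        if inlet_temp_f > INLET_MAX_F:
            return "chiller"       # gear past its tolerance, force cooling
        if outside_temp_f < ECONOMIZER_MAX_F:
            return "outside air"   # free cooling
        return "chiller"

    print(cooling_mode(60, 82))  # outside air
    print(cooling_mode(95, 88))  # chiller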

[1] <http://www.google.com/about/datacenters/best-practices.html> (they have
additional tricks up their sleeve)

[2] <http://opencompute.org/project_category/data-center-technology/>

------
__alexs
Well that's one way to spin slowly losing all of your customers as a good
thing I suppose.

------
b0b0b0b
9500 servers cost only $5M per year? Sounds like putting a positive spin on
evicting outdated, idle boxes. Did I tell you I ate my veggies last night?

------
smackfu
Here's the original blog post with terrible cowboy metaphors:
<http://blog.uptimeinstitute.com/2012/03/uptime-institute-pays-bounty-on-dead-servers/>

"AOL sent 9,484 head to the stockyards, representing a 26% turnover in server
assets across the company. The roundup resulted in a total savings of $5.05
million from reduced utility costs, maintenance, and licensing costs, and
includes cash in hand of $1.2 million from asset sales and reclamation.
Environmental benefits were seen in the reduction of almost 20 tons of carbon
emissions."

------
Terretta
> _"NBCUniversal finished second in the contest, as its infrastructure team
> removed 284 servers..."_

Interesting that's #2; that's not very many. It sounds like most companies
that regularly retire more than this didn't realize this was a thing.

------
cleverjake
I am confused: are they removing the servers outright, or replacing them with
more efficient ones?

~~~
obituary_latte
I wasn't clear on that either. It didn't seem to state that the 9,500 servers
were being replaced by fewer, more efficient machines; just that AOL was
getting rid of them.

------
melvinng
Should I start buying IBM or Intel shares? They have to replace the servers,
right?

~~~
vandahm
Not if they're losing business.

------
jaybill
I got excited about the title of this article when I read the first two words,
but then I discovered more words after that.

------
cagenut
This is sort of the equivalent of a 400 lb person losing 100 lbs. They're
still on death's door.

