

Critical responses to NYT story on datacenter energy waste - creamyhorror
http://news.cnet.com/8301-1001_3-57518536-92/nyt-story-on-data-center-waste-scares-some-frustrates-others/

======
creamyhorror
The Forbes article is also interesting reading:

[http://www.forbes.com/sites/danwoods/2012/09/23/why-the-new-...](http://www.forbes.com/sites/danwoods/2012/09/23/why-the-new-york-times-story-power-pollution-and-the-internet-is-a-sloppy-failure/)

I like this comment by Ludavico Corde:

 _The other thing that article totally was ignorant of, is just how far
servers have come in saving power. You mentioned virtualization which is
growing by leaps and bounds. But what about processor (CPU) stepping
technology? A modern server chip like the Intel Xeon can scale way back on
power when it is idle or at a low utilization. Or using server blades like IBM
Blade Servers? Modern class servers like HP ProLiant, Dell, IBM Blade Servers,
are engineered to the ninth degree with power management technology. To ensure
less power is consumed when the server workload is low._

I'd long assumed that servers, even commodity hardware, were good at reducing
power use during periods of low utilization. How true is this? I'm also given
to understand that air-conditioning accounts for a large share of datacenter
power, so I'm wondering whether liquid-cooling systems do any better.
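On the utilization point: a common first-order model puts a server's draw at a fixed idle baseline plus a load-proportional term, which is why CPU stepping helps but a mostly idle box still burns a big share of peak power. A minimal sketch, with assumed round-number wattages rather than any vendor's specs:

```python
# Illustrative linear power model for a server:
#   P(u) = P_idle + (P_max - P_idle) * u, where u is utilization in [0, 1].
# The 100 W idle / 300 W peak figures are assumptions for illustration only.

def server_power_watts(utilization, p_idle=100.0, p_max=300.0):
    """Estimated draw at a given utilization under a linear power model."""
    if not 0.0 <= utilization <= 1.0:
        raise ValueError("utilization must be in [0, 1]")
    return p_idle + (p_max - p_idle) * utilization

# Even with aggressive CPU power management, a near-idle server draws a
# substantial fraction of its peak power under this model:
idle_draw = server_power_watts(0.05)   # 110.0 W at 5% utilization
peak_draw = server_power_watts(1.0)    # 300.0 W at full load
idle_fraction = idle_draw / peak_draw  # ~0.37 of peak while nearly idle
```

Under assumptions like these, consolidating load onto fewer, busier machines (e.g. via virtualization) saves more energy than stepping down idle ones, which may be what the critics have in mind.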

Overall, if the energy used for actual computation is really only 6-12%, then
there's probably a good amount of room for improvement. A good electrical
engineer familiar with industry practices and metrics could probably tell us
whether there really is a lot of waste going on, or whether these numbers are
to be expected given practical limits. Maybe Google, Facebook, and Amazon are
running tight ships, but I can imagine inefficient practices persisting in
less well-run datacenters.
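For reference, the standard efficiency metric here is PUE (power usage effectiveness): total facility power divided by IT equipment power, with 1.0 as the ideal floor. A quick sketch of how cooling overhead and low utilization could compound to a figure in the quoted range; all numbers below are hypothetical, not from the article:

```python
# PUE = total facility power / IT equipment power (1.0 is the ideal floor).
# As a crude measure, treat only the utilized share of IT power as
# "computation"; then the fraction of total facility energy doing useful
# work is utilization / PUE. Both inputs below are made-up examples.

def compute_fraction(pue, avg_utilization):
    """Fraction of facility energy spent on computation, under the crude
    assumption that only the utilized share of IT power counts."""
    return avg_utilization / pue

# Older facility: PUE 2.0 (as much overhead as IT load), 15% utilization
old = compute_fraction(2.0, 0.15)   # 0.075 -> 7.5%, inside a 6-12% band

# Efficient facility: PUE 1.1, 50% average utilization
new = compute_fraction(1.1, 0.50)   # ~0.45 -> roughly 45%
```

If the big operators really run PUEs near 1.1 while typical facilities sit near 2.0, that alone would explain much of the gap the NYT piece and its critics are arguing over.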

