

HP's new ARM-based Servers - zeit_geist
http://h17007.www1.hp.com/us/en/iss/110111.aspx

======
atlbeer
Here's a good article summarizing the announcement; HP's own site does a
horrid job of explaining what it actually is

[http://www.theregister.co.uk/2011/11/01/hp_redstone_calxeda_...](http://www.theregister.co.uk/2011/11/01/hp_redstone_calxeda_servers/)

~~~
sounds
I don't want to predict the future, but if HP thinks it can win over customers
with 100% marketing and no technical details, that's not a good sign.

I make the IT purchasing decisions for my company; I know what an ARM CPU is
and why it's a good idea. That's exactly why HP's approach doesn't make sense:
ARM hasn't been proven in this market, so customers and vendors (like HP) need
to work very closely together, with openness about design issues, performance
tuning, etc.

HP's starting off on the wrong foot. Maybe there's enough money in ARM that
they'll figure it out. But if I have to make a prediction, I'm of the opinion
the ARM server market is going to take off so fast that HP will be left
behind.

~~~
wmf
_...win over customers with 100% marketing and no technical details..._

Hasn't the enterprise market worked that way for years?

~~~
yuhong
The fundamental flaw here, I think, is centralized purchasing.

~~~
skrebbel
No clue why that got voted down. I think you're right.

------
antpicnic
How odd that HP calls these servers Project Moonshot. A moonshot used to be
slang for something with a low probability of success.

~~~
rbanffy
It's against HP's immune system to sell machines without Windows. They already
try hard not to sell their HP-UX and NonStop boxes, but clients insist on
relying on them.

------
endlessvoid94
HP's URLs are terrible: <http://h17007.www1.hp.com/us/en/iss/110111.aspx>

~~~
lreeves
No kidding. Clearly they're using IIS, which has an equivalent of mod_rewrite
built in. Why would they keep using such awful URLs? The only other major
company I can think of that still uses cruft like that is IBM. Totally
unnecessary.

~~~
jonursenbach
Amazon is up there too.

[http://www.amazon.com/gp/product/B003O6G5TW/ref=s9_hps_bw_g6...](http://www.amazon.com/gp/product/B003O6G5TW/ref=s9_hps_bw_g63_ir01?pf_rd_m=ATVPDKIKX0DER&pf_rd_s=center-5&pf_rd_r=13KXA06CD1CPTH1KGGVJ&pf_rd_t=101&pf_rd_p=801471022&pf_rd_i=14220161)

~~~
mbreese
At least with Amazon, all you need is:
<http://www.amazon.com/gp/product/B003O6G5TW>

Which isn't too bad...

~~~
cpeterso
Or even: <http://amazon.com/dp/B003O6G5TW>

or: <http://amzn.com/B003O6G5TW>

------
checoivan
Most people are overlooking the change in thinking behind these servers.

Think of servers built around the ARM processors in the pipeline, the ones
coming with 64+ cores and destined for cellphones. The biggest problem in a
datacenter isn't space or processing power, it's energy consumption and heat
dissipation. Walk into a datacenter and the place looks half empty, with room
to fit 6x more servers. You can't do that today because there's no capacity
for the extra air conditioning needed to cool more machines in the building.

Also, the way we process data has changed in recent years, for example with
map reduce, which makes having many cores far more useful than a single server
with one massive 5 GHz core. In fact, many servers today are I/O bound, not
CPU bound; there's excess CPU capacity.

Think of a server with 64 ARM cores and an array of SSDs. It won't generate
as much heat as mechanical disks or today's CPUs, it has very few I/O
constraints thanks to SSD speeds, and it offers far more parallel processing
power.
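
To make the map-reduce point above concrete, here's a toy word count in Python
(all the names are mine, purely an illustration): the map step fans out over
however many cores you have, so lots of modest cores are as usable as a few
fast ones for this kind of work.

    # Toy map-reduce word count. The map step runs on every available core,
    # which is exactly the shape of workload where core count beats clock speed.
    from collections import Counter
    from functools import reduce
    from multiprocessing import Pool, cpu_count

    def map_count(chunk):
        return Counter(chunk.split())

    if __name__ == "__main__":
        chunks = ["arm xeon arm", "ssd ssd io", "arm power power"] * 1000
        with Pool(cpu_count()) as pool:          # one worker per core
            totals = reduce(lambda a, b: a + b, pool.map(map_count, chunks))
        print(totals.most_common(3))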

~~~
malbs
Our solution to infrastructure/performance problems, which dates back a fair
way, has been to simply throw more hardware at them.

Lately we are starting to hit a limit - a power limit. We are actually limited
in the data center we use not by space, but by our power consumption, so we
now have a pseudo KPI of "reduce our power usage". It's a good goal, but
certainly not one I had ever contemplated.

A (future) stumbling block is our reliance on x86. Would love to be able to
move to ARM =\

------
jsn
From
[http://www.theregister.co.uk/2011/11/01/hp_redstone_calxeda_...](http://www.theregister.co.uk/2011/11/01/hp_redstone_calxeda_servers/)
:

 _The sales pitch for the Redstone systems, says Santeler, is that a half rack
of Redstone machines and their external switches implementing 1,600 server
nodes has 41 cables, burns 9.9 kilowatts, and costs $1.2m._

 _A more traditional x86-based cluster doing the same amount of work would
only require 400 two-socket Xeon servers, but it would take up 10 racks of
space, have 1,600 cables, burn 91 kilowatts, and cost $3.3m._

Hmm, let's see. It's about 7-8 grand per Xeon server, something like an HP
ProLiant DL360R07 (2 x 6-core Xeons at 2.66 GHz). That's three times as many
cores as Redstone has nodes, each clocked at 2.66 times the frequency and
executing more instructions per clock, too. And that's without hyperthreading.

Am I missing something big, or is the Redstone solution neither cost-effective
nor energy-effective?
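
Back-of-the-envelope with the Register's figures (the ~$8k per Xeon box is my
guess above; the quad-core-per-node count comes from Calxeda's specs):

    # Rough comparison of the two clusters described in the article.
    redstone = {"nodes": 1600, "cores_per_node": 4, "watts": 9900, "cost": 1200000}
    xeon = {"servers": 400, "cores_per_server": 12, "watts": 91000, "cost": 3300000}

    redstone_cores = redstone["nodes"] * redstone["cores_per_node"]  # 6,400 A9 cores
    xeon_cores = xeon["servers"] * xeon["cores_per_server"]          # 4,800 Xeon cores

    print("cost ratio (Xeon/Redstone):  %.1fx" % (xeon["cost"] / redstone["cost"]))
    print("power ratio (Xeon/Redstone): %.1fx" % (xeon["watts"] / redstone["watts"]))
    print("$/core: Xeon %.0f vs Redstone %.0f"
          % (xeon["cost"] / xeon_cores, redstone["cost"] / redstone_cores))
    # The $/core and watts say nothing about per-core speed, which is the
    # whole objection: each 2.66 GHz Xeon core does several times the work
    # of an A9 core.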

~~~
tmurray
You assume the application is compute limited and that the extra performance
on the Xeon translates into extra performance on a given application. That's
probably not a good assumption for this kind of workload.

~~~
miratrix
You're going to be I/O bound (network or disk), memory bound, or compute
bound. It's hard to imagine the Redstone systems besting Xeon based servers in
any of the three.

~~~
tmurray
It depends entirely on where your bottlenecks are. If the bottleneck is
entirely within your node, then this isn't going to be compelling. If you're
doing something that's very light on the resources within your node (serving
static content, etc) and your bottleneck is some other system somewhere else,
then these sorts of machines could be compelling purely from a space/power
POV.

~~~
jsn
If your nodes aren't bound on some local resource, you might as well just run
them in virtualized containers on Xeons. That setup will be even more flexible
than with the (less powerful) ARM nodes.

~~~
ricardobeat
But not nearly as space/power-efficient.

------
lgeek
I find it strange that they're using Cortex-A9 CPUs. I would have expected
anyone going after the server market with ARM cores to use the Cortex-A15,
which has 40-bit physical addressing via LPAE.

~~~
wmf
The A15 wasn't available when Calxeda designed the chip.

------
jbellis
Does it support ECC RAM? None of the other ARM servers I've seen so far do.

~~~
wmf
Yes, it is a real server.

------
dman
Is this another line of non-Intel machines that the manufacturer will bury in
a hidden corner of its website and publicly deny even exists?

------
Donch
I think this is a highly significant move by ARM. It's amazing when you speak
to datacentre people and they tell you how much of your server charges go on
electricity and cooling. My recent example was £200 extra/year for an
additional Opteron 6128 and £400 extra/year for the increased power usage from
that processor!
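
As a rough sanity check on that £400 figure (the wattage, tariff and PUE below
are illustrative assumptions, not quoted numbers):

    # Very rough yearly power cost for one extra socket.
    watts = 115            # assumed draw for the extra Opteron under load
    pue = 1.8              # assumed datacentre overhead (cooling, UPS, ...)
    price_per_kwh = 0.12   # assumed tariff, GBP

    kwh_per_year = watts * 24 * 365 / 1000.0
    cost = kwh_per_year * pue * price_per_kwh
    print("~%.0f kWh/year, roughly GBP %.0f/year before the provider's margin"
          % (kwh_per_year, cost))
    # ~1007 kWh/year and roughly GBP 218/year with these inputs; the same
    # order of magnitude as the GBP 400 quoted once markup is added.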

There is an obvious gap in the market for low-power, low-heat,
high-memory-throughput server processors. I'd just like to see a reference
Linux distro that supports 16 ARM cores, as well as a reference server card...

The specs:

<http://www.calxeda.com/products/energycore/ecx1000/techspecs>

only refer to 32-bit memory addressing as well (i.e. <4 GB of memory). It
seems the wait will be for the 64-bit ARMv8 processors to be integrated.

Interesting times!

~~~
ajross
No virtualization ability. The inability to address more than 4 GB is a killer
for some apps. And CPU horsepower density isn't quite as high as they say: 72
quad-core A9s per rack unit vs. roughly 6.4 Xeon sockets per rack unit in a
comparable 10U blade server. A Nehalem clocks about 3x faster and does about
1.5-2x more work per clock than an A9 on "random server logic" workloads, so
Redstone comes out ahead by only a little.

Power consumption isn't clear. I see no peak load wattage numbers, which
worries me for a product marketed expressly as a low-power option.

One advantage this architecture does have is density of memory bandwidth. They
have 72 DDR3 channels per rack unit vs. 25.6 for a blade server filled with
4-channel Westmere EXs (the Intel boards will stack the DIMMs up on the same
channel). So you might want to look seriously at it as a hosting platform for
a very parallel in-memory data store.
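
Putting those numbers together (the six-core count per Xeon socket is an
assumption on my part; the clock and per-clock multipliers are the ones above):

    # Compute and memory-bandwidth density per rack unit.
    a9_cores_per_ru = 72 * 4              # 288 A9 cores per RU
    xeon_cores_per_ru = 6.4 * 6           # assumed six-core parts per socket

    # ~3x the clock and 1.5-2x the work per clock => 4.5-6 "A9 equivalents"
    xeon_equiv = (xeon_cores_per_ru * 4.5, xeon_cores_per_ru * 6.0)
    print("A9-core equivalents per RU: Redstone %d vs Xeon %.0f-%.0f"
          % (a9_cores_per_ru, xeon_equiv[0], xeon_equiv[1]))
    print("DDR3 channels per RU: Redstone 72 vs Xeon %.1f" % 25.6)
    # Roughly a 1.25-1.7x compute edge for Redstone, but a 2.8x edge in
    # memory channels, hence the in-memory data store angle.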

~~~
lgeek
I'm going to guess that by 'no virtualization ability', you probably mean no
hardware acceleration for virtualization. That doesn't mean you can't have OS-
level virtualization. In fact, I'm working on a hypervisor for BeagleBoard.

Plus, TrustZone has been hacked in the past to implement virtualization.

~~~
yuhong
Hardware support for virtualization is only necessary on x86 because of some
design decisions dating back to the 80286.

~~~
smithian
Can you elaborate?

~~~
wmf
[http://en.wikipedia.org/wiki/Popek_and_Goldberg_virtualizati...](http://en.wikipedia.org/wiki/Popek_and_Goldberg_virtualization_requirements#IA-32_.28x86.29)

------
trebor
I wonder if the Redstone part of the name came from a secret Minecraft fan at
HP's headquarters.

I hope it succeeds, just to give Intel a run for their money. I really think
that ARM is the future of computing (including the desktop).

~~~
ceejayoz
Given the "Project Moonshot" bit, I'd imagine it's from the Redstone rockets
that launched the Mercury astronauts into space.

------
joshu
No details or pricing whatsoever?

~~~
stuntprogrammer
You can get details on the underlying server architecture from Calxeda's pages
describing the system on chip, and the quad server boards that are in the
initial HP design:

<http://calxeda.com/products>

------
ricardobeat
Just yesterday I was dreaming of small server racks composed of Raspberry Pis
and BeagleBoards. Wish I had a few million lying around… or a cheap dedicated
link at home.

~~~
zhemao
Don't forget some BeagleBones for good measure. <http://beagleboard.org/bone>

------
shimon_e
How much cooling do these racks need?

It could be a killer sales point if these servers need no cooling.

------
jwatte
I hope it has ECC RAM, unlike most of the competitors I've seen so far (such
as the Atom-based boxes).

~~~
stuntprogrammer
Yes; it's a properly server-oriented processor, with ECC, fast interconnect,
large cache, etc.

------
spydum
So.. another blade server solution? I thought RLX tried this a decade ago, and
went home bankrupt?

~~~
wmf
RLX really needed scale-out software and DevOps, but neither was widely
available in 2001. Also, Calxeda is claiming a larger advantage than RLX did.

And wasn't RLX technically a success since it was bought by HP and _then_
canceled?

------
rovar
Project Moonshot? What's wrong, "Project Vaporware" was taken?

~~~
antoinehersen
From the Register article: "The hyperscale server effort is known as Project
Moonshot, and the first server platform to be created under the project is
known as Redstone, after the surface-to-surface missile created for the US
Army, which was used to launch America's first satellite in 1958 and Alan
Shepard, the country's first astronaut, in 1961."

