HP's new ARM-based Servers (hp.com)
86 points by zeit_geist on Nov 1, 2011 | 70 comments



Here's a good article summarizing the HP site, which does a horrid job of explaining what this actually is:

http://www.theregister.co.uk/2011/11/01/hp_redstone_calxeda_...


I don't want to predict the future, but if HP thinks it can win over customers with 100% marketing and no technical details, that's not a good sign.

I make the IT purchasing decisions for my company, and I know what an ARM CPU is and why it's a good idea. That's exactly why I know HP's approach doesn't make sense: ARM hasn't been proven in this market, so customers and vendors (like HP) need to work very closely together, and there needs to be openness about design issues, performance tuning, etc.

HP's starting off on the wrong foot. Maybe there's enough money in ARM that they'll figure it out. But if I had to make a prediction, it's that the ARM server market is going to take off so fast that HP will be left behind.


...win over customers with 100% marketing and no technical details...

Hasn't the enterprise market worked that way for years?


The fundamental flaw here, I think, is centralized purchasing.


No clue why that got voted down. I think you're right.


This is why Itanium server revenue has exceeded AMD's entire server revenue in recent years.


Ask them if they can lend you one that looks promising. Then run your workload on it and do some benchmarks. You should never rely on vendor-supplied data to make your decisions.


Think of this as Calxeda/HP's minimum viable product in the ARM server market. Someone's gotta put something out there to get feedback.

I wouldn't be surprised if they give some servers away to big enough companies in exchange for feedback that shapes their deployment software going forward.


Thank you, I looked at every link on the HP page, twice, and learned nothing about what they were actually going to sell. I googled "Redstone Server Development" and found some more recent articles: http://www.crn.com/news/data-center/231902061/hp-intros-low-...


How odd that HP calls these servers Project Moonshot. A moonshot used to be slang for something with a low probability of success.


It's against HP's immune system to sell machines without Windows. They already try hard not to sell their HP-UX and NonStop boxes, but clients insist on relying on them.


It's also a federated authentication project for the non-web: http://project-moonshot.org/



No kidding. Clearly they're using IIS, which has an equivalent of mod_rewrite built in. Why would they keep using such awful URLs? The only other major company I can think of that still uses cruft like that is IBM. Totally unnecessary.



At least with Amazon, all you need is: http://www.amazon.com/gp/product/B003O6G5TW

Which isn't too bad...



Luckily, everything from /ref= on is optional, and the alphanumeric code that comes after product is an actual product code. Replace it with the 10-digit ISBN for a book, and you'll get the book, for example.
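
A minimal sketch of that trimming (the /ref= suffix below is just a made-up example, not a real one):

    # Trim an Amazon product URL down to its canonical /gp/product/<code> form.
    # The /ref= suffix here is hypothetical.
    url = "http://www.amazon.com/gp/product/B003O6G5TW/ref=some_tracking_suffix"
    print(url.split("/ref=")[0])   # http://www.amazon.com/gp/product/B003O6G5TW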



That's intentional. They'd rather not give out a bunch of Google Accounts usernames (which would also be a lot of Gmail users).


I presume they generate the URLs the same way they generate computer names. Have you ever heard of:

HP EliteBook 2760p

HP EliteBook 8560w


Terrible URLs? Yes; they are.

For this product, http://hp.com/go/moonshot works


Off-topic, but seriously no one except self-important webdevs gives a toss about URLs. They're addresses, not literature.

That URL looks fine to me.


Yeah, bullshit semantic/SEO cargo cult.


Most commenters are overlooking the change in thinking behind these servers.

Think of servers built from the ARM processors in the pipeline for cellphones, which will come with 64+ cores. The biggest problem in a datacenter is not space or processing power; it's energy consumption and heat dissipation. Walking into a datacenter, you get the feeling the place is half empty, with plenty of space to fit 6x more servers. Today that can't be done because the building has no capacity for the extra air conditioning needed to cool more servers.

Also, the way we process data has changed in recent years, for example with MapReduce, which makes having many cores way more useful than a single server with a massive 5 GHz core. Actually, many servers today are IO bound, not CPU bound; there's excess CPU capacity.

Think of having a server with 64 ARM cores and an array of SSDs. It won't heat up as much as mechanical disks or today's CPUs, will have very small IO constraints thanks to SSD speeds, and will have far more parallel processing power.


Our solution to infrastructure/performance problems, which dates back a fair way, has been to simply throw more hardware at them.

Lately we are starting to hit a limit - a power limit. We are actually limited in the data center we use not by space, but by our power consumption, so we now have a pseudo KPI of "reduce our power usage". It's a good goal, but certainly not one I had ever contemplated.

A (future) stumbling block is our reliance on x86. Would love to be able to move to ARM =\


A FusionIO ioDrive2 SSD card is 24W.

Not huge, but if you add multiple of these cards to a server, the power adds up.


From http://www.theregister.co.uk/2011/11/01/hp_redstone_calxeda_... :

The sales pitch for the Redstone systems, says Santeler, is that a half rack of Redstone machines and their external switches implementing 1,600 server nodes has 41 cables, burns 9.9 kilowatts, and costs $1.2m.

A more traditional x86-based cluster doing the same amount of work would only require 400 two-socket Xeon servers, but it would take up 10 racks of space, have 1,600 cables, burn 91 kilowatts, and cost $3.3m.

Hmm, let's see. It's about 7-8 grand per Xeon server, something like an HP ProLiant DL360R07 (2 x 6-core Xeons at 2.66GHz). That's 3 times as many cores as Redstone, each clocked at 2.66 times the frequency, and doing more instructions per clock tick, too. And that's without hyperthreading.

Am I missing something big, or is Redstone solution neither cost-effective nor energy-effective?
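
For reference, here's the raw per-node arithmetic from the quoted figures (these are the article's claims, not anything HP has published; the core counts assume 4 Cortex-A9 cores per Calxeda node and 2 x 6 cores per Xeon box as above):

    # Back-of-the-envelope numbers from the quoted Register figures.
    redstone = {"nodes": 1600, "kw": 9.9,  "cost": 1200000}
    xeon     = {"nodes": 400,  "kw": 91.0, "cost": 3300000}

    for name, cfg in (("Redstone", redstone), ("Xeon", xeon)):
        watts = 1000 * cfg["kw"] / cfg["nodes"]
        dollars = cfg["cost"] / cfg["nodes"]
        print(name, round(watts, 1), "W/node,", round(dollars), "$/node")
    # Redstone: ~6.2 W and ~$750 per node; Xeon: ~227.5 W and ~$8250 per node

    # Core counts: 4 Cortex-A9 cores per Calxeda node vs. 2 x 6 per Xeon box.
    print(redstone["nodes"] * 4, "vs", xeon["nodes"] * 2 * 6)   # 6400 vs 4800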


You assume the application is compute limited and that the extra performance on the Xeon translates into extra performance on a given application. That's probably not a good assumption for this kind of workload.


Why, for embarrassingly parallel workloads (like the ones they mention), it's a totally reasonable assumption. And for anything not so parallel, the gazillion ARM nodes are all but useless.


The article mentions Hadoop, big data crunching, web serving and web caching. They may or may not be embarrassingly parallel, but that doesn't mean any of them are typically compute bound.

Look, today's multichip, multicore servers tend to be unbalanced for a lot of workloads. Their massive compute performance often burns power waiting for main memory, disk or network.


You're going to be I/O bound (network or disk), memory bound, or compute bound. It's hard to imagine the Redstone systems besting Xeon based servers in any of the three.


It depends entirely on where your bottlenecks are. If the bottleneck is entirely within your node, then this isn't going to be compelling. If you're doing something that's very light on the resources within your node (serving static content, etc) and your bottleneck is some other system somewhere else, then these sorts of machines could be compelling purely from a space/power POV.


If your nodes are not bound on some local resource, you might as well just run them in virtualization containers on Xeons. The setup will be even more flexible than with (less powerful) ARMs.


But not nearly as space/power-efficient.


If your workload runs on one or two Xeon servers, it probably isn't worth considering something like this. If your workload runs on racks of Xeon servers, it might be.

Then the question is, which hardware delivers the right balance of CPU, memory and IO bandwidth for the lowest capital and operating costs.

Also for what it is worth, each card has 60Gbps of general IO bandwidth, and another 48Gbps of SATA disk bandwidth.


Even if you triple the number of Redstone machines, you'll still use just ~30% of the energy and 7.5% of the cabling.

And each group of 4 ARM cores has its own memory channels and I/O ports, vs. every 6-12 cores on the Xeon [corrected] (the point being that CPU speed is not the only variable here).
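
Roughly checking that against the Register figures (9.9 kW and 41 cables for the Redstone half rack vs. 91 kW and 1,600 cables for the 10-rack Xeon cluster), it comes out closer to 33% and 7.7%, but the ballpark holds:

    # Rough check of the "triple the Redstone machines" claim.
    print(round(3 * 9.9 / 91, 2))     # ~0.33 of the energy
    print(round(3 * 41 / 1600, 3))    # ~0.077 of the cabling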


The Calxeda chip is quad-core, so there's still sharing.


My bad. The tray picture shows 36 boards; I didn't pay much attention and thought those were 72 single-core nodes.


By my calculations the Redstone config has 6400 cores and the traditional one has 4800 cores. But discussing such vague claims is pretty pointless anyway.


The original Calxeda reference design from last year was a 2U rack-mounted chassis that crammed 120 processors (and hence server nodes)

also:

HP can cram three rows of these [4 CPU --jsn] ARM boards, with six per row, for a total of 72 server nodes

From that I conclude that in their calculations 1 CPU == 1 server.


Each CPU is a separate node in this configuration--separate DRAM, IO, etc.


I find it strange that they're using Cortex-A9 CPUs. I would have expected anyone going after the server market with ARM cores to use the Cortex-A15, which has 40-bit addressing with LPAE.


The A15 wasn't available when Calxeda designed the chip.


Does it support ECC RAM? None of the other ARM servers I've seen so far have.


Yes, it is a real server.


Is this another line of non-Intel machines that the manufacturer will bury in a hidden portion of their website and publicly deny even exists?


I think this is a highly significant move by ARM. It's amazing when you speak to datacentre people and they tell you how much of your server charges go on electricity and cooling. My recent example was £200 extra/year for an additional Opteron 6128 and £400 extra/year for the increased power usage from that processor!

There is an obvious gap in the market for low-power, low-heat, high-memory-throughput server processors. I'd just like to see a reference Linux distro that supports 16 ARM cores, as well as a reference server card...

The specs:

http://www.calxeda.com/products/energycore/ecx1000/techspecs

only refer to 32-bit memory addressing as well (i.e. <4GB of memory). Seems like the wait will be for the ARMv8 64-bit processors to be integrated.

Interesting times!


No virtualization ability. The inability to address more than 4GB is a killer for some apps. CPU horsepower density isn't quite as high as they say: 72 quad-core A9s per rack unit vs. the equivalent of 6.4 Xeons per rack unit in a comparable 10U blade server. A Nehalem clocks about 3x faster and runs about 1.5-2x faster per clock than the A9 for "random server logic" workloads, so this appears to be higher by only a little bit.

Power consumption isn't clear. I see no peak load wattage numbers, which worries me for a product marketed expressly as a low-power option.

One advantage this architecture does have is density of memory bandwidth. They have 72 DDR3 channels per rack unit vs. 25.6 for a blade server filled with 4-channel Westmere EXs (the Intel boards will stack the DIMMs up on the same channel). So you might want to look seriously at it as a hosting platform for a very parallel in-memory data store.
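
As a sanity check on those channel counts (assuming one DDR3 channel per Calxeda node, and taking the 25.6 figure as implying a 10U chassis of 64 four-channel Westmere-EX sockets):

    # Per-rack-unit memory-channel density from the figures above.
    a9_channels_per_u   = 72 * 1          # 72 nodes/U, one channel each (assumed)
    xeon_channels_per_u = (64 * 4) / 10   # 64 sockets x 4 channels across 10U
    print(a9_channels_per_u, "vs", xeon_channels_per_u)   # 72 vs 25.6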


I'm going to guess that by 'no virtualization ability', you probably mean no hardware acceleration for virtualization. That doesn't mean you can't have OS-level virtualization. In fact, I'm working on a hypervisor for BeagleBoard.

Plus, TrustZone has been hacked in the past to implement virtualization.


The only reason hardware support for virtualization is necessary on x86 is a few design decisions dating back to the 80286.


Can you elaborate?



Last I remember reading, ARMv7 wasn't Popek/Goldberg compliant. The Cortex-A15 is touted as adding this facility, no? Regardless, a 4G box is going to be a poor resource-allocation environment for virtualized hosting, which is closer to my point.


Yes, not all ARMv7 instructions meet the Popek/Goldberg virtualization criteria. The project I'm working on uses dynamic binary translation to trap & interpret these instructions. This is how VMware works without virtualization extensions on x86.


Perhaps it is time for distributed RDBMS with datastores purely in RAM.


According to http://www.theregister.co.uk/2010/08/25/arm_server_extension... there's an extension for 32-bit ARM processors that allows them to address 40 bits of memory (1TB of RAM). This comes before the 64-bit processors, which should arrive in 2014. Btw, I don't know whether this option is available in the Calxeda/HP solution.


I wonder if the Redstone part of the name came from a secret Minecraft fan at HP's headquarters.

I hope it succeeds, just to give Intel a run for their money. I really think that ARM is the future of computing (including the desktop).


Given the "Project Moonshot" bit, I'd imagine it's from the Redstone rockets that launched the Mercury astronauts into space.


No details or pricing whatsoever?


You can get details on the underlying server architecture from Calxeda's pages describing the system on chip, and the quad server boards that are in the initial HP design:

http://calxeda.com/products


Just yesterday I was dreaming of small server racks composed of Raspberry Pis and BeagleBoards. Wish I had a few million lying around… or a cheap dedicated link at home.


Don't forget some BeagleBones for good measure. http://beagleboard.org/bone


I hope it has ECC RAM, unlike most competitors I've seen so far (such as the Atom-based boxes).


Yes; it's a proper server-oriented processor, with ECC, a fast interconnect, large caches, etc.


So.. another blade server solution? I thought RLX tried this a decade ago, and went home bankrupt?


RLX really needed scale-out software and DevOps, but neither was widely available in 2001. Also, Calxeda is claiming a larger advantage than RLX did.

And wasn't RLX technically a success since it was bought by HP and then canceled?


How much cooling do these racks need?

It could be a killer selling point if these servers need no cooling.


Project Moonshot? What's wrong, "Project Vaporware" was taken?


From the Register article: "The hyperscale server effort is known as Project Moonshot, and the first server platform to be created under the project is known as Redstone, after the surface-to-surface missile created for the US Army, which was used to launch America's first satellite in 1958 and Alan Shepard, the country's first astronaut, in 1961."



