I make the IT purchasing decisions for my company, and I know what an ARM CPU is and why it's a good idea. That's exactly why I think HP's approach doesn't make sense: ARM hasn't been proven in this market, so customers and vendors (like HP) need to work very closely together, with real openness about design issues, performance tuning, etc.
HP is starting off on the wrong foot. Maybe there's enough money in ARM that they'll figure it out, but if I had to make a prediction, the ARM server market is going to take off so fast that HP will be left behind.
Hasn't the enterprise market worked that way for years?
I wouldn't be surprised if they give some servers away to big enough companies in exchange for feedback on how to improve the deployment software going forward.
Which isn't too bad...
For this product, http://hp.com/go/moonshot works
That URL looks fine to me.
Think of the ARM processors in the pipeline, with 64+ cores, built on the same designs that power cellphones. The biggest problem in a datacenter is not space or processing power; it's energy consumption and heat dissipation. Walk into a datacenter and the place looks half empty, with plenty of space to fit 6x more servers. Today that can't be done because there's no capacity for more air conditioning to cool the extra servers in the building.
Also, the way we process data has changed in recent years, for example with MapReduce, which makes having many cores far more useful than a single server with one massive 5 GHz core (toy example below). In fact, many servers today are IO-bound, not CPU-bound; there's excess CPU capacity sitting idle.
Think of a server with 64 ARM cores and an array of SSDs. It won't heat up as much as mechanical disks or today's CPUs, the SSDs remove most of the IO constraints, and you get far more parallel processing power.
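To make the MapReduce point above concrete, here's a toy sketch (mine, nothing to do with HP's software): a word count whose map phase is spread across however many cores the OS reports. Nothing in it is ARM-specific; the point is just that this style of job scales with core count rather than clock speed.

    # Toy map/reduce-style word count, spread over every core the OS reports.
    from collections import Counter
    from functools import reduce
    from multiprocessing import Pool, cpu_count

    def map_chunk(lines):
        # Map step: count words in one chunk of the input.
        c = Counter()
        for line in lines:
            c.update(line.split())
        return c

    def merge_counts(a, b):
        # Reduce step: fold two partial counts together.
        a.update(b)
        return a

    if __name__ == "__main__":
        lines = ["the quick brown fox jumps over the lazy dog"] * 100_000  # stand-in data
        n = cpu_count()                                   # 64+ on the ARM boxes discussed above
        chunks = [lines[i::n] for i in range(n)]          # one chunk per core
        with Pool(n) as pool:
            partials = pool.map(map_chunk, chunks)        # map phase, one worker per core
        total = reduce(merge_counts, partials, Counter()) # reduce phase
        print(total.most_common(3))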
Lately we are starting to hit a limit: a power limit. In the data center we use, we're limited not by space but by power consumption, so we now have a pseudo-KPI of "reduce our power usage". It's a good goal, but certainly not one I had ever contemplated before.
A (future) stumbling block is our reliance on x86. Would love to be able to move to ARM =\
Not huge, but if you put multiple of these cards in a server, the power adds up.
The sales pitch for the Redstone systems, says Santeler, is that a half rack of Redstone machines and their external switches implementing 1,600 server nodes has 41 cables, burns 9.9 kilowatts, and costs $1.2m.
A more traditional x86-based cluster doing the same amount of work would only require 400 two-socket Xeon servers, but it would take up 10 racks of space, have 1,600 cables, burn 91 kilowatts, and cost $3.3m.
Hmm, let's see. It's about 7-8 grand per Xeon server, something like an HP ProLiant DL360R07 (2 x 6-core Xeons at 2.66 GHz). That's 3 times as many cores as a Redstone node, each clocked at roughly 2.66 times the frequency and doing more instructions per clock tick, too. And that's without Hyper-Threading.
Am I missing something big, or is Redstone solution neither cost-effective nor energy-effective?
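For reference, here's the per-node arithmetic straight from the quoted figures (a quick sketch that takes HP's "same amount of work" claim at face value, which is exactly the part in dispute):

    # Back-of-the-envelope from the figures quoted above; a "node" is
    # whatever HP counts as a server node in its own pitch.
    redstone = {"nodes": 1600, "kw": 9.9,  "cost_usd": 1_200_000, "racks": 0.5}
    xeon     = {"nodes": 400,  "kw": 91.0, "cost_usd": 3_300_000, "racks": 10.0}

    for name, s in (("Redstone", redstone), ("Xeon", xeon)):
        print(f"{name:8s} "
              f"{s['cost_usd'] / s['nodes']:>8,.0f} $/node  "
              f"{s['kw'] * 1000 / s['nodes']:>6.1f} W/node  "
              f"{s['nodes'] / s['racks']:>6.0f} nodes/rack")

Per node that works out to roughly $750 and 6 W for Redstone against $8,250 and 228 W for the Xeons, so the whole comparison hangs on whether one ARM node really does a Xeon server's worth of work.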
Look, today's multichip, multicore servers tend to be unbalanced for a lot of workloads. Their massive compute performance often burns power waiting for main memory, disk or network.
Then the question is, which hardware delivers the right balance of CPU, memory and IO bandwidth for the lowest capital and operating costs.
Also, for what it's worth, each card has 60 Gbps of general IO bandwidth and another 48 Gbps of SATA disk bandwidth.
And each set of 4 ARM cores has its own memory channels and I/O ports, versus every 6-12 cores sharing them on the Xeon [corrected] (the point being that CPU speed is not the only variable here).
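Spread that card-level bandwidth over the nodes it serves (a rough sketch, assuming the stated 4 server nodes per card) and each little node still gets a respectable share:

    # Per-node share of the card-level bandwidth quoted above,
    # assuming (as stated) 4 server nodes per card.
    nodes_per_card = 4
    general_io_gbps = 60   # per card, as quoted
    sata_gbps = 48         # per card, as quoted

    print(f"{general_io_gbps / nodes_per_card:.0f} Gbps general IO per node")  # 15
    print(f"{sata_gbps / nodes_per_card:.0f} Gbps SATA per node")              # 12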
HP can cram three rows of these [4 CPU --jsn] ARM boards, with six per row, for a total of 72 server nodes
From that I conclude that in their calculations 1 CPU == 1 server.
There is an obvious gap in the market for low-power, low-heat, high-memory-throughput server processors. I'd just like to see a reference Linux distro that supports 16 ARM cores, as well as a reference server card...
The specs only refer to 32-bit memory addressing as well (i.e. <4 GB of memory). Seems like the wait will be for the 64-bit ARMv8 processors to be integrated.
Power consumption isn't clear. I see no peak load wattage numbers, which worries me for a product marketed expressly as a low-power option.
One advantage this architecture does have is density of memory bandwidth. They have 72 DDR3 channels per rack unit vs. 25.6 for a blade server filled with 4-channel Westmere EXs (the Intel boards will stack the DIMMs up on the same channel). So you might want to look seriously at it as a hosting platform for a very parallel in-memory data store.
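Rough numbers on that, assuming DDR3-1333 at about 10.6 GB/s per channel (my assumption; the article doesn't say what DIMMs are used):

    # Aggregate memory bandwidth per rack unit from the channel counts above,
    # assuming DDR3-1333 at roughly 10.6 GB/s per channel (my assumption).
    gb_per_s_per_channel = 10.6
    for name, channels in (("ARM boards", 72), ("Westmere EX blades", 25.6)):
        print(f"{name}: {channels * gb_per_s_per_channel:.0f} GB/s per rack unit")

That's on the order of 760 GB/s per rack unit versus about 270 GB/s, which is why it looks interesting for a very parallel in-memory data store.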
Plus, TrustZone has been hacked in the past to implement virtualization.
I hope it succeeds, just to give Intel a run for their money. I really think that ARM is the future of computing (including the desktop).
And wasn't RLX technically a success since it was bought by HP and then canceled?
It could be a killer selling point if these servers need no cooling.