
Copper enables the ARM server ecosystem - esolyt
http://www.dell.com/learn/us/en/04/campaigns/project-copper
======
voidlogic
I think x86/amd64 virtualization has made ARM a much less compelling option
for servers than it would have been a few years ago.

I'm looking forward to benchmarks of the new ARM server CPUs (esp. AMD's),
but in the past, comparing scale-out ARM boxes and traditional servers in the
same space has:

    
    
      1. Been much more in favor of x86/amd64 under low load (much lower latency for users)
      2. Been pretty similar under very high concurrent loads
      3. Not had compelling differentiation in power consumption
    

In short, 48 1.6 GHz ARM cores don't fare well against even a lower-end
x86/amd64 server with dual quad-core CPUs with SMT (hyperthreading), that is,
32 logical cores at 3.0 GHz. And the x86/amd64 is much cheaper.

In reality I could fill a rack with these ARM blades, or have two quad-socket
x86/amd64 servers be equivalent or better...

I hope this changes for the sake of consumer options, but I need to see
benchmarks + power usage stats to believe it.

~~~
lazyjones
> _In reality I could fill a rack with these arm blades or have two quad
> socket x86/amd64 servers be equivalent or better_

Quad socket x86 is still prohibitively expensive, but in the price per
performance assessment you might be right (at least if you do not consider the
expensive high-end x86 CPUs). The ARM blades look interesting for bare metal
clouds though.

~~~
wtracy
> _Quad socket x86 is still prohibitively expensive_

Is there an engineering reason for this, or is there just not enough volume
for those boards to be economical?

~~~
lazyjones
I'd venture a third possibility: the prices are artificially high because
the customers who buy these configurations can afford to pay the premium.

~~~
ams6110
This. Purchasers of the highest-end and/or costly experimental architectures
tend to be research institutions or national labs that are using grant money.

~~~
wmf
In the case of quad-socket, it's actually enterprises running SQL databases
and VMware consolidation where the cost of the software license exceeds the
cost of the server.

------
ChuckMcM
I suppose this is one of the things you can do when you take Dell private: no
institutional shareholders to sue you because you 'threatened their value'
with a radical product idea.

As a systems guy I love the concept, but I'm a bit sad about the
implementation. I would have loved to see the backplane of these things
connect to a 'switch module' and take the connectors off the front.
Basically, a 48-port GbE switch with quad 10GbE uplinks out the "end" of the
case would have been much nicer. Install a nice SDN stack in the switch
hardware, such that one could virtualize the switch topology on the fly, and
you've got a box that you can configure in lots of ways and still get some
economies of scale in both the CPU and switch infrastructure. Install a
24-port 10GbE switch at the top of the rack, and you've got 6 "Copper"
chassis (18U) plus the 24-port switch (1U): a rack with 288 hosts, 2.3T of
RAM, 288T of storage, and, assuming a non-blocking 24-port switch,
~833Mbit/s between any two hosts. Add a 1U boot/config management server into
the rack and that is a heck of a gizmo.
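(A quick back-of-the-envelope check of the rack numbers above; the per-node
RAM/disk figures and the quad-10GbE-uplink design are taken from the
description, while treating the chassis uplinks as the cross-chassis
bottleneck is my assumption:)

```python
# Sanity check of the rack build-out sketched above.
# Per-chassis figures: 48 nodes, 8 GB RAM and 1 TB disk per node,
# 4x 10GbE uplinks, 3U per chassis.

chassis = 6
nodes_per_chassis = 48
hosts = chassis * nodes_per_chassis            # 288 hosts
ram_tb = hosts * 8 / 1000                      # 2.304 TB, i.e. ~2.3T
storage_tb = hosts * 1                         # 288 TB
rack_units = chassis * 3 + 1                   # 18U of chassis + 1U ToR switch

# With quad 10GbE uplinks shared by 48 one-gigabit hosts, cross-chassis
# bandwidth per host is uplink-limited:
mbit_per_host = 4 * 10_000 / nodes_per_chassis  # ~833 Mbit/s

print(hosts, round(ram_tb, 1), storage_tb, rack_units, round(mbit_per_host))
```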

~~~
zebra
This is almost proof-of-concept technology. If it takes off, I'm sure that
Dell will invest a lot more R&D in it.

------
zokier
They left out two most critical bits of info: power consumption and price. You
can already get ridiculous amounts of x86 cores in a rack if you want, but
those can get pricy to buy and to keep running.

~~~
ghostdiver
This type of infrastructure is not an instant win for many (most?)
businesses. Even if the price is low and the power consumption makes a
difference, there will still be a HUGE cost in rewriting software.

Not all businesses need to write scalable software, because their current
technology stack is good enough and will always fit on one beefy machine; on
the other hand, it will never fit on one ARM server module.

ps. hardware is cheap, while engineering work isn't

~~~
_delirium
That really depends on what kind of software you're running. There's a wide
range of portable C/C++ Unix stuff that compiles fine on ARM. If you run a
distro like Debian or ubuntu, you can just install the armhf port and have all
that stuff.

Where it gets tricky is if it's something written for an environment that
itself has no ARM port yet, such as the JVM.

~~~
theatrus2
OpenJDK HotSpot has a relatively mature ARM port. Due to "embedded" licensing
it's not been supported by Oracle, but you can use it.

------
robabbott
Take a look at the HP Moonshot servers for comparison:
[http://h17007.www1.hp.com/us/en/enterprise/servers/products/...](http://h17007.www1.hp.com/us/en/enterprise/servers/products/moonshot/index.aspx#.UpHyoqWO5G8)

These boxes have internal slots for 45 blades. The current generation blades
are Atoms and run a variety of Linux OS offerings. Future blades will be
geared towards memcache, GPU, and other types of clusters. I got a couple of
these at work for eval a little while back, and it's a pretty interesting
package.

~~~
notacoward
Actually the closest comparison seems to be Viridis.

[http://www.boston.co.uk/solutions/viridis/default.aspx](http://www.boston.co.uk/solutions/viridis/default.aspx)

Both have 12 cards per shelf, each card containing four quad-core ARM
processors. The Dell Copper cards are a bit beefier, at 1.6GHz with 8GB,
vs. 1.4GHz with 4GB for the Viridis. OTOH, the Viridis processors are 64-bit
and the shelf is 2U instead of 3U, which might more than make up for the
other differences.

At that density, the big differentiator is likely to be power (and therefore
heat). Viridis claims 5W per server, which is even better than the SiCortex
boxes I worked on. I don't see a number for Copper, so my gut tells me it's
probably more. The question is _how much_ more.

~~~
ris
I don't know where you're getting the 64-bit thing from, but beyond that,
comparing clock speeds between a Marvell Armada XP and the EnergyCore in the
Boston machine is essentially meaningless.

~~~
notacoward
Yes, you're right, the EnergyCore isn't 64-bit, and of course there are other
differences between the architectures. Perhaps you could actually provide some
information on how those differences might affect the two systems'
capabilities. Or are you just here to snipe at others?

~~~
ris
My role here is to prevent people going away with misguided information
presented as fact and propagating it elsewhere. Or maybe I should just keep my
great big trap shut and sit here feeling smug instead.

There is too little information provided on the two examples to be more
precise, otherwise I would have gone into it. All we know is that the Boston
machine is EnergyCore-based: that could be a 1000, which is Cortex-A9 based,
or a 2000, which is A15 based. A 3-minute look didn't make it clear which one
you were talking about, or which servers are based on what. The Dell system
just says it's using a Marvell Armada XP, no more information. The Armada XP
is (I think) based on a modified A15 core, but of course they won't say this
anywhere. I'm guessing this because the XP range claims "64bit memory", which
I suppose is their way of saying it has a 64-bit physical address space, a
feature of the A15 range. Though of course Marvell have an ARM license that
would allow them to do something crazy like add PAE to an A9-based core. But
I think that's unlikely.

Enough research for you? I would say the only real way to gauge the
performance difference between the two is to try your particular application
on it.

~~~
notacoward
In other words, you don't really know enough to say there's a difference. What
we do know is that they're the same basic architecture and instruction set, at
very similar process levels, so it's not at all unreasonable to estimate that
the performance difference is proportional to the clock-rate difference. That
clock-rate difference is probably dwarfed both by Viridis's 50% nodes-per-rack
advantage and Copper's 2x memory-per-node advantage, so the quibble just
wasn't worth it. Thanks for adding so much to the discussion.

~~~
ris
No, I did not say they were very similar at all. The truth is we have very
little to go on to know how much a Marvell Armada XP is like an A15. In the
past, Marvell cores have been known to be quite different from their stock
counterparts.

------
mtgx
I don't think these are ARMv8 (64-bit). Applied Micro should be the first one
to the ARM server market with such chips, and I think they will be available
soon:

[http://www.apm.com/news/appliedmicro-announces-general-avail...](http://www.apm.com/news/appliedmicro-announces-general-availability-of-x-gene-system-development-ki/)

[http://www.apm.com/products/data-center/x-gene-family/x-gene...](http://www.apm.com/products/data-center/x-gene-family/x-gene/)

------
ksec
This is a similar offering to SuperMicro's MicroCloud, except with ARM
instead of x86. As much as I love ARM and want them to succeed, I don't think
there is any advantage to using ARM in servers apart from cost. And since ARM
is still vastly underpowered for many web server operations, its time just
isn't here. Maybe in two to three years, when a 64-bit eight-core ARM chip
costs less than $50. But even then, Intel's Atom would have similar
performance at a similar price. According to some new tests, the
soon-to-be-available 8-core server Atom has surprisingly good performance per
watt.

To put it simply, the server market has long wanted a low-power part for
certain types of usage scenarios. And Intel will soon have that covered.

~~~
tluyben2
I don't know about servers (I only use/used x86 Xeons), and I'm not sure
whether it's different software or Win vs. Lin, but I have never touched
anything with Atom on the package that was remotely useful for anything,
while I have a lot of very useful ARM devices. When people come up to me
saying 'I have a new laptop' and it has an Atom sticker on the front, I know
what's going to happen; they'll bring it back after a few days. So Atom in my
servers makes me cringe. Probably that's not justified?

I do know that in the datacenter where we are colo'd, the vast majority of
the stuff hosted on the dedicated servers there could just as well be hosted
on a $5 ARM board. A lot of companies rent entire servers to host a website
which gets 2 visitors/month. This is also my experience as a long-term devop
(used to be fulltime, now parttime); servers I maintain, even clusters I
maintain, could be hosted on a few $ ARM boards. They want dedicated because
it gives them a sense of security and of not being 'disturbed' by other
sites, so an ARM 'server' would suit that case well. With the power
consumption difference and bare metal costs, it's many $100s vs. a few $10s
per month for the client.

Edit: somewhere in my brain I thought I had seen very cheap Atom servers, and
I had:
[http://www.ovh.nl/dedicated_servers/isgenoeg_2g.xml](http://www.ovh.nl/dedicated_servers/isgenoeg_2g.xml)
. Currently in OVH's sold-out mode, but when they are back I'll try one to
see how it holds up with one of those minimal sites.

~~~
ksec
Well, for one, I don't see any price advantage for ARM compared to Intel (in
this area); 8-core Atoms are cheap enough. The Atoms you are referring to are
first-gen netbook Atoms, which are indeed very bad without serious software
optimization. The Atom I am referring to is the newest Avoton C2750, which
will soon be available in OVH's new SYS brand range. Assuming everything else
stays the same (memory, SSD/HDD, network, etc.), the cost difference between
an Atom server and an ARM server is in the range of low double digits. And
there isn't a $5 ARM board; the ARM SoC provided by Marvell's Armada range
would be double that already. So the cost difference for a rented dedicated
server between ARM and Intel Atom would be at most a few dollars per month.
Not to mention the Atom is still a lot more powerful. Website usage has
spikes; no one wants a server that can barely handle the load it is getting
today.

ARM has the advantage in smartphones and tablets, where there are no software
compatibility concerns and it operates in the hundreds-of-milliwatts range.
In servers, these two advantages disappear.

~~~
tluyben2
Yes, that makes sense. But first gen? The ones currently for sale in shops
are still the first gen? The laptops I see brand new in shops, and they are
not netbooks, are unusable to me. Whatever you do beyond opening a single
window breaks it completely (under Windows, anyway). Maybe those are older, I
don't know, but I think they are not first gen?

I hope that'll change as you say, because I love the price ;)

~~~
ksec
As far as I know (@everyone, do correct me if I am wrong), most of those are
stock leftovers or first-gen designs shrunk to a smaller node, which
basically means they still suck. And without an SSD (I/O bottleneck) those
laptops suck even more. Since the PC industry cares absolutely zero about
user experience, they don't want to put a costly SSD into an Atom notebook.

And why would anyone want one when a Celeron or Pentium is only $20 more
expensive but performs a lot faster?

------
dham
Maybe I'm not open-minded enough, but I just don't believe the future is ARM
on the desktop/server. I believe the future is x86 on the phone. I feel it
will take Intel less time to get the power right than it will take ARM to get
the speed.

~~~
TillE
Per Geekbench, an iPhone 5S / iPad Air is comparable to a MacBook Pro from
2009, and fully half as fast as some of the very latest laptops.

ARM is already extremely good for mobile applications. I don't have the
numbers in front of me, but they certainly seem to be catching up faster than
Intel has been able to cut down on power.

------
kogir
I'm honestly not interested in all the reasons you won't use these servers.

I doubt Dell designed these speculatively. One or more Dell customers likely
requested them.

Who are these customers, and what is their use case? That's what's likely
interesting.

~~~
wmf
But that's equally damning. Dell, SeaMicro, Viridis, and Moonshot _combined_
have almost no microserver case studies after 1-2 years. At what point do we
end the experiment?

~~~
theonewolf
Look here for some ideas on workloads:
[http://www.cs.cmu.edu/~fawnproj/](http://www.cs.cmu.edu/~fawnproj/)

It will be interesting, I think around the 5 year mark, to see the industry
case studies. Just because these projects were _announced_ does not mean they
were in _heavy usage_ over the entire 1-2 year timeframe.

------
tobykier
The Intel astroturfing contingent is out in force today

------
ck2
Are these really better per watt at serving content?

I'm all for x86/intel competition but I don't see this as real savings yet?

~~~
notacoward
Looks like they'd be _a lot_ better per watt.

[http://armservers.com/2012/06/18/apache-benchmarks-for-calxe...](http://armservers.com/2012/06/18/apache-benchmarks-for-calxedas-5-watt-web-server/)

This is hardly surprising to me. I worked on similar low-power high-density
systems at SiCortex for a couple of years. High clock rates and big caches do
improve performance, but they increase power consumption even more, so for a
parallelizable workload a larger number of "wimpier" processors can do more
work per watt.

~~~
mjg59
Those figures are pretty close to being lies. A quad-core 3.3GHz Xeon that's
only serving 6950 requests per second isn't going to be at anywhere near 100%
utilisation, which means there's no way it's running at anywhere near its TDP.
Same for the RAM. The x86 isn't going to be anywhere near 5W, but it's going
to be significantly lower than 102W. They're also ignoring the fixed disk and
PSU overheads, shifting the power side of the ratio in favour of the Calxeda,
and using a benchmark that's going to end up network limited is a solid way of
reducing the performance advantage of the Xeon.

I don't doubt that ARM _does_ have a better power/performance ratio, but it's
nowhere near 15x. Using such obviously misleading statistics leaves me
suspecting it's way closer than that.
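(To make the sensitivity concrete, here's a rough sketch of how the
requests-per-watt ratio moves with the power figure you charge to the x86
side. The Xeon's 6950 req/s and 102W TDP are the figures from this thread;
the ARM request rate and the "realistic" partial-load Xeon draw are
illustrative assumptions, not measurements:)

```python
# How the claimed perf/watt ratio depends on what you charge the Xeon.
# 6950 req/s and 102 W (full TDP) are the figures discussed above; the
# ARM-side numbers and the partial-load Xeon draw are assumptions.

def req_per_watt(reqs, watts):
    return reqs / watts

arm = req_per_watt(5500, 5)            # hypothetical 5 W ARM node
xeon_tdp = req_per_watt(6950, 102)     # charging the full TDP
xeon_partial = req_per_watt(6950, 55)  # hypothetical draw at partial load

print(round(arm / xeon_tdp, 1))      # ~16x if you charge the full TDP
print(round(arm / xeon_partial, 1))  # ~8.7x with a more realistic draw
```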

~~~
notacoward
While what you say is true, it's also true that a lot of x86 servers sit idle
_precisely because_ they're network-limited. Wasting power that way is no
different than wasting power any other way. That effect far outweighs any
quibbling about whether the x86 really uses its full TDP (it won't) or needs
more support chips that more than make up for it (it will), or what kinds of
memory are involved, etc.

Just so happens that I have both an x86 box and an ARM box in my office. Maybe
on my next day off (that I'm not stuffing my face with turkey) I'll run some
benchmarks myself.

~~~
mjg59
Well sure, but you could replace those servers with a low-power x86 and get
most of the same benefit that Calxeda are touting. Massively overprovisioning
is going to waste power. An 8051 is probably going to give a better
power/performance ratio than a quad-core ARM for an embedded controller, which
tells me nothing about which I should choose to run my database.

~~~
notacoward
Where is that low-power x86 and its fleet of low-power support chips to do
what something like an Armada or EnergyCore can do on its own? They have yet
to come up with one.

As for your 8051 example, it's bogus because that chip simply can't do the
work. It can't run a real OS, and even if it could do that it couldn't keep
even a single Ethernet or SATA port busy. Therefore you'd need a lot more
nodes, each with their own network/storage/memory that don't come for free,
rapidly wiping out any savings on the CPU alone before you even get to the
high-node-count coordination problems that would make the whole thing fall
flat on its face.

The whole issue here is not just absolute minimum power but _balance_. In a
server environment, where the processor's job is about keeping ports full more
than about pure number-crunching, you have to start with what kinds of ports
you have. What processor and memory most exactly matches a commodity I/O
profile, neither running over nor falling short, while consuming the fewest
watts? Modern ARM chips are often a better answer to that question than
anything Intel makes. It's a shame that some people who've invested many years
in x86-specific expertise might find the market for those skills eroding as a
result, but that's the harsh reality.

~~~
mjg59
A quad-core Xeon is overkill for a static content webserver. An ARM is
overkill for an embedded controller. If you specify inappropriate hardware
then you'll end up with an inappropriate power/performance ratio.

As for low-power x86 - HP's Moonshot is in the ballpark of ARM blade devices,
and Baytrail pushes Intel even closer. ARM probably still wins, but the
figures are nothing like 15x. And once you take fixed costs like disk and RAM
into account, the difference ends up being even smaller.

ARM have done a great job of improving the performance of their cores. Intel
have done a great job of cutting the x86 power budget. Given that nobody's
really shipping ARM servers yet, it's still not clear who's going to come up
with the better product. The problem that ARM face is that they not only have
to be better, they have to be sufficiently better that it's worth the cost of
porting in-house applications to a new architecture.

------
fest
Not being an expert on servers, I can see a possible use case for low-power
servers like this: strongly disk-bound tasks (e.g. an infrequently accessed
storage area).

A cost analysis might be required, though; I suspect that a more powerful CPU
with a lot more disks may end up costing less (per unit of disk space vs.
total power consumption).

------
wtracy
Are there any ARM servers currently available to Normal People?

I got excited when I first saw the headline because the last time I looked I
didn't see anything available in the hobbyist/small business price range.

~~~
rwmj
No. Also there is no 64 bit ARM hardware available to anyone (if you ignore
the extremely locked-down iPhone 5S). I have used a bunch of ARM servers, but
unfortunately I'm under NDA.

However there is interesting hackable ARM hardware around. I would recommend
looking at the CubieTruck, Mele A1000G Quad (make sure it's the "Quad"
variant), and possibly the ODROID-XU. I write about these and others on my
blog ([https://rwmj.wordpress.com/](https://rwmj.wordpress.com/))

------
rbanffy
I couldn't find anything on pricing and availability, but, despite my
enthusiasm for Windows-proof servers, I am not sure if having a lot of
discrete servers in a 3U enclosure is better than having a lot of containers
running on, say, 3 1U servers. Containers (or even VMs) are much more
manageable than 48 physically distinct machines (to say nothing of having 4 on
each board, meaning an upgrade or defect on one means needless downtime for
three others).

If we could turn it into a NUMA machine with 48 CPUs, that would be a totally
different story.

~~~
ansible
_If we could turn it into a NUMA machine with 48 CPUs, that would be a totally
different story._

This particular version seems to only support 1Gbit Ethernet, so I don't know
how well that will work.

Still, we're talking about 192 cores with 384 GB of RAM; that's got to be
useful for some kinds of workloads. Can you get that kind of density with
Intel or AMD?

~~~
ris
> Still, we're talking about 192 cores with 384 Gbytes of RAM

Seeing as you're going to be keeping 48 copies of the same OS in that memory
whether you want to or not, you're going to need those 384G.

~~~
jws
I just checked a random server: OS and all process images are using 119MB, *48
= 5.5GB. I think I'll do just fine with the remaining 378G.
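(The arithmetic checks out, give or take rounding; the 119MB per-node figure
is the one quoted above, and whether you count in powers of 1000 or 1024
barely matters at this scale:)

```python
# Sanity check of the per-node OS overhead estimate above.
per_node_mb = 119          # figure quoted from the random server
nodes = 48
total_gb = per_node_mb * nodes / 1024   # ~5.6 GB across all 48 nodes

remaining_gb = 384 - total_gb           # ~378 GB left for actual work
print(round(total_gb, 1), round(remaining_gb))
```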

------
happywolf
Given that ARM reigns in the mobile space, if it gains traction in the server
market it will put tremendous pressure on the PC market, which lies in
between.

~~~
venomsnake
It won't - it may evolve or change the PC market but it won't pressure it.
People will need computing powerhouses and big monitors.

My predictions are that we will hit cloud disillusionment soon and people are
going to rediscover the benefits of home storage and GPU power.

~~~
lhl
While there's a certain class of PCs that won't be much affected anytime soon
(high-end gaming, workstations), ARM is already inching into the HTPC/home
server space.

There are a bunch of i.MX6-based boards with GigE, SATA, and a surprisingly
strong GPU/VPU that, with their very low power consumption, are very well
suited for always-on use. They also work well enough for media playback, web
browsing, etc.

You can see both AMD (APUs) and Intel (NUCs) aiming at that same target, but
they'll face stiff competition: the ARM systems are much cheaper, about $100
for fully functional boards.

~~~
venomsnake
But that is exactly what I was saying: I don't care which instruction set
delivers my performance, as long as it delivers. So it won't put pressure on
PCs; it will put pressure on x86 vendors. But the overall home server (and
even workstation) market could increase. There is nothing preventing a
high-end GPU card from working with an ARM CPU.

~~~
lhl
Err, OK, I think it's a given that the number of computing devices in a given
home will increase over time. The GP, however (and most of the threads here),
is discussing how an evolving ARM will impact/encroach on the existing x86
market.

------
iyulaev
"Hitting The Market" != "there is no general availability of the Dell "Copper"
servers at this time."

------
imahboob
ARM was meant to provide low-cost servers; however, with the prices of cloud
servers coming down, I am not sure how successful these will be.

------
iSnow
I want to rent one of those as a cheap dedicated server. Get moving, hosters
:)

------
diminish
Exciting days ahead, if each node costs two digits. Is the CPU 64-bit or
32-bit?

~~~
zokier
With 8GB of RAM, I'd assume that they are 64-bit.

~~~
PyErr_SetString
If I'm not mistaken, ARMv7 supports PAE, which will allow you over 4 GB of
memory. Perhaps not all in the same process, but still...

------
wlievens
Is there a JVM for these?

~~~
wtracy
Apparently Oracle has an "early access" version of Java SE for ARM. I'm pretty
sure that there are a few open source VMs floating around with ARM support,
but I wouldn't count on any of them being full drop-in replacements for Oracle
Java.

(Obviously there's a bunch of J2ME and other embedded implementations, but I
assume that's not what you're looking for.)

~~~
wlievens
Nope. We're going to build a small compute cluster for Java 7 code. It's for
a few dozen cores, so nothing huge. I love these embarrassingly
parallelizable problems, so the idea of using cheap servers has some appeal
to me. Though individual job run time matters too, so we need something
somewhere in the middle.

------
ptx

      > What is ARM?
      > An advanced RISC machine (ARM) server employs small,
      > low-power ARM processors, typically deployed as
      > systems on a chip (SoC) to reduce space, power consumption
      > and cost.
    

This looks like a perfect example of the kind of violation a trademark owner
is required to pursue to protect the trademark: Dell uses "ARM server" here to
mean "a server that is RISC-based and advanced" rather than "a server [based
on tech] of the brand ARM from ARM Ltd."

~~~
wtracy
ARM stands for "Advanced RISC Machine". I believe the copy is correct,
although I would have expected "advanced RISC machine" to be capitalized as
"Advanced RISC Machine" as it refers to a proper name.

~~~
ptx
That is exactly what I meant. The abbreviation that is the company name used
to stand for "Advanced RISC Machines", but "an advanced RISC machine" is
clearly a generic description of something that could come from any
manufacturer.

As Wikipedia says[1]:

"A trademark which is popularly used to describe a product or service (rather
than to distinguish the product or services from those of third parties) is
sometimes known as a genericized trademark[2]. If such a mark becomes
synonymous with that product or service to the extent that the trademark owner
can no longer enforce its proprietary rights, the mark becomes generic.

[1]
[https://en.wikipedia.org/wiki/Trademark](https://en.wikipedia.org/wiki/Trademark)
[2]
[https://en.wikipedia.org/wiki/Genericized_trademark](https://en.wikipedia.org/wiki/Genericized_trademark)

The point is that "ARM" as it is used in the text doesn't refer specifically
to the products of the trademark owner. If such usage becomes acceptable,
anyone could sell "ARM servers" and the trademark becomes useless.

