

32 cores of fast ARM in half-depth 1U case - rjek
http://www.baserock.com/

======
mdwrigh2
Calxeda has been building 16-core cards[1] for a while now, with about 18
fitting in a 1U-sized box[2], each card connected by 10GbE, similar to this[3].

[1]: [http://www.calxeda.com/technology/products/energycards/quadn...](http://www.calxeda.com/technology/products/energycards/quadnode/)

[2]: [http://semiaccurate.com/2011/11/03/calxeda-launches-a-4-core...](http://semiaccurate.com/2011/11/03/calxeda-launches-a-4-core-arm-server-chip/)

[3]: [http://www.calxeda.com/wp-content/uploads/2012/06/EnergyCard...](http://www.calxeda.com/wp-content/uploads/2012/06/EnergyCard-Product-Brief-612.pdf)

~~~
colanderman
Not to mention Tilera, who makes 36-core chips/cards [1], with 4 in a 1U box
[2], and with each processor connected to each of its four neighbors by a 250
Gb/s link [3].

[1] <http://tilera.com/products/processors/TILE-Gx_Family>

[2] <http://tilera.com/products/platforms>

[3] <http://www.tilera.com/technology>
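
The "four neighbors" wiring is a 2D mesh. A minimal sketch of the neighbor
relation, assuming a 6x6 grid of tiles to match the 36-core TILE-Gx part (the
coordinate scheme here is illustrative, not taken from Tilera's docs):

    # Neighbors of a tile at (row, col) in a DIM x DIM mesh:
    # up, down, left, right. Edge and corner tiles simply have fewer links.
    DIM = 6  # 6x6 = 36 tiles, matching the 36-core part

    def neighbors(row, col):
        candidates = [(row - 1, col), (row + 1, col),
                      (row, col - 1), (row, col + 1)]
        return [(r, c) for r, c in candidates
                if 0 <= r < DIM and 0 <= c < DIM]

    print(neighbors(0, 0))  # corner tile: 2 links
    print(neighbors(2, 3))  # interior tile: 4 links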

~~~
JVIDEL
But Tilera uses its own architecture; ARM, on the other hand, is nearly
everywhere.

There have been other attempts like Tilera's: the INMOS Transputer, the
Connection Machine... SiCortex was founded in 2003 with a similar idea and had
shut down by 2009.

The problem with these architectures is cost: mass-market tech like x86 and
ARM offers more bang for your buck.

------
ajross
Not a 32-core computer. It's 8 4-core SoCs with independent storage,
connected by a 5 Gbps network switch. Not a useless machine (it's a cheap way
to buy lots of DRAM bandwidth, I'll note), but not what the headline is
selling it as.

~~~
lvh
Did the headline change? The one I see now just says that it's 32 cores in a
tiny box, not that it's one machine specifically.

~~~
ajross
I didn't say it was factually incorrect; I said (implied, I guess) that it
was misleading. You're seriously saying you read that and understood it to
mean 8 (!) distinct machines on one board?

~~~
lvh
I read it and understood it was a lot of cores in a small box. I presumed they
weren't actually one machine, yes; but I didn't feel the headline was trying
to imply anything either way.

------
montecarl
Looks neat. It should have a redundant power supply, though. With redundant
power, just one of these could be a good platform for deploying fault-tolerant
services.

~~~
EwanToo
I think the point of this kind of server is to build the fault tolerance in
at the rack level, or at least the 1U level, not at the level of individual
components.

If an individual server fails, your service shouldn't even blink.
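
In that spirit, a minimal client-side failover sketch in Python; the node
addresses and port are made up for illustration, and a real deployment would
more likely put a load balancer or service discovery in front:

    import socket

    # Hypothetical addresses for the 8 nodes on one slab.
    NODES = ["10.0.0.%d" % n for n in range(1, 9)]
    PORT = 8080  # illustrative service port

    def first_healthy_node(nodes, timeout=0.5):
        """Return the first node accepting TCP connections, skipping dead ones."""
        for host in nodes:
            try:
                with socket.create_connection((host, PORT), timeout=timeout):
                    return host
            except OSError:
                continue  # node down or unreachable: fail over to the next
        raise RuntimeError("no healthy nodes")

    print(first_healthy_node(NODES))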

------
K2h
The picture shows 8 tiny fans mounted inside (not counting the fan probably
inside the PSU).

I would have thought that part of the reason for going to ARM is the low power
draw - hopefully no moving parts. It would have been cool (ha) to see them
pull it off without fans.

~~~
ricardobeat
When you have 608 of those in a rack, I think the fans are very welcome :)
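
One way to arrive at 608, reading it as nodes rather than slabs (the usable-U
figure is an assumption for illustration):

    # Back-of-the-envelope rack density for half-depth 1U slabs.
    slabs_per_u = 2      # half-depth, mounted back-to-back
    usable_u = 38        # assumed; leaves 4U of a 42U rack for switches etc.
    nodes_per_slab = 8   # 8 x 4-core SoCs per slab

    slabs = slabs_per_u * usable_u   # 76 slabs
    nodes = slabs * nodes_per_slab   # 608 nodes
    cores = nodes * 4                # 2432 ARM cores
    watts = slabs * 260              # 19,760 W ceiling if every PSU maxed out

    print(slabs, nodes, cores, watts)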

------
patrickgzill
2GB per node - not enough IMHO (at least for the use I was thinking of).

------
alexfoo
Cost?

~~~
icefox
At first I thought it was just a marketing page to gauge interest, but the
spec sheet says "Order now from Codethink for September delivery" - again,
though, no price.

------
rurounijones
What is the advantage of a half-width server?

I mean, with all the connectors on it, it isn't like you can put two of them
into a 1U slot.

~~~
rjek
Half depth, not width. Everything comes out the front, so you mount them
back-to-back in a rack.

~~~
brk
Been a while since I was heavily involved in high-density data centers... How
does that work with traditional hot aisle/cold aisle layouts? Seems like you'd
need to adjust the cooling to have an air channel up the middle of the rack?

~~~
rjek
You could do, or have one of them exhaust to its "front", and the other
exhaust to its "back".

~~~
rurounijones
So the one at the "back" is getting pre-warmed air. Wouldn't this be a
problem?

*Note: I am used to normal data centers but not high-density stuff; I don't
know what special provisions a high-density setup would have.*

~~~
rjek
In general, high-density or supercompute systems are installed with custom
racks, cooling, power supplies, etc. This doesn't need anything custom, just
something that isn't always the default in some DCs.

They're ARMs; they don't need much cooling anyway.

------
AllTheThings
From my understanding of the ARM architecture, ARM cores cannot process heavy
data loads in a power-efficient manner, making TDP under heavy server loads
prohibitively high. Am I right about this? I've never tried building an ARM
server of my own.

~~~
rjek
What understanding is it that you have that suggests that? It can shovel data
via DMA as well as the next architecture.

~~~
AllTheThings
Well, I'm not referring to reading/writing from a data store, but to actually
processing the data in registers and such: arithmetic operations and that sort
of thing.

~~~
rjek
High-density ARM servers are not ideal for every application. However, there
are plenty of applications where you don't need massive amounts of
computational grunt in each CPU: web serving, Hadoop, mail and DNS servers,
etc. The density and lower power/cooling requirements can also often offset
the lack of per-core performance. Each slab has a 260W PSU, so the whole
system can't pull more than 260W, ever. Some Intel CPUs need 200W just on
their own.
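
Back-of-the-envelope on those figures (the 260W and 200W numbers are from
above; the 8-core count for the Intel part is an assumption for
illustration):

    # Worst-case watts per core, using the PSU ceiling as an upper bound.
    slab_watts, slab_cores = 260, 32    # 8 nodes x 4 ARM cores, 260W PSU
    intel_watts, intel_cores = 200, 8   # assumed 8-core part at 200W

    print(slab_watts / slab_cores)    # ~8.1 W/core ceiling for the whole slab
    print(intel_watts / intel_cores)  # 25 W/core for the CPU alone, before
                                      # DRAM, disks, fans and PSU losses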

------
alexyoung
Imagine a Beowulf cluster of those!

~~~
Symmetry
I know you meant that as a joke, but I'd imagine one of these really would be
an excellent candidate for being turned into a cluster.

~~~
alexyoung
I meant it as a joke, while also recognising that it would be ideal for being
turned into a cluster. It's made me nostalgic for the days when I built
Beowulfs at university!

------
electic
Completely random, but I think it is funny the company is named "CodeThink
Limited." That can't be good.

