

SeaMicro builds box with 512 Atom CPUs - mckilljoy
http://gigaom.com/2010/01/06/seamicros-secret-server-changes-computing-economics/

======
pmjordan
I'm not really sure what real need they're trying to fill here. A single Atom
core has a TDP of around 4W; performance-wise it's at about 10% of a mid-range
Core 2 Duo, which has a TDP of around 65W, although the mobile versions are
much more efficient than that (35W or so). Getting 10 Atoms (or 5 if they're
N300-series dual-cores) running takes much more supporting infrastructure than
a single Core 2, and that infrastructure consumes power as well, so I doubt
they're getting more FLOPS/Watt or integer ops/Watt than a Xeon or Opteron
cluster.
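
A back-of-envelope version of that argument, using the ballpark figures above
(the per-node overhead wattage is an assumed value, not something from the
article):

```python
# Rough perf/Watt comparison, normalizing one mid-range Core 2 Duo to 1.0.
core2_perf, core2_tdp = 1.0, 65.0   # desktop Core 2 Duo, ~65W TDP
atom_perf, atom_tdp = 0.10, 4.0     # single Atom core: ~10% the perf, ~4W TDP

# On raw TDP alone the Atom looks better per watt:
print(atom_perf / atom_tdp)      # 0.025 perf/W
print(core2_perf / core2_tdp)    # ~0.0154 perf/W

# But matching one Core 2's throughput takes ~10 Atom nodes, each needing its
# own supporting infrastructure; a few watts of overhead per node erases the
# advantage, which is the point above.
overhead_per_node = 3.0                        # assumed watts per Atom node
atoms_needed = core2_perf / atom_perf          # 10 nodes
atom_cluster_watts = atoms_needed * (atom_tdp + overhead_per_node)
print(atom_cluster_watts)                      # 70.0W vs. 65W for one Core 2
```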

So the intention must be maximising I/O. What sort of workloads are so shared-
nothing that they can parallelise to this many non-shared-memory CPUs
efficiently? Content Delivery Networks? Seems incredibly niche; niche enough
that the CDNs probably have already built their own.

And what exactly is the I/O bottleneck on a Xeon system that a bunch of Atom
systems can do better? FSB/Memory throughput maybe? The Nehalems already have
a 192-bit, 1333MHz DDR3 memory interface per CPU and gigantic caches, along
with I/O that doesn't share data paths with memory accesses.
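
For scale, the peak bandwidth implied by that memory interface (192 bits being
three 64-bit DDR3 channels) works out to roughly:

```python
# Peak bandwidth of a 192-bit (3 x 64-bit channel) DDR3-1333 interface.
bus_bits = 192
transfers_per_sec = 1333e6           # DDR3-1333 = 1333 MT/s
bytes_per_transfer = bus_bits / 8    # 24 bytes moved per transfer
peak_gb_per_s = bytes_per_transfer * transfers_per_sec / 1e9
print(round(peak_gb_per_s, 1))       # ~32.0 GB/s per socket
```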

~~~
lallysingh
Frankly, with Niagara out there, this seems partially redundant. Then again,
turning I/O tasks, which are essentially chains of closures, into full-blown
threads is a pretty big waste by itself...

Yup, these Atoms are essentially I/O processors, running just enough buffer-
cache management, filesystem, and driver code to keep the other components
(network and disk) at high utilization.

The benefit of Atoms here is low latency. They have short pipelines (a good
fit for the small amount of actual computation I/O driving requires), and
there are presumably more cores available here than in the equivalent Xeon
system (reducing queuing delays).
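
The queuing-delay point isn't in the article, but one textbook way to see it
(a sketch, assuming a simple M/M/c model with Poisson arrivals) is the Erlang C
formula: at a fixed per-core utilization, the chance a request has to wait in
the queue drops as the core count grows.

```python
from math import factorial

def erlang_c(c, rho):
    """M/M/c probability that an arriving job has to queue (Erlang C),
    given c identical servers each running at utilization rho."""
    a = c * rho                                 # offered load in erlangs
    queued = a**c / (factorial(c) * (1 - rho))
    served = sum(a**k / factorial(k) for k in range(c))
    return queued / (served + queued)

# Hold per-core utilization fixed at 70% and grow the core count:
for cores in (4, 16, 64):
    print(cores, round(erlang_c(cores, 0.7), 3))
# The wait probability falls as the core count rises, which is the
# "more cores -> less queuing delay" intuition.
```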

~~~
wmf
I think SeaMicro is supposed to be 1/Nth the price of Niagara, and it's x86 so
you don't have to recompile.

------
ajross
I'm having trouble understanding what the market for this is. It's not going
to win on instructions-per-die-area-per-second (NVIDIA and ATI are already
ahead of this mark now, with hardware rather cheaper than $100k). And with 512
distinct CPU packages, there's no way the interconnect is going to be faster
than the high-speed serial links we're already using for SATA, 10G Ethernet,
InfiniBand, etc...

So it's basically a physically smaller supercomputer running low-power CPUs.
It probably wins on real estate and power metrics, and probably loses on cost
vs. racks of consumer stuff. Is there a market for that? Note that the
investment came not from a VC fund, but from the DoE...

~~~
wmf
I think SeaMicro is targeting commercial workloads, not HPC; that immediately
excludes GPUs as competitors. It could be somewhat cheaper and lower power
than a Nehalem cluster, but I also don't see anything revolutionary.

Their patent applications have some technical details:
<http://www.pat2pdf.org/pat2pdf/foo.pl?number=20080320181>
<http://www.pat2pdf.org/pat2pdf/foo.pl?number=20090216920> It looks kinda like
a cheap x86 Blue Gene...

------
robryan
Wouldn't this be similar to what Google does: large numbers of average CPUs?

~~~
almost
"But both of these firms are going against what is currently the biggest trend
in corporate data centers: commodity servers. Such boxes aren’t simply a
collection of low-power chips — they have to be networked from inside in order
to deliver optimal performance for the lowest power consumption"

------
jbellis
512 CPUs with no ECC? Ugh.

------
rbranson
I dunno, you've got to have pretty serious scale for a 512-CPU outage not to
hurt at least a little bit.

