

Intel unveils Knights Corner 50 core server chip - ukdm
http://www.geek.com/articles/chips/intel-unveils-knights-corner-50-core-server-chip-2010061/

======
wmf
This "article" is garbage and is full of errors; if you're interested in this
topic read the slides instead.
http://download.intel.com/pressroom/archive/reference/ISC_2010_Skaugen_keynote.pdf

------
10ren
New technologies often get started in a niche. Because server rooms already
have multiple computers, and standard infrastructure for managing them, they
are a natural fit for many-core chips (though bandwidth to RAM and disk may be
a problem).

Once established in this market, the feedback loop with users who pay money
will drive the technology to improve to suit that niche. The bugs will get
ironed out, deficiencies worked around, specific infrastructure developed.
Chips will use less power, and will get faster and cheaper.

Eventually, other uses will be found for the product - perhaps on the desktop,
perhaps in mobile devices, perhaps in applications that could not be imagined
before. The key benefit might come from a feature that engineers consider
peripheral to the technology, but that happens to be unique with respect to
the alternatives.

Perhaps it will be compact size, low power consumption, the ability to run
many distinct tasks at once, or greater reliability through redundancy.

------
carbocation
I wonder how many servers are currently CPU-bound? I've always been I/O-bound
(or memory-bound, depending on how much you're willing to spend on memory).

~~~
wmf
Many _HPC_ servers are CPU-bound.

~~~
Tamerlin
True, but commercially speaking they're the minority.

------
rbanffy
I would love to see this on a desktop. Why? Because that would generate a
powerful incentive to parallelize desktop software. The number of cores will
do nothing but increase for the foreseeable future, and per-thread performance
will not go up much.

Even if they release an 8-core part at desktop processor prices, that would be
great.

OTOH, if the parallelization of desktop software improves much, there will be
less reason to go x86 compared to multi-core ARM-based designs.

~~~
Periodic
A lot of desktop applications aren't well suited to multi-core use. They have
been written as single-threaded applications for a long time, and making them
multi-threaded is a huge task for a small gain. Some applications actually get
worse when first made to use multiple cores, and it takes a lot of tweaking
and rewriting to get performance back to where it was.
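
As a tiny sketch of that last point (not a claim about any particular app;
CPython is just a convenient example because its GIL serializes CPU-bound
threads), naively splitting a loop across threads can come out slower than the
plain single-threaded version, since the threads mostly add creation and
switching overhead:

    import threading, time

    def count(n):
        # burn CPU; pure-Python work like this never releases the GIL
        while n > 0:
            n -= 1

    N = 5_000_000

    t0 = time.time()
    count(N)
    print("single-threaded: %.2fs" % (time.time() - t0))

    t0 = time.time()
    threads = [threading.Thread(target=count, args=(N // 4,))
               for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print("four threads:    %.2fs" % (time.time() - t0))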

Having 2-8 cores on the desktop hasn't made a huge dent; 50 cores is a bit
extreme.
It's going to thrive in the server market where one system is typically
servicing many requests from many users. Desktops are generally designed to
only service one user.

~~~
rbanffy
Is it really that small a gain? Right now I have Firefox with 6 open tabs, one
terminal with four tabs, a music player decoding an internet radio stream, an
e-mail client, and Emacs with two windows open and a Python process running in
it. There is also a desktop CouchDB running somewhere (and that's a highly
parallelizable animal) that deals with a lot of the data my system generates.

If nothing else, having more cores would reduce context switches on the two
cores I have.

Sadly, it's no surprise that much software isn't designed for multiprocessors.
Before Windows XP overtook Windows 9x as the dominant desktop OS, it made no
sense to develop a mainstream x86 processor designed for multi-threaded apps -
just consider the failure of the Pentium Pro (designed to run 32-bit apps in a
16-bit era). Processors and programs have been optimized for so long to cope
with single-threaded OSs that it will take a while to shed this legacy and
step into this parallel future. There is a good reason most desktop software
is not a good fit for parallel processors - until recently there were few
desktop machines with 4+ hardware threads.

This is what I mean when I say Microsoft held back the PC's evolution for a
decade. I used 64-bit processors (Alpha) and multi-processor desktop machines
(MIPS, PPC and SPARC) years before similarly equipped PCs appeared in the
market.

~~~
queensnake
| I am now with 6 open tabs in Firefox, one terminal with four tabs, a music
player decoding an internet radio stream, an E-mail client and Emacs with two
windows open and a Python process running in it.

Only two of those 'normal' activities consume cycles. Your CouchDB may be
parallel, but that's not a typical desktop job.

The truths are: a) most desktop use is more than covered by current single- or
dual-core CPUs, and b) you can't convert every sequential app to a parallel
one, however earnestly Intel and AMD might wish for it.

That is, until there are radical agents acting on your behalf, sussing out
interesting things on the internet for you and whatever - but those would
probably run in a cloud somewhere anyway. Of course this will be proved wrong
in time, but I don't think current desktop apps, /just rewritten to be
parallel/, will ever use 50 cores.
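
A rough way to put a number on b) is Amdahl's law; assuming an illustrative 5%
serial fraction (just a guess, not a measurement of any real app), the speedup
on N cores is capped at 1 / (s + (1 - s) / N), so 50 cores buy far less than
50x:

    # Amdahl's law: upper bound on speedup with serial fraction s on n cores.
    # The 5% serial fraction is only an illustrative assumption.
    def amdahl_speedup(s, n):
        return 1.0 / (s + (1.0 - s) / n)

    for n in (2, 8, 50):
        print("%2d cores: %.1fx" % (n, amdahl_speedup(0.05, n)))
    #  2 cores: 1.9x
    #  8 cores: 5.9x
    # 50 cores: 14.5x - nowhere near 50x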

------
Ixiaus
50 cores! I thought Sun's Niagara chip was crazy... With Intel pushing even
more cores now, concurrent programming paradigms are going to continue to grow
in importance.

~~~
rbanffy
Sadly, I doubt Sunoracle will be able to top that with a 16-core Niagara III
running 16 threads per core. In the meantime, Niagara II is shipping and this
Intel piece is vaporware.

I would also like to remind the overly enthusiastic (myself included) that
this family seems heavily targeted at scientific (read: FP-heavy) computing,
and I would expect more x86-controlled, GPU-based solutions in that space in
the future. Niagara is more of a general-purpose animal targeted at web and
database workloads.

------
Keyframe
I guess in a few years I'll be working in HPC after all! At my home!

------
itistoday

      Each core in Knights Corner runs at 1.2GHz, is supported by
      512-bit vector processing units, has 8MB of cache, and four
      threads per core.
    

That's 200 simultaneous threads. Wow. That's almost like a GPU.
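
Back-of-the-envelope from those quoted numbers (the single-precision and
fused-multiply-add assumptions are mine, not something the article states):

    # Rough arithmetic from the quoted specs; FMA (2 flops/cycle/lane) and
    # single precision are assumptions, not published figures.
    cores, threads_per_core = 50, 4
    ghz, vector_bits = 1.2, 512

    print(cores * threads_per_core)      # 200 hardware threads
    sp_lanes = vector_bits // 32         # 16 single-precision lanes per unit
    print(cores * ghz * sp_lanes * 2)    # ~1920 GFLOPS peak, if FMA

Under those assumptions that's roughly the ballpark of 2010-era discrete GPUs,
which is presumably the point.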

~~~
modeless
That's because it _is_ a GPU. It's Larrabee. Intel couldn't get enough
graphics performance out of it to compete with GeForce and Radeon in the
graphics card market. However, they can still compete in the GPGPU market,
where the specialized graphics hardware that GeForce and Radeon have is less
of an advantage, and Larrabee's x86 compatibility is actually useful. Intel is
afraid that GPGPU is going to encroach on their CPU turf, and this is their
answer.

