Intel unveils Knights Corner 50 core server chip (geek.com)
39 points by ukdm on June 1, 2010 | 24 comments



This "article" is garbage and full of errors; if you're interested in the topic, read the slides instead: http://download.intel.com/pressroom/archive/reference/ISC_20...


New technologies often get started in a niche. Because server rooms already have multiple computers, and standard infrastructure for handling them, it is a natural fit for many-core chips (though bandwidth to RAM/disk may be a problem).

Once established in this market, the feedback loop with users who pay money will drive the technology to improve to suit that niche. The bugs will get ironed out, deficiencies worked around, specific infrastructure developed. Chips will use less power, and will get faster and cheaper.

Eventually, other uses will be found for the product - perhaps on the desktop, perhaps as mobile devices, perhaps in applications that were never imagined before, because they were not conceivable. The key benefit might be from a feature that is not considered very central to the technology, from an engineering perspective, but happens to be unique with respect to the alternatives.

Perhaps it could be compact size, low power consumption, doing many distinct tasks, greater reliability through redundancy.


I wonder how many servers are currently CPU-bound? Mine have always been I/O-bound (or memory-bound, depending on how much you're willing to spend on memory).


Even if servers are I/O-bound, this setup is almost like crunching 50 machines into one: applications get lower-latency access to data held in one large pool of memory.


It's like crunching 50 CPUs onto one memory bus. Even if there are, say, 5 buses (and a thousand pins for them), that's still 10 cores per bus.

Did they say how many threads per core?

I suspect it will be like having 50 286s in the same box...
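
To put a rough number on the shared-bus worry: here's a minimal micro-benchmark sketch (plain C++ with threads; buffer sizes and thread counts are invented, so adjust for your box) that measures aggregate copy bandwidth as threads are added. On most machines it stops scaling well before the core count does, which is the whole problem with 50 cores on a handful of buses.

  // Hypothetical demo: aggregate memory bandwidth vs. thread count. Each
  // thread streams through its own 64MB pair of buffers, so any plateau
  // comes from the shared memory bus, not from cache or lock contention.
  // Build: g++ -O2 -std=c++11 -pthread bw.cpp
  #include <chrono>
  #include <cstdio>
  #include <thread>
  #include <vector>

  static void copy_pass(const double* src, double* dst, size_t n) {
    for (size_t i = 0; i < n; ++i) dst[i] = src[i];
  }

  int main() {
    const size_t n = 1 << 23;  // 8M doubles = 64MB per buffer, well past any cache
    for (unsigned t = 1; t <= 8; t *= 2) {
      std::vector<std::vector<double>> src(t, std::vector<double>(n, 1.0));
      std::vector<std::vector<double>> dst(t, std::vector<double>(n, 0.0));
      auto start = std::chrono::steady_clock::now();
      std::vector<std::thread> workers;
      for (unsigned i = 0; i < t; ++i)
        workers.emplace_back(copy_pass, src[i].data(), dst[i].data(), n);
      for (auto& w : workers) w.join();
      double secs = std::chrono::duration<double>(
          std::chrono::steady_clock::now() - start).count();
      double gbytes = t * n * 16.0 / 1e9;  // each element read once, written once
      std::printf("%u threads: %.1f GB/s aggregate\n", t, gbytes / secs);
    }
  }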


Many HPC servers are CPU-bound.


True, but commercially speaking they're the minority.


Considering there's only 8MB of cache for all the cores, this must be for CPU-bound tasks only.


The article says that it has 8MB of cache per core, shared by four hardware threads.


The article is mistaken; the 8 MB is shared between the 50 cores. There's some speculation that it might be a Non-Uniform Cache Access architecture, where local parts of the shared cache are faster to read/write than other parts. If so, that would certainly be an impressive step. A better article is here:

http://www.channelregister.co.uk/2010/06/01/intel_knights_co...


That would be 400MB of cache on the die (50 cores × 8MB each), which seems highly unlikely.


That would be far more remarkable than putting 50 cores on the die. It would be cause for riotous celebration.


> riotous celebration

Well, maybe. There are diminishing returns on increasing cache sizes (you solve the capacity misses but don't really deal with compulsory and conflict misses)... though it would be an impressive feat of process technology. The closest to that amount at the moment, I believe, is IBM's POWER7, which has a 32MB on-die L3 implemented in eDRAM.
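
For the curious, here's a rough sketch of the capacity-miss point (plain C++; the sizes are arbitrary): chase pointers through working sets of increasing size, and the time per access jumps once the set no longer fits in cache. A bigger cache just moves that cliff to the right; it doesn't help the misses you take on first touch (compulsory) or from mapping collisions (conflict).

  // Hypothetical demo of capacity misses: pointer-chase a shuffled index
  // array. While the working set fits in cache, each hop costs a few ns;
  // once it spills to DRAM, each hop costs a full memory round trip.
  // (A shuffled array is a set of cycles rather than one big loop, but it
  // defeats the prefetcher well enough for a demo.)
  // Build: g++ -O2 -std=c++11 cache.cpp
  #include <algorithm>
  #include <chrono>
  #include <cstdio>
  #include <numeric>
  #include <random>
  #include <vector>

  int main() {
    std::mt19937 rng(42);
    for (size_t kb = 16; kb <= 64 * 1024; kb *= 4) {
      size_t n = kb * 1024 / sizeof(size_t);
      std::vector<size_t> next(n);
      std::iota(next.begin(), next.end(), 0);
      std::shuffle(next.begin(), next.end(), rng);
      const size_t steps = 10 * 1000 * 1000;
      volatile size_t sink = 0;
      size_t i = 0;
      auto start = std::chrono::steady_clock::now();
      for (size_t s = 0; s < steps; ++s) i = next[i];
      sink = i;  // volatile store keeps the chase from being optimized away
      double secs = std::chrono::duration<double>(
          std::chrono::steady_clock::now() - start).count();
      std::printf("%6zu KB working set: %.1f ns/access\n", kb, secs * 1e9 / steps);
    }
  }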


I would love to see this on a desktop. Why? Because that would generate a powerful incentive to parallelize desktop software. Number of cores will do nothing but increase for the foreseeable future and per-thread performance will not go up much.

Even if they release an 8-core part at desktop processor prices, that would be great.

OTOH, if the parallelization of desktop software improves much, there will be less reason to choose x86 over multi-core ARM-based designs.


A lot of desktop applications aren't well suited for multi-core use. They have been written as single-threaded applications for a long time, and making them multi-threaded is a huge task for a small gain. Some applications get worse when first made to use multiple cores, and it takes a lot of tweaking and rewriting to get performance back to where it was.
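
To make that concrete, here's a minimal sketch (plain C++; the struct names and iteration counts are invented) of one classic way a first parallel port comes out slower: false sharing, where two threads hammer counters that happen to sit on the same cache line, so the line ping-pongs between cores. The fix of padding them apart is exactly the kind of unglamorous tweaking mentioned above.

  // Hypothetical false-sharing demo: two threads increment independent
  // counters. In Packed the counters share a cache line and crawl; in
  // Padded they sit 64 bytes apart on separate lines and run much faster.
  // Build: g++ -O2 -std=c++11 -pthread fs.cpp
  #include <atomic>
  #include <chrono>
  #include <cstdio>
  #include <thread>

  struct Packed { std::atomic<long> a{0}, b{0}; };  // adjacent: one cache line
  struct Padded {
    alignas(64) std::atomic<long> a{0};             // separate cache lines
    alignas(64) std::atomic<long> b{0};
  };

  template <typename T> double run(T& c) {
    const long iters = 50 * 1000 * 1000;
    auto start = std::chrono::steady_clock::now();
    std::thread t1([&] { for (long i = 0; i < iters; ++i)
                           c.a.fetch_add(1, std::memory_order_relaxed); });
    std::thread t2([&] { for (long i = 0; i < iters; ++i)
                           c.b.fetch_add(1, std::memory_order_relaxed); });
    t1.join(); t2.join();
    return std::chrono::duration<double>(
        std::chrono::steady_clock::now() - start).count();
  }

  int main() {
    Packed packed; Padded padded;
    std::printf("counters sharing a line: %.2fs\n", run(packed));
    std::printf("counters padded apart:   %.2fs\n", run(padded));
  }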

2-8 cores on the desktop haven't made a huge dent, and 50 cores is a bit extreme. It's going to thrive in the server market, where one system typically services many requests from many users. Desktops are generally designed to service just one user.


Is it really that small a gain? Right now I have 6 open tabs in Firefox, a terminal with four tabs, a music player decoding an internet radio stream, an e-mail client, and Emacs with two windows open and a Python process running in it. There's also a desktop CouchDB running somewhere (and that's a highly parallelizable animal) that deals with much of the data my system generates.

If nothing else, having more cores would cut down on the context switches my current two cores have to do.

Sadly, it's no surprise that much software isn't designed for multiprocessors. Before Windows XP displaced Windows 9x as the dominant desktop OS, it made no sense to build a mainstream x86 chip aimed at multi-threaded apps - just consider the failure of the Pentium Pro (designed to run 32-bit apps in a 16-bit era). Processors and programs have been optimized for so long to cope with single-threaded OSes that it will take a while to shed this legacy and step into a parallel future. There's a good reason most desktop software is a poor fit for parallel processors - until recently there were few desktop machines with 4+ hardware threads.

This is what I mean when I say Microsoft held back the PC's evolution for a decade. I used 64-bit processors (Alpha) and multi-processor desktop machines (MIPS, PPC and SPARC) years before similarly equipped PCs appeared in the market.


| I am now with 6 open tabs in Firefox, one terminal with four tabs, a music player decoding an internet radio stream, an E-mail client and Emacs with two windows open and a Python process running in it.

Only two of those 'normal' activities consume cycles. Your CouchDB may be parallel, but that's not a typical desktop job.

The truth is: a) most desktop use is more than covered by current single or dual CPUs; b) you can't convert all sequential apps to parallel, however earnestly Intel and AMD might wish for it.

That is, until there are radical agents acting on your behalf, sussing out interesting things on the internet for you and whatever - but those would probably run in a cloud somewhere anyway. Of course this will be proved wrong in time, but I don't think current desktop apps, /just rewritten for parallelism/, will ever use 50 cores.


50 cores! I thought Sun's Niagara chip was crazy... With Intel now pushing even more cores, concurrent programming paradigms are going to keep growing in importance.


Sadly, I doubt Sunoracle will be able to top that with a 16-core Niagara III running 16 threads per core. In the meantime, Niagara II is shipping and this Intel piece is vaporware.

I would also like to remind the overly enthusiastic (me included) that this family seems heavily targeted at scientific (read: FP-heavy) computing, and I would expect more x86-controlled/GPU-based solutions in that space in the future. Niagara is more of a general-purpose animal targeted at web and database workloads.


I guess in a few years I'll be working in HPC after all! At my home!


  Each core in Knights Corner runs at 1.2GHz, is supported by
  512-bit vector processing units, has 8MB of cache, and four
  threads per core.
That's 200 simultaneous threads (50 cores × 4). Wow. That's almost like a GPU.


That's because it is a GPU. It's Larrabee. Intel couldn't get enough graphics performance out of it to compete with GeForce and Radeon in the graphics card market. However, they can still compete in the GPGPU market, where the specialized graphics hardware that GeForce and Radeon have is less of an advantage, and Larrabee's x86 compatibility is actually useful. Intel is afraid that GPGPU is going to encroach on their CPU turf, and this is their answer.


Larrabee, is that you?


Or a Sun Niagara.



