
Intel Unleashes Its First 8-Core Desktop Processor - joaojeronimo
http://newsroom.intel.com/community/intel_newsroom/blog/2014/08/29/intel-unleashes-its-first-8-core-desktop-processor?utm_source=twitterfeed&utm_medium=twitter
======
reitzensteinm
Even the 5960X, the $999 8-core part, has a maximum memory size of 64GB,
unchanged since Sandy Bridge-E.

That's disappointing, because while the CPU will likely remain close to state
of the art for quite some time to come, you'll most likely max out the memory
on day one and be stung by an inability to upgrade.

Of course, this was probably by design, so that they can sell you another,
virtually identical 8 core processor in two more years for another $999.

[http://ark.intel.com/products/82930](http://ark.intel.com/products/82930)

~~~
d4vlx
You could buy a slower Xeon for around the same price if you really needed
more than 64 gigs of memory.

[http://ark.intel.com/products/75269/Intel-Xeon-
Processor-E5-...](http://ark.intel.com/products/75269/Intel-Xeon-
Processor-E5-2650-v2-20M-Cache-2_60-GHz)

And it supports ECC.

~~~
reitzensteinm
The Xeon isn't overclockable, which is a big part of the niche this processor
sits in.

If you read my post again, I'm not saying that 64GB is too little right now.
It's probably the right match for the processor for most workloads, today.
32GB would seem weak with 8c/16t (I have that much in my 4770 system), and
128GB could be excessive.

But in two years, swapping in 128GB would be the no-brainer upgrade to this
thing. That this is being ruled out ahead of time is not a good thing.

(Barring an Intel microcode revision, as is being speculated by the sibling
commenters. But I'm not holding my breath, as Intel Ark is pretty definitive.)

~~~
tacotime
Idk... I'm struggling to see why an average user in the overclocking/high-end
PC market would run into the 64GB limit, assuming the high-end market has a
relatively short part lifetime. I mean, if you're into video editing then the
sky is the limit, but for an average user? A user could RAM-cache 4 hard
drives with a 4GB buffer each, power up the entire Adobe suite including
Illustrator and Photoshop, start a browser session with 100 tabs and 10 video
streams, plus a torrent client, email client, backup client, VPN, a couple of
modest FTP and web servers, a transcoding media streaming server AND Crysis 3,
and still likely have 10-20GB to play with. If you need much more than that
running concurrently, you should probably be starting to think about server
hardware.

If you think 64GB will be an easy limit for an average user to hit in the near
future, I would love to hear your envisioned use case.
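If you don't believe the arithmetic, here's a quick Python tally of that
kitchen-sink workload (every figure is my own rough guess, not a measurement):

```python
# Hypothetical memory budget for the workload above.
# All numbers are rough guesses in GiB, not measurements.
budget_gib = {
    "ram_cache_4_drives": 4 * 4,        # 4 drives x 4GB buffer each
    "adobe_suite": 8,                   # Illustrator, Photoshop, etc.
    "browser_100_tabs_10_streams": 8,
    "torrent_email_backup_vpn": 4,
    "ftp_and_web_servers": 2,
    "transcoding_media_server": 4,
    "crysis_3": 8,
    "os_and_overhead": 4,
}
used = sum(budget_gib.values())
print(f"used: {used} GiB, headroom out of 64: {64 - used} GiB")
# → used: 54 GiB, headroom out of 64: 10 GiB
```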

~~~
bane
I think it's going to start becoming reasonable to package up applications in
VMs and distribute those VM "appliances" to run instead of installing
software directly in the OS. I think this is going to start happening
regularly in the consumer space sooner rather than later (and already has in
some cases, like with XP Mode). This is pretty much the modus operandi in the
service space today.

There's lots of really good reasons to do this (sandboxing, ease of
installation, compatibility, snapshots/state saving, etc.) and VM tech at the
consumer level is good enough for most applications. Doing so also enables you
to distribute the same application for different host architectures relatively
easily (swap out the virtualization core with an emulation core).

VM technology basically will allow consumer software vendors to start treating
your computer like a set-spec videogame console instead of worrying about
millions or billions of possible complications from how your computer is set
up. Once VMs in the consumer space get good enough to really run high-end
games, imagine everybody just writes to some Valve defined Linux spec that
just happens to match some Steam Box, but you can install the VM for that game
on your Mac or Windows or whatever and get to gaming.

If this happens, VMs will chew through RAM faster than just about anything out
there.

So instead of installing and running Adobe Suite, you start up the Adobe Suite
VM and boom, 8GB of your RAM vaporizes. Fire up your web browser VM and boom,
there goes another 4GB. Your e-mail client annihilates 4GB more, and now we've
eaten up 16GB of RAM to run a handful of applications. Open up an MS Office
component and there goes another 8-16GB. Run a non-virtualized legacy app? Why,
those all just get sandboxed into an automatic "old shit" VM so the viruses
stay contained.

This isn't inconceivable, and I wouldn't be at all surprised if it was
already on the drawing boards somewhere.

~~~
caycep
This was what was revolutionary about Quake III, no? It ran inside some id
Software VM...

~~~
pmarin
He is talking about using VMs for real architectures (OS + apps).

------
rsiqueira
Intel's disclaimer says at the end of the page: "products do not contain
conflict minerals (tin, tantalum, tungsten and/or gold) that directly or
indirectly finance or benefit armed groups in the Democratic Republic of the
Congo (DRC) or adjoining countries."

------
seanalltogether
Anandtech review: [http://anandtech.com/show/8426/the-intel-haswell-e-cpu-
revie...](http://anandtech.com/show/8426/the-intel-haswell-e-cpu-review-
core-i7-5960x-i7-5930k-i7-5820k-tested)

~~~
leeoniya
and the recently discovered erratum that cripples TSX instructions.

[http://www.anandtech.com/show/8376/intel-disables-tsx-
instru...](http://www.anandtech.com/show/8376/intel-disables-tsx-instructions-
erratum-found-in-haswell-haswelleep-broadwelly)

------
duskwuff
And where did they unveil this new processor?

At _Penny Arcade Expo_.

Times really have changed.

~~~
wmf
Too bad this processor is pointless for gaming. Maybe they should have
announced it at IBC.

~~~
tacotime
4real dog hardcore gamerz only use arm procs. Angry birds4life.

Seriously though, what are you talking about?

~~~
bsamuels
If you can find a game developer who cares enough to split their game's logic
into more than just a "render thread" and a "logic thread", then maybe an
8-core would be useful.

~~~
yid
I seriously doubt this is still true. Multicore processors have been common
for more than a decade now.

Edit: first Google hit to satisfy parent, since replying is disabled. Valve
goes multicore [2006] [http://techreport.com/review/11237/valve-source-engine-
goes-...](http://techreport.com/review/11237/valve-source-engine-goes-multi-
core)

~~~
sp332
And yet I notice you didn't provide an example. Honestly I can't think of one.

------
spiritplumber
Also, Parallax has just open-sourced theirs!

[http://www.parallax.com/microcontrollers/propeller-1-open-
so...](http://www.parallax.com/microcontrollers/propeller-1-open-source)

8-core microcontroller in 2006, not bad. They're releasing a better one later
this year, so they've opened the Verilog design for the current one.

~~~
electrograv
This is awesome, but off topic. Give me a few days to refresh my Verilog, and
I can design you a 16-core CPU; this of course says nothing about the quality
and performance of that CPU.

I don't mean to bash Parallax, but to make the point that building an N-core
processor is not itself impressive. Building an N-core architecture that
performs like this one is.

------
Xcelerate
Could someone give me a simple explanation of what exactly hyperthreading
does? They tout 16 logical cores and 8 physical cores in this new chip. I've
read the Wikipedia page on it, but it gets too technical.

I do molecular dynamics simulations with LAMMPS, and I've noticed performance
on my laptop is best with 4 cores. Using all 8 "virtual cores" is actually
quite a bit slower.

~~~
ANTSANTS
A core is a mostly independent processing unit within a larger package. Some
hardware resources (like the memory controller, at least in non-NUMA devices)
are shared between all cores, but many are duplicated for each core. Some
examples of core-local resources would be their separate integer, floating
point, and sometimes vector execution units (boxes that you can stick some
data into and get a result out some number of cycles later), and some (but not
all, depending on the chip) of the various layers of caches that sit between
each core and main memory.

In hyperthreaded processors, each core can be further split into two
"threads". These threads share most of their hardware resources; you can think
of them as a thin veneer over a single core. These threads execute
simultaneously, making use of whatever resources their partner isn't using at
the moment.

Some examples (assume a single core processor with 2 hardware threads for
each): Imagine you're running a thread, and it needs to access main memory
before it can continue. Depending on the chip, this will take hundreds or even
thousands of cycles before the thread can continue. Hyperthreading is one way
to make use of this time; the other thread can run at full steam while the
first is waiting to get its results back from memory.

Another positive example: you're running some floating point DSP code (perhaps
your music player's equalizer) at the same time that you are compiling a new
build of a program. The DSP code will make use of a mix of integer and
floating point resources, while the compiler will probably not need to use the
floating point units at all. Hyperthreading allows the music player to use
those resources that would otherwise be idle while the compiler is running.
The DSP code _will_ slow down the compiler because it is competing for things
like integer resources (which are needed for pointer arithmetic, for
instance), however there will still likely be an improvement over normal
multitasking on a single hardware thread.

Now, for a negative example: you are running two very demanding threads. These
threads are painstakingly programmed to use almost every resource they
possibly can at any moment, they very rarely need to stall to access memory,
etc. In this case, the two threads will only waste time fighting over the same
resources, kicking each other out of cache, etc, and it would ultimately be
more efficient to disregard hyper threading and run each thread sequentially.

Another negative example: you are running two instances of the same thread.
This will result in good utilization of some resources (such as code cache,
because each thread is executing the same program) but practically guarantees
contention over the execution units, even if the program isn't that demanding.

To sum it up, hyperthreading is usually a net positive for desktops, where you
have a very heterogeneous (and often not anywhere close to optimally
programmed) mix of programs that need to run at once, and usually a net
negative for high-performance computing programs like your molecular dynamics
simulation, where every thread is executing the same extremely demanding
program at once.

EDIT: And to go a bit further and explain what makes GPUs special: they're
basically the inverse of a hyperthreaded CPU, great at running a _lot_ of
homogeneous threads. Instead of having independent threads sharing the same
resources, they have the same _logical_ thread (many designs share the
instruction pointer amongst many hardware threads, causing each to execute
the same instruction at any given moment with different inputs) shared across
cores that have their own independent execution units.
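If you want to see the compute-bound case on your own machine, here's a rough
Python sketch. The physical-core count is a guess assuming 2-way SMT, since
os.cpu_count() only reports logical CPUs, and the effect will vary a lot by
chip and workload:

```python
import os
from multiprocessing import Pool
from time import perf_counter

def burn(n):
    # Purely compute-bound integer work: keeps the execution units busy,
    # so two copies on one core mostly fight over the same resources.
    total = 0
    for i in range(n):
        total += i * i
    return total

def timed_run(workers, jobs, n=500_000):
    # Wall-clock time to push `jobs` tasks through `workers` processes.
    start = perf_counter()
    with Pool(workers) as pool:
        pool.map(burn, [n] * jobs)
    return perf_counter() - start

if __name__ == "__main__":
    logical = os.cpu_count() or 1
    physical = max(1, logical // 2)  # assumption: 2-way hyperthreading
    jobs = physical * 4
    print(f"{physical} workers: {timed_run(physical, jobs):.2f}s")
    print(f"{logical} workers: {timed_run(logical, jobs):.2f}s")
    # Expect much less than a 2x speedup from the extra logical CPUs
    # on compute-bound work like this.
```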

~~~
marak830
That is a great explanation, I'll be saving that one for the next time
someone asks me! Thanks!

------
graycat
Why just "client"? Why not use it in a server? What am I missing?

Cost per operation? You can get an AMD 8-core processor, 125 watts, 4.0 GHz
clock, for about $180. So, $1000 for an Intel processor with 8 hyperthreaded
cores stands to be cost effective? In what sense?

~~~
wmf
The server version is called Xeon and it's the same chip with ECC uncrippled.

One Haswell core is equivalent to two AMD cores. But yeah, AMD is dramatically
cheaper than Intel for equivalent performance.
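Putting rough numbers on it, using the parent's prices and my "one Haswell
core ~= two AMD cores" rule of thumb (these are approximations, not benchmark
results):

```python
# Perf-per-dollar sketch; all figures are this thread's approximations.
intel_perf = 8 * 2          # 8 Haswell cores, each worth ~2 AMD cores
amd_perf = 8 * 1            # 8 AMD cores
intel_per_dollar = intel_perf / 1000   # $999 5960X, rounded to $1000
amd_per_dollar = amd_perf / 180        # ~$180 AMD 8-core
print(round(amd_per_dollar / intel_per_dollar, 1))  # → 2.8
```

So by this crude measure AMD gives you roughly 2.8x the performance per
dollar, even after crediting Intel with double the per-core performance.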

~~~
thinkpad20
That leads me to a question I was going to ask: what is it that justifies the
massive markup on Intel chips vs. AMD? Is it just the name? Is there an
advantage in Intel performance or power usage? If so, does it really make up
for the price? Because as someone considering building a computer from scratch
(I haven't in quite a while), that AMD price tag is very appealing.

~~~
tacotime
I think many people would agree that the price discrepancy does have a lot to
do with branding and marketing strategy.

Intel is a larger, wealthier company, and they presumably pour a lot more
money into R&D than AMD. If you go purely by market capitalization, Intel is
about 50-60 times larger than AMD. That's not necessarily a fair measure and
ignores a lot of variables, but it does help shed a little light on the
situation. In addition, AMD's business model has them focusing a lot of their
attention on niches that Intel doesn't seem as interested in. For instance,
AMD continues to develop new ARM technologies that could provide a very
important market edge for them in the future as small "internet of things"
devices start to emerge and become a part of people's daily lives.

Here's a decent article on the subject:
[http://analysisreport.morningstar.com/stock/research?nav=no&...](http://analysisreport.morningstar.com/stock/research?nav=no&ProductCode=mle&t=AMD)

------
programminggeek
I really don't keep up on this stuff much, but why is this still Haswell
based? Why not just do this on Broadwell?

~~~
loser777
Intel has a pattern of releasing variations of microarchitectures with higher
core counts after their immediate successors have been announced/launched.
Think Sandy Bridge-E after Ivy Bridge, Ivy Bridge-E after Haswell, and now
Haswell-E.

They probably do this because it's easier to manage the larger die size and
greater complexity once the process has matured and yields have improved.

~~~
bryanlarsen
Also, these are essentially rebadged Xeons. Xeons have higher requirements
for maturity, testing, stability, et cetera.

~~~
kyrra
And a lot of the time, many of these CPUs are almost identical to one another.
They will take a single CPU and brand it in 10 different ways depending on how
it does during manufacturing testing.

For a given CPU that comes off the production line, the maximum frequency it
can run at will vary from chip to chip. Likewise, if there are dead cores on a
chip, they can just disable them and sell the chip at a lower price (though
Intel may not do this; NVIDIA has in the past with graphics chips).

------
Alupis
> Intel's first client processor supporting 16 computing threads and new DDR4
> memory will enable some of the fastest desktop systems ever seen.

Not necessarily -- as AMD fans (I'm one) have seen, the whole "more cores is
better" idea is not always true -- it _heavily_ depends on the workload, and
frankly, most games and programs are not utilizing these CPUs fully (yet).
Now, put something like 2 x 16-core Opterons in a server and you have
yourself quite a powerful virtualization platform.

With that said, I'm interested in seeing its price point and performance
compared to AMD's offerings.

~~~
taneliv
How well do commonly quoted benchmarks (PassMark, Geekbench, Cinebench, etc.)
measure a processor as a VM host? Obviously single-core benchmarks are
somewhat representative, but they miss things like cache sizes at different
levels and hyperthreading. Are there benchmarks that take those into account
or would otherwise be good for planning a VM host use case?

~~~
Alupis
For VM hosts, the number of cores (plus their respective resources like
cache, etc.) is more important than how performant each core is individually.
For hosting companies, density usually matters more than raw performance,
making 32 cores in one physical host very attractive.

------
qwerta
This thing supports 8 DDR4 slots. Finally we are moving beyond the 32GB RAM
limit.

~~~
rythie
X79 Socket 2011, which this is the successor to, already supported 64GB with
8 DIMMs, so nothing has changed yet. They've yet to announce a Socket 1150
successor; most likely it will max out at 4 DIMMs. However, I expect 16GB
DIMMs to appear in DDR4 soon.

------
lucb1e
I'm both excited and not. This is more power in a CPU and that's great
progress, but for a desktop? I mean, servers, games and graphical applications
would be faster, but the majority of our waiting time when using a computer is
spent on single-threaded calculations. As someone who doesn't game a lot and
uses GIMP only for the most basic of purposes, I would much rather have an
improved dual-core CPU that produces less heat in total (compared to 8 cores)
and can be clocked higher because of that.

~~~
umanwizard
Well, yeah. This is their _very_ highest-end processor, and costs more than
the entire desktop+monitor+peripherals that most people need. Not sure what
point you're trying to make. Do you think companies shouldn't continue pushing
the envelope of what's possible?

Edit: " _very_ highest-end processor" should read " _very_ highest-end PC
processor". I'm excluding the workstation-class Xeon.

------
ck2
I'd still rather have a 6GHz 4-core, but I guess that isn't going to happen
(anytime soon for a reasonable price).

------
Kompulsa
Why didn't they do this sooner?

AMD already has a 16-core Opteron processor. I'm not saying that AMD is any
better, but I thought Intel would have started selling these long ago,
judging by the pace of the computer industry.

~~~
Kompulsa
I take that back, I see that Intel has ten-core commercial processors on the
market already.

~~~
moon_of_moon
More, in fact.

[http://ark.intel.com/#@ServerProducts](http://ark.intel.com/#@ServerProducts)

But your original point is a valid one: AMD introduced desktop 8-core
processors a couple of years ago, while this is Intel's first 8-core desktop
processor.

------
higherpurpose
Intel is getting disrupted straight out of the textbook (they keep moving
upmarket now). The funny thing is they _know it_. But they can't stop it at
this point. So they just go along with it.

------
auvrw
This is a pretty naive comment, but it's really intended to be totally
serious: what's up with cores? Like, why do we really need cores? Is it really
a fundamentally better architecture to have a RISC core sitting at the front
of the instruction pipeline to distribute x86 instructions to some internal
set (particularly wrt. power consumption), or do we in fact just have cores
in order to increase fab yield? [/ootbcomp.com-bootcamping]

~~~
wmf
Are you proposing a 32-issue processor instead of eight 4-issue cores? One
problem with that is that most software doesn't have enough instruction-level
parallelism to use such resources. Such a complex processor would also be
likely to become bottlenecked on central control structures like the issue
window (which can be solved with clustering, but then you're almost back to
multicore). But check out EDGE/TRIPS/E2 for some ideas in this area.

~~~
auvrw
I'm talking about something I don't understand....

------
Corrado
I wonder if Apple will announce anything that uses this processor at the Sep.
9th event? I could possibly see it being used in a refreshed Mac Pro or iMac.

~~~
m_mueller
This alongside two high-end NVIDIA chips (GeForce Titan) in a Mac Pro would
be insanely good. Not sure whether it's thermally possible, though.

------
coldcode
How does this compare to a 3.0GHz 8-core Xeon E5?

~~~
wmf
About 5% faster than an E5 v2.

------
tracker1
Of course HP will now include it in a desktop with half the features disabled,
and no option in the BIOS to enable them.

------
Zardoz84
And what about the FX-8120 eight-core CPU??

------
surak
5yrs late

------
zapt02
Finally!

------
fdsary
Title Caps And "Unleaches". Intel Unleaches 8-Core Paralel Marketing On
News.Ycombinator

