

Intel's Ivy Bridge reviewed - mrsebastian
http://www.extremetech.com/computing/126879-intel-core-i7-3770k-review-ivy-bridge-lower-power-better-performance

======
Historiopode
A very interesting feature of the Ivy Bridge architecture is a new digital
random number generator code-named Bull Mountain, which uses teetering as an
integral part of its workings. [1]

While consumer-oriented reviews rightfully ignore such things, I reckon that
the HN crowd would be pleased to know more about such a nifty hardware
solution.

[1] http://spectrum.ieee.org/computing/hardware/behind-intels-new-randomnumber-generator/0

~~~
aiscott
Teetering must be the marketing term they chose... the phenomenon in play has
been known as metastability for as long as I've been a digital designer.

And generally speaking, it's something you want to avoid in your designs, not
only because of the unpredictability of the final output (whether it resolves
to a 1 or 0), but also because of the uncertainty in how _long_ it will take
to resolve.

But it seems perfect for generating randomness. Neat idea. The catch is that
it's only unpredictable in theory. In real life, due to "imperfections" in
fabrication, the inverters will be biased to resolve one way more frequently.
So you might end up with a 60/40 split, for example.

I see they do address this in the article, and have added some "conditioning"
to eliminate the bias... I bet it's described in a patent somewhere.
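
Intel's actual conditioner is cryptographic (AES-based, as I understand it),
but to illustrate the basic idea of squeezing the bias back out of a lopsided
bit source, here's a toy von Neumann extractor in C; the 60/40 source and the
numbers are made up for the example:

    #include <stdio.h>
    #include <stdlib.h>

    /* Toy biased bit source: returns 1 about 60% of the time, standing
       in for an inverter pair that resolves one way more often. */
    static int biased_bit(void) {
        return (rand() % 100) < 60;
    }

    /* Von Neumann extractor: sample bits in pairs and keep only the
       pairs that differ ("01" -> 0, "10" -> 1, equal pairs dropped).
       P(0,1) == P(1,0), so the output is unbiased as long as the
       samples are independent. */
    static int debiased_bit(void) {
        for (;;) {
            int a = biased_bit();
            int b = biased_bit();
            if (a != b)
                return a;
        }
    }

    int main(void) {
        int ones = 0;
        for (int i = 0; i < 100000; i++)
            ones += debiased_bit();
        printf("fraction of ones: %f\n", ones / 100000.0);
        return 0;
    }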

~~~
callan
Also, there is a bit set after executing RDRAND that tells you if you got a
'good' random number.
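
It's the carry flag, and compilers expose it as the return value of the rdrand
intrinsics. A minimal sketch (build with -mrdrnd; the retry count of 10 is just
a commonly suggested convention, not anything official on my part):

    #include <immintrin.h>
    #include <stdint.h>
    #include <stdio.h>

    /* RDRAND sets CF to say whether the value it returned is usable;
       _rdrand64_step surfaces that flag as its return value
       (1 = good, 0 = try again). */
    static int get_random_u64(uint64_t *out) {
        for (int tries = 0; tries < 10; tries++) {
            unsigned long long v;
            if (_rdrand64_step(&v)) {
                *out = v;
                return 1;
            }
        }
        return 0;   /* hardware kept reporting failure */
    }

    int main(void) {
        uint64_t r;
        if (get_random_u64(&r))
            printf("rdrand: %016llx\n", (unsigned long long)r);
        else
            fprintf(stderr, "rdrand kept failing\n");
        return 0;
    }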

~~~
Natsu
Does anyone else feel like that's the sort of thing that will become part of a
vulnerability someday?

~~~
batista
No. It's there precisely so that it does NOT become part of a vulnerability
someday, i.e. so the system can recheck and ask for another random number.

~~~
Natsu
If it's just a flag you can ignore, I think that someone will.

------
eblume
I guess I don't really understand why this is impressive - the GPU is still
not good enough to handle modern gaming and apparently has some issues with
media encoding as well (although I will admit that that section confused me
somewhat), and it sounds like the CPU is just a modest upgrade in efficiency.

I'm not saying it's bad, it looks like solid progress from Intel as we all
continue to expect, but I don't get why this is considered 'tick+' when it
seems less exciting to me than Nehalem.

~~~
batista
> _I guess I don't really understand why this is impressive - the GPU is still
> not good enough to handle modern gaming_

Because the huge majority of the population does not care about "modern
gaming", but they do care about lower cost and improved battery life with an
integrated GPU, and would like to see it offer better performance for apps
where it's needed?

~~~
mbell
You're right that the majority of the population doesn't care about playing
Battlefield 3, as an example. But a very large percentage of the populace is
insanely interested in casual gaming, see Farmville.

The value that gets skipped by mainstream "tech sites" is that, yeah, it's
kinda crap at playing Battlefield 3, but what it does do is create a baseline
of capabilities that you can expect out of an average device. In other words,
it opens the doors for mainstream adoption of technologies that weren't there
before, ultimately leading to a new performance baseline for those putting out
casual games.

I guess what I'm saying is that the next Farmville could now have decent 3D
graphics, whereas that was out of the question prior to the mainstreaming of
decent GPUs. Intel plays a key role in that process.

~~~
batista
> _I guess what I'm saying is that the next Farmville could now have decent 3D
> graphics, whereas that was out of the question prior to the mainstreaming of
> decent GPUs. Intel plays a key role in that process._

Well, you have a point, but I think that Intel's GPU capabilities are already
far beyond the needs of a casual 3D game. I would expect to first see some
casual 3D games that utilize at least that much power before I would think
that more power is needed.

Also, a bigger problem is that MS doesn't support WebGL in IE, so no 3D
casual games, unless you model them with a 3D engine built on Canvas (which
would mean it would use only the 2D acceleration features of the GPU that the
canvas uses).

------
rwmj
Was anyone else looking at those die photos and thinking that it'd be better
to put 4 more cores in the space occupied by the GPU?

~~~
ChuckMcM
Not really, allow me to explain.

One of the challenges of GPU performance is getting the connection between the
CPU and Memory correct. That is why for years and years you've had the
'special' video card slot which was optimized for that, and then the general
purpose slots that didn't have the kinds of demands that video offered. So if
I get back space for a 'regular' slot and can put the GPU inside, that is a
win for me, even though I predominantly use them in servers, which brings me
to ...

CUDA, or more accurately, running general purpose compute tasks on shader
engines. Having the GPU there, even if I am not using it as a GPU, can provide
some impressive benefits if I can program it to operate on in-memory data in a
fast parallel way. So in this regard it stops being a GPU and starts being a
kind of funky but powerful co-processor.

And of course the third thing is that keeping the power draw sane is
important. It's a small chip and pulling a lot of heat out of it is very
difficult. Additional cores, additional heat, higher operating temps (or more
exotic cooling systems).

One of the things that would be interesting (but won't happen on an Intel part
any time soon) is if there were a bridge port so you could run dual-CPU and
the GPUs could run in a crossfire/SLI mode where they each rendered every
other line or something along those lines. Since I use dual cpu motherboards
that would give me additional compute options that I don't have at the moment,
but it won't happen at Intel because it would disrupt the market positioning
of Ivy Bridge.

~~~
ajross
Ouch, please don't do this. You're mixing stuff up badly. The "special" video
card slot throughout history has simply been a higher bandwidth interconnect
relative to whatever else was there (well, in the AGP days it also had a
primitive IOMMU for doing DMA into userspace). The integrated graphics on
Intel parts, obviously, have the important distinction of sharing the memory
controllers (and, I believe, the L3 cache?) with CPU work. That does hurt
things under some loads, though many games are actually quite CPU-light these
days.

"CUDA" is an NVIDIA trademark for their compute platform, it's not a special
kind of technology. In fact, Intel's i915-derived GPU works very similarly,
and Intel's AVX instructions work along basically identical lines. They're all
wide SIMD implementations of a bunch of parallel tasks (/threads/lanes/warps;
the industry terminology is a mess).
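
To make the "wide SIMD" point concrete on the CPU side, a minimal AVX sketch
in C (eight float lanes driven by one instruction; build with -mavx, the array
contents are just for illustration):

    #include <immintrin.h>
    #include <stdio.h>

    int main(void) {
        float a[8] = {0, 1, 2, 3, 4, 5, 6, 7};
        float b[8] = {10, 10, 10, 10, 10, 10, 10, 10};
        float c[8];

        __m256 va = _mm256_loadu_ps(a);    /* load 8 floats into one register */
        __m256 vb = _mm256_loadu_ps(b);
        __m256 vc = _mm256_add_ps(va, vb); /* 8 adds, one instruction */
        _mm256_storeu_ps(c, vc);

        for (int i = 0; i < 8; i++)
            printf("%g ", c[i]);           /* prints 10 11 12 ... 17 */
        printf("\n");
        return 0;
    }

A GPU "warp" or shader thread group is roughly the same trick, just with more
lanes and its own scheduling.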

Honestly, I think rwmj is right: for a consumer gaming rig, it _would_ be
preferable to have an 8 core part without integrated graphics. The reasons
that isn't going to happen aren't technical though: first, Intel needs to ship
different binnings of these same parts into segments (Ultrabooks, tablets)
where the graphics are required and where price is more constrained --
basically, they have to make these in large quantities anyway. Second (and
relatedly) they already do make "lots of cores w/o graphics" variants, but
they sell them into the low volume _server_ market where margins are much
higher. Selling an 8 core consumer chip would cannibalize Xeon sales.

~~~
ChuckMcM
Just to clarify:

You: "The "special" video card slot throughout history has simply been a
higher bandwidth interconnect relative to whatever else was there..."

Me: "One of the challenges of GPU performance is getting the connection
between the CPU and Memory correct. "

Yes, it's the GPU <-> (CPU/Memory) interconnect. It has, since the introduction
of PCI, been a 'different' slot than other peripherals. So I don't see how
we're confused.

You: ""CUDA" is an NVIDIA trademark for their compute platform, it's not a
special kind of technology. ... the Industry terminology is a mess)."

Me: "CUDA, or more accurately, running general purpose compute tasks on shader
engines."

The industry terminology is a mess, however most readers recognize the name
'CUDA', nVidia's implementation of a shader language, as that technology.
Further, it was the introduction of using shaders as vector units which led to
folks implementing things like the 'PS2 supercomputers'. (And to be precise,
no, you cannot use nVidia's tools on PS2s.)

You: "Honestly, I think rwmj is right: for a consumer gaming rig, it would be
preferable to have an 8 core part without integrated graphics."

Which is great; what I would love to see, then, is how you reason to that
opinion. What is it about the 4 additional cores that would improve the
'consumer gaming rig'? How are you measuring 'good' vs 'not as good'? Cost?
Triangles per second per dollar? Developer support?

"Second (and relatedly) they already do make "lots of cores w/o graphics"
variants, but they sell them into the low volume server market where margins
are much higher. Selling an 8 core consumer chip would cannibalize Xeon sales."

I don't think anyone has argued that Intel couldn't make an 8 core processor
out of this technology; heck, the feature size is small enough they could
probably do 12 or 16 cores and still get decent yield, but there are other
system issues associated with that, most notably cache behavior and size.

But the specific question on the table was that Intel has made a part which
they expect people to put into laptop machines and maybe even tablets. They
chose to make it a four core machine with an integrated GPU, are you arguing
that the consumer experience on those laptops would be improved if Intel
required an external GPU? If so I'd be interested in your take on that as
well. First, the way you evaluate value, and then the case for an external GPU
that maximizes that value.

------
cdrxndr
Agreed, this report comes across as a bit underwhelming, but the integrated
GPU boost is really what I've been waiting for. And it's more power efficient
to boot.

I don't play games, but I do have a crazy-big monitor and want a bit more
oomph to drive the display before I get a notebook without a dedicated
graphics card. Still waiting on the dual core ...

------
K2h
for my usage pattern, still no reason for me to move from the core 2 quad.

~~~
luser001
care to elaborate? i have a core 2 quad, which i find more than enough even
for some of my "serious" programming work.

~~~
K2h
At work I'm on i5 and i7 stuff and never use all that (I'm one of the few here
on HN that isn't a programmer). At home, I was going to put in a 2600K at the
last build, but there was that whole chipset recall problem and I needed
something in quick, so I went back a generation and got the Core 2 Quad with 8
gigs of RAM. The usage pattern is really office applications and a few
specialized plot applications (Igor). After I added the SSD and a newish video
card, the CPU is just not the problem. Frankly... there is no problem. So no
incentive to upgrade.

When I started using computers heavily in the '90s you were lucky to get a
year out of a machine before it was just too slow to do the tasks you needed.
For me, those days died when the P4 came out. Unless you're super CPU-bound,
like compiling, you just don't need it for basic office stuff.

------
comex
This is the first time I've heard of Quick Sync. Sounds useful.

It's only supported on Windows, with a proprietary SDK. No documentation.
Yay...

~~~
ajross
"Quick Sync" is a marketing term for the hardware video codec on the die, and
a software suite built around it. The Intel software is Windows-only, but I'm
pretty sure the codec is documented. At least it has a section in the big SNB
graphics docs dump they did a while back. But AFAIK there are no free drivers
written to it.

------
rollypolly

      implements support for DirectX 11, OpenCL 1.1, and OpenGL 3.1.
    

This should be a boon for game developers.

~~~
angersock
Erm, is this real support, or just more cribbing from Mesa?

