

Intel Launches New Sandy Bridge Processors, Benchmarks Show Big Gains - MojoKid
http://hothardware.com/Reviews/Intel-Core-i72600K-and-i52500K-Processors-Debut/

======
Construct
Great progress, especially in power consumption. However, last-generation
processors are already very fast. A better upgrade for most developers would
be an SSD: all of this speed is great, but useless as long as the processor is
waiting on a mechanical hard disk.

Once you get used to your applications opening almost instantly, it's tough to
go back to a mechanical HDD.

~~~
m_myers
I've been watching for them to drop into my price range for over a year now.
Are they really worth the premium?

~~~
khafra
I bought a 60GB one last year, which is plenty for my OS and applications--I
use a 1.5TB external for storage. It's the most tangible difference in my
computing experience since 4 years ago when I bought 8GB of RAM and got rid of
my swap file. If you're willing to spend money to make things faster, spend it
on an SSD.

------
Alex3917
It's amazing that you can get their top-of-the-line processor for only $294.
The best Pentium 4 cost $637 when it launched in June 2003, which is $757 in
today's dollars. This means the best Intel processor money can buy is 2.5x
cheaper today than it was only 7.5 years ago.

~~~
Tuna-Fish
This is only an artifact of market segmentation and release schedules.

LGA1155 is their budget line. 2600K is not the best processor they can make --
it is the best processor they can make given very stringent whole-platform cost
limits. In addition to having the desktop budget line, they also have a
desktop high-end line. Last generation, these were the LGA1156 and LGA1366
respectively. The processors they just released are the budget line for this
gen, and the high-end line fits a socket called LGA2011, which will be
released later, and on which processor prices will range from $300 to about
$1,000.

In every generation, new Intel processors gain roughly as much performance as
the gap between the budget and high-end product lines, so it's normal for
today's budget processors to beat yesterday's high end. Usually
Intel releases the high-end first for better margins and to avoid this kind of
confusion -- this time, they started on the budget end, probably to pummel
AMD, who previously only competed on the budget segment and now cannot really
compete much at all.

~~~
Retric
Intel adding integrated graphics to their CPUs basically means spending ~20%
of the chip on something that most mid- to high-end consumers are not going to
use, and that would be more cheaply produced as a separate chip on the low end.
Intel's basic assumption is that AMD is so far behind it can't exploit this,
and that Intel can use its dominant position to crush the ultra-low-end GPU
market. But it's still risky.

IMO, Intel focusing on the low to mid range for this release cycle is vital,
or they are going to be eaten.

~~~
kenjackson
How is this more cheaply produced as a separate chip? If that's the case then
AMD could exploit it. I thought the point was that producing this would be
cheaper, passing on the cost savings to the consumer, while at the same time
offering better perf than low-end GPUs.

Everything I've seen seems to indicate that this is true. Did I miss
something?

~~~
Retric
The larger the chip, the fewer chips you get from the same process. The
larger the chip, the higher the risk of a defect. And the larger the number of
transistors, the lower your heat budget for the rest of the chip.

But, far more importantly, producing a 32nm graphics chip is more expensive
(for now) than a 45nm chip using the same number of transistors. Intel has a
huge process advantage and is often on the lookout for useful things to do
with their huge transistor budget, but integrated graphics has huge downsides,
most notably memory bandwidth issues.

~~~
kenjackson
Regarding bandwidth, the GPU now has access to the on-chip (L3) cache. Since
low-end GPUs use shared memory anyway, this actually saves bandwidth to main
memory, and because it needn't go out to main memory as often, it also saves
power (on the power-hungry GPU).

I get that you decrease yield, but without knowing the numbers it's hard to
tell whether it has much impact on the bottom line. I suspect it's a bigger
win than loss (not hard to speculate, since Intel apparently made the same
bet).

~~~
Retric
8MB of cache has far less value for modern 3D rendering than you might think.
Granted, building hardware to handle HD video decoding is a huge win from an
energy-use perspective, but 3D is a different beast that is not very
cache-friendly.

To give some idea of cache's relative value: a GF100 is a 3-billion-transistor
chip with less than 1MB of cache, while a QX9650 is a 0.82-billion-transistor
chip with 12MB of cache.

PS: At a minimum it's a 1 - (1/1.2) = 17% loss, but adding in bad chips it's
probably closer to 25% until they start selling chips with the video
capabilities disabled. Also, I'm not saying it's a bad long-term move, just
that in the short term it's risky.

------
raganwald
Any HN perspective on the alleged DRM features built into the chips?

~~~
wmf
My "perspective" is that Sandy Bridge has nothing new on the trusted computing
front and Microsoft isn't smart enough to use that hardware anyway (see
Palladium).

------
acconrad
I love how these processors are already debuting on Amazon and the like for
under $200, and yet putting one inside a MacBook triples the price.

------
Symmetry
The most exciting thing about the new processors is the AVX instructions, and
we're going to have to wait a bit longer to see gains from those.

~~~
tocomment
Explain?

~~~
Tuna-Fish
The AVX extension adds a set of new instructions that can operate on 256 bits
of data at once (in the form of 4 64-bit floats, 8 32-bit floats, or various
other data types), and the new registers to fit them in.

These new instructions promise to significantly increase throughput for a
limited set of operations -- but to do that, the software has to be compiled
to use them, and often entirely rewritten to use the wider vectors. Few will
bother to do that until the new instructions have significant market
penetration -- which will take years.

I am not quite as optimistic as Symmetry is about the usefulness of these
latest extensions to x86 -- while many fringe tasks will benefit a lot, most
of what you do on a computer just isn't limited by the width of the vectors or
the amount of SIMD operations you can do per clock. IMHO, the insanely fast
caches are a much more significant improvement.

~~~
xpaulbettsx
They also allow the full CPU register context to be saved and restored with a
single instruction; the User-Mode Scheduler in Win7 SP1 uses it (the normal
scheduler does too, afaik).

~~~
Tuna-Fish
That's a single microcoded instruction -- those are not typically any faster
than a bunch of normal instructions.

------
2mur
Desktop: Pentium 4 3.20GHz, 2GB RAM (circa 2004). Laptop: Pentium 4 3.0GHz,
1GB RAM (circa Q1 2005). (It's a Sager (Clevo rebrand) and has been an
absolute tank, but the screen is starting to dim.)

Both run Ubuntu pretty well, for what I usually hack on (python/django) and
the desktop used to run the Source engine games pretty well (under XP). I've
recently started learning Android and Eclipse (and especially the AVD
emulator) and both of those machines just thrash.

I'm looking to build a new box and pick up a Thinkpad when they update. Going
to be an expensive year.

~~~
davidj
ha, my Sager 8xxx was a great laptop and lasted quite a while. Would you like
to buy it for parts? When it died I switched to a thinkpad t42 from 2004 that
I bought for $100.

