

An Inconvenient Truth: Intel Larrabee story revealed - s3graham
http://www.brightsideofnews.com/print/2009/10/12/an-inconvenient-truth-intel-larrabee-story-revealed.aspx

======
akamaka
Fantastic article. This is the best piece I've seen written on this subject so
far, with lots of context and a good background on what's happened over the
last five years.

One thing I don't see mentioned, and which has been a huge red flag for me, is
Intel's seeming preoccupation with raytracing. As a graphics programmer, I
find it absurd that Intel would hope to mount a technological disruption of an
industry that has been following a very steady course since SGI and Pixar were
founded in the mid-80s. I suspect that the Larrabee approach, as a whole, is a
case of arrogance: a belief that they can jump ahead of everyone else without
having to slog through the hard, painful incremental improvements that nVidia
and ATI/AMD have been making.

That being said, the author correctly points out that Intel has sunk money
into many technological mistakes in the past, and has always eventually
recognized them and changed course.

~~~
rbanffy
I think they may have a plan B ready:

<http://techresearch.intel.com/articles/Tera-Scale/1826.htm>

It's not trivial to engineer a massively-multi-core general-purpose processor.
I would try it with MIPS or ARM instead of x86, but I guess this is not an
option for Intel.

AMD, however, doesn't need to preserve the x86 ISA's status as an industry
standard. Quite the contrary: anything they do that hurts Intel only makes
AMD stronger. And they have some folks with massively-multi-core expertise
in-house.

~~~
wmf
That chip appears to be worse than Larrabee in every respect. In particular it
is much harder to program, which would just make Intel's late drivers later.

------
tedunangst
"Building 100 or so millions of lines of code for the driver part is a
herculean task, but the sources at hand claim that they are working on target"

That's quite a driver!

~~~
blasdel
It's lots of test code, and lots of microcode that gets loaded onto the board
when it inits.

Most of that code isn't going to be running directly in the kernel's address
space in the shipping driver.
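
Roughly, the split works like any firmware-loading driver: a small resident
driver pulls the big blob off disk and pushes it to the card at init time.
Here's a hypothetical sketch using the stock Linux firmware API (nothing
Larrabee-specific; the firmware name and MMIO window are made up):

    /* Hypothetical sketch: push a microcode blob to the card at init.
     * The blob ships as a file (e.g. under /lib/firmware/), so it never
     * lives in the kernel's address space as code. */
    #include <linux/firmware.h>
    #include <linux/io.h>

    static int load_microcode(struct device *dev, void __iomem *ucode_mem)
    {
        const struct firmware *fw;
        int err;

        /* Ask the kernel to fetch the blob from disk. */
        err = request_firmware(&fw, "example/microcode.bin", dev);
        if (err)
            return err;

        /* Copy it into the board's instruction memory over MMIO. */
        memcpy_toio(ucode_mem, fw->data, fw->size);

        release_firmware(fw);
        return 0;
    }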

------
ntoshev
Same author says "Apple ditches 32nm Arrandale, won't use Intel graphics":

http://www.brightsideofnews.com/news/2009/12/5/apple-ditches-32nm-arrandale2c-wont-use-intel-graphics.aspx

If this rumor is true, it is quite an extreme move. Is 3D graphics _that_
important nowadays? I thought most laptops were bought for work, not games.

~~~
nvoorhies
Apple's likely looking to unify their GUI rendering path to the greatest
degree possible, and moving things to OpenGL is the easiest way to do that
while keeping everything nice and fast. They might lose a little on a $10 or
$15 increase to the BOM on a laptop, but the potential savings in engineer
hours could be tremendous.

There's also the chance that a desire to use OpenCL more widely in internal
applications means that having a GPU that can run CUDA or the equivalent is
becoming a de facto requirement for all Macs.
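
To make that concrete: "can this machine run OpenCL on a GPU at all" boils
down to asking the runtime for a GPU-class device, which an IGP without an
OpenCL driver simply won't satisfy. A hypothetical sketch against the plain
OpenCL C host API (everything here is illustrative; on Mac OS X the header is
<OpenCL/opencl.h>):

    /* Hypothetical sketch: does this machine expose a GPU-class OpenCL device? */
    #include <stdio.h>
    #include <CL/cl.h>    /* on Mac OS X: #include <OpenCL/opencl.h> */

    int main(void)
    {
        cl_platform_id platform;
        cl_device_id gpu;
        cl_uint found = 0;

        if (clGetPlatformIDs(1, &platform, NULL) != CL_SUCCESS)
            return 1;

        /* Ask specifically for a GPU; a CPU-only OpenCL stack won't do. */
        if (clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &gpu, &found) != CL_SUCCESS
            || found == 0) {
            printf("no GPU-class OpenCL device\n");
            return 1;
        }
        printf("GPU OpenCL device present\n");
        return 0;
    }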

------
jrockway
Is this someone's attempt to be hired by TechCrunch?

~~~
wglb
Seems like they are aiming at a different target. This article goes very deep,
with very specific inside knowledge of long-running development projects.
TechCrunch is more about current events across the industry.

~~~
dkersten
Or current "rumours", in some cases!

This guy's article is much, much too in-depth to be a TechCrunch article, IMHO.

