
Intel to Develop Discrete GPUs, Hires AMD's Raja Koduri as Chief Architect - namlem
https://www.anandtech.com/show/12017/intel-to-develop-discrete-gpus-hires-raja-koduri-as-chief-architect
======
unsigner
The connection between "AI" and "GPU" in everyone's mind is a testament to the
PR chops of NVIDIA. You don't need a GPU to run ML/DL/neural networks, but
NVIDIA have GPU tech so they're selling GPUs. What you need is the massive ALU
power and, to a lesser extent, the huge internal bandwidth of GPUs. There are
huge chunks of GPU die area that are of no use when running NN-type code:
the increasingly complex rasterizers, the texture units, the
framebuffer/zbuffer compression stuff, and on the software side, the huge pile
of junk in the drivers that allows you not only to run games from a decade
ago, but also to run them better than last year's GPU could. If you can afford
to start
from scratch, you can lose a lot of this baggage.
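
To make that concrete: a minimal sketch (NumPy, hypothetical shapes) of what
NN-type code actually executes. A dense layer is a matrix multiply plus an
elementwise nonlinearity - pure ALU and memory-bandwidth work that never
touches a rasterizer, texture unit, or framebuffer.

    import numpy as np

    batch, d_in, d_out = 256, 4096, 4096                  # hypothetical shapes
    x = np.random.randn(batch, d_in).astype(np.float32)   # activations
    W = np.random.randn(d_in, d_out).astype(np.float32)   # weights
    b = np.zeros(d_out, dtype=np.float32)

    # One dense layer: ~2*batch*d_in*d_out multiply-accumulates, bounded by
    # ALU throughput and the bandwidth needed to stream W and x.
    y = np.maximum(x @ W + b, 0.0)                        # matmul + ReLU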

~~~
mtgx
And yet Intel now seems to want to make GPUs for machine learning... so I
guess Nvidia's PR worked on Intel, too?

But as I said in another comment, the truth is Intel doesn't seem to know
what it's doing, which is why it's pushing in 5 or 6 different
directions with many-core accelerators, FPGAs, custom ASICs, neuromorphic
CPUs, quantum computers, graphcores, and so on.

By the time Intel figures out which one of these is "ideal" for machine
learning, and behind which arrows to "put more wood," Nvidia will have an
insurmountable advantage in the machine learning chip market, backed by an
even stronger software ecosystem that Intel can't build because it doesn't yet
know "which ML chips will win out".

If I had to describe Intel in a sentence these days, it would be: "Intel
doesn't have a vision." It's mostly re-iterating on its chips and rent-seeking by
rebranding weak chips with strong chip brands, and adding names like "Silver"
and "Gold" to Xeons (and charging more for them, because come on - it says
_Gold_ on them!), as well as essentially bringing the DLC nickel-and-diming
strategy from games to its chips and motherboards.

Meanwhile, it's wasting billions every year on failed R&D projects and
acquisitions _because_ it lacks that vision on what it really needs to do to
be successful. Steve Jobs didn't need to build 5 different smartphones to see
which one would "win out" in the market.

~~~
newlyretired
Non-incremental advances require a lot of wasted-path R&D. If any of Intel's
projects creates a generational leap, it will pay off handsomely. When the way
forward isn't clear, I like to use concepts from path-finding algorithms to
drive strategy - assuming you can afford multiple parallel efforts.

It's not clear whether doing this in-house is superior to closely monitoring
the state of the art and then buying a company that develops a winner.

------
eganist
Can't fight them on price? Fight them on talent.

Whoever at AMD refused to match the offer probably made a terrible decision.
This is about the worst time to lose that talent, right after inking a GPU die
deal which, in light of this news, will only be temporary. AMD just
got played.

If I were AMD, I would review Mark Papermaster's comp and incentives to ensure
he doesn't leave.

(I'm long AMD)

~~~
eksu
I don’t think this was all about money. Raja had been trying to run Radeon
Technologies Group like an independent company and pushing for separation from
AMD for a while. HardOCP did a good piece on this ->
[https://hardocp.com/article/2016/05/27/from_ati_to_amd_back_...](https://hardocp.com/article/2016/05/27/from_ati_to_amd_back_journey_in_futility)

I think the recent Intel + AMD custom chip was probably the last thing Raja
did before RTG got the reins put back on, and now he's jumping ship to pursue
what he's wanted all along: to work with more independence.

~~~
Terribledactyl
What leads you to think Intel will give him more autonomy? Or is their
integrated graphics team set up differently from Radeon's? I would suspect the
old ATI boundaries after 11 years would still be stronger than something Intel
has homegrown over the past 20.

~~~
0xbear
Well, he wouldn't go there if Intel weren't giving him what he wanted,
whatever that might be.

~~~
MengerSponge
And if it's not corporate structure, maybe it's a boat.

More power to him.

------
ActsJuvenile
Raja, if you are reading this make sure your Intel GPU has two things that
competition doesn't:

1) FP16 half-precision training: NVidia is artificially disabling this feature
in consumer GPUs to charge more for Tesla / Volta (see the sketch after this
list).

2) A license for (or clone of) AMD's SSG technology to give massive on-GPU
memory: NVidia's 12 GB is not sufficient for anything beyond thumbnail- or
VGA-sized images.
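
On point 1, the usual payoff of fast FP16 is mixed-precision training: do the
bulk of the math in half precision while keeping an FP32 master copy of the
weights. A rough PyTorch-flavored sketch (hypothetical model and learning
rate, not any vendor's official recipe):

    import torch

    model = torch.nn.Linear(1024, 10).cuda().half()    # FP16 weights and math
    master = [p.detach().float() for p in model.parameters()]  # FP32 masters

    x = torch.randn(32, 1024).cuda().half()
    y = torch.randint(0, 10, (32,)).cuda()

    loss = torch.nn.functional.cross_entropy(model(x).float(), y)
    loss.backward()

    # Apply the FP16 gradients to the FP32 master weights, then copy back.
    for p, m in zip(model.parameters(), master):
        m -= 1e-3 * p.grad.float()
        p.data.copy_(m.half())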

My experience with Intel's Xeon Phi (KNL) has been miserable so far; I hope
Raja has better luck with the GPU line.

~~~
rbanffy
> My experience with Intel's Xeon Phi (KNL) has been miserable so far; I hope
> Raja has better luck with the GPU line.

I'd love to see the Phi approach taken further. I'm not a huge fan of having
different ISAs, one for my CPU, one for the compute engines of the GPU (to say
nothing about the blobs on my GPU, network controller, ME). I'd prefer a more
general approach where I could easily spread the various workloads running on
my CPU to other, perhaps more specialized but still binary-compatible, cores.

Heck... Even my phone has 8 cores (4 fast, 4 power-efficient, running the same
ISA).

~~~
Symmetry
When you've got a huge out-of-order engine, the extra effort it takes to
decode x86 instructions is lost in the noise. When you're going with the
flock-of-chickens approach and you have a huge number of very small cores, the
overhead is a killer. Intel tried to solve this by using medium cores with big
SIMD units, but SIMD is just less flexible than a GPU's SIMT is.

Power and area generally scale as the square of the single threaded
performance of a core. The huge number of "cores"/lanes in a GPU are much
smaller and more efficient individually than even your phone's smaller cores.
And the x86 tax gets worse and worse the smaller you try to make a core with
the same ISA. Intel wasn't even able to compete successfully with the Atom
against medium-sized cellphone chips.
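
A back-of-the-envelope illustration of that square law (Pollack's rule - a
rough empirical observation, not a hard limit): under a fixed die budget, many
slow cores deliver more aggregate throughput than one fast core, which is
exactly the trade GPUs make, and why per-core ISA overhead hurts tiny cores
the most.

    # Rough assumption (Pollack's rule): core area ~ (single-thread perf)^2.
    DIE_AREA = 100.0

    def aggregate_throughput(perf_per_core):
        cores = DIE_AREA // (perf_per_core ** 2)   # cores that fit on the die
        return cores * perf_per_core

    print(aggregate_throughput(10.0))   # 1 big core     -> total perf  10
    print(aggregate_throughput(1.0))    # 100 tiny cores -> total perf 100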

~~~
rbanffy
> but SIMD is just less flexible than a GPU's SIMT is

There is nothing preventing the x86 ISA from being extended in that direction. As
long as all cores (oxen and chickens, as Seymour Cray would say) can shift
binaries around according to desired performance/power, I don't care.

Binary compatibility is awesome for software that has already been written and
for which we don't have the source code. Pretty much everything on my machines
has source available.

The OS may need to be more aware of the performance characteristics of the
software it's running so it can allocate work better across the slightly
different cores, but, apart from that, it's a more or less solved problem.

Atoms didn't perform that much worse than ARMs on phones. What killed them is
that they didn't run our desktop software all that well (even though one of my
favorite laptops was an Atom netbook).

------
bhouston
I am not convinced that Intel can win here. They don't seem to succeed with
home-grown GPU tech and other big-bang approaches. Now, if they were to
acquire decent GPU tech, then I would bet on them; the homegrown route alone
just doesn't seem to work out for them.

I suspect part of the reason is the long time frames for dev of this tech. I
suspect it is at least 2 years for this to see the light of day. That is
forever in this space.

Intel failed with Larrabee and Itanium. Maybe this will go better?

~~~
mtgx
It looks like Raja will lead the development of machine learning-focused GPUs.
Isn't this Intel basically admitting that its Xeon Phi, Nervana, and Altera
(FPGA) efforts to win the machine learning market are all dead ends?

How many machine learning strategies is Intel going to try? Does it even know
what it's doing? Spending billions of dollars left and right on totally
different machine learning technologies kind of looks like it doesn't, and
it's just hoping it will get lucky with one of them.

And even if you think that's not a terrible strategy to "see what works",
there's still the issue that they need to have great software support for
_all_ of these platforms if they want developer adoption. The more different
machine learning strategies it adopts, the harder that's going to be for Intel
to achieve.

~~~
ironchef253
Intel needs a CEO that skates to where the puck is going to be, not where it
was three goals ago.

~~~
keganunderwood
That's scary if that's what is required of a CEO. You would either need to be
an oracle and predict where the industry will go, or you'd need to _make_ the
industry go in the direction you're taking the company.

------
diab0lic
The link is incredibly light on actual content, but this seems to be good news
for AI enthusiasts, as perhaps now we'll get a reasonable competitor to
CUDA/cuDNN and their associated hardware for running GPU-accelerated machine
learning. Intel seems to be taking the ML/AI space seriously, and this move
seems very likely to be related. Yes, I'm aware of OpenCL, as I am also aware
of its level of support in libraries such as PyTorch, TensorFlow, and Theano --
it isn't the first-class citizen that CUDA is. While those libraries aren't
perfect, they offer the experience of writing the experiment on your laptop
without a GPU, validating, then running the full experiment on larger
hardware.
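
That workflow is just a device flag in practice. A minimal PyTorch-style
sketch (hypothetical model and shapes): the same script validates on a
GPU-less laptop and runs unchanged on a CUDA box.

    import torch

    # Develop and validate on CPU; the identical code runs on a CUDA machine.
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

    model = torch.nn.Sequential(
        torch.nn.Linear(784, 128),
        torch.nn.ReLU(),
        torch.nn.Linear(128, 10),
    ).to(device)

    x = torch.randn(64, 784, device=device)
    logits = model(x)                  # same code path, CPU or GPU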

In my ideal world, competition from Intel would force NVidia to play nice with
OpenCL or something similar, and encourage competition in the hardware space
instead of the driver-support space. Unfortunately, the worst case looks
something more like CUDA, OpenCL, and a third option from Intel with
OpenCL-like adoption. :(

~~~
moonbug22
Only armchair spectators are still talking about OpenCL. It's as dead as
disco.

~~~
joefourier
I still use OpenCL daily; it allows me to program for both NVidia and AMD
GPUs simultaneously with the minimum amount of pain. And you can still use it
for mobile GPUs and specialized chips like the Myriad.

Do you have another open, cross-platform, widely compatible GPU programming
framework to recommend?
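
For the curious, this is roughly what that looks like with PyOpenCL - a
minimal sketch with a hypothetical kernel; the same kernel source runs on
NVidia, AMD, or Intel devices, assuming a working OpenCL driver:

    import numpy as np
    import pyopencl as cl

    ctx = cl.create_some_context()        # picks whatever device is present
    queue = cl.CommandQueue(ctx)

    a = np.arange(1024, dtype=np.float32)
    mf = cl.mem_flags
    a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
    out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

    prg = cl.Program(ctx, """
        __kernel void scale(__global const float *a, __global float *out) {
            int i = get_global_id(0);
            out[i] = 2.0f * a[i];         /* same source on any vendor */
        }
    """).build()

    prg.scale(queue, a.shape, None, a_buf, out_buf)
    out = np.empty_like(a)
    cl.enqueue_copy(queue, out, out_buf)  # read the result back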

~~~
gcp
Same here. What else can you do to ship GPU acceleration to AMD and NVIDIA
people?

The alternatives recommended here aren't even serious IMHO. I'd rather switch
to CUDA and wait till Intel/AMD sort out a REAL compatibility layer than deal
with those.

Unless I'm mistaken, HIP still requires a separate compile for either platform
and what runtime do they expect end users to have exactly?! At least CUDA and
OpenCL are integrated in the vendor drivers.

Vulkan compute with SPIR-V seems to be the only real solution, but even that
is still very early. Still waiting for proper OpenCL 2.0 support in NVIDIA
drivers :P

~~~
freeone3000
You simply don't ship. Enterprise deep learning shops don't ship their
training code - large models are trained on purpose-designed, dedicated
hardware. Hardware compatibility doesn't matter; software does. Even that's
flexible if it's significantly faster.

(The models can be executed on low-powered, commodity CPUs. No need for any
GPU there.)

~~~
gcp
_You simply don't ship_

That's totally an option for our product, great idea! Why did I never think of
this!

No, seriously, we _are_ shipping, using _OpenCL_, and it gives about a 20x
performance advantage for most users, regardless of whether they have AMD or
NVIDIA hardware. If something that's actually better than OpenCL comes along
(or if AMD RTG goes out of business), I'll switch to it, no heart broken.

But that hasn't happened yet.

------
JosephLark
Interesting, given that just 2 days ago it was announced [0] that Intel was
going to start to use AMD for some of their integrated graphics. Now they're
going to compete against them in the discrete graphics space.

Also, Koduri recently left AMD after what many felt was a disappointing
discrete graphics release in Vega.

[0] [https://www.anandtech.com/show/12003/intel-to-create-
new-8th...](https://www.anandtech.com/show/12003/intel-to-create-new-8th-
generation-cpus-with-amd-radeon-graphics-with-hbm2-using-emib)

------
loeg
Wowza. If I moved to a direct competitor like that, my employment contract's
"non-compete" clause would be brought out immediately. And I'm no C-level
executive, just an individual contributor. I wish Washington had California's
non-compete law.

~~~
cakebrewery
Haven't read much on it, but this happening right after the integrated GPU
deal with AMD just strengthens the "teaming up against NVIDIA" theme going on.

~~~
Rarebox
It's like an Age of Empires FFA where the losing players always team up
against the leader.

------
chisleu
I'm excited. I don't care if Intel wins. I just want a video card that doesn't
suck and works perfectly with linux. Even if I unplug my monitor sometimes...
Even if it's a laptop and it switches GPU for different outputs... Even if I
want to use the standard xrandr and normal ass linux tools for configuring my
monitor.

~~~
orbifold
Maybe that would happen if kernel developers were not such divas and didn't
think it was appropriate to use coarse language in public discourse. Nvidia's
graphics drivers work perfectly on Windows, and they have the only OpenGL
implementation on Linux that is not a total joke.

~~~
madez
Why do you say the Mesa OpenGL is a total joke?

~~~
orbifold
This is a somewhat dated [http://richg42.blogspot.de/2014/05/the-truth-on-
opengl-drive...](http://richg42.blogspot.de/2014/05/the-truth-on-opengl-
driver-quality.html?m=1) overview of driver status by a game developer; vendor
A is NVIDIA, and as the article points out, they are the only one with a
performant, relatively bug-free implementation. Also notice how he mocks Intel
for having two driver teams: that Linux expects special treatment, by
integrating the driver into its graphics abstractions, demonstrably leads to
worse performance and fewer features than if you bypass all those abstractions
and use essentially the same driver for all kernels.

~~~
madez
Thanks for that link. I must say I deem it maybe not completely outdated, but
at least worthy of an update.

But I'm in awe of what one can read there.

"This vendor[Nvidia] is extremely savvy and strategic about embedding its devs
directly into key game teams to make things happen. (...). These embedded devs
will purposely do things that they know are performant on their driver, with
no idea how these things impact other drivers.

(...)

Vendor A[Nvidia] is also jokingly known as the "Graphics Mafia". Be very
careful if a dev from Vendor A gets embedded into your team. These guys are
serious business."

So, basically Nvidia is sabotaging OpenGL to fuck up the specs and then
implement other working variations and make the game developers use their
version? If that is true, fuck Nvidia.

"On the bright side, Vendor C[Intel] feeds this driver team[Windows Driver
Team] more internal information about their hardware than the other team[Linux
Driver team]. So it tends to be a few percent faster than driver #1 on the
same title/hardware - when it works at all."

What the fuck is going on in this industry? Intel is sabotaging its own Linux
driver team? Why?

"I don't have any real experience or hard data with these drivers, because
I've been fearful that working with these open source/reverse engineered
drivers would have pissed off each vendor's closed source teams so much that
they wouldn't help.

Vendor A[Nvidia] hates these drivers because they are deeply entrenched in the
current way things are done."

That, now finally, makes sense. Nvidia is strong-arming developers to not
support Mesa because they are afraid of it. Nvidia is afraid of Mesa. I think
this should be more widely known.

~~~
orbifold
The way I read this was a bit different: NVidia actually is the only vendor
that offers a performant, complete, and relatively bug-free implementation. For
example if you consider this
[http://gdcvault.com/play/1020791/](http://gdcvault.com/play/1020791/)
presentation then it is relatively clear that most major innovations were
first available as OpenGL extensions by NVidia. The playing field might have
levelled somewhat with the introduction of vulkan, which eliminates a lot of
code that had to reside in the driver before. The main reason Mesa is unlikely
to catch up is that the backend compiler code is platform-specific, so unless
NVidia decides to publish their platform specification, it is unlikely that
Mesa will achieve meaningful success there. Even if NVidia did
publish a specification and left driver development to the community it is
unclear to me who would be willing to do the free work for them.

~~~
Nullabillity
Keep in mind that the blog post is from 2014. Since then AMD has rewritten
their Linux driver (fglrx -> AMDGPU) which didn't really pay off before their
4xx series (released 2016).

------
chx
> GT4-class iGPUs, which are, roughly speaking, on par with $150 or so
> discrete GPUs.

Erm. Nope. No Intel iGPU is on par with the 1050, much less the 1050 Ti.

[http://gpu.userbenchmark.com/Compare/Intel-Iris-
Pro-580-Mobi...](http://gpu.userbenchmark.com/Compare/Intel-Iris-
Pro-580-Mobile-Skylake-vs-Nvidia-GTX-1050-Mobile/m132950vsm211022)

(I compared mobile chips since the most powerful GT4 can only be found in the
mobile chips.)

It's only slightly behind the 1030 which costs $73.

~~~
unsigner
Look at it another way: no Intel iGPU is on par with any discrete GPU, because
in price segments where iGPUs appear, discrete GPUs tend to vanish in a matter
of 1-2 years. There used to be a significant number of NVIDIA Geforce
MX420/440s, 5200s and 6200s. Then much fewer 730s. Now 1030s are practically
only in laptops. Intel has been nibbling away at this market slowly but
steadily for a decade.

~~~
llukas
If driving an FHD display is all you want, then an integrated GPU is fine.
But we're starting to see 4K/UHD displays nowadays...

------
payne92
Finally! Intel has a lot of catching up to do.

As GPUs continue to evolve into general purpose vector supercomputers, and as
ML/deep learning applications emerge, it seems clear that more and more future
chip real estate (and power) will go to those compute units, not the x86 core
in the corner orchestrating things.

~~~
rodgerd
> Finally

Why on earth would you think Intel extending their near-monopoly is a thing to
celebrate?

~~~
dsr_
They don't have a monopoly in add-on GPUs. There are two strong competitors,
and Intel's current on-chip GPUs are comparatively pitiful.

------
shmerl
_> With his hire, Intel will be developing their own high-end discrete GPUs._

With Intel and AMD backing Mesa, things on Linux will get very interesting.

~~~
madez
I am sceptical about the consequences for user-controlled computing. AMD's
GPUs have developed in a positive direction in the past, while Intel is
unfriendly to users' control over the hardware they buy.

~~~
shmerl
Intel GPU drivers are open on Linux. How is that worse than what AMD are
doing?

~~~
madez
Intel does more than integrated GPUs.

~~~
shmerl
AMD too, but we are talking about GPUs in this case.

~~~
madez
Yes, and because of the above, I am sceptical about the consequences. I don't
consider it likely that the new GPU will work without proprietary firmware,
nor that the documentation will be better than AMD's is now.

~~~
shmerl
AMD GPUs also need firmware unfortunately.

------
artellectual
Damn, this is a major loss for AMD, losing Raja is definitely not the right
move. It would have been interesting to see the next iteration of AMD graphics
with Raja on board.

Threadripper and the Zen architecture put them back on the map; that's some
serious hardware for the price. I wish they had just kept iterating on the
CPUs and GPUs.

Vega is not a bad product; it just doesn't beat Nvidia's offering in the bar
charts. That doesn't mean it's bad, it just means it's in second place, which
is fine since it's cheaper as well. Technology needs to be iterated on.
Something must be going on at AMD at the moment.

------
cameronhowe
Can someone explain this to me: isn't the GPU industry all about patents and
trade secrets (enforced by NDAs)? Won't all of Raja's expertise be tied up in
that?

~~~
ChuckMcM
Intel has both a large patent portfolio and a lot of legal firepower in that
space, so no, I don't think folks like Nvidia will be able to "threaten" Intel
with patents. Nvidia might be able to threaten them with the monopoly card
(clearly Intel is using its dominance in desktops and laptops to move into an
adjacent market) but they have been doing that for many years with the
integrated GPUs so I would expect it to be a weak play.

~~~
cameronhowe
I must have expressed myself poorly. He's coming from AMD/RTG; everything he
knows is presumably what they use/own. Nothing to do with NVIDIA.

~~~
ChuckMcM
No, I understood, I just called out Nvidia because they are so often on the
other side of a patent dispute with Intel. AMD and Intel have broad cross
patent licensing deals in place because of previous fights over patents on the
frontside bus, the instruction set, etc.

From a strategic markets point of view, I see it this way:

Discrete GPUs give Intel a shot at owning both pieces of high-margin silicon
in a laptop / tablet design win (GPU & CPU), and potentially give Intel
additional ammunition to go after Nvidia or to mitigate their encroachment.

------
40acres
Intel is all-in on becoming a "data company". With the recent design wins in
self-driving cars & the AMD deal, I'm confident that they will come out of the
AI HW race in strong shape. This move just reaffirms that.

------
ohyes
It isn't clear that AMD's GPU architecture has really been competing with
Nvidia. We'll have to see how big a deal this is when AMD's APUs come out. I
expect them to be quite a bit better than Intel's integrated product.

This seems to be more of a direct competitive attack on AMD's integrated
product than it is competition with Nvidia. It feels to me like building
discrete GPUs is almost a misdirection.

------
zachruss92
An interesting counterpoint here. I have a friend who works for Intel as an
algorithms engineer for their self-driving vehicle acquisition (Mobileye).
Currently, he's using 2 1080TIs w/ TensorFlow to perform deep learning. It is
possible that Intel could be looking to develop a chip used specifically for
this purpose (a bet on self driving cars) and not for mass-production/sale
outside of that tech. Either way, all of the GPU/CPU updates in the past year
are just going to create more competition, which is better for the consumer in
most cases.

~~~
p1esk
Well, the whole point of the Mobileye acquisition was for Intel to have a
competing chip for autonomous cars. But it is possible that they are also
looking to compete at the 1080 Ti level, which would be very hard.

------
farhanhubble
Nvidia is light-years ahead in the GPU market. Besides, if this GPU push is
aimed at the deep learning market, Intel will have competition from the likes
of Xilinx too. IMHO they need to provide great software to go with their GPUs.
Traditionally, hardware manufacturers have shipped barely usable software.
They should perhaps try to use OpenCL and keep the rest of the tools and
libraries open source.

~~~
deepGem
This is what many people outside the AI world don't seem to understand.
Nvidia has a stranglehold in the form of CUDA and cuDNN. There isn't any
open-source equivalent to cuDNN. AMD is trying to push OpenCL in this
direction, but it will be a long time before DL libraries start migrating to
OpenCL. If tomorrow, by some miracle, an alternative GPU as good as the 1080
Ti popped up, it would be useless in the AI market.

~~~
SJetKaran
No it won't, especially if the price is competitive. Say for the price of one
1080 Ti I can buy 1.5 units of a comparable-performance graphics card; I'll
surely buy it. There are already resources being spent on OpenCL-based ML/DL
platforms
([https://github.com/plaidml/plaidml](https://github.com/plaidml/plaidml)).
The architectures keep getting bigger and training times keep getting longer.
I think you underestimate this factor. I need as much GPU computing power as I
can buy within the budget.
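
For reference, the PlaidML hook-in is small: per the project's documented
usage, it installs itself as the Keras backend before Keras is imported, so an
unmodified Keras model trains on an OpenCL-capable GPU (a minimal sketch with
a hypothetical model):

    import plaidml.keras
    plaidml.keras.install_backend()    # must run before importing keras

    from keras.models import Sequential
    from keras.layers import Dense

    # An ordinary Keras model; PlaidML compiles it for the OpenCL device.
    model = Sequential([
        Dense(128, activation="relu", input_shape=(784,)),
        Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="sgd", loss="categorical_crossentropy")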

~~~
deepGem
True, I would love to see some alternatives such as PlaidML. However, I can't
quite fathom that PlaidML will be a worthy alternative to, let's say,
TensorFlow or PyTorch or Caffe. I hope I am proven wrong.

~~~
SJetKaran
I think support for OpenCL will eventually come to other frameworks. But the
main problem is the fact that AMD is still far from NVidia in terms of
performance. Vega couldn't reach the performance of the 1080 Ti, and with
Volta next year the gap is going to increase drastically. If only AMD can
close the gap, the support, I'm sure, will soon come after that.

------
gbrown_
It will be interesting to see how "discrete" these GPUs will be. I'm assuming
they will only be "discrete" in the sense that they are not on the same chip,
but rather on the same package (via EMIB).

Either way surely this is a move by Intel to take away from Nvidia's consumer
share (which makes up the vast majority of their income) as Nvidia make
inroads into the data center market?

~~~
wyldfire
The big win that discrete GPUs provide to the cloud/backend marketplace (that
Intel sorta plays in via Xeon Phi) is from large banks of VERY fast memory
coupled with fast-clocked vector processors. But without a bunch of HBM or
something similar, the discrete GPU won't be able to do training at the scale
that NVIDIA and AMD do.

~~~
gbrown_
One would assume that in the data center for discrete cards Intel would do
something with their Nervana acquisition and HBM, or possibly (but less
likely) MCDRAM.

------
chucky_z
I don't know too much about Raja Koduri, but is leaving AMD and immediately
joining Intel not... really shady?

~~~
Maskawanian
How is it shady to change jobs? This notion that you should be jobless for a
period of time is ridiculous.

~~~
mathperson
They are companies directly competing... it is very possible an AMD engineer
could bring proprietary information to Intel. This literally just happened,
with massive press coverage, between Uber and Waymo - and it resulted in
intense legal action.

~~~
awalton
The fact is that people have to have job mobility, and need to be trusted that
when they leave a company, they leave behind that company's secrets. Many
companies make you sign a document that attests this: if you have any company
data, you destroy it, if you have any company equipment, you return it, if you
have any company knowledge, you forget or neglect to discuss it.

Most people, honest people, have no problem understanding these obligations
and abiding by them.

Dishonest people, who lie about destroying documents, are why we have Uber and
Waymo battling it out.

~~~
userbinator
_if you have any company knowledge, you forget or neglect to discuss it._

Given how the human brain works, that's very much impossible to do...
"standing on the shoulders of giants" and all that, as the saying goes.

I'm sure some companies would love to be able to "reformat" employees' brains
when they leave, but (fortunately) that's not the reality.

~~~
ams6110
> standing on the shoulders of giants

Of course. No question that you take the sum of your education and experience
with you to each new job. The "company knowledge" limitations are around
specific trade secret inventions or verbatim recreation of such.

------
LoSboccacc
Remember when project Larrabee could raytrace Quake in real time? Hope this
will be another stab at a hybrid GPGPU.

------
andreiw
Fool me once, shame on you. Fool me twice, shame on me.

Unclear what AMD thought they stood to gain with the Monday announcement - and
it didn't take long to have it play out in their disfavor.

I'm guessing Intel's GPU will never support OpenPower and Arm servers, and
will never ship on a CCIX-enabled adapter.

------
throwaway613834
Can someone explain how I'm supposed to interpret this along with the other
recent article on Intel & AMD creating a joint chip of some sort? Are they
competing or cooperating?

~~~
djrogers
They’re not creating ‘a joint chip of some sort’ - AMD will be selling GPUs to
Intel who will package them with their CPUs via EMIB.

[1]
[https://www.intel.com/content/www/us/en/foundry/emib.html](https://www.intel.com/content/www/us/en/foundry/emib.html)

------
wyldfire
Also here:
[https://news.ycombinator.com/item?id=15651848](https://news.ycombinator.com/item?id=15651848)

------
mc32
Since the days of Chips and Technologies, Intel has vacillated between going
whole hog on GPUs and retreating from them.

Wonder if this time they will stick with it for the long haul.

------
perseusprime11
I am not sure everyone here knows about Raja. He is a talent at a totally
different level. Big loss for AMD. AMD should have done all it could to keep
him.

------
alextooter
Intel picked the wrong guy and the wrong path. It just won't work.

I think Intel should acquire Nvidia, and let Jen-Hsun Huang lead the new
company.

------
krisives
Brings back memories of Larrabee, their last attempt at making a GPU before
they scrapped the project and wasted everyone's time.

~~~
pjmlp
I was in the room at GDCE 2009 where they were praising the vector
instructions while presenting a session on Larrabee.

------
abiox
intel once tried to do this with larrabee[0] some years back. hopefully they
learn from what went wrong there.

[0]
[https://en.wikipedia.org/wiki/Larrabee_(microarchitecture)](https://en.wikipedia.org/wiki/Larrabee_\(microarchitecture\))

~~~
awalton
Larrabee was an attempt to see if the x86 architecture could power a GPU. The
answer was "not very likely", but it got turned into a product of its own
anyways because it turned out to be very interesting for other compute-heavy
use-cases. Larrabee's descendants became "Knights", which became the Xeon Phi
product line.

Keep in mind Intel currently builds GPUs - just of the integrated variety.
What's new here is that Intel is deciding to build discrete (standalone, like
those you'd plug into a PCIe port) GPUs.

~~~
craftyguy
It was definitely being targeted to compete with other discrete graphics
products, and at some point in the program they figured out that they would
never meet the performance necessary to compete effectively. So, in order not
to have completely wasted several years of development, it was re-purposed as
a product targeting HPC (the first-generation Knights / Xeon Phi product).
~~~
awalton
Intel really doesn't mind "wasting" time on innovation - they make tens of
billions of dollars a year and they're on top of the market. They can afford
to go down blind avenues, especially when the research spills out so well, as
it did in this case.

It definitely wasn't a "saving throw" that Larrabee's architecture got
repurposed. There were several teams at Intel working in similar directions -
one team worked on a "cloud on a chip", one team worked on high bandwidth
chip-to-chip interconnects, one team worked on on-chip networking... they all
came together and formed the Knights Ferry research project, which then got
turned into the Xeon Phi.

The "core" of Larrabee, its quick little Pentium-derivatives, went on to be
repurposed in the Quark product line and its lineage (e.g. the Intel PCH has a
"Quark" inside). The 512-bit instruction set got parted out and became AVX512
in is various incarnations. They definitely got their money's worth out of
Larrabee.

Nobody is disagreeing with the fact that Larrabee didn't turn into a discrete
GPU despite their attempts to make it so. (It's also not surprising, seeing
the carriage turn back into a pumpkin as Cell and other many-core
architectures failed to pan out for graphics workloads.) But that's a separate
issue
from Intel building GPUs, since they have a _completely other team_ that works
on building productized and shipped GPUs.

------
microcolonel
Wow, this might save Intel. They are floundering in the server market right
now because they won't put enough PCIe lanes on their platforms, because that
means lower sales. If they can grow in the GPU market, that means they
basically win over AMD's latest maneuvers.

------
dis-sys
This again shows AMD is not ready for the battle with Intel.

Ryzen's chief architect left in 2015, and now the mastermind behind its GPUs
is leaving. You need to be really religious to believe that AMD is going to
get any better in the coming competition with NVIDIA and Intel.

------
reiichiroh
Ah, they sort of half-assed it in trying with Real3D. Was that Larrabee?

------
nilsocket
Never mind; it seems to me Raja had too-high expectations, which makes him a
bad hire. Hope AMD finds someone committed and a real enthusiast to do the
job.

------
CrunchGo
Nice! This will be a great win for Intel.

------
cpatil
Interesting timing for this announcement, given Nvidia's earnings tomorrow.
Looks like Intel is back to its underhanded shenanigans.

------
m3kw9
Trojan horse 101 lesson for AMD

------
yalogin
What is a discrete GPU?

~~~
jaas
A discrete GPU is a GPU that's not on-die with the CPU. A discrete GPU is
usually something you stick in a PCIe slot.

A GPU on the CPU die, non-discrete, is often referred to as an "integrated
GPU" or "integrated graphics." They're typically not very powerful, though
they run common non-gaming applications just fine.

------
lonk
Nvidia and ATI should develop x86 CPUs for balance's sake.

------
mariusmg
Larrabee reloaded.

------
user982
I can't wait for the twist in tomorrow's episode.

------
rurban
What? Is there no noncompete clause? Strange

------
gigatexal
what a coup this was.

Raja: "I'm...um...going on sabbatical." Lisa (CEO): "OK." Intel: "We're hiring
Raja!!..." Lisa: "WTF".

~~~
fermienrico
At the risk of turning HN into Reddit, I'd like to politely suggest to keep
jokes, puns, and other shenanigans off of Hacker News. If you have nothing
constructive to add to the discussion, please refrain from commenting. Thank
you :)

~~~
gigatexal
why do we all have to be so stoic and serious? can't we have a bit of fun?

~~~
fermienrico
Most people on HN are not looking for fun. They're here for information,
intelligent discussion, and constructive criticism.

Use Reddit for fun! There is plenty of fun on the internet. Unlike Reddit,
hackers don't want a [Serious] tag.

