
NVidia will license their GPU cores to other hardware manufacturers - bashinator
http://blogs.nvidia.com/blog/2013/06/18/visual-computings-ascent-gives-nvidia-room-to-expand-its-business-model/
======
mtgx
This is the right strategy for Nvidia, especially in the mobile space,
considering they'll also be the only ones with full OpenGL 4.3 support.
If Kepler GPUs are popular, developers will be making some pretty amazing
_mobile_ games that are only available on Kepler GPUs, because they'll be
the only ones to have OpenGL 4.3 for years to come.

I only wish this were Maxwell coming out next year, not Kepler, which was
supposed to be in Tegra 4 - but you know Nvidia and their delays. I also hope
they don't screw this up by making the GPU very inefficient. If they do that,
no one will want it, so this strategy will be irrelevant. They say it's very
efficient, but over the years I've become a lot more skeptical about stuff
Nvidia claims, at least in the mobile space.

If they can convince Samsung to use Kepler for Exynos 6, or Maxwell for Exynos
7, that would be a huge win for them. Samsung has been a little lost lately in
terms of which GPUs to use in its Exynos SoCs, so this may be Nvidia's window
of opportunity, if they can prove their GPU is the best.

~~~
makomk
Well, NVidia are currently behind the competition in terms of OpenGL ES
support, so they actually need to hope game developers don't decide to take
advantage of the newest and greatest hardware features before they're ready.
That would leave them without a horse in the race altogether - even their new
Tegra 4 chip, which hasn't launched yet, can't support OpenGL ES 3.0.

Also, whether anyone licenses their hardware in the first place depends on how
expensive it is and how much power it draws. Remember that we're talking
about a full desktop GPU architecture here; the lowest-power GPU that
currently uses it consumes more power than an entire tablet. NVidia have
screwed this up before, and given all the NRE costs of creating a new chip, I
can't see anyone going for this until they can test the actual power
consumption in actual hardware.

~~~
cdash
I thought OpenGL ES was just a restricted subset of OpenGL, and old versions
of it at that. It seems kind of weird for them to be behind if this is the
case so maybe I don't understand this correctly.

~~~
antonyme
Not exactly - it's more complicated than that
([https://en.wikipedia.org/wiki/Opengl_es](https://en.wikipedia.org/wiki/Opengl_es)).
GLES was originally a simplified spec based on the desktop API, but optimized
for mobile and embedded devices. The two have since leapfrogged each other,
removing (e.g. the fixed-function pipeline) and adding (e.g. shaders) various
features at various times.

The latest versions of ES and GL are largely, but not completely, compatible
AFAIK, with the desktop API having a superset of features.

~~~
wolfgke
According to
[https://en.wikipedia.org/wiki/OpenGL#OpenGL_4.3](https://en.wikipedia.org/wiki/OpenGL#OpenGL_4.3)
OpenGL ES 3.0 is fully forward compatible with OpenGL 4.3.

------
sho_hn
I wonder what this will mean for free drivers.

While nVidia hasn't been entirely uncooperative (they do participate in X.org
upstream development to the degree that it affects their products), the only
free driver they've written for the GPU line that the to-be-licensed Kepler
core belongs to is an obfuscated, 2D-only token effort, and they have stayed
away from participating in the Nouveau project's work on a fully-featured free
driver. Contrast that with Intel and AMD, where the free driver is the primary
effort or one supported with documentation, code and manpower, respectively.

On the other hand, they've contracted Avionic Design to write a free
3D-enabled driver for the GPUs in their Tegra 3 SoCs. If future Tegra is based
on a Kepler derivative (as indicated by the blog), and this prior commitment
forces them to release a free driver for it as well, I wonder if one day this
hypothetical free driver might have a shot at replacing their closed source
codebase. Or they could join forces with Nouveau and finally put some of their
resources behind it, similar to what AMD has done.

~~~
vacri
Contrasting with AMD, the AMD drivers are a complete mess. I have a card that
was released about a year ago (HD7800) and I can't dual-screen on it with the
AMD drivers (the maximum total resolution is 1920x _1920_ - yes, that's not a
typo). Regardless of effort, the graphics drivers AMD produce are terrible,
and the free driver doesn't fill the gap well for my Pitcairn chipset. I would
not be pointing at AMD as a role model in this regard.

~~~
sho_hn
I wasn't. I agree that the closed-source drivers AMD also still produce are
friggin' terrible, and far worse than nVidia's closed-source drivers. AMD has
been much better about supporting efforts to write free drivers than nVidia
has, however. In terms of serving as a role model, I'd say Intel takes the
lead by actually developing their sole drivers out in the open _and_ releasing
hardware documentation so others outside the company can help out. Even Intel
has had their missteps, however (e.g. the infamous Poulsbo driver mess).

~~~
vacri
For sure, Intel is the gold standard. It's just that AMD's drivers suck -
proprietary or free - and nvidia's don't - proprietary or free. I've got a
Pitcairn-chipset GPU (AMD, released about a year ago) and I cannot get dual-
screen working with hardware acceleration. The free drivers can't enable it,
and the proprietary drivers are limited to the very bizarre 1920x1920 maximum
joined resolution. I've noticed plenty of others struggling with AMD at the
moment, and these days it's "just choose nvidia". A complete switcheroo from
several years ago, when it was "just choose ATI" (well, if you wanted
something more than intel, that is).

~~~
lgeek
Offtopic, but:

> the proprietary drivers are limited to the very bizarre 1920x1920 maximum
> joined resolution

That's not true. I'm currently using two 1080p screens on a HD7870. You just
have to set it up initially using amdcccle or maybe aticonfig, or edit
xorg.conf manually, after which you can use RandR-based tools as normal. I
dislike that the AMD drivers lag behind in software compatibility and that
they do things in non-standard ways, but they provide all the functionality
I'd expect.

Getting back to your problem: in amdcccle set up Display Manager > Multi-
Display, or manually in xorg.conf, Section "Screen", SubSection "Display"
needs to have an entry called 'Virtual', which looks something like this:
Virtual 3840 1920.
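
Concretely, the relevant xorg.conf fragment looks something like this (the
identifier and exact sizes here are just placeholders - the Virtual line has
to be at least as big as your combined desktop):

```
Section "Screen"
    Identifier "Default Screen"
    SubSection "Display"
        # Big enough for two 1920x1080 displays side by side
        Virtual 3840 1920
    EndSubSection
EndSection
```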

~~~
vacri
Well, it is true in a sense because that's the limitation that they give you.
I figured you could manually fight with the xorg.conf (which I did last time I
had a non-intel card years ago), but by that stage, I'd already spent heaps of
time fighting with the card, and it was a work machine and I wanted to get
work done. I just wanted to dual-screen Cinnamon, and the free driver did that
with software rendering... with my quad-core pegged to 70%. But I live in the
console, so I don't notice it except when I open htop - it was deemed 'good
enough'.

In any case, the AMD drivers, at the current time, require stupid shenanigans
to go beyond the basics and the nvidia ones don't. The domain knowledge you
need is quite high, and the target shifts so rapidly that the search engines
are filled with out-of-date cruft.

Thanks for the info, by the way - I do appreciate it, but there is already a
replacement card coming...

------
geuis
Can someone explain the inner workings of this part of the industry? I know
enough about graphics cards to make smart buying choices, but I don't know
anything about the business. The biggest question is: how is this different
from GPUs now? I can get the same model of card from EVGA, Nvidia, BFG, etc.
I go with the one with the best reviews and lowest price.

~~~
leoedin
This is about the silicon rather than the unit. Currently, third-party
graphics cards are built around GPU chips manufactured by Nvidia. Nvidia
sells these chips, and manufacturers use its reference design, perhaps with
modifications, to produce a functional card.

What is being discussed in the link is licensing the GPU design itself, which
would allow other companies to fabricate their own chips containing Nvidia
GPUs. This allows third parties to produce system-on-a-chip (SoC) designs
which incorporate Nvidia GPUs connected to other stuff, perhaps a few ARM
cores or some custom DSPs or similar.

This is the approach that ARM currently take with their designs. They don't
operate any fabs, but simply sell what's known as an "IP core" to third
parties (Qualcomm, Broadcom, Texas Instruments, etc.).

~~~
geuis
Thanks, that makes a lot of sense.

------
antonyme
This is hugely significant. The No.1 GPU vendor in the PC market has seen the
writing on the wall, and is not standing still.

They clearly see the PC market in decline, and mobile/embedded is the way of
the future, with Android as the dominant platform. Compare this to only a few
years ago, when the Wintel duopoly reigned supreme.

For years, people have been talking about Linux taking on Windows on the
desktop, but it hasn't really materialised. However, Linux has done something
far more significant - leapfrogged the humble PC and become the platform upon
which both so many Internet services and mobile devices are built. Exciting
times...

~~~
cookiecaper
I wouldn't take it to mean that PCs are in decline. I would just take it to
mean that nVidia recognizes mobile isn't going away, and that there is a lot
of money to be made there. That doesn't necessarily mandate the death of the
PC. I don't think that anyone expects smartphones to disappear, and I'm still
puzzled as to why so many are so anxious to see their counterpart machines
that can facilitate serious work (or play, for that matter) go away.

~~~
bashinator
I think the idea is that there are orders of magnitude more content consumers,
than content creators. To a greater and greater extent, mobile devices are
fully capable of displaying all the content you might want (tellingly, the
exception being high-end 3D games). Once traditional PCs are _only_ useful for
content creators, the economy of scale for manufacturing those systems may no
longer apply.

------
abuzzooz
The way I see it, this will directly cannibalize Tegra sales. Up to and
including Tegra 4, NVIDIA's mobile GPUs weren't based on their desktop GPU
technology. Starting with the next-gen Tegras, they will have a Kepler-based
(and later a Maxwell-based) GPU in their mobile SoCs, architected from scratch
to be power-efficient. That will be a big deal. Imagine running CUDA apps in
the palm of your hand.

But with this step, it seems to me that Tegra will not have any differentiator
any more (unless NVIDIA keeps some features to itself). Could NVIDIA be
adopting ARM's strategy?

------
stefanve
I think this could be big, especially in the mobile space. I can imagine that
smaller ARM manufacturers/designers will use this over PowerVR or their own
graphics implementation. In the desktop space I don't see anyone willing to do
that - maybe IBM for a supercomputer. But while this is exciting, I think it
also shows that they don't see a very bright future for themselves. I guess
with their ARM processors not exactly selling hard and the increasingly better
graphics chips from Intel, the need for Nvidia as a hardware manufacturer is
declining fast.

------
pjmlp
Maybe Intel could license them and finally provide proper GPUs.

~~~
slacka
While they're at it open-source the nVidia Linux drivers. And end war and
poverty.

~~~
Narishma
Nvidia's Linux driver is more or less the same as their Windows (and MacOS,
etc...) driver. They can't just open source it. Intel has an open source Linux
driver and a closed source Windows driver.

------
trippy_biscuits
So now, projects that need parallel crypto-cracking can use GPGPUs instead of
FPGAs. This works with any other strategic GPGPU application as well. The
military no longer needs to buy a bunch of Sony gear just for the CPU. Defense
contractors or grant-funded university research departments can design and use
a GPGPU solution and have more control and customization than using off the
shelf parts.

------
ldng
Feels like the next logical step. They're already customizing chips for PC
manufacturers (I remember not being able to install Linux on an HP laptop
because the customized chip wasn't compatible with the drivers). So the next
step is just delegating the chip manufacturing.

~~~
Aissen
It's not customizing, it's just configuration. Have a look at
[http://forums.laptopvideo2go.com/topic/9243-forceware-
update...](http://forums.laptopvideo2go.com/topic/9243-forceware-updaters-
quickstart-guide/)

------
orik
This is a pretty big deal. Excited to see what sorta devices come of it.

------
shmerl
Is Nvidia going to release EGL drivers for the desktop to enable Wayland? They
wrote a lot about Android there, but didn't say anything about this.

~~~
wmf
Magic 8 Ball says "try running SurfaceFlinger or Mir on all ARM desktop
machines so you can reuse Android drivers".

~~~
shmerl
I'm not really interested in either Mir or SurfaceFlinger. Wayland on native
glibc EGL drivers - that would already be something. Besides, Mir relies on
the same drivers that Wayland does, so they face the same issue.

------
subb
So, what does this mean exactly? They will license the design of the GPU cores
so others will be able to build the chip themselves?

~~~
antonyme
I expect they will do very much what Imagination Tech does now with their
PowerVR graphics cores: license them to SoC designers who integrate them with
a CPU (invariably ARM) and other peripheral cores (e.g. DSP, Ethernet, I/O,
etc.) and then get someone like TSMC or Samsung to fab the silicon (since most
chip designers are fabless).

~~~
momerath
>(invariably ARM)

Aside: Imagination Tech recently acquired MIPS.

~~~
tawm
There's also a PowerVR core in some of the Intel Atom chips.

------
venomsnake
How power efficient are they? With mobile devices moving almost as many pixels
as desktop PCs nowadays, an inefficient GPU will be a power drain.
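
To put rough numbers on "almost as many pixels" (the resolutions below are
just illustrative picks for a common desktop monitor and a current
high-density tablet panel):

```python
# Back-of-the-envelope pixel counts: desktop monitor vs. tablet panel.
desktop_pixels = 1920 * 1080   # typical 1080p desktop display
tablet_pixels = 2048 * 1536    # a retina-class 9.7" tablet panel

print(desktop_pixels)                  # 2073600
print(tablet_pixels)                   # 3145728
print(tablet_pixels / desktop_pixels)  # ~1.52: the tablet pushes ~50% MORE pixels
```

So a high-density tablet can actually move more pixels than a desktop screen,
on a fraction of the power budget.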

~~~
antonyme
Not sure, but the latest Tegra offering has some surprisingly impressive
performance/watt figures. (I attended an NVidia roadshow a while back, but
don't have the brochures here.)

------
_pmf_
This could increase the competition in the right way, i.e. without further
fragmenting APIs / ABIs.

------
aspensmonster
So... They want to get out of the business of actually making things and just
sell IP cores now?

~~~
sho_hn
They've never really been in the business of "making things" as end-users
would see it. They've designed chips and reference PCBs, but they contract
outside foundries for manufacturing (unlike, say, Intel, who have their own
fabs) and the silicon goes to retail partners like ASUS and Gainward and many
others to actually put into the channel (with minor downstream customization
like custom heatsink designs). They've only recently actually sold some things
directly to customers, like the SHIELD gaming handheld.

~~~
Eyght
It will be interesting to see if anyone will use their GPUs for a product
that directly competes with the SHIELD.

~~~
sho_hn
I'd say SHIELD is pretty much a PR effort to incite others to do exactly that
- it creates buzz around the virtues of nVidia's mobile tech for gaming, and
so could make licensing nVidia GPU IP an attractive selling point. I don't
think nVidia expects SHIELD to be a high-volume seller on its own, it's pretty
niche form factor-wise.

This is also why you have a bunch of graphically intense games on the Play
store which are Tegra-exclusives, from developers who cut a deal with nVidia.
For example Arma Tactics.

------
willvarfar
Does this mean that NVidia is moving towards being an ARM instead of an Apple?

~~~
acchow
Do you think ZF is like BMW?

~~~
reeses
Is Rolls-Royce like Boeing?

This is a fun game.

