

Nvidia Brings Kepler to Mobile Devices - hartcw
http://blogs.nvidia.com/blog/2013/07/24/kepler-to-mobile

======
reaperhulk
Nvidia has a long history in mobile of over-promising on performance and power
consumption. Given the expected ship date of this product it would not be
surprising for it to be only slightly faster (or quite possibly a bit slower)
than products that will ship in the interim. See: Every single Tegra.

~~~
mtgx
This time they're completely changing the architecture, but Nvidia is also
known for huge delays, so I'm not sure how that will work out for them. I
think Kepler was already supposed to be in Tegra 4, and now they get only one
extra year to port Maxwell to Tegra 6 after putting Kepler in Tegra 5.

So I'm a little skeptical, too - but maybe not so much about performance (as
they say, it's hugely scalable, and has OpenGL API support that the
competition simply can't match over the next few years) as about power
consumption.

They say it's very efficient and uses very little power, but lately Nvidia's
focus has been too much on performance, and too little on power consumption.
Here's a tip for you Nvidia: _If your chip can't be put in a phone, don't
call it a "mobile" chip_.

I think that's how all mobile chip makers should think. Because at least then,
they can make a very efficient chip that helps a phone last for 2 days, and
for tablets they just scale up from that and overclock it or add more cores.
But if they do it the other way around, they'll get greedy and want to put too
much performance in it, forcing OEMs to use bigger and bigger batteries
(which adds to the cost, and so on). Or it forces the chip makers to _fork_
the chip line into a lower-end, more efficient one and a higher-end, more
performance-oriented one, as happened to both Nvidia and ARM.

Another tip for Nvidia: _When in doubt, use the smaller process node_. They
failed to do that with Tegra 3, when everyone else moved early to 28/32nm, and
they paid a huge price for it in the market, in both lost branding and lost
customers. So next time, choose the smaller process node, even if it's more
expensive. It will be worth it, because both OEMs and customers will be
asking for it. So what I'm saying is Tegra 5 needs to be 20nm next year, not
28nm.

~~~
benjamincburns
> If your chip can't be put in a phone, don't call it a "mobile" chip.

I think this is a case of semantic collision. For quite a long time "mobile
graphics" meant "laptop graphics."

~~~
wmf
I suspect mtgx was talking about Tegras that were marketed for phones but have
been mostly used in tablets.

------
bigdubs
I am a total novice when it comes to chip design, but aren't there obvious
tradeoffs with any architecture? Isn't it really hard to find a silver bullet
that is both super powerful / wide in its processing path and power-efficient?

I would imagine that for mobile, the "insert favorite gpu metric here" per
watt would be more important than the aggregate metric.

~~~
benjamincburns
Absolutely. I think a lot of what we're seeing here is largely the result of
process reduction. That's not to say that improved design isn't at play as
well, but the sub-30nm processes seem to have been a threshold for "desktop-
performance" processing at size and power levels which can be truly mobile.
Further, graphics processors _always_ benefit tremendously from process
reduction because they're essentially just very large arrays of very simple
processors.

On the power front, smaller process means less energy needed to switch
transistors. To illustrate the point, I was working with some 22nm NAND flash
at my last job. In order to better understand the characteristics of the
chip's raw bit errors (errors before error correction is applied) we had a
lengthy conversation with one of Micron's engineers. It turns out that in a
22nm NAND chip, the difference between a set and an unset bit is approximately
the charge equivalent of 8 electrons.
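The power argument can be made concrete with the standard dynamic-power
relation for CMOS logic, P = a·C·V²·f. A quick sketch (the node numbers and
scaling factors below are purely illustrative, not measured values for any
real process):

```python
# Dynamic switching power of CMOS logic: P = a * C * V^2 * f
# where a = activity factor, C = switched capacitance,
# V = supply voltage, f = clock frequency.
def dynamic_power(activity, capacitance_f, voltage_v, freq_hz):
    return activity * capacitance_f * voltage_v ** 2 * freq_hz

# Illustrative numbers only: a process shrink reduces both the switched
# capacitance and the supply voltage, and the V^2 term means even a small
# voltage drop pays off quadratically.
p_old = dynamic_power(0.1, 1.0e-9, 1.0, 1e9)  # hypothetical older node
p_new = dynamic_power(0.1, 0.7e-9, 0.9, 1e9)  # ~30% less C, 10% less V
print(p_new / p_old)  # roughly 0.57 -> ~43% less switching power
```

The point is just that a shrink attacks power on two fronts at once, which is
why GPUs (huge arrays of identical simple cores) gain so much from each node.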

------
breckinloggins
It would be fantastic if convergence devices like Ubuntu Edge could use this
in both a mobile as well as a "docked" power state.

The way I'm envisioning it, the processor would detect when the device is
docked in a full power desktop or notebook configuration. At that point it
would switch to a higher power profile for PC gaming and workstation-class
computing tasks.
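In software, that switch could be as simple as mapping dock state to clock and
thermal limits. A minimal sketch (the profile names and numbers here are
hypothetical; real dock detection would come from the kernel or a platform
driver):

```python
# Hypothetical power profiles: (max GPU clock in MHz, TDP budget in watts).
PROFILES = {
    "handheld": (400, 2),   # battery-friendly mobile state
    "docked":   (900, 15),  # full desktop/notebook power state
}

def select_profile(is_docked: bool) -> tuple:
    """Pick the (gpu_clock_mhz, tdp_watts) limits for the current state."""
    return PROFILES["docked" if is_docked else "handheld"]

print(select_profile(True))   # -> (900, 15)
print(select_profile(False))  # -> (400, 2)
```

The interesting part is the thermal side: the chip would have to ramp up only
when the dock's cooling can actually sustain the higher TDP, which is the
issue rikf raises below.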

~~~
rikf
I wonder how you would deal with the extra heat output in a desktop state.

~~~
nobodysfool
I was thinking about this quite a bit, and I imagine you'd have a docking
station of sorts that has its own video card in it, with better cooling.
Otherwise, I suppose you could have a docking station with active cooling.

------
rossjudson
I would really like to be excited by this, but since the stretch goal for the
mobile industry is building the next Plants vs. Zombies, my expectations are
minimal.

------
6ren
It's telling how they mention incredible performance/power ratio only in
passing, but focus their pitch on the unified architecture across different
classes of machine (mobile to supercomputer), rather like Fred Brooks' 360
project at IBM.

------
mtgx
This needs to be in the next OUYA or version 2.0 of the rumored Google
console. I'd pay $200 for such a console with a great controller. Not sure
about SHIELD. Portable consoles are not really my thing, and that extra screen
and battery adds a lot to the cost, but I imagine it would be a lot more
compelling with it, too.

~~~
xradionut
I'd rather have an updated version of the MintBox. Imagine having a decent,
portable, powerful-enough Linux workstation/router that can be deployed
anywhere: home, office, or in the field.

------
moomin
I'd say this was one of those cases where changing the original headline was
perfectly acceptable.

