
Nvidia Tegra X1 Preview and Architecture Analysis - ismavis
http://www.anandtech.com/show/8811/nvidia-tegra-x1-preview
======
annnnd
How sad to see all this work go into performance while it is impossible to get
a graphics card that has open source Linux drivers (at least ones that work
100% of the time). And I'm talking about the suspend function, not about some
advanced games. I mean, how difficult can it be?

~~~
valarauca1
Actually very.

Let's go back in time to the early-to-mid '90s. GPUs were very new. Back in
that day, if you wanted to interface with a GPU, you had to know what language
it spoke.

Every company had its own language, architecture, etc. So if you made
software, you had to make a different version for each hardware GPU. Case in
point: five versions of MechWarrior, one version for each GPU company on the
market.

:.:.:

Eventually we got Glide, Mantle, DirectX, and OpenGL. These aimed to
standardize our interface to the GPU so that the GPU hardware guys could focus
on building blindingly fast vector processors with no regard for architectural
backwards compatibility, etc.

Want to completely change register layout every 2 years? No problem. Want to
redo caching every 18 months? No problem.

This has led to blindingly fast GPU scaling, and to the market exploding as
GPUs currently lead in GFLOPS/watt. It also gives average customers access to
TFLOPS-level computing, which in turn gave us the currently growing open
science and data initiatives.

:.:.:

Pandora's box has been opened, and it's going to be very hard to close it
again. DirectX/OpenGL is more or less a compiler: shader programs are compiled
to an intermediate byte code (much as LLVM/GCC emit an intermediate
representation), and that byte code is then compiled into native code for the
GPU by the DirectX/OpenGL driver.

This is the problem. Your GPU isn't so much a device as a fully
feature-complete computer that is programmed on the fly by your CPU.
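That two-stage model (portable byte code first, native code only at the last
moment) is loosely analogous to how dynamic languages handle code at runtime.
A rough Python sketch of the idea, purely as an analogy and not actual GPU
code:

```python
# Stage 1: "shader source" is compiled to portable byte code, roughly
# analogous to GLSL/HLSL being compiled to an intermediate representation.
source = "result = 2 * x + 1"
code_obj = compile(source, "<shader>", "exec")  # byte code, not native code

# Stage 2: the byte code is lowered/executed for the actual machine.
# Here the CPython VM plays the role of the vendor's driver back end,
# which is free to change how it executes the byte code without breaking us.
namespace = {"x": 20}
exec(code_obj, namespace)
print(namespace["result"])  # 41
```

The point of the analogy: because software only ever targets the intermediate
form, the back end underneath can change register layouts or caching schemes
at will.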

~~~
singlow
But what does that have to do with making the suspend features work? I get
that keeping the 3D performance up with open source drivers is hard. But why
can't Nvidia help out the open source drivers with the basics like
sleep/suspend/refresh/resolution, etc.?

~~~
MrDom
Money. They release proprietary drivers, but Linux isn't a big enough segment
of their client base for them to spend a ton of time and money getting the
drivers right. If there were a Linux-based distro able to compete with Apple
and Microsoft, then things might be different.

------
zdw
The really freaking cool parts of this start on page 4:

[http://www.anandtech.com/show/8811/nvidia-tegra-x1-preview/4](http://www.anandtech.com/show/8811/nvidia-tegra-x1-preview/4)

A 12-input camera board with 2 GPUs as a platform for self-driving cars?
Sounds awesome, especially compared to the crazy full-tower, multi-GPU rigs
I've seen in the trunks of cars for this sort of thing.

------
higherpurpose
> X1’s GPU is looking very good out of the gate – at least when tuned for
> power over performance.

Isn't it such a shame Nvidia never seems to do that in mobile, then? Tune it
for power, that is. Yes, the GPU and the chip overall are "impressive", but so
have all of Nvidia's mobile chips been at the time of launch (perhaps with the
exception of Tegra 4).

I really hate to say this, because I want Nvidia to succeed, and it really
bothers me that after so many Tegra generations they still don't get it. The
problem is Nvidia always _over_ -optimizes for performance, and then the whole
chip turns into a dud that nobody wants, whether OEMs or consumers.

Instead of doing "2x" performance increase, I would've been happier with a
1.5x performance increase, at a much more manageable and _sustainable_ power
envelope for a mobile device. I'm not sure why they keep doing this despite
this strategy not working. Doing the same thing over and over again and seeing
no results is the definition of insanity after all. Perhaps it's the whole
"PC-centric" DNA they have, which makes them think in terms of PCs that have
plenty of ways to cool down.

I guess this is why "incumbents" keep failing when they get disrupted, even if
they adopt the disruption technology relatively early. The old ways of doing
things are still ingrained in every decision they make.

And what happened to that Icera modem technology they bought? It's been years
now, but we're still not seeing it integrated into their chips. Have they
abandoned all hope of entering the smartphone market already? They seem to be
refocusing quite aggressively on the automotive market.

~~~
modeless
They need 2x performance increases every year to keep up with Apple/Imgtec and
Qualcomm. The only reason their performance looks impressive is they announce
long before shipping and compare their unobtanium to everyone else's shipping
devices. If you compare shipping devices to shipping devices (e.g. the TK1 vs.
A8X benchmarks they have in the article) they're not ahead on performance.

~~~
listic
As far as I can see, current-gen Nvidia tablet chips (the K1) are still on
top, rivaling even some previous-gen (1.5-year-old) x86s in graphics
performance.
[http://www.futuremark.com/hardware/mobile](http://www.futuremark.com/hardware/mobile)

Aren't they?

~~~
Grazester
The K1, I think, was the first chip Nvidia really impressed me with.

