
Nvidia announces Project Denver ARM CPU for the desktop - zhyder
http://www.engadget.com/2011/01/05/nvidia-announces-project-denver-arm-cpu-for-the-desktop/
======
matthew-wegner
And Microsoft just announced ARM support in Windows 8...

~~~
commandar
I think it's also interesting when taken in the context that a lot of the
Nvidia-Intel feud stems from Nvidia's development of an x86 CPU that had to
be shelved due to Intel patents. The chipset licensing disputes, etc., all
seem to stem from Intel being upset about Nvidia attempting to encroach on
their turf.

It makes me wonder if Nvidia was lobbying Microsoft to expand Windows support
beyond x86 as a direct result of all this.

Either way, Nvidia's pretty clearly wanted to expand into the desktop CPU
market for a while now, and with Intel blocking their entry into the x86
market, it only makes sense that they'd start examining other options. ARM
seems like a pretty logical place for them to end up.

~~~
leoc
> Nvidia's pretty clearly wanted to expand into the desktop CPU market for a
> while now

Meanwhile, just trying to hold position as a supplier of GPUs for x86 PCs
would be very difficult for nVidia now that both of the big x86 CPU
manufacturers are pushing their own GPU systems hard (and, increasingly,
integrating them into the CPU). This is probably quite an up-or-out situation
for them.

~~~
sliverstorm
Or, what if they just need to expand to be able to afford continuing the
development of cutting-edge graphics? The profit margins must be getting
thinner.

------
jcl
Kind of amusing, considering that the ARM architecture was originally created
for desktop computers.

<http://en.wikipedia.org/wiki/ARM_architecture#History>

~~~
rythie
I remember we had these in Acorn machines when I was at school in the 90s (in
the U.K.). They were always very fast. They were replaced with PCs later on,
but given the budget the school was on, those ended up being much slower at
the time - most likely because Windows needed a lot more memory than RISC OS
did.

~~~
megablast
They were fast, but they were not necessarily faster than x86 machines. They
were RISC compared to CISC (the old x86), so it's hard to compare them
directly.

~~~
rythie
I'm pretty sure it was more to do with RISC OS being lightweight memory-wise
than anything else. I had some well-specced PCs at that time, but those
relatively cheap Acorn machines were always faster at basic OS tasks.

~~~
regularfry
As far as I recall, at least part of that was down to hardware acceleration.

They were still awesome machines, though. Fantastic for learning assembler on.

------
zhyder
Blog post from Nvidia: [http://blogs.nvidia.com/2011/01/project-denver-
processor-to-...](http://blogs.nvidia.com/2011/01/project-denver-processor-to-
usher-in-new-era-of-computing/)

~~~
wmf
Wow, Bill Dally is still having the RISC vs. CISC discussion. That's daring;
most computer architects consider it a settled issue.

~~~
wtallis
Which way do you think the issue was settled? Last time I read much about the
issue, there seemed to be decent arguments both ways: CISC allows denser
packing of instructions, but all CPUs are RISC internally, but the CISC
decoders are negligible overhead, but good compilers are easier to write for
RISC, ...

How valid are the various arguments these days? The compiler complexity
argument seems to work well against the Itanium, but I'm not sure that it
makes much difference between ARM and x86 or x86_64. The instruction decoders
on an x86 processor really do look minor on paper, but why hasn't Intel been
able to produce a chip with competitive performance per watt, especially given
their fab advantage? It seemed like they were really trying with the Atom, but
it still hasn't gotten down to the power levels where ARM really shines, even
though the high-end ARM cores are now out-of-order superscalar designs.

~~~
wmf
The consensus is that RISC vs. CISC doesn't matter because microarchitecture
trumps ISA: the cost of instruction decoding is now small, I-caches negate any
difference in code density, compilers can produce efficient code for any
reasonable (i.e. non-Itanium) architecture, etc. I am honestly surprised to
see Dally's complaints.

~~~
anonymous246
Do you have any insight into why, then, x86 chips consume so much more power
than their ARM equivalents?

Btw, I tend to agree with you, and think Intel's engineers are just being
lazy wrt power. I expect this Nvidia chip to get crushed technically when
Intel's engineers gear up and really work on power (like they did when
Transmeta's Crusoe came out).

~~~
effn
Many ARM chips support both a RISC ISA (the original ARM) and a more CISC-y
ISA (Thumb2). They consume less power and perform better with the latter - so
much so, in fact, that some chips don't even bother with the legacy RISC
mode.

~~~
Aegean
Legacy RISC mode? Last time I checked it was called ARM mode, and it's the
absolute standard.

Thumb has benefits, but it also has limitations, which is why I wouldn't call
it a replacement.

~~~
crnflke
Not anymore - Thumb2 is the new standard and supports everything that the ARM
ISA does (at least it did, last I read).

One of the big differences is that you now need a marker instruction for
predicated sequences, and obviously the encoding is quite different.
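The "marker instruction" difference can be illustrated at the C level. A
minimal sketch (the function below is my own example, not from the thread):
classic ARM can attach a condition code to almost every instruction, while
Thumb2 instead prefixes a short predicated sequence with a single IT
("if-then") marker instruction.

```c
/* A small conditional that compilers like to predicate rather than
   branch. For code like this, classic ARM can predicate each
   instruction directly (roughly: CMP; SUBGE r0,r0,r1; SUBLT r0,r1,r0),
   while Thumb2 emits one IT marker first (CMP; ITE GE; SUB; SUB). */
int abs_diff(int a, int b) {
    return (a >= b) ? a - b : b - a;
}
```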

------
iwwr
Are there particular advantages to an ARM CPU on a desktop machine? Assuming
software compatibility is not an issue.

~~~
blinkingled
It's very hard to see an ARM advantage on mid/high-end desktops. Given where
Intel is with Sandy Bridge, it will be a long time before ARM reaches that
performance.

And as for low-cost, lower-performance desktops - why would anyone make one
when laptops are more convenient?

I fail to see the point of ARM on the "desktop" - servers yes (power
consumption), laptops yes (battery life, form factor) - but desktops?

~~~
leoc
But surely Intel's performance lead is basically a matter of process, not
architecture. Presumably nVidia is going to strive very hard to narrow the
process gap - and if it doesn't succeed at that, it's not as if an x86 arch
would have saved it.

~~~
blinkingled
Nvidia may well narrow the gap between low-end x86 and high-end ARM
performance, but at that point, for a _desktop_, they would have the same or
similar performance as x86, maybe at a lower cost, but with the huge
disadvantage of incompatibility - with apps and peripherals alike.

EDIT: [http://arstechnica.com/gadgets/news/2011/01/nvidias-
project-...](http://arstechnica.com/gadgets/news/2011/01/nvidias-project-
denver-cpu-puts-the-nail-in-wintels-coffin.ars) says this isn't about the
desktop as much as it is about servers and workstations. Makes much more
sense. John Stokes rightly points out: "this is a very tall order, and a lot
of things could go wrong here. Right now, the GPU execution part is the only
one where confidence is warranted based on a track record. With the system
integration stuff and CPU part, NVIDIA is in uncharted territory."

~~~
leoc
Sure. On the one hand the mid-to-high end desktop isn't the be-all and end-all
of high-end chipmaker revenues anymore. And on the other hand a mid-to-high
end Windows ARM desktop might be usable with some combination of native
Windows and Office, an increasing supply of new ARM-native Windows binaries
from ISVs (the Internet should help by making distribution much easier), Web
apps running in ARM-native Web browsers, and butt-slow emulation for old but
indispensable x86 binaries. It seems to me that the biggest issue could turn
out to be PC games. Presumably nVidia either has to get the big games studios
to issue ARM ports of their next and recent games, or it has to go on being
successful at selling standalone GPUs for gamers' x86 PCs, or it has to give
up on its gaming constituency for a time at least. I presume that since most
PC games are now written to port to PowerPC and/or Cell, doing an ARM port
isn't the adventure it might once have been?

~~~
blinkingled
Many games already run on iOS and Android, which are ARM-based, so at least
those games could be ported.

Why Nvidia would want to compete with any of the PC, the portable gaming
systems (PSP, iTouch, PSPhone), the Xbox 360, and the PS3 without either a
solid advantage or agreements with big game studios is beyond me.

~~~
leoc
nVidia may not have much choice but to try, if it's being locked out of the
x86 CPU market at the same time that the market for third-party discrete GPUs
on x86 PCs is being squeezed hard. And supporting ARM Windows may not be that
big a burden for PC game publishers _if_ Windows' ARM support is first-class
and producing an ARM build is largely just a recompile for the studios.

------
cyrus_
This is a big win for nVidia on the supercomputer side of things. They will
soon have to face integrated CPU-GPU solutions from Intel and AMD which
greatly simplify the process of building and programming a supercomputer.
They've just one-upped them both by creating a similar offering with better
performance on the GPU side (where the FLOPS are) and better power efficiency
on the CPU side. In the race to the exaflop, nVidia just changed the odds
dramatically.

------
sliverstorm
Man, I wish I was working for them, but I haven't even graduated yet. It'd be
really cool to be involved with this stuff.

~~~
potatolicious
Keep your grades up. When I was in college, NVidia had emblazoned in bold
text across their intern ads that a minimum of 80%+ was _required_ for
consideration.

That's part of the reason I never bothered with them - IMHO, a company that
heavily bases its hiring choices on school grades is not one I want to work
for. It betrays a belief in bad/unreliable indicators and metrics.

~~~
endtime
A friend of a friend was asked to implement sqrt in hardware during a phone
interview with nVidia. If you're smart enough to do that and you can't
maintain the equivalent of a 2.7 GPA then that betrays a poor work ethic.
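For the curious, one classic hardware-friendly answer to that interview
question is the digit-by-digit (shift-and-subtract) method: one
compare/subtract per result bit and no multiplier, so it maps directly onto a
small circuit. A C sketch of the algorithm (the function name is my own):

```c
#include <stdint.h>

/* Digit-by-digit integer square root of a 32-bit value.
   Each iteration shifts in the next two bits of the radicand and
   does one trial subtract - exactly the per-bit structure a
   hardware implementation would pipeline or iterate. */
uint16_t isqrt32(uint32_t x) {
    uint32_t rem = 0, root = 0;
    for (int i = 0; i < 16; i++) {
        root <<= 1;
        rem = (rem << 2) | (x >> 30);  /* bring in next two bits */
        x <<= 2;
        if (root + 1 <= rem) {         /* trial subtract succeeds */
            rem -= root + 1;
            root += 2;
        }
    }
    return (uint16_t)(root >> 1);      /* floor(sqrt(x)) */
}
```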

~~~
boredguy8
Or far more interesting questions abound than the ones in your "Communications
101" class. (Also, top 80%+ isn't the same thing as a 2.7 GPA - completely
different systems.)

~~~
krschultz
Do you mean top 20%? Because the top 80% is, well, almost everyone. The 80th
percentile, on the other hand, would be the top 20%.

I'm not nitpicking; I just took the 80% to mean a 2.7 as well.

------
coryrc
I used to think an alternative CPU taking over the desktop would be cool. In
the P4 days of 120W+ processors, ARM looked like a savior.

But now, the only difference would be who is pocketing the $50-100 I'd spend
on a new processor.

~~~
alimoeeny
Strong competition is (almost) always good, at least for the consumers.

~~~
rbanffy
That and the x86 ISA is utterly disgusting.

~~~
coryrc
In what way does that matter? Who writes x86 assembly for these chips?

Compilers produce denser code for CISC (x86) than for RISC (ARM), so x86 has
an advantage over ARM.

[http://www.csl.cornell.edu/~vince/papers/iccd09/iccd09_densi...](http://www.csl.cornell.edu/~vince/papers/iccd09/iccd09_density.pdf)

~~~
effn
ARM recommends using Thumb2 for non-legacy software. Thumb2 is denser than
x86, so actually ARM is the one with the advantage here (unless you have an
existing ARM codebase).

~~~
rbanffy
Isn't Thumb targeted at memory-constrained systems?

~~~
Symmetry
Thumb1 sort of sucked at performance so the only people who used it were the
ones who were very memory-constrained. Thumb2 is much better.

------
jdavid
RISC is good. ;-) The power of that P6 chip is too much for you.

Congrats, Nvidia - this has been a long time coming. I feel like Nvidia is
Tron in this case and Intel is the MCP: fight for the user.

It's time to embrace massive parallelism. This will change everything.

I am really excited and happy to be a shareholder of Nvidia now.

------
Charuru
This makes sense in the era of web apps, Python, and Java. If people were
still reliant on programs explicitly written for x86, this would never go
anywhere.

