
AMD to Nvidia: prove it, don’t just say it - Garbage
http://blogs.amd.com/play/2011/03/25/2056/
======
pmjordan
There's a review of both cards in the latest c't Magazine [1] (German) - the
benchmarks suggest that the AMD card wins in most disciplines, except for
tessellation benchmarks and a couple of games at lower resolutions. AMD's
OpenGL drivers are also (and have always been) much worse than nvidia's in
various respects, so the nvidia card wins on more OpenGL-based benchmarks.

Both cards seem completely removed from reality though, as they apparently
sound like jet engines under load, consume almost 400 Watts of power, and the
590GTX actually throttles itself if you run it under certain loads (such as
Furmark) to avoid overheating/drawing too much current. Oh, and they cost over
€600. I realise "maker of the fastest graphics card" is a marketing thing, but
it's pretty meaningless in practice.

FWIW, if you want to spend a lot of money on graphics cards, you're probably
better off buying a fairly high-end model (but not top-of-the-range) every
hardware iteration, usually 6-12 months, and selling the previous one instead
of dropping a crapload of money in one go.

[1] 590GTX review: <http://www.heise.de/ct/artikel/Gegenangriff-1213480.html>

6990HD review: <http://www.heise.de/ct/inhalt/2011/08/74/>

~~~
mike_esspe
BTW, you can recover the cost of an AMD video card by mining bitcoins :)

If mining in a pool, a 5970 yields around 7 BTC/day at the current
difficulty. (Unfortunately, Nvidia is much slower at integer calculations.)

Our mining pool: <http://deepbit.net>

~~~
pmjordan
How do bitcoins compare to the cost of electricity? I've got a (fairly modest)
Radeon HD 5770, which consumes around 100W under load. At €0.20/kWh, that's
€0.48/day (leaving aside the power needed for running the rest of the system).
For the sake of argument, let's say I can get 2 BTC/day with this hardware. Is
that worth more than the 48 cents it costs me to do the calculations? (sorry if the
answer to this is obvious, I've only heard of bitcoins, never looked into
them)

That's leaving aside the moral concern about the environmental impact of
wasting energy on pointless hash calculations. I probably waste more energy in
other, albeit less obvious, ways.

~~~
mike_esspe
Currently you can sell 1 BTC for around $0.85-$0.90:
<https://mtgox.com/trade/history>
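
Plugging that into the example above, a quick back-of-the-envelope sketch
(the yield figure comes from the parent post; the EUR/USD rate is my own
assumption):

    # Rough mining profitability for the HD 5770 example above.
    power_watts = 100              # card's draw under load
    electricity_eur_per_kwh = 0.20
    btc_per_day = 2.0              # assumed yield from the parent post
    usd_per_btc = 0.87             # roughly the Mt. Gox price above
    eur_per_usd = 0.71             # assumed exchange rate

    cost = power_watts / 1000 * 24 * electricity_eur_per_kwh
    revenue = btc_per_day * usd_per_btc * eur_per_usd

    print(f"electricity: EUR {cost:.2f}/day")     # EUR 0.48/day
    print(f"revenue:     EUR {revenue:.2f}/day")  # EUR 1.24/day

So at those numbers, mining earns more than the electricity costs.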

That's not a waste of energy; it's required to protect transactions from
double spending.

If it's a waste, then banks' computer farms are a waste too.

~~~
pmjordan
_If it's a waste, then banks' computer farms are a waste too._

Sorry, I don't understand this part. I'm pretty sure real currency isn't
backed by hash-function brute forcing. (I assume they use contracts along with
some form of public-key cryptography to sign transactions.)
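
Something like this toy sketch, I'd guess -- using the third-party `ecdsa`
Python package; purely illustrative, not any bank's actual scheme:

    # pip install ecdsa -- illustrative public-key transaction signing.
    from ecdsa import SigningKey, SECP256k1

    sk = SigningKey.generate(curve=SECP256k1)  # account holder's private key
    vk = sk.get_verifying_key()                # public key anyone can check

    transaction = b"pay 100 EUR from account A to account B"
    signature = sk.sign(transaction)

    # Anyone with the public key can verify the transfer was authorized,
    # without being able to forge new transactions themselves.
    assert vk.verify(signature, transaction)
    print("signature verified")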

~~~
mike_esspe
Banks run a lot of servers for accounting, online banking, SWIFT and other
internal needs.

The Bitcoin network does the same job with cryptography and proof-of-work
hashing. The energy spent is probably comparable.
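
For the curious, here's a toy sketch of what that proof-of-work hashing looks
like (heavily simplified: real Bitcoin double-SHA-256-hashes block headers
against a full 256-bit target, not a zero-prefix check):

    import hashlib

    def proof_of_work(block_data: bytes, difficulty: int) -> int:
        # Find a nonce such that the double SHA-256 of (data + nonce)
        # starts with `difficulty` zero hex digits -- a toy version of
        # Bitcoin's scheme.
        prefix = "0" * difficulty
        nonce = 0
        while True:
            payload = block_data + str(nonce).encode()
            h = hashlib.sha256(hashlib.sha256(payload).digest()).hexdigest()
            if h.startswith(prefix):
                return nonce
            nonce += 1

    # Each extra zero digit makes the search ~16x more expensive on average;
    # that cost is what makes rewriting history (double spending)
    # uneconomical for an attacker.
    print("found nonce:", proof_of_work(b"some transactions", 4))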

------
DanBlake
I currently have a Radeon 5970. It's my first ATI card after many nvidia cards
in the past.

I will never, ever, EVER get an ATI card again. While it might be faster, the
driver support is so horrible it's not worth it. Crashes, lousy bugfixes, the
Catalyst Control Center sucks, etc.

Also, now that nvidia has its own Eyefinity equivalent, the only reason I had
to try ATI in the first place is no longer relevant.

My next card will definitely be an nvidia card. I don't care if it's marginally
slower than the better ATI card. It's worth it not to have hacked-together
software/drivers.

ATI should focus on better software, drivers, etc., and less on getting 1 extra
FPS over the nvidia cards.

~~~
aphexairlines
Driver support for what? I don't see either company supporting KMS, GEM,
Wayland, etc.

~~~
Tuna-Fish
AMD does help the development of the open-source radeon driver, which uses KMS,
has been modified to use the GEM-ified TTM (assumed to be a step toward full
GEM support once everything fits), and certainly supports Wayland.

AMD also makes and supports the non-FOSS fglrx driver, which they cannot open-
source due to licensing restrictions. The open-source driver is nowhere near
fglrx in functionality or speed yet, but it does seem to be slowly catching up.
Some features will probably never work with the open-source drivers -- notably
the video decoder, because AMD is afraid that if they release its specs, the
HDCP key embedded within could be stolen. (That this point is completely moot
now that HDCP is broken doesn't seem to matter to them.)

~~~
vetinari
I've always considered ATI's excuses for not providing access to certain parts
of hardware to be bullshit.

In the past, they tried to avoid publishing specs for TV-out because of
Macrovision (or so the excuse went). In the end it was reverse-engineered,
TV-out later became obsolete, and today nobody cares. ATI, however, grew a
user base that will not purchase an ATI card again.

Today, they make life more difficult for those who want to use UVD. UVD is
quite simple functionality: basically, you provide a compressed buffer and the
card decompresses it. You can put the result back into system RAM, into a
texture, wherever; it has nothing in common with the output path. (In theory,
it could flag originally AACS-ed buffers and enforce a secure video path for
them, but in practice nobody cares; the pirates have better ways to capture
content anyway.) So once again they are losing those who want to build video
players for the living room, over a theoretical attack that nobody cares
about. Do they think Zacate will be more popular than Atom+Ion? I doubt it,
and a quick look at the xbmc forums confirms my opinion.
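
To make the decode flow concrete, here is a hypothetical sketch (the names
below are illustrative stand-ins, not AMD's actual UVD interface):

    # Hypothetical decode loop: compressed buffers in, decoded frames out.
    class UvdLikeDecoder:
        # Stand-in for a fixed-function decode block such as UVD.
        def __init__(self, codec):
            self.codec = codec  # e.g. "h264"

        def decode(self, compressed):
            # Real hardware would DMA the compressed buffer in and write a
            # decoded frame to system RAM, a texture, wherever. We return a
            # dummy NV12-sized frame as a placeholder.
            return b"\x00" * (1920 * 1080 * 3 // 2)

    def play(stream, decoder, present):
        for packet in stream:               # compressed buffers in...
            frame = decoder.decode(packet)  # ...decoded frames out
            present(frame)                  # the output path is entirely ours

    dec = UvdLikeDecoder("h264")
    play([b"pkt1", b"pkt2", b"pkt3"], dec, lambda f: print(len(f), "bytes"))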

------
trotsky
The weird thing about competing in the "fastest video card" space is that it's
always these dual-GPU cards, which pay significant performance penalties to
split the workload between the two chips compared to 2x a single GPU. The
difference between the winner and the loser in the dual-GPU shootout (when
things are otherwise pretty close) probably comes down to the quality of the
workload-splitting code in the drivers (the same stuff as the SLI/Crossfire
code, as I understand it).

While there is no doubt that real-world tests of these dual-GPU powerhouses
will show who can eke out the most fps, if you really just want to compare
chip tech against chip tech, it's probably better to compare at the single
top-end GPU vs. single top-end GPU level.

------
tzury
Dear people at AMD,

From my own private perspective (hacker/developer, !gamer), nvidia provides a
comprehensive toolkit and documentation for developing on top of its GPU
platform (<http://developer.nvidia.com/object/gpucomputing.html>)

Do you have the same?

~~~
Tichy
ATI Stream SDK

~~~
Retric
AMD Accelerated Parallel Processing (APP) SDK (formerly ATI Stream)

<http://developer.amd.com/gpu/AMDAPPSDK/Pages/default.aspx>

------
viraptor
Says the company which previously hacked its drivers to score higher in
benchmarks... Well, that was ATI on its own back then, but still - not such a
good idea when people still remember that.

~~~
mrb
Nvidia too has a history of cheating in benchmarks.

Here is one report from Futuremark themselves (makers of 3DMark):
<http://www.futuremark.com/pressroom/companypdfs/3dmark03_audit_report.pdf>

More recent PhysX cheating:
<http://www.theinquirer.net/inquirer/news/1048824/nvidia-cheats-3dmark-177-drivers>

~~~
AshleysBrain
In the appendix of that Futuremark report, the screen caps are rendered so
badly that it looks more like serious driver bugs than cheating. Why were
Futuremark so sure it's not bad drivers or some other technical fault? Malice
vs. incompetence, etc.

~~~
nosht
They state that those rendering errors are only visible when using free-camera
mode, not the standard pre-defined camera. That's why it looks buggy: it was
never "optimized" to look right outside of the benchmark's default camera
settings.

They also point out that preventing NVIDIA drivers from detecting 3DMark
results in the scene being rendered correctly.

If these claims are true, NVIDIA can't shift the blame to some incompetent
programmer.

------
gavanwoolery
I agree with the premise of the article - however, as one notch in Nvidia's
favor, I think most benchmarks demonstrate that the GTX 590 scales better in
SLI than the Radeon 6990 does in Crossfire. Provided you have a nuclear power
plant in your computer to power SLI/Crossfire...

------
Silhouette
I wish they'd put up some honest info about their workstation vs. gamer cards
as well. We all know it's mostly the same hardware under the hood in many
cases, and that the software and/or support aspects are where they try to
justify the cost. Still, if you need to put together a new multi-purpose
machine for any sort of serious graphics/video/multimedia work, it's next to
impossible to find any meaningful guidance on what sort of spec is best. I'd
have a lot more respect for an argument about nVidia not quoting benchmarks if
AMD themselves didn't just assume that if you're running software made by,
say, Adobe or Autodesk, you should probably buy a workstation card, just
because.

------
dlevine
This argument is stupid. Overall, they won't sell more than a few of these
cards. The vast majority of computers out there (I would guess 99%) can't
support a card like this.

At the mid-range, the two need to remain essentially price-competitive at most
performance levels, leading to a stalemate. The two things this has done are:

1) Driven profit margins way down

2) Increased bang for your buck at lower price levels. It's amazing how much
card you can get for < $150, and it almost doesn't make sense to buy a card
for more than $250 (the current $200-$250 cards are nearly as fast and much
cooler/quieter than the top-of-the-line from a year ago).

Maybe having the highest-performing card will give the perception that the
lower-end cards also perform better, but I would guess not.

In real news, I recently bought a Geforce GTX 460 for $90 after rebates. It's
pretty much enough card for the vast majority of games, and I could buy 7 for
less than the price of one of these cards. And, best of all, it's quieter than
my case fan nearly all of the time.

~~~
kiuhygjk
So to put it in simple terms:

Chrysler is claiming they are better than Volkswagen because they have a deal
with Fiat, who make Ferrari, who have a car that can go around one specific
track faster than a Lamborghini, which is owned by VW.

------
iskander
Nvidia seems to be betting a bit of its future on the superiority of CUDA over
OpenCL for general-purpose computing. They share the same parallelism model,
but Nvidia keeps baking more support for unrestrained C++ into both the
hardware and the compiler (see the recent additions of function tables, a
unified address space, and a general-purpose cache). I think if AMD continues
to focus primarily on gaming, then it's just a matter of time before their
cards/drivers have some slight edge in the gaming domain.

~~~
ak217
Nvidia has already grabbed a huge chunk of the market and mindshare in HPC,
and has stated that Fermi was developed as a compute platform as much as a
GPU. The "most FPS" contest is really beside the point. AMD is going to suffer
as soon as optional CUDA acceleration in common libraries becomes more
widespread.

~~~
Silhouette
I suppose the question is why anyone would support CUDA only, instead of
something like OpenCL. NVIDIA have been promoting CUDA quite aggressively for
some time, but the bottom line is that even today, several years after these
technologies became widely available, most mainstream applications for
graphics, video, CAD etc. still don't use that power on either AMD or NVIDIA
graphics cards. If CUDA really has meaningful technical advantages over
OpenCL, why aren't these applications using it?
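
Part of the answer is presumably portability: the same OpenCL kernel runs on
both vendors' hardware. A minimal sketch of a vendor-neutral vector add,
assuming the `pyopencl` and `numpy` packages are installed:

    import numpy as np
    import pyopencl as cl

    a = np.random.rand(1024).astype(np.float32)
    b = np.random.rand(1024).astype(np.float32)

    ctx = cl.create_some_context()  # picks an AMD or NVIDIA device alike
    queue = cl.CommandQueue(ctx)

    mf = cl.mem_flags
    a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
    b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
    out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

    prg = cl.Program(ctx, """
    __kernel void add(__global const float *a,
                      __global const float *b,
                      __global float *out) {
        int gid = get_global_id(0);
        out[gid] = a[gid] + b[gid];
    }
    """).build()

    prg.add(queue, a.shape, None, a_buf, b_buf, out_buf)

    out = np.empty_like(a)
    cl.enqueue_copy(queue, out, out_buf)
    assert np.allclose(out, a + b)

A CUDA equivalent would lock the code to one vendor; this runs anywhere with
an OpenCL driver, which may be exactly why ISVs hesitate to commit to CUDA
alone.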

