
AMD Announces Dual-Vega Radeon Pro V340 Card with 32GB of ECC HBM Memory - rerx
https://www.tomshardware.com/news/amd-radeon-pro-dual-vega-v340,37694.html
======
xxs
What's more surprising is that Tom's Hardware actually questions the viability of
Nvidia's 2080, given the recent shocker of 'just buy it' [0].

That being said, I wonder whether this card solves the memory controller
issues, as the desktop Vega draws too much power in the memory controller.
The desktop version pretty much needs water cooling to produce decent
results.

[0] [https://www.tomshardware.com/news/nvidia-rtx-gpus-worth-the-...](https://www.tomshardware.com/news/nvidia-rtx-gpus-worth-the-money,37689.html)

~~~
bri3d
"Just buy it" was an opinion column with a prominent counter opinion column
that seems to have been quite effective as revenue clickbait.

~~~
graedus
I saw this in context, but in the end that doesn't make the piece defensible.
The "counter opinion" was that you shouldn't buy (pre-order) the card _yet_.
It wasn't an NVIDIA hit piece, just entirely reasonable advice about cards
that haven't been released yet. "Just pre-order it and don't wait for reviews
and benchmarks; by the way, here are NVIDIA's claimed performance figures,
aren't they great" is pretty suspect coming from the editor-in-chief of a
reviews-and-benchmarks site.

Edit: also just some really insane lines in there: "When you die and your
whole life flashes before your eyes, how much of it do you want to not have
ray tracing?" ???

~~~
tracker1
Dear editor in chief for $MAGAZINE would you write an encouraging opinion
piece for $BAG_OF_CASH?

I'm not saying this is the case, but it really felt that way seeing the
article.

------
bitL
I wish TensorFlow, at least, were finally running on Vega... Otherwise the
card pretty much misses the target compared to the P40, as that one is built
specifically for deep-learning inference. DaaS doesn't seem like a large
market.

~~~
AlphaSite
It does, does it not?

[https://gpuopen.com/rocm-tensorflow-1-8-release/](https://gpuopen.com/rocm-tensorflow-1-8-release/)
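
For reference, getting the ROCm port going looks roughly like the following sketch. The `tensorflow-rocm` package name is an assumption on my part; check the linked release notes for the exact install steps for your ROCm version:

```shell
# Assumes a working ROCm driver/runtime on a supported AMD GPU.
# The tensorflow-rocm package name is an assumption; see the release notes.
pip install tensorflow-rocm

# Verify that TensorFlow can see the Vega GPU (TF 1.x API):
python -c "from tensorflow.python.client import device_lib; print(device_lib.list_local_devices())"
```

If the install worked, the device list should include a GPU entry alongside the CPU.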

~~~
bitL
I was looking at OpenCL support first:

[https://github.com/tensorflow/tensorflow/issues/22](https://github.com/tensorflow/tensorflow/issues/22)

Anyone using ROCm? Is it stable enough, and does it support everything in TF
1.8? How does the performance compare to a 1080 Ti? Their port of TF doesn't
seem to have many users; there are only 6 issues in their TF upstream fork:

[https://github.com/ROCmSoftwarePlatform/tensorflow-upstream](https://github.com/ROCmSoftwarePlatform/tensorflow-upstream)

If it were comparable to a 1080/Ti and worked flawlessly, 32GB would be very
enticing!

------
0xcde4c3db
Anyone know what realistic pricing would be for this? It seems like it could
easily be $3500+ (2x Radeon Pro + premium for more VDI seats per server).
There's a link about the Tesla P40, but it's not clear to me how comparable
that is. I'm also not sure that it's even sold as a separate product as
opposed to only being available via system builders.

------
PoespasAR
Will they also make a consumer version? Like the 295x2?

~~~
sp332
Its TDP is 300W. It has a passive heatsink and relies on the server chassis
having big loud fans pushing air from an air-conditioned cold aisle. Dual-slot
PC cards seem to top out around 250W, with Nvidia's 2080 Ti Founders Edition
at 260W with vapor-chamber cooling. AMD could go with a fancier cooling
system, like the water cooler they put on the R9 Fury X, or they could slow it
down to fit in a lower TDP.

~~~
AstralStorm
Current desktop Vega 64 can go almost 300W in LC versions. It can be pushed
further with custom loops, easily to 400W.

Source: I actually own a Vega FE and have tested overclocking using a custom
loop. It can easily chew extra wattage when overvolted and overclocked while
still staying stable. From a performance-per-watt view it is ridiculous,
though: you get maybe a 10% performance boost for doubling the wattage over
undervolted stock.
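
To put numbers on that (the absolute figures below are invented for illustration; only the ~10% boost and 2x power ratios come from the testing described above):

```python
# Hypothetical perf-per-watt illustration. Absolute numbers are made up;
# only the ratios (+10% performance, 2x power) come from the comment above.
undervolted_watts = 200.0   # assumed undervolted stock draw
overclocked_watts = 400.0   # "easily 400W" with a custom loop
baseline_perf = 100.0       # arbitrary performance units

overclocked_perf = baseline_perf * 1.10  # ~10% boost when overclocked

ppw_stock = baseline_perf / undervolted_watts    # 0.5 perf/W
ppw_oc = overclocked_perf / overclocked_watts    # 0.275 perf/W

drop_pct = (1 - ppw_oc / ppw_stock) * 100
print(f"perf/W falls from {ppw_stock:.3f} to {ppw_oc:.3f} ({drop_pct:.0f}% worse)")
```

So under these assumed numbers, a 10% performance gain costs roughly 45% of the efficiency.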

------
wowDude

      HBM Memory
    

High Bandwidth Memory Memory? Isn't that like saying ATM Machine? Or DNS
System?

~~~
froogie
But isn't it a perfectly reasonable use of language, if you consider that the
language as written there is for humans to read and understand, not for
machines to expand acronyms and get confused by semantic errors?

But sure, what you describe has a name: RAS syndrome.

[https://en.wikipedia.org/wiki/RAS_syndrome](https://en.wikipedia.org/wiki/RAS_syndrome)

~~~
wowDude
So just spell out the whole thing. Just say _High Bandwidth Memory_. Leave out
the inscrutable TLA.

------
jobserunder
Eth mh/s?

------
microcolonel
TL;DR: AMD put two Radeon Pro WX 9100 GPUs on one card and added some form
of technology somewhere in the stack to help hypervisors share their
resources among VMs. The memory figure is the sum of the two 16GB ECC HBM
stacks.

