
AMD Demos 7nm Vega GPU - jamesblonde
https://www.anandtech.com/show/12910/amd-demos-7nm-vega-radeon-instinct-shipping-2018
======
bitL
If they get TensorFlow/Torch/MXNet running with no changes to the code, 32GB
of HBM2 for training models would be awesome! These days, hitting memory
limits during deep learning training hurts more than the general slowness of
GPUs.
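For context on why 32GB matters, here is a back-of-the-envelope estimate of training memory for fp32 weights with an Adam-style optimizer (a sketch with illustrative numbers; the 1-billion-parameter model and the per-parameter accounting are assumptions, not figures from the article):

```python
# Rough training-memory estimate for fp32 + Adam (illustrative numbers only).
BYTES_FP32 = 4

def training_bytes(num_params, optimizer_slots=2):
    # weights + gradients + optimizer state (Adam keeps 2 extra slots per param)
    tensors_per_param = 1 + 1 + optimizer_slots
    return num_params * BYTES_FP32 * tensors_per_param

# A hypothetical 1-billion-parameter model:
gb = training_bytes(1_000_000_000) / 2**30
print(f"{gb:.1f} GiB before activations")  # ~14.9 GiB; activations add more
```

Activations and framework overhead come on top of that, which is how a model can blow past a 16GB card while still fitting in 32GB.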

~~~
c2h5oh
Hm? Not close enough?

[https://github.com/ROCmSoftwarePlatform/tensorflow](https://github.com/ROCmSoftwarePlatform/tensorflow)

[https://github.com/ROCmSoftwarePlatform/cutorch_hip](https://github.com/ROCmSoftwarePlatform/cutorch_hip)

[https://github.com/ROCmSoftwarePlatform/mxnet](https://github.com/ROCmSoftwarePlatform/mxnet)

[https://github.com/ROCmSoftwarePlatform/hipCaffe](https://github.com/ROCmSoftwarePlatform/hipCaffe)

~~~
nl
The problem is that it’s TF 1.3. TF is up to v1.8, and if you want to use any
recent model you really need 1.4+.

And even ignoring that, you still can’t use PyTorch.

~~~
c2h5oh
[https://github.com/ROCmSoftwarePlatform/tensorflow-upstream](https://github.com/ROCmSoftwarePlatform/tensorflow-upstream) is 1.8

~~~
nl
Oh nice! What state is it in?

------
jimnotgym
I wonder if this will get the Crypto-mining people interested too? I am really
looking forward to them having more specific hardware to waste their money on,
so we can return to buying GPU's for gaming at a sensible price!

~~~
icelancer
>> I am really looking forward to them having more specific hardware to waste
their money on

That's not how it works. You won't get mining-specific hardware that soaks up
all the demand while leaving good gaming GPUs for you. If a card can process
stuff fast - either in computation or in memory speed - then cryptominers want
it, and so do you. The card you wish existed simply doesn't, and if the cards
are good, they are profitable, which means miners buy more of everything.

Most intelligent GPU miners have a diversified farm of Nvidia and AMD cards in
some proportion to cover the specific algos that are good on each platform.

Wouldn't hold your breath.

~~~
rdlecler1
If anything, more money to chip makers means more incentive to build better
chips. Gaming will benefit from these advances.

------
ipsum2
Is anyone doing machine learning on AMD hardware right now? What's the
preferred pipeline to go from Tensorflow to GPU?

~~~
currymj
AMD is scrambling to build a CUDA-esque stack called ROCm and get various
frameworks, including TF, ported over, but it's not even in “early adopter”
state yet, so I doubt anyone is using it. However, they seem to be making
progress.

~~~
singhrac
It's not that far away. PyTorch merged a build script semi-recently, though I
don't know if it's fully tested or supported.

[https://github.com/pytorch/pytorch/blob/master/tools/amd_bui...](https://github.com/pytorch/pytorch/blob/master/tools/amd_build/build_pytorch_amd.py)
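For a feel of what that build script does: it largely "hipifies" CUDA sources by translating CUDA identifiers to their HIP equivalents before compiling. A simplified sketch (the real mapping tables in tools/amd_build are far larger; the four substitutions below are just a sample):

```python
import re

# A tiny sample of CUDA -> HIP renames (the real hipify tables are much larger).
CUDA_TO_HIP = {
    "cudaMalloc": "hipMalloc",
    "cudaMemcpy": "hipMemcpy",
    "cudaFree": "hipFree",
    "cudaStream_t": "hipStream_t",
}

def hipify(source: str) -> str:
    # Replace whole identifiers only, so e.g. "cudaMallocHost" is left
    # untouched unless it has its own entry in the table.
    pattern = re.compile(r"\b(" + "|".join(CUDA_TO_HIP) + r")\b")
    return pattern.sub(lambda m: CUDA_TO_HIP[m.group(1)], source)

print(hipify("cudaMalloc(&ptr, n); cudaMemcpy(dst, src, n, kind);"))
# -> hipMalloc(&ptr, n); hipMemcpy(dst, src, n, kind);
```

The textual approach is why a disabled-features list exists at all: anything that doesn't translate cleanly gets switched off rather than ported by hand.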

~~~
nl
That “disabled_features.json” file indicates it will be a while before it is
useful, though.

------
jamesblonde
Here's a benchmark comparing the latest ROCm software with Vega cards from
2017 (on TensorFlow 1.3). TL;DR: Vegas are nearly as good as V100s for
CIFAR-10!

[http://blog.gpueater.com/en/2018/04/23/00011_tech_cifar10_be...](http://blog.gpueater.com/en/2018/04/23/00011_tech_cifar10_bench_on_tf13/)

~~~
tntn
That post doesn't specify whether the training was done with single- or half-
precision floating point. Given that it's on the website of an AMD GPU cloud
provider, I tend to think they probably used fp32, since that will make Vega
look better.

------
jimnotgym
Being pedantic, but shouldn't the graphic for the 'Roadmap' say "Next-Gen <7nm"
or "7nm-"?

As drawn, it looks like things will begin getting bigger again in a few years.

~~~
bedros
Basically, 7nm+ means tweaks to the process with improved speed and power
consumption; it stays 7nm. From [0]:

""" TSMC plans to introduce a second improved process called 7nm+ a year
later, which will introduce some layers processed with EUVL. This will improve
yields and reduce fab cycle times. The 7nm+ process will deliver improved
power consumption and between 15-20% area scaling over their first generation
7nm process. """

[0]
[https://en.wikichip.org/wiki/7_nm_lithography_process](https://en.wikichip.org/wiki/7_nm_lithography_process)

update: link
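As a quick sanity check on what "15-20% area scaling" means for a die (illustrative arithmetic only; the 100 mm² starting figure is a made-up example, not an actual die size):

```python
# What "15-20% area scaling" implies for die size (illustrative numbers).
def scaled_area(area_mm2, pct_scaling):
    # An N% area scaling means the same block occupies (100 - N)% of its
    # former area on the improved process.
    return area_mm2 * (100 - pct_scaling) / 100

base = 100.0  # a hypothetical 100 mm^2 block at first-gen 7nm
print(scaled_area(base, 15), scaled_area(base, 20))  # 85.0 80.0
```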

~~~
kingosticks
Where does WikiChip get their info? Is it rumours, since I don't see any
sources listed? That's not what I heard about 7nm+.

------
shmerl
So, are gaming 7nm GPUs from AMD coming this year or next?

~~~
all_blue_chucks
Odds are we will see overclocked 14nm Vegas with an X suffix before Christmas
this year, and 7nm Vegas next year.

