
Tensorflow 2.0.0-Alpha0 - Gimpei
https://github.com/tensorflow/tensorflow/releases/tag/v2.0.0-alpha0
======
rolleiflex
One thing I've always wondered is, why is TensorFlow NVIDIA-only? I was hoping
a new major release would fix that, but it doesn't appear to have.

I've recently gotten an eGPU for my Macbook Pro for playing games in whatever
little off-time I have on Windows. I also wanted to use it on Mac, though, and
Macs only support AMD graphics. So I got a Vega 64. The other half of the
reason was that I wanted to play with deep learning.

Turns out TensorFlow just does _not_ work on AMD. There is a fork of it
maintained by AMD, but they only support it for Linux, and the code in
question reaches deep enough into the hardware layer that it cannot be run in a
VM. It's also always at least a few versions behind.

With NVIDIA GPUs more expensive and worse-performing per dollar than they
have ever been, TF2 could have been the moment they made a major improvement.
Sad to see that is not the case.

~~~
endorphone
AMD dropped the ball by not dedicating the sort of resources that Nvidia has
to the deep learning space (and by making the long but risky bet of going all
in on ROCm). Similarly, Apple dropped the ball by being so wed to AMD.

Coincidentally I was just digging through the TensorFlow code to find where it
delegates to CUDA -- an enormous task given the huge volume of dependencies
many layers deep -- because ultimately most neural nets entail a significant
amount of simple math. It seems, and I say this in the nonsensical, overly
confident way we all do when looking in from the outside, that it should be
trivial to make it work with OpenCL, or even to maintain a branch for AMD.

~~~
zdw
AMD is a far smaller company than either nVidia or Intel, its two main
competitors.

There are a variety of reasons for this - anticompetitive behavior from those
competitors, chip designs like Bulldozer that focused on many wide execution
units at the expense of single threaded performance, etc., but what it comes
down to is that they just don't have resources dedicated to ROCm comparable to
what nVidia has invested in CUDA.

I'm quite impressed with their current and near future designs in the CPU
space, but GPU design is hard and beginning to hit a performance wall -
witness the lackluster reaction to nVidia's current RTX chips in many areas.

~~~
rolleiflex
I think the lackluster reception NVIDIA has gotten for the current RTX designs
is only in small part caused by the design itself being poor.
Yes, real-time ray tracing, the new feature in these cards, is so bad that
it's barely usable — but we're used to that, v2 will be better. I have a lot
of sympathy for NVIDIA engineers trying to improve the state of the art, it's
never easy.

However, the issue with NVIDIA is that they have adopted the antics of the
game development space they cooperate with, and as a result they have a poor
attitude towards their customers. They have gone so far as to outright say
that their customers are stuck, so the pricing doesn't matter: you'll just pay
for it and get on with your life anyway. One major example of this is the
banning of their 'gaming' cards from data centres via an EULA, because they
couldn't sell their 10x-priced 'enterprise' cards when the two performed
within 10% of each other.

Another example is RTX, again. They've stopped producing their 10xx series
cards to sell more RTX cards at a higher margin, because it turns out people
still prefer the older cards to the RTX ones, given the price bloat NVIDIA has
saddled the RTX line with.

~~~
izacus
> Yes, real-time ray tracing, the new feature in these cards, is so bad that
> it's barely usable — but we're used to that, v2 will be better. I have a lot
> of sympathy for NVIDIA engineers trying to improve the state of the art,
> it's never easy.

Can you explain this more? All the tests of RTX in actual games have been very
positive and showed significant improvements in visual quality.

~~~
dragontamer
The results I've seen show the $1200 GPU playing at 1080p resolution with less
than 60FPS.

RTX is clearly a compute hog: it's barely usable even if you pay for the best-
of-the-best GPUs. I mean, not to knock Nvidia down or anything; raytracing is
one of the most computationally demanding problems in real-time graphics right
now.

But from a practical perspective, you suffer a major loss in frame rate and
have to drop the resolution to make the jump to raytracing, even if you spent
$1200 on the card...

-----------

I'm personally excited to see offline renderers use the RTX features to
accelerate offline raytracing. That's probably the more important use of the
technology. As it is, RTX isn't quite fast enough for "real time" yet; it's
just grossly accelerated offline raytracing (which is still impressive).

~~~
izacus
According to PC Gamer and Digital Foundry, the 2080Ti can drive Metro Exodus
with RTX on at an average of 55fps, with drops down to 30fps, at 4K
resolution, so your claim seems grossly exaggerated -
[https://cdn.mos.cms.futurecdn.net/YhDHpgGrAmpmnP4LUBEgvg.png](https://cdn.mos.cms.futurecdn.net/YhDHpgGrAmpmnP4LUBEgvg.png)

Also last I read, RTX really isn't designed for offline raytracing and doesn't
really bring much to the table. Its use is in realtime.

~~~
dragontamer
[https://www.hardocp.com/article/2019/01/07/battlefield_v_nvi...](https://www.hardocp.com/article/2019/01/07/battlefield_v_nvidia_ray_tracing_rtx_2080_ti_performance/3)

Battlefield V Ultra 1080p RTX on is ~63 FPS average (not minimum FPS, but
average), which means it will regularly dip below 60 in practice.

> Also last I read, RTX really isn't designed for offline raytracing and
> doesn't really bring much to the table. Its use is in realtime.

On the contrary! The RT Cores do hardware-accelerated BVH traversal. That has
HUGE implications for the offline rendering scene.

See NVidia Optix for details: [https://devblogs.nvidia.com/nvidia-optix-ray-tracing-powered...](https://devblogs.nvidia.com/nvidia-optix-ray-tracing-powered-rtx/)

IMO, this is the killer feature of RTX: accelerating those Hollywood renders
from hours per frame to minutes per frame. NVidia Optix takes industry-
standard scene trees and can use the RT Cores to traverse them for hardware-
accelerated raytracing.

Or more specifically: coarse AABB Bounds checking, which is a very compute &
memory heavy portion of the Raytracing algorithm.
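
For anyone unfamiliar with that step, here is a rough, illustrative Python
sketch of the ray/AABB "slab" test a BVH traversal runs at every node
(function and argument names are mine, not anything from Nvidia's API); this
inner loop is what the RT hardware is built to offload:

    # Illustrative only: the ray vs. axis-aligned-bounding-box "slab" test.
    # BVH traversal runs this at every tree node, millions of times per frame,
    # which is the work dedicated hardware is meant to take over.
    def ray_hits_aabb(origin, inv_dir, box_min, box_max):
        tmin, tmax = 0.0, float("inf")
        for axis in range(3):
            t1 = (box_min[axis] - origin[axis]) * inv_dir[axis]
            t2 = (box_max[axis] - origin[axis]) * inv_dir[axis]
            tmin = max(tmin, min(t1, t2))  # latest entry across the slabs
            tmax = min(tmax, max(t1, t2))  # earliest exit across the slabs
        return tmin <= tmax                # ray overlaps the box

    # Ray from the origin along (1, 1, 1) hits the box spanning (1,1,1)..(2,2,2).
    # inv_dir is the componentwise reciprocal of the ray direction.
    print(ray_hits_aabb((0, 0, 0), (1.0, 1.0, 1.0), (1, 1, 1), (2, 2, 2)))  # True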

Even if RTX is too slow for video games, any improvement to offline rendering
is a huge advantage.

~~~
ykombinaught
Optix was developed a decade ago and doesn't require RTX cards. It runs on any
Maxwell or newer card. It's NVIDIA's version of RadeonRays.

What NVIDIA is trying to sell people is a denoiser.

And are you sure RT Cores even exist or are they just tensorcores by a
different name?

[https://youtu.be/3BOUAkJxJac](https://youtu.be/3BOUAkJxJac)

------
runesoerensen
Also [https://medium.com/tensorflow/test-drive-tensorflow-2-0-alph...](https://medium.com/tensorflow/test-drive-tensorflow-2-0-alpha-b6dd1e522b01)

~~~
dynamicwebpaige
If you're interested in upgrading your models, also make sure to check out
this Medium post: [https://medium.com/tensorflow/upgrading-your-code-to-tensorf...](https://medium.com/tensorflow/upgrading-your-code-to-tensorflow-2-0-f72c3a4d83b5)

------
hgasimov
Thank god TensorFlow is finally doing something about global variables and sessions.
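
For anyone who hasn't followed the change: 2.0 makes eager execution the
default, so the graph-plus-session boilerplate largely disappears. A minimal
sketch of the difference, assuming the 2.0 alpha API:

    # TensorFlow 1.x (roughly): build a graph, then feed it through a session
    #   a = tf.placeholder(tf.float32)
    #   b = tf.placeholder(tf.float32)
    #   with tf.Session() as sess:
    #       print(sess.run(a + b, feed_dict={a: 2.0, b: 3.0}))

    # TensorFlow 2.0: eager by default, no sessions, no placeholders
    import tensorflow as tf

    x = tf.constant(2.0)
    y = tf.constant(3.0)
    print(x + y)  # tf.Tensor(5.0, shape=(), dtype=float32), evaluated immediately

    # Graph-style execution is still available by tracing a Python function
    @tf.function
    def add(a, b):
        return a + b

    print(add(x, y))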

------
p1esk
It feels more like PyTorch now:
[https://github.com/tensorflow/docs/blob/master/site/en/r2/tu...](https://github.com/tensorflow/docs/blob/master/site/en/r2/tutorials/eager/custom_layers.ipynb)
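
For a flavour of what the linked notebook covers, here's a minimal custom
layer in the 2.0 subclassing style (loosely adapted from the tutorial, not
verbatim), which reads a lot like a torch.nn.Module:

    import tensorflow as tf

    class MyDense(tf.keras.layers.Layer):
        """A bare-bones dense layer written in the subclassing style."""

        def __init__(self, units):
            super(MyDense, self).__init__()
            self.units = units

        def build(self, input_shape):
            # Weights are created lazily, once the input shape is known.
            self.kernel = self.add_weight(
                "kernel", shape=[int(input_shape[-1]), self.units])

        def call(self, inputs):
            return tf.matmul(inputs, self.kernel)

    layer = MyDense(10)
    print(layer(tf.zeros([4, 5])).shape)  # (4, 10), executed eagerly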

------
diminish
What are you using in your projects?

Is TF the dominant tool in commercial or startup DNN projects?

~~~
minimaxir
TF (or Keras via TF) is definitely the most popular framework for newer
projects, although some argue PyTorch is holistically better.

------
DevX101
Any thoughts on how this version now compares to PyTorch 1.0?

------
truth_seeker
Cool. I would love to see PyPy support for this major version.

