
Tensorflow 2.0 - Gimpei
https://github.com/tensorflow/tensorflow/releases/tag/v2.0.0
======
317070
As someone who uses tensorflow a lot, I predict an enormous clusterfuck of a
transition. Tensorflow has turned into a multiheaded monster, supporting many
things and approaches but none of them very well.

I mean, when relying on third-party code, things like `tf.enable_control_flow_v2()`
and `tf.disable_control_flow_v2()` can and will go horribly wrong. It looks like
some operations change behaviour depending on a global flag being set. And not
just some operations, but control flow operations! That will lead to some very
hard-to-figure-out bugs.
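
A minimal sketch of how that can bite, assuming a TF 1.14/1.15 environment where
the flags quoted above exist at the top level (they live under `tf.compat.v1` in
2.0); the third-party packages here are hypothetical:

    import tensorflow as tf

    # A hypothetical third-party package flips the global flag at import time
    # because it needs v2 control flow...
    tf.enable_control_flow_v2()

    # ...and another hypothetical package, imported later, flips it back
    # because it was written against the v1 behaviour.
    tf.disable_control_flow_v2()

    # Every tf.cond / tf.while_loop built after this point lowers to the v1
    # control-flow ops, regardless of what the first package expected: the
    # last caller to touch the flag wins, and the bug surfaces far away from
    # either import.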

In my opinion there are some architectural problems with TF which have not been
addressed in this update. There is still global state in TF2. There is still a
difference in behaviour between eager and non-eager mode. Control flow is still
a second-class citizen.

If you need to transition from TF1 to TF2, consider doing the TF1 to pytorch
transition instead.

~~~
XCSme
Not only is upgrading hard, but so is installation (on Windows at least). For
each Tensorflow version you need a specific Python version, a specific CUDA
version, a specific tensorflow-gpu version, and many other easy-to-get-wrong
pieces. The problem is not the requirements themselves, but that it's very hard
to know which versions are compatible. There are endless threads on GitHub of
people trying to use Tensorflow but failing after spending days trying to
install it.

~~~
hadlock
Tensorflow sounds like an ideal candidate for running in a container. List out
all your approved compatible versions in the Dockerfile, distribute it with
your source code, and anyone can reproduce your results with the exact same
setup.

~~~
XCSme
Yes, but for personal use, unless someone else has already made those
containers, I will still have to go through the trial-and-error process of
finding the right combination of versions myself. Yes, the second installation
will be easier, but if I just want it on my PC it doesn't really help.

~~~
hadlock
You can `docker pull username/mytensorflowcontainer` and start from someone
else's work. It looks like Tensorflow has a how-to on its site:
[https://www.tensorflow.org/install/docker](https://www.tensorflow.org/install/docker)
including working CPU-only and GPU-enabled examples.

------
abakus
Recently I found that a lot of TF 2.0 Keras functionality does not support
eager execution. This makes Pytorch still significantly easier to prototype
with than TF 2.0.

If you miss Keras' way of defining NNs, you can use PyWarm:
[https://github.com/blue-season/pywarm](https://github.com/blue-season/pywarm)
which offers a fully functional NN building API for pytorch.

~~~
choppaface
Does pytorch have a Tensorboard equivalent? For rapid prototyping, I find
Tensorboard a lot more useful than, say, print statements and such that you
can get through eager execution. Tensorboard is also crucial for post-hoc
analysis, and the Summary format is clean enough to use as a primary data
artifact (e.g. use loss recorded in summaries versus some alternative hand-
crafted text file).

~~~
smhx
PyTorch has TensorBoard integration built in (TensorBoard itself can be
installed independently of TensorFlow).

[https://pytorch.org/docs/stable/tensorboard.html](https://pytorch.org/docs/stable/tensorboard.html)
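
A minimal usage sketch (the log directory and tag names are arbitrary):

    # pip install torch tensorboard
    from torch.utils.tensorboard import SummaryWriter

    writer = SummaryWriter("runs/exp1")        # arbitrary log directory
    for step in range(100):
        loss = 1.0 / (step + 1)                # stand-in for a real training loss
        writer.add_scalar("train/loss", loss, step)
    writer.close()

    # then browse the logs with:  tensorboard --logdir runs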

------
visionscaper
I decided a few weeks ago to transition to PyTorch (was using Keras before)
and I must say that I really love it! How PyTorch is structured gives me the
right balance between ease of use and the ability to make customisations.
Further, using DistributedDataParallel to divide the work over multiple
processes, one GPU per process, is very fast and GPU-memory efficient.
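
For what it's worth, a condensed sketch of that pattern (the model, data and
hyperparameters are placeholders):

    import torch
    import torch.distributed as dist
    import torch.multiprocessing as mp
    from torch.nn.parallel import DistributedDataParallel as DDP

    def worker(rank, world_size):
        # One process per GPU; NCCL is the usual backend for multi-GPU training.
        dist.init_process_group("nccl", init_method="tcp://127.0.0.1:23456",
                                rank=rank, world_size=world_size)
        torch.cuda.set_device(rank)

        model = torch.nn.Linear(10, 1).cuda(rank)           # placeholder model
        model = DDP(model, device_ids=[rank])
        opt = torch.optim.SGD(model.parameters(), lr=0.1)

        for _ in range(10):                                  # placeholder loop
            x = torch.randn(32, 10, device=f"cuda:{rank}")
            loss = model(x).pow(2).mean()
            opt.zero_grad()
            loss.backward()          # gradients are all-reduced across processes here
            opt.step()

    if __name__ == "__main__":
        n_gpus = torch.cuda.device_count()
        mp.spawn(worker, args=(n_gpus,), nprocs=n_gpus)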

Before my switch I tried out Keras for Tensorflow, and even got a lot of
support from Google in my endeavours to resolve the issues I encountered
(kudos to Google for that!). In the end I felt it was still not mature enough.
Further, although I do believe TF and Keras are moving in the right direction,
I still felt that in some cases the way the software was set up just didn't
sit well with me.

Maybe it is worth trying again in a year or so, or by then I will try out
Swift for Tensorflow, which I think has a great future ahead of it.

------
minimaxir
The most important change in terms of usability, IMO, is the use of tf.keras
as the _recommended_ interface to TensorFlow. There hasn't been a case yet
where I've needed to dip outside of Keras into raw TensorFlow, but the option
is there and is easy to do.
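
For context, the tf.keras path looks roughly like this (the data and layer
sizes are placeholders):

    import numpy as np
    import tensorflow as tf

    # Placeholder data; substitute your own arrays or a tf.data pipeline.
    x = np.random.rand(1000, 20).astype("float32")
    y = np.random.randint(0, 2, size=(1000,))

    model = tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation="relu", input_shape=(20,)),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    model.fit(x, y, epochs=3, batch_size=32)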

That said, TF 2.0 changes a lot. Many repos might break, so expect to see lots
of `tensorflow==1.14` in requirements.txt files from now on.

------
axegon_
Disclosure: I'm a big BIG tensorflow fan.

I've been using the RCs for a while now and I must admit, it's a big step up
for projects you are starting from scratch. Migrating... probably not as clean
as I would like, but it does the job. Overall TF 2.0 removes a lot of the
boilerplate code, which is awesome.
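
A trivial before/after sketch of what goes away (1.x style shown only for
contrast):

    import tensorflow as tf

    # TF 1.x style: build a graph, then run it in a session.
    # a = tf.constant(2.0)
    # b = tf.constant(3.0)
    # c = a * b
    # with tf.compat.v1.Session() as sess:
    #     print(sess.run(c))          # 6.0

    # TF 2.0 eager execution: just compute.
    a = tf.constant(2.0)
    b = tf.constant(3.0)
    print(a * b)                      # tf.Tensor(6.0, shape=(), dtype=float32)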

------
Fr0styMatt88
This might be slightly off-topic, but what’s the latest state of play for
AMD/OpenCL? I hear in various places that AMD is fantastic for compute, but
everyone seems to be using CUDA.

I’ve got a very expensive Bitcoin mining rig paperweight at the moment with
two Vega 64s (along with another Vega 64 in my main rig) — it’d be great to
re-purpose them for something potentially useful.

~~~
pixelhorse
AMD is great when it comes to raw compute power for the money.

However, AMD has a history of shipping poor OpenCL drivers, so everyone just
went with what actually worked - NVIDIA.

------
m0zg
Still pretty terrible compared to PyTorch _unless_ you need deployment to
device, in which case it's basically the only game in town. Or at least the
only _viable_ game.

Case in point: people are still trying to figure out on their GitHub how to
apply global weight decay when training a model, and to get a "correct" resize
for segmentation you have to fall back to the legacy 1.x API and specify
`align_corners=True` there. These bugs have existed for many years, and nobody
gives a damn. That said, if your choice is between 1.x and 2.0, 2.0 is much
easier to work with, especially if you use something other than TF (e.g.
PyTorch) for the data pipeline and augmentation. You can hook that up pretty
seamlessly if you train in eager mode.
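
A rough sketch of that hand-off in eager mode (the dataset, model and sizes
here are placeholders standing in for a real augmentation pipeline):

    import tensorflow as tf
    import torch
    from torch.utils.data import DataLoader, TensorDataset

    # Placeholder PyTorch dataset/loader; in practice this is where the real
    # augmentation pipeline lives.
    ds = TensorDataset(torch.randn(1024, 20), torch.randint(0, 2, (1024,)))
    loader = DataLoader(ds, batch_size=32, shuffle=True, num_workers=2)

    model = tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(2),
    ])
    loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
    opt = tf.keras.optimizers.Adam()

    for xb, yb in loader:
        # torch tensors -> numpy -> TF; eager mode makes the hand-off trivial.
        xb, yb = xb.numpy(), yb.numpy()
        with tf.GradientTape() as tape:
            loss = loss_fn(yb, model(xb))
        grads = tape.gradient(loss, model.trainable_variables)
        opt.apply_gradients(zip(grads, model.trainable_variables))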

~~~
Reebz
I think it’s getting easier by the day to port a PyTorch model to something
that can be production-ready, like TensorFlow Lite. It’s cumbersome, but
doable. For me, I like to optimize my workbench and just deal with the final
steps of pain to get it to prod.
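
One common (if cumbersome) route is PyTorch -> ONNX -> TensorFlow SavedModel ->
TFLite; a rough sketch, assuming the `onnx` and `onnx-tf` packages (exact
converter APIs vary between onnx-tf versions) and a hypothetical trained
`MyTorchModel`:

    import onnx
    import tensorflow as tf
    import torch
    from onnx_tf.backend import prepare

    model = MyTorchModel().eval()                # hypothetical trained PyTorch model
    dummy = torch.randn(1, 3, 224, 224)          # fixed example input shape
    torch.onnx.export(model, dummy, "model.onnx")

    # ONNX -> TensorFlow SavedModel via the onnx-tf converter.
    prepare(onnx.load("model.onnx")).export_graph("saved_model")

    # SavedModel -> TensorFlow Lite flatbuffer.
    converter = tf.lite.TFLiteConverter.from_saved_model("saved_model")
    open("model.tflite", "wb").write(converter.convert())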

~~~
smhx
There might be some additional help on the way directly in PyTorch for on-
device :)

\- iOS:
[https://github.com/pytorch/pytorch/pulls?utf8=%E2%9C%93&q=is...](https://github.com/pytorch/pytorch/pulls?utf8=%E2%9C%93&q=is%3Apr+ios+)

\- Android:
[https://github.com/pytorch/pytorch/pulls?utf8=%E2%9C%93&q=is...](https://github.com/pytorch/pytorch/pulls?utf8=%E2%9C%93&q=is%3Apr+android+)

------
spicyramen
The reality is that most production ML workflows still use 1.x, and not even
the latest 1.14/1.15. Migration to 2.x does not make sense for many existing
use cases unless you want to build your new algorithms from scratch. From
previous experience with TF, the API changes so much that it is impossible to
keep up from an enterprise perspective; I expect the TF team to do some type of
LTS version for enterprise users while continuing to improve 2.x, which was
clearly a response to Pytorch and is an evolution of tf.keras and eager
execution, plus all the cleanup.

------
manojlds
Blog post - [https://medium.com/tensorflow/tensorflow-2-0-is-now-
availabl...](https://medium.com/tensorflow/tensorflow-2-0-is-now-
available-57d706c2a9ab)

------
fareesh
Is there something which has opinionated defaults where you can hack on
projects without getting into all the boilerplate?

There is a mostly finite set of typical, solved tasks you would want to use ML
for, like image classification, object detection, etc.

I find myself spending my time copying an example verbatim and replacing the
CSVs / images with my own.

~~~
solidasparagus
Are you looking for advanced models that you can train on your data pretty
much out-of-the-box or simple, easy-to-read models that help you learn the
underlying concepts?

If it's the first case, what you want to do is find the best GitHub repos for
the task(s) you are trying to do. Make sure the GitHub repo has a model zoo
and good support and start from there. In CV, if you are trying to do high-end
work, the repos to check out are:

\- [https://github.com/facebookresearch/maskrcnn-
benchmark](https://github.com/facebookresearch/maskrcnn-benchmark) (don't be
misled by the name, it has support for lots of the high-end modern CV models)

\- [https://github.com/open-mmlab/mmdetection](https://github.com/open-
mmlab/mmdetection)

\-
[https://github.com/TuSimple/simpledet](https://github.com/TuSimple/simpledet)
(haven't explored this one as much, but it looks very solid)

If you want easy-to-read code for non-trivial tasks, I would suggest taking a
look at Gluon (GluonCV - [https://gluon-cv.mxnet.io/](https://gluon-
cv.mxnet.io/) and GluonNLP - [https://gluon-nlp.mxnet.io/](https://gluon-
nlp.mxnet.io/)). I haven't worked much with the fast.ai library, but that's
probably also a good suggestion.

------
mark_l_watson
I think I will take the effort to update most of my side projects to TF 2, or
mark the GitHub repos for really old code as no longer supported.

Some may complain about big API changes but I think it is occasionally healthy
to tag old stable versions and do massive code refactoring.

I assume that TensorFlow.js is also getting updated - I find it almost equally
nice to prototype with (the bundled examples are first-rate).

------
amrrs
In a different context, what are some good resources to get started with deep
learning using TF (besides the stuff Google put on YouTube)?

~~~
woadwarrior01
The 2nd edition of Aurélien Géron's book[1] was written for Tensorflow 2.

[1]: [https://www.oreilly.com/library/view/hands-on-machine-
learni...](https://www.oreilly.com/library/view/hands-on-machine-
learning/9781492032632/)

~~~
sireat
The first edition of the book is fantastic!

Aurelien's writing is clear and clean compared to most other books on ML.

------
hn2017
Are there any online courses using TF 2.0, such as on Coursera? Or books coming
out soon? That's key for proper adoption.

------
samstave
If I were to start literally at ground zero, with zero knowledge of programming
or tensorflow, where should I go?

~~~
mantap
Install Jupyter notebook and play around with numpy/scipy. Neural networks are
not really for outright beginners, and tensorflow even less so.

------
mrfusion
So if you want to work in deep learning, should you learn both pytorch and
tensorflow? Or just one, and if so, which one?

~~~
p1esk
fast.ai

------
hn2017
As far as production deployments of ML models go, what percentage use TF vs.
PyTorch?

------
Havoc
So which CUDA and cuDNN versions is this compatible with? The same as the beta?

~~~
SloopJon
This documentation seems to describe 2.0:

[https://www.tensorflow.org/install/gpu](https://www.tensorflow.org/install/gpu)

It says CUDA 10.0 and cuDNN >= 7.4.1.

~~~
Havoc
Yeah, that's the same as the beta.

It was incredibly fussy when I tried it and tends to silently fall back to the
CPU if it doesn't like something (and, worse, each version comes in separate
per-CUDA builds, e.g. cuDNN 7.4.1 comes in both a CUDA 10.0 flavour and a 10.1
one).
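
A quick way to catch the silent CPU fallback right after installing (2.0-era
API; later releases drop the `experimental` prefix):

    import tensorflow as tf

    print(tf.__version__)
    gpus = tf.config.experimental.list_physical_devices("GPU")
    # An empty list means TF didn't accept the CUDA/cuDNN combination and will
    # quietly run everything on the CPU.
    print(gpus)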

------
adamnemecek
Just use Julia.

~~~
whlr
I like Julia a lot (I use it every day; it's my primary language right now),
but this isn't really a reasonable recommendation, imho. Julia seems like it
could be really great for ML, but I'm not sure the current libraries are mature
enough to wholeheartedly recommend.

I'd love to be proven wrong about that, though.

~~~
adamnemecek
What are the mature libraries? Use PyCall and whatever your heart desires.

~~~
abakus
My heart desires Python, then.

~~~
adamnemecek
Julia consumes Python and gives you so much more.

