
TensorFlow 2.0 is coming - albertzeyer
https://groups.google.com/a/tensorflow.org/d/topic/announce/qXfsxr2sF-0/discussion
======
heydenberk
> Since the open-source release in 2015, TensorFlow has become the world’s
> most widely adopted machine learning framework

I'd wager that non-trivial (non-tutorial?) usage of scikit-learn is
significantly higher.

~~~
andbberger
'computation graph framework' isn't as sexy, I guess

------
thosakwe
The #1 thing I would love to see in future versions of Tensorflow is a better
C API, with documented support for gradients, which are essential for porting
Tensorflow to other languages.

As someone who is not very familiar with Python, I found it very difficult to
port code over, so I've mostly paused my porting effort for the time being.

Other than that, I quite like Tensorflow, and intend to use it more and more
as time goes on.

~~~
kodablah
Amen, especially for the training side. Python and C++ leave a lot to be
desired. Extrapolating from this quote:

> Support for more platforms and languages, and improved compatibility and
> parity between these components via standardization on exchange formats and
> alignment of APIs

I hope that means a solid C API, but it might also mean something higher
level, e.g. protobuf/gRPC.

------
newfocogi
I feel like I can see the fingerprints of pytorch on the direction TF
leadership is choosing to go. "Eager execution" and "easier to learn and
apply" are where pytorch steals market share from TF, resulting in moving
towards a more "dynamic" graph model.

~~~
pacala
The pytorch graph model is not as dynamic as you think it is, because of
batching. All batch elements must be processed by the exact same code trace,
which limits per-element dynamism. The dynamic aspect of pytorch is an
implementation detail for delivering autograd over a trace of tensor
operations.

What pytorch does very well is play nicely with python/numpy semantics,
especially via broadcasting[0]. There is very little cognitive overhead
between a pytorch program and its equivalent python/numpy representation.

What tensorflow does very well is execute computational graphs on a wide
variety of backends: one gpu, many gpus, distributed gpus, servers, mobile,
browsers. The recently announced autograph[1] merges the clarity of the
python/numpy coding style with the tensorflow execution engine, offering
autograd via compile-time abstract interpretation over all possible traces.

[0]
[https://docs.scipy.org/doc/numpy/user/basics.broadcasting.ht...](https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)

[1]
[https://www.tensorflow.org/guide/autograph](https://www.tensorflow.org/guide/autograph)
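A minimal sketch of what that conversion looks like (assuming TF 2.x; the
function name and body here are illustrative, not from the docs):

```python
import tensorflow as tf

@tf.function  # autograph rewrites the Python control flow below into graph ops
def positive_sum(x):
    total = tf.constant(0.0)
    for i in tf.range(tf.shape(x)[0]):  # converted to a tf.while_loop
        if x[i] > 0:                    # converted to a tf.cond
            total += x[i]
    return total
```

The same source runs eagerly if you drop the decorator, which is the
python/numpy-clarity point being made above.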

~~~
h4b4n3r0
Multi GPU in PyTorch is usually just a few lines of code. Can’t be any easier.
Wrap model in DataParallel and enjoy life. Last time I touched TF, things were
pretty gnarly for this frequent use case. Don’t know how it is now.
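The pattern being described is roughly this (a sketch assuming a recent
PyTorch; with no GPUs visible, DataParallel simply runs the wrapped module
on one device):

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 2)
model = nn.DataParallel(model)   # splits each input batch across visible GPUs
out = model(torch.randn(8, 10))  # forward call is unchanged
```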

------
newfocogi
The removal of contrib is surprising but probably necessary. It will be
interesting to see which projects get integrated vs removed. I'll be watching
bayesflow and keras.

~~~
grandmczeb
Isn't bayesflow already a separate project (i.e. tensorflow-probability[1])?

[1]
[https://github.com/tensorflow/probability](https://github.com/tensorflow/probability)

------
infocollector
AMD Graphics card support please?

~~~
jlebar
We are (well, AMD is) working on it. It's all being done in the open; there
are PRs you can follow. You can even help out if you're so inclined.

It's not going to be simple or fast, but it's coming...

(I work on XLA, a compiler for TensorFlow, and I've been working closely with
AMD on the TF/XLA -> AMDGPU port. My team also works on CUDA support in clang,
and we've been reviewing AMD's patches to support HIP in upstream clang.)

------
mark_l_watson
I wonder what new language bindings they will support. I have experimented
with saving Keras models, converting to Racket Scheme, and writing a runtime.
It would be way better to have it officially supported.

Haskell support has worked OK for a while. The languages I would most like to
see supported are common implementations of Common Lisp like SBCL and Clozure.
I think there is some future for hybrid connectionist and symbolic AI and it
would thrill me to have Common Lisp support for TensorFlow.

~~~
mi_lk
On a related note, there's a Clojure API for MXNet (another major deep
learning framework).

[https://mxnet.incubator.apache.org/api/clojure/index.html](https://mxnet.incubator.apache.org/api/clojure/index.html)

~~~
mark_l_watson
Thanks, that looks very good. Unfortunately Clojure and I don’t really click.
I have about two years professional experience with Clojure but except for my
site [http://Cookingspace.com](http://Cookingspace.com) I don’t much use
Clojure for personal projects. I prefer Common Lisp. For functional
programming, I would substitute Haskell for Clojure except my Haskell skills
are so-so.

------
lukem567
Any chance of being able to build models in Go (instead of just running them)
in Tensorflow 2.0?

------
CaliforniaKarl
I just hope that it's easier to build from source. We need to support
multiple TF versions, and since we have shared storage, we don't install
relative to the system root; we install relative to a TF-version-specific
root path (like /software/open/tensorflow/2.0). That's been fairly annoying
to do on CentOS.

~~~
rerx
Is Python virtualenv not an option?
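For the shared-storage layout described above, one environment per TF version
is straightforward with the stdlib venv module (a sketch; the path is
illustrative, and a relative path is used so it runs anywhere):

```python
# Create one isolated environment per TF version; in practice the target
# would be the version-specific root, e.g. /software/open/tensorflow/2.0.
import venv

venv.create("./tf-venvs/2.0", with_pip=True)
# then, e.g.: ./tf-venvs/2.0/bin/pip install "tensorflow==2.0.*"
```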

------
polkadotted
Meanwhile, TF (or pytorch) never managed to get into debian :(.

~~~
IshKebab
You'd be perpetually 50 releases out of date if it did.

~~~
polkadotted
As I follow unstable, I couldn't care less about the final releases.

