Hacker News
TensorFlow 1.0 Released (googleblog.com)
647 points by plexicle 129 days ago | 73 comments



Been using TensorFlow embedded in a mobile app for a few months and honestly, I'm constantly surprised at how well thought-out the tooling is, and how quickly you can get results. That said, a few things are still unnecessarily dense (installing dependencies, optimizing hyper-parameters), and some of the embedded/XLA stuff is very raw. Kudos to the team though. It sounds like they're on the right track with TF overall, and focusing on performance (including the XLA stuff) + ease of use (the high-level Keras API) is absolutely what I want as a user right now. Keep up the great work, y'all.


Would you happen to know if it requires additional code to support the Hexagon digital signal processor (DSP) from Qualcomm, or is it automatic (kind of like switching between TensorFlow-CPU and TensorFlow-GPU)? I mainly work with TensorFlow on a PC, so I'm not too familiar with the embedded variants of TensorFlow. Thanks!


I don’t have any experience with that unfortunately. I’ve seen a couple of talks/demos/announcements about it and it sounds like it’s automatic, but I haven’t been able to find the SDK or any tutorial for it, so I’m not 100% sure. The Qualcomm speaker this morning said there would be more details about it later today but I don’t see anything on the Agenda [0]. Maybe Pete’s session at 12:40 will cover it?

[0]: https://events.withgoogle.com/tensorflow-dev-summit/agenda/#...


I'm not going to cover it in detail in my talk, but the code with some barebones documentation is available at https://github.com/tensorflow/tensorflow/tree/master/tensorf...

One thing to note is that this isn't available on production phones yet, because we need a signed driver to run within Android. You should be able to run this on a Dragonboard 820 development board though, using the instructions in the README.

This is all very new though, so apologies in advance for any hiccups getting up and running. My email's petewarden at google.com if you are trying this and hit problems.


Thanks Tim and Pete!


Thanks Pete :)


How do you embed in a mobile app?


Essentially there are two ways to do this. The “old” way is to export your TensorFlow neural network into a protobuf file, then load up the TensorFlow interpreter in your iOS/Android app, feed it the neural net, and run the inference directly on device. The GitHub repo [0] has a good set of examples of what that looks like in practice.

The new, still experimental way is to compile your neural net into executable code with their XLA / tfcompile tool, and link that into your app. They are adding more docs on this on the TensorFlow website [1].

[0]: https://github.com/tensorflow/tensorflow/tree/master/tensorf...

[1]: https://www.tensorflow.org/versions/master/experimental/xla/...


Running neural nets on a standard mobile device will be game-changing. I can't wait for mobile devices to have custom chips for AI-related tasks.

I'm predicting that within a decade we'll have offline speech and image recognition running on the phone.


They don't want you to go there. Remember who started the latest big AI projects (Google, Amazon, Microsoft, Facebook); I don't think they will stop grabbing data.

I think they'll develop a hivemind, where each mobile device adds to the pool. In short: Skynet ;)


Hopefully the ASICs are fast and open enough that we'll see some open AIs that respect our privacy and security concerns.


Which app, may I ask?


Not released yet, should go live in April/May!


Amazing work; it really makes AI and deep learning accessible to everyone. If you haven't seen it, check this out for an intro:

https://www.youtube.com/watch?v=vq2nnJ4g6N0

I wish AMD graphics cards were fully supported. I really think AMD should find a way to work with the TensorFlow team on this...


It's worth pointing out Tensorflow is basically Google's clone of Theano, including a lot of the same design decisions. They've improved some things but it's not like Google handed us the secret to fire here. It's just a good implementation of the same things a lot of people have been working on for years.


TensorFlow is not a clone of Theano. It's based on Google's earlier platform DistBelief, mostly known outside of Google as the engine behind the 2012 YouTube cat-recognition paper. Like DistBelief, TensorFlow was designed from the ground up to scale across multiple nodes.

Theano, on the other hand, seems focused on optimizations for single-machine, single-GPU code. It only recently gained the ability to run each function on a different GPU.


To be honest, it doesn't matter either way, or even if there is something "better" out there (even if Theano were...).

TensorFlow has already become the winner, from my reading around it, so I'm going to keep learning it rather than another framework until I've become fairly proficient. By which time, why change?


TensorFlow does not make AI or DL "more accessible". It's not easier to use than Theano. Both have good documentation, and both have lots of code examples/model implementations.

If you're looking for something that would make it easier for you to learn DL, you should try Keras - it's a higher-level library that can use either Theano or TF as a backend.


DistBelief was a CPU-only special-purpose neural network system that would have been difficult to modify to support arbitrary neural architectures like Theano does. TensorFlow is not based on DistBelief in any meaningful way other than that they were written by mostly the same people.


I agree 100%. I'm not sure what AMD is thinking, but without support from major ML tools there is no chance of competing against NVidia in this space - and this space will grow larger and larger.


I totally agree with you as well. I was looking for a new graphics card and was debating between the GTX 1050 and the RX 480. I ended up getting the 1050 since it has CUDA and cuDNN support, even though the RX 480 has better specs.


For anyone updating to 1.0--

There are quite a few breaking changes but there is a very helpful conversion script here: https://github.com/tensorflow/tensorflow/tree/r1.0/tensorflo....

You can find the breaking changes in the 1.0 release here: https://github.com/tensorflow/tensorflow/releases/tag/v1.0.0
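To give a flavor of what that conversion script automates: many of the 1.0 breaks are mechanical renames (e.g. tf.mul → tf.multiply, tf.sub → tf.subtract). The snippet below is a hypothetical, much-simplified stand-in for the real script; `upgrade_source` and its rename table are made up for this sketch and cover only a handful of the actual changes.

```python
import re

# A small sample of the 1.0 renames (the real upgrade script in the
# repo covers many more, including keyword-argument changes).
RENAMES = {
    "tf.mul": "tf.multiply",
    "tf.sub": "tf.subtract",
    "tf.neg": "tf.negative",
    "tf.pack": "tf.stack",
    "tf.unpack": "tf.unstack",
}

def upgrade_source(src):
    """Naively rewrite pre-1.0 TensorFlow calls to their 1.0 names."""
    for old, new in RENAMES.items():
        # The \b keeps "tf.mul" from also matching inside "tf.multiply".
        src = re.sub(re.escape(old) + r"\b", new, src)
    return src

print(upgrade_source("y = tf.mul(a, tf.neg(b))"))
# -> y = tf.multiply(a, tf.negative(b))
```

The real script also rewrites keyword arguments (e.g. reduction_indices → axis), which a plain textual rename like this can't handle.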


This is good; the argument order and axis handling were inconsistent.


> Plus, soon Google will open-source code that will multiply the speed of TensorFlow — specifically version three of Google’s Inception neural network model — by 58.

Uh, nope, that was the speedup on 64 GPUs (or CPU cores, I can't remember). I.e., it scales nearly linearly, something TF hasn't always been good at vs. other frameworks. I'm amazed a journalist with (I assume) basic technical competence could make this mistake.


TensorFlow 1.0 was just announced during the TensorFlow Dev Summit keynote.

https://github.com/tensorflow/tensorflow/releases/tag/v1.0.0

You can follow the Summit live here: https://www.youtube.com/watch?v=LqLyrl-agOw


How do I get started with machine learning?

I have a couple of applications in mind, mostly time series predictions. But the machine learning field seems to be vast and I don't know where to start.


This is a good introduction focused on tensorflow. https://www.youtube.com/watch?v=vq2nnJ4g6N0 (Tensorflow and deep learning - without a PhD by Martin Görner)

The ML/DNN rabbit-hole goes deep. If the video above leaves you wanting more, http://www.deeplearningbook.org/ does a good job on drilling into more specifics for the various techniques used. The examples on the tensorflow webpage are also very good.


http://cs231n.github.io/ is a great site for beginners. I've been following it alongside the Udacity Self-Driving Car nanodegree. The CS231n material has helped me understand the concepts significantly.

Edit: I should mention that the class mainly focuses on neural networks and image recognition. However, once you have the foundation, you can apply your skillset to a vast range of applications.


The Udacity ML course is gradual enough to avoid overwhelming you, but really in-depth: https://www.udacity.com/course/machine-learning--ud262

Definitely recommend that as a good starting point. Isbell and Littman can be a bit cheesy at points, but they're very clear and thorough.


Start with statistics. Seriously, just google time series modeling (this seems ok for a beginner https://www.analyticsvidhya.com/blog/2015/12/complete-tutori...). Learn ARMA/ARIMA/etc.

Don't worry that just because it isn't using deep nets that it isn't state of the art or won't get the job done well. That would be like thinking python's built-in sort function isn't sufficient because it doesn't use Spark.
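To make the "start with statistics" advice concrete: an AR(1) model, the simplest member of the ARMA family, can be fit with plain least squares. This is an illustrative pure-Python sketch (`fit_ar1` and `forecast_ar1` are hypothetical helpers, not from any library); for real work, use something like statsmodels as the linked tutorial does.

```python
import random

def fit_ar1(series):
    """Least-squares estimate of phi in x[t] = phi * x[t-1] + noise."""
    num = sum(series[t] * series[t - 1] for t in range(1, len(series)))
    den = sum(series[t - 1] ** 2 for t in range(1, len(series)))
    return num / den

def forecast_ar1(last_value, phi, steps):
    """Iterate the fitted recurrence to predict future values."""
    preds = []
    for _ in range(steps):
        last_value = phi * last_value
        preds.append(last_value)
    return preds

# Simulate an AR(1) process with phi = 0.8 and recover the coefficient.
random.seed(0)
x = [0.0]
for _ in range(5000):
    x.append(0.8 * x[-1] + random.gauss(0, 1))
phi_hat = fit_ar1(x)
print(round(phi_hat, 2))  # close to 0.8
```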


The next course:

Deep learning is primarily a study of multi-layered neural networks, spanning over a great range of model architectures. This course is taught in the MSc program in Artificial Intelligence of the University of Amsterdam. In this course we study the theory of deep learning, namely of modern, multi-layered neural networks trained on big data. The course focuses particularly on computer vision and language modelling, which are perhaps two of the most recognizable and impressive applications of the deep learning theory.

http://uvadlc.github.io/


I would recommend starting with a spreadsheet-sized dataset (no more than a few thousand records) where you want to predict one of the columns; use binary decision trees to try to predict its value. Use either Azure ML Studio, or Jupyter with the scikit-learn library, depending on your comfort level with programming.
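To show what a binary decision tree actually does at each node, here is a depth-1 tree (a "decision stump") in plain Python. `best_stump` and the toy dataset are made up for this sketch; it just makes the split-on-one-column idea concrete before you hand things over to a real library.

```python
def best_stump(rows, labels):
    """Find the (feature, threshold) split that best separates two classes.

    A depth-1 binary decision tree: every record goes left or right of one
    threshold on one column, and each side predicts its majority label.
    """
    best = None
    best_errors = len(rows) + 1
    for f in range(len(rows[0])):
        for threshold in sorted({r[f] for r in rows}):
            left = [labels[i] for i, r in enumerate(rows) if r[f] <= threshold]
            right = [labels[i] for i, r in enumerate(rows) if r[f] > threshold]
            errors = 0
            for side in (left, right):
                if side:
                    majority = max(set(side), key=side.count)
                    errors += sum(1 for y in side if y != majority)
            if errors < best_errors:
                best_errors = errors
                best = (f, threshold)
    return best

# Toy "spreadsheet": columns are [age, income]; label 1 = bought, 0 = didn't.
rows = [[25, 30], [51, 45], [47, 80], [32, 95], [38, 60], [29, 38]]
labels = [0, 0, 1, 1, 1, 0]
feature, threshold = best_stump(rows, labels)
print(feature, threshold)  # -> 1 45  (split on income at 45)
```

A real decision tree recurses: each side of the split gets its own stump, and so on, which is what scikit-learn's DecisionTreeClassifier handles for you.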


The r/MachineLearning subreddit has a pretty good wiki to get started.

https://www.reddit.com/r/MachineLearning/wiki/index


if you are based in the SF Bay Area you can come to Data Weekends (www.dataweekends.com). They are 2-day workshops to get started with Machine Learning and Deep Learning (full disclosure: I run them)


I am looking at the Martin Wicke talk. The Estimator API is very reminiscent of SparkML. Nice to see that the tensorflow crew are flexible enough to take good ideas from projects such as SparkML and Keras (now included natively in the TF stack). Other highlights include the hotspot compiler (I was not that impressed so far, but it's early days for them), and embedded visualizations (looked quite cool) for visually inspecting learnt manifolds.


I stumbled across a three-chapter preview of the upcoming book Learning TensorFlow on Safari Books Online and went through them in a sitting. It was so accessible - both the book and TensorFlow itself - and inspired me to start learning math so that when the rest of the book comes out I will be better prepared to go deeper. I love learning in general, but haven't been this excited about learning something totally new (for me) in a long time.


Do you have a link to the said book?



Yes, that's the one.


Does anyone use TensorFlow models in C++ applications? Is it possible to build TensorFlow as a static or shared lib?


Not necessarily about the article so may get downvoted, but is there a good book for TensorFlow/Machine Learning?



Good book to learn the deep learning concepts. The official tensorflow tutorials are also good to learn the programming part which is not covered in the book.


A bit dated in terms of code, but good - even beyond deep learning: https://www.amazon.com/TensorFlow-Machine-Intelligence-hands...


Kudos to the team. Anybody know if we can train in languages other than Python yet (or do I have that wrong)?


I'll discuss this a bit during my talk at the dev summit.

The short answer is no.

The long answer is yes, but only if you create the model in Python, export it, and then feed training data in other languages. There are some people doing exactly that.

Long term, I'd like to give all languages equal footing, but there's quite a bit of work left.


Forgive my ignorance, but why is it that it is Python-only?

Does Python have intrinsic qualities that other languages don't possess or is it that the huge initial investment in creating TensorFlow was based on Python and duplicating that effort somewhere else would require too much work?


Yeah, essentially a lot of supporting libraries were written in Python at first, and they need to be ported to C++ to make other languages train.


Traditionally, most neural network architectures have been implemented in C/C++ for performance reasons. But ML researchers are not hackers, for the most part, and Python has the lowest impedance mismatch for interfacing with C/C++ of all the major languages. Julia was popular for a bit, but now Python is dominant. Programs tend to be very small, and not modular, so static type checking is less important than it would be for picking up errors in larger systems.
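A toy illustration of that low-friction C interfacing: Python's stdlib ctypes can call straight into a compiled C library in a few lines, with no build step. (This sketch assumes a POSIX-ish system where the C standard library can be located; on Windows the library name differs.)

```python
import ctypes
import ctypes.util

# Locate and load the platform's C standard library. On Linux,
# CDLL(None) falls back to the symbols of the running process.
libc = ctypes.CDLL(ctypes.util.find_library("c") or None)

# Declare strlen's C signature, then call it directly from Python.
libc.strlen.argtypes = [ctypes.c_char_p]
libc.strlen.restype = ctypes.c_size_t

print(libc.strlen(b"tensorflow"))  # -> 10
```

Frameworks like TensorFlow use heavier bridges (pybind/SWIG-style wrappers), but the basic story is the same: Python drives, C/C++ does the numeric heavy lifting.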


It's not just the lowest impedance mismatch: it's also a framework coming out of Google, where Python and Java were really the only two language choices for a high-level interface, and of the two, Python is the clear winner in prototyping / scientific community acceptance. I think it's because of the ease of experimentation and the expressiveness of the language.


From the page:

-----------------------------------------------------

Language options

TensorFlow comes with an easy-to-use Python interface and no-nonsense interfaces in other languages to build and execute computational graphs. Write stand-alone TensorFlow Python, C++, Java, or Go programs, or try things out in an interactive TensorFlow iPython notebook where you can keep notes, code, and visualizations logically grouped. This is just the start though — we're hoping to entice you to contribute interfaces to your favorite language — be it Lua, JavaScript, or R.

--------------------------------------------------------


Yes, though last I remember reading about this the symbolic differentiation only worked in python, and ergo training with other languages wasn't quite there. I think the language on the page was always similar to the above.
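For anyone wondering what differentiation machinery like that involves, here is a minimal forward-mode automatic differentiation sketch using dual numbers. This is purely illustrative; TensorFlow differentiates its computation graph symbolically (reverse mode), and the `Dual` class here is a made-up toy.

```python
class Dual:
    """Forward-mode automatic differentiation via dual numbers.

    Carries a value and its derivative together, so evaluating f(Dual(x, 1))
    yields both f(x) and f'(x) with no symbolic algebra or finite differences.
    """
    def __init__(self, value, deriv=0.0):
        self.value = value
        self.deriv = deriv

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.value + other.value, self.deriv + other.deriv)

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # Product rule: (uv)' = u'v + uv'
        return Dual(self.value * other.value,
                    self.deriv * other.value + self.value * other.deriv)

def f(x):
    return x * x * x + x * 2  # f(x) = x^3 + 2x, so f'(x) = 3x^2 + 2

x = Dual(3.0, 1.0)  # the derivative of x with respect to itself is 1
y = f(x)
print(y.value, y.deriv)  # -> 33.0 29.0
```

The point of the poster's comment is that this kind of machinery lived only on the Python side of TF at the time, which is why graph *construction* (and hence training) from other languages lagged behind.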


I believe there are bindings for C++, Java, Rust, Haskell and Go.


To load graphs and run sessions. Constructing graphs is (still) another story.


Well there's Gorgonia[0] (shameless promo: I wrote it). It's like TF/Theano. I'm finishing up porting/upgrading the CUDA related code from the older version (long story short: I needed a dependency parser and so I hacked on CUDA stuff and now I'm paying the price for not properly engineering it)

[0]: https://github.com/chewxy/gorgonia


There's an official "experimental" Java version on GitHub, and many people are using and committing to it.


Could MacBook Pros (with Intel HD Graphics 3000, 384 MB, to be more specific) train with the GPU? I've always wanted to train models, but without using the GPU it is really slow.


I doubt the integrated Intel card would be supported, and even if it were, using the CPU would be just as good if not better. A lot of the high performance you see on GPUs is because of highly optimized libraries available for Nvidia cards (like cuDNN) and so on.


The tensorflow developer summit is being streamed live right now on youtube: https://www.youtube.com/watch?v=LqLyrl-agOw


Great news. I have several TensorFlow examples in a new book I am writing. I need to read up on the new higher-level APIs, and can hopefully shorten the book's example programs.


I wish so much for experimental APIs compatible with .Net stuff. Mostly because I want to use it with F#.

I really like Python, but F# <3


Even if you don't care about machine learning, TensorFlow's XLA is amazing for farming code out to the GPU. GPGPU has never been easier.


Ahh, so that's why the 1.0rc docs started to 404 an hour ago. Had me cursing under my breath :)


For a complete beginner, what kinds of applications can I work on using TensorFlow?


One of the "Hello, World" applications would be learning to classify MNIST digits. They have a tutorial on their site.
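That tutorial trains a softmax regression over 784 pixels and 10 classes; the same idea in miniature is a one-feature, two-class logistic regression. The sketch below uses plain Python (hypothetical helper names, no TensorFlow) just to show the training loop the tutorial wraps in graph ops.

```python
import math
import random

def train_logistic(xs, ys, lr=0.1, epochs=200):
    """Toy one-feature logistic regression trained by gradient descent.

    The MNIST beginner tutorial does the same thing in 784 dimensions
    with 10 classes (softmax); this is the 1-D, 2-class version.
    """
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))  # sigmoid
            # Gradient of the cross-entropy loss w.r.t. w and b.
            w -= lr * (p - y) * x
            b -= lr * (p - y)
    return w, b

# Separable toy data: points below zero are class 0, above zero class 1.
random.seed(1)
xs = ([random.uniform(-2, -0.5) for _ in range(20)]
      + [random.uniform(0.5, 2) for _ in range(20)])
ys = [0] * 20 + [1] * 20

w, b = train_logistic(xs, ys)
accuracy = sum(
    (1 if 1.0 / (1.0 + math.exp(-(w * x + b))) > 0.5 else 0) == y
    for x, y in zip(xs, ys)
) / len(xs)
print(accuracy)
```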


Is there a good birds-eye overview of what people are creating with TF or ML in general?



We changed the URL from https://www.tensorflow.org/, which doesn't say anything about 1.0, to an article which gives a bit of background. If someone suggests a better URL we can change it again.


The announcement post was just published: https://research.googleblog.com/2017/02/announcing-tensorflo...




The URL when I first clicked this was the GitHub release notes, which is far more informative and apropos to the HN audience than either the TF landing page or vague VentureBeat pseudonews.


We changed it to that page for a few seconds, but it's pretty low-level for a 1.0 release. The official blog post seems best.



