TensorFlow Simplified Interface Examples (github.com/tflearn)
105 points by nrooot on April 21, 2016 | 16 comments



What's a good course (Udemy, etc.?) to learn about AI / deep learning basics? (Please don't pounce on me with things like "AI is not Deep Learning!")

I'm not even sure where to begin with understanding this stuff, although it's just fascinating.


Geoffrey Hinton's "Neural Networks for Machine Learning" on Coursera [0] is a pretty excellent course covering the basics. There's a lot in the course, but if you just skim the videos you'll get a pretty good "big picture" view of what's out there. As with any quantitative topic, it's best to take a first pass where you just glance at the math, then come back later to really focus on the missing pieces.

An important thing to realize is that much of deep learning is decades-old neural network research that has, for one reason or another, become more viable recently.


Someone posted this on another thread which is good (I've only got to chapter 3 and have had to detour to the Khan Academy to refresh my maths): http://neuralnetworksanddeeplearning.com/


Hey, I signed up just to reply! If you want to begin to understand this stuff for fun, I recommend the Udacity cluster of data science courses - intro to data science, machine learning, and eventually deep neural networks. I took AI at Stanford and learned a TON, but for getting your hands dirty fast, the Udacity courses are more appropriate. They have a bunch, and they are well segmented, so if you want to divert into e.g. data visualization, there's a class for that.

Your learning will culminate in this course: https://www.udacity.com/course/deep-learning--ud730


This series by Josh Gordon started recently and looks promising. I assume it will eventually cover neural network / deep learning basics.

Hello World - Machine Learning Recipes #1 https://www.youtube.com/watch?v=cKxRvEZd3Mw

Visualizing a Decision Tree - Machine Learning Recipes #2 https://www.youtube.com/watch?v=tNa99PG8hR8


Prereqs: Stats, Multivariable Calc, Linear Algebra

http://cs231n.stanford.edu/


The classic cs229 Machine Learning course is also pretty great, if more mathematically dense: http://cs229.stanford.edu


Good one. A question on TF: how does TF score over other DL tools in terms of functionality? I know it is designed for distributed execution and has easy Python APIs, but I am not able to find advantages of TF on the functionality side. What can I do in TF that I would not be able to do in Torch, etc.?


This evaluation of deep-learning toolkits should be useful:

https://github.com/zer0n/deepframeworks


FYI, TF now supports 1) dynamic RNNs and 2) bidirectional RNNs, and will soon have 3D convolution -- that comparison is a little out of date.
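
For anyone curious what dynamic RNN support looks like in practice, here's a minimal sketch; I'm assuming the tf.nn.dynamic_rnn / tf.nn.rnn_cell API of this era, and exact signatures have moved around between releases:

    import tensorflow as tf

    # Hypothetical shapes: a batch of 32 sequences, up to 20 steps, 8 features.
    inputs = tf.placeholder(tf.float32, [32, 20, 8])
    seq_len = tf.placeholder(tf.int32, [32])  # true length of each sequence

    cell = tf.nn.rnn_cell.BasicLSTMCell(64)
    # dynamic_rnn unrolls at run time, so variable-length sequences don't
    # each need their own statically unrolled graph.
    outputs, state = tf.nn.dynamic_rnn(cell, inputs, sequence_length=seq_len,
                                       dtype=tf.float32)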


One nice feature is that TensorFlow includes TensorBoard, a bundle of visualization tools (e.g. model visualizer and various charts). You can see an example here: https://www.tensorflow.org/tensorboard (try clicking around the tabs on the top).

(Disclaimer: I work on TensorBoard)
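
For reference, getting data into TensorBoard is only a few lines. A minimal sketch using the summary API as I remember it from this era (tf.scalar_summary and tf.train.SummaryWriter; these names have been shuffled between releases, so treat them as an assumption):

    import tensorflow as tf

    # Toy scalar to log; in a real model this would be your loss tensor.
    step_var = tf.Variable(0.0)
    loss = tf.square(step_var - 3.0)

    tf.scalar_summary('loss', loss)   # becomes one chart in TensorBoard
    merged = tf.merge_all_summaries()

    with tf.Session() as sess:
        writer = tf.train.SummaryWriter('/tmp/tb_demo')
        sess.run(tf.initialize_all_variables())
        for step in range(10):
            sess.run(step_var.assign(float(step)))
            writer.add_summary(sess.run(merged), step)
    # Then run: tensorboard --logdir=/tmp/tb_demo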


I think this is great for beginners or people who are learning, for sure.

One thing that is consistently frustrating about these kinds of projects, for machine vision specifically, is that they don't even give you a framework for building your own training sets - which in my view is the most valuable thing you can have. Sure, if I want to find cats, flowers, etc. in my program, fine, but for the vast majority of possible machine vision tasks there are specific things you need to find that aren't in the standard trained datasets.

Sure wish "trained network as a service" was a thing.
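
There's nothing magic about the standard datasets, though - rolling your own is mostly plumbing. A minimal sketch, assuming a folder-per-class layout on disk and nothing framework-specific beyond PIL/numpy:

    import os
    import numpy as np
    from PIL import Image

    def load_dataset(root, size=(32, 32)):
        """Build (X, y) arrays from a root/<class_name>/<image> layout."""
        images, labels = [], []
        classes = sorted(os.listdir(root))
        for idx, cls in enumerate(classes):
            for fname in os.listdir(os.path.join(root, cls)):
                img = Image.open(os.path.join(root, cls, fname)).convert('RGB')
                images.append(np.asarray(img.resize(size), np.float32) / 255.0)
                labels.append(idx)
        return np.stack(images), np.array(labels), classes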


I'm curious - glancing at the code I can't tell immediately - how does the DNN (Deep Neural Network) trainer differ from traditional backpropagation? Does it train layer by layer using a kind of successive autoencoder approach? (If that makes any sense..)

I'm curious whether this would perform differently than e.g. Keras on deeper networks.


No, they all use the same general principle of backpropagation for training. Different flavours of optimizers exist, with different tweaks and additions to speed up training.

Relevant file in project: https://github.com/tflearn/tflearn/blob/0.1.0/tflearn/optimi...
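
To make "different flavours of optimizers" concrete, here's a minimal sketch of swapping optimizers on the same toy loss in raw TF (my understanding is the tflearn file above wraps optimizers along these lines):

    import tensorflow as tf

    # All of these minimize the same loss via backpropagation; they differ
    # only in how the computed gradients become parameter updates.
    w = tf.Variable(5.0)
    loss = tf.square(w)  # toy loss with its minimum at w = 0

    train_op = tf.train.MomentumOptimizer(0.1, momentum=0.9).minimize(loss)
    # Alternatives: tf.train.GradientDescentOptimizer(0.1).minimize(loss)
    #               tf.train.AdamOptimizer(0.001).minimize(loss)

    with tf.Session() as sess:
        sess.run(tf.initialize_all_variables())
        for _ in range(200):
            sess.run(train_op)
        print(sess.run(w))  # approximately 0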


So it's not common to use a layer-by-layer training approach for deep nets? I thought that was one of the main things that made a huge difference and enabled the "deep" revolution. Anyway, aren't vanishing gradients still a problem? If so, how do people use these frameworks for deep nets? If not, how was the problem resolved? I thought vanishing gradients were an issue for anything with more than 2 or 3 layers.
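
To make the worry concrete: backprop multiplies one local derivative per layer, and a sigmoid's derivative is at most 0.25, so with saturating units the gradient shrinks geometrically with depth. A toy numpy sketch, with illustrative numbers only:

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    # Chain rule through 20 sigmoid layers (weights of 1 for simplicity):
    # each layer multiplies the gradient by sigmoid'(a) = s * (1 - s) <= 0.25.
    a, sig_grad, relu_grad = 0.5, 1.0, 1.0
    for _ in range(20):
        s = sigmoid(a)
        sig_grad *= s * (1 - s)  # shrinks the gradient every layer
        relu_grad *= 1.0         # an active ReLU passes gradient unchanged
        a = s
    print(sig_grad)   # ~1e-13: effectively vanished
    print(relu_grad)  # 1.0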


AFAIK this and Keras use identical mathematical methods: backpropagated SGD with some momentum ("inertia") scheme.
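
To spell out that "inertia" (momentum) scheme, a minimal pure-Python sketch of the update rule, v <- mu*v - lr*grad followed by w <- w + v:

    def momentum_step(w, v, grad, lr=0.01, mu=0.9):
        # v is a decaying running sum of past gradients ("inertia")
        v = mu * v - lr * grad
        return w + v, v

    # Toy problem: minimize 0.5 * w**2, whose gradient is just w.
    w, v = 5.0, 0.0
    for _ in range(200):
        w, v = momentum_step(w, v, grad=w)
    print(w)  # approaches 0

As for the vanishing-gradient question above: in practice it is mitigated these days less by layer-wise pretraining than by ReLU-style activations and careful weight initialization.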



