
Learn TensorFlow and deep learning, without a Ph.D. - ShanaM
https://cloud.google.com/blog/big-data/2017/01/learn-tensorflow-and-deep-learning-without-a-phd
======
westoncb
I spent some time learning the high level concepts first—which I found to be a
very useful initial orientation—but recently I've wanted to solidify my
foundations and learn the math properly.

I've found the math primer (two sections: "Linear Algebra" and "Probability
and Information Theory") in this free book to be excellent so far:
[http://www.deeplearningbook.org/](http://www.deeplearningbook.org/) It's a
little under 50 pages for both sections.

I've seen the basics of linear algebra covered in many different places, and I
think this is the most insightful yet concise intro I've come across. I
haven't started the probability section yet, so I can't comment on it.

~~~
mx12
I have been doing the same thing. I've also augmented each section with video
lectures from Khan academy or other sources. For instance, their videos on the
Jacobian were excellent for getting an intuitive understanding of it [1].

I also search for problems on the topic to help solidify my knowledge. You can
almost always find a class that has posted problems for a section with
answers.

[1] [https://www.khanacademy.org/math/multivariable-calculus/mult...](https://www.khanacademy.org/math/multivariable-calculus/multivariable-derivatives/jacobian/v/jacobian-prerequisite-knowledge)
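
For building that intuition numerically, here is a small pure-Python sketch (my own, not from the Khan Academy videos) that approximates a Jacobian with central differences and checks it against the classic polar-to-Cartesian example:

```python
import math

def numerical_jacobian(f, x, eps=1e-6):
    """Approximate the Jacobian of f: R^n -> R^m at x via central differences."""
    m, n = len(f(x)), len(x)
    J = [[0.0] * n for _ in range(m)]
    for j in range(n):
        xp, xm = list(x), list(x)
        xp[j] += eps
        xm[j] -= eps
        fp, fm = f(xp), f(xm)
        for i in range(m):
            J[i][j] = (fp[i] - fm[i]) / (2 * eps)
    return J

# Polar -> Cartesian: (r, theta) -> (r cos(theta), r sin(theta)).
# The analytic Jacobian is [[cos, -r sin], [sin, r cos]],
# which at (r, theta) = (2, 0) is [[1, 0], [0, 2]].
polar = lambda v: [v[0] * math.cos(v[1]), v[0] * math.sin(v[1])]
J = numerical_jacobian(polar, [2.0, 0.0])
```

Comparing the numerical result against the analytic formula like this is a handy way to check your understanding on any example from those problem sets.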

------
thenomad
There seem to be a few of these "learn deep learning without heavy math"
courses popping up, for which I'm profoundly grateful.

Have people found any of them to be particularly outstanding? I'd be
interested - and I suspect many HN readers would be - to hear recommendations.

~~~
holografix
Andrew Ng's Machine Learning on Coursera is good for building some solid
foundations in ML in general. Keep in mind it's not limited to neural
networks. The math is kinda light, you either seek to understand it or trust
that it works and focus on the ML principles.

~~~
d0vs
> The math is kinda light, you either seek to understand it or trust that it
> works and focus on the ML principles.

This is accurate and also why I dropped out pretty quickly.

------
77pt77
What I'd like is a Learn TensorFlow and deep learning with a
mathematics/physics background.

~~~
deepnotderp
I feel you. Try Nando de Freitas's course; I liked it a lot.

~~~
77pt77
Thank you for the reference. I didn't know of it and will certainly have a
look.

------
blencdr
Coming from a programmer background with very little knowledge of mathematics,
Keras helped me create some quite efficient CNNs without all the code needed
by raw TensorFlow.

This simplicity implies limits for advanced users, but the tool is fantastic
for getting to grips with deep learning.

------
codazoda
I watched 15 minutes of the first video.

"To help more developers embrace deep-learning techniques, without the need to
earn a Ph.D". Oh, good, I can do this.

"These fundamental concepts are taken for granted by many, if not most,
authors of online educational resources about deep learning". Yup, true with
this one as well.

------
loudmax
A lot of the concepts in this talk are introduced very simply in this blog
post that creates a working neural network in 9 lines of Python code:
[https://medium.com/technology-invention-and-more/how-to-buil...](https://medium.com/technology-invention-and-more/how-to-build-a-simple-neural-network-in-9-lines-of-python-code-cc8f23647ca1)

Of course, 9 lines is a little dense, even with numpy. In practice, I got more
understanding out of the slightly longer version that clocks in at 74 lines
including comments and empty lines. This is an enormously simple neural
network: a single layer with just 3 neurons. My son described its intelligence
as being less than a cockroach after it'd been stepped on.

It works though. It's able to accurately guess the correct response for the
trivial pattern it's given. You can follow the logic through so you understand
each simple step in the process. In a follow up blog post, there's a slightly
smarter neural network with a second layer and a mighty 9 neurons.

These examples are very approachable. It's about as simple a neural network as
you can get. If you're new to machine learning, understanding how it works
helps illuminate the more sophisticated networks described in Martin Görner's
presentation.
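
The core of that example can be sketched in plain Python, without even numpy (my own toy rendition, not the blog's exact code): a single sigmoid neuron trained by gradient descent on a pattern where the output always equals the first input.

```python
import math, random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Toy training set in the blog post's spirit: the output equals the first input.
examples = [([0, 0, 1], 0), ([1, 1, 1], 1), ([1, 0, 1], 1), ([0, 1, 1], 0)]

random.seed(1)
weights = [random.uniform(-1, 1) for _ in range(3)]

# Train one sigmoid neuron with plain gradient descent on the squared error.
for _ in range(10000):
    for inputs, target in examples:
        output = sigmoid(sum(w * i for w, i in zip(weights, inputs)))
        error = target - output
        # The derivative of the sigmoid is output * (1 - output).
        for j in range(3):
            weights[j] += error * output * (1 - output) * inputs[j]

# A pattern it has never seen: the first input is 1, so it should answer ~1.
prediction = sigmoid(sum(w * i for w, i in zip(weights, [1, 0, 0])))
```

Stepping through the update loop by hand is exactly the "follow the logic through" exercise the post encourages.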

------
deepnotderp
Is an integral really _that_ hard?

~~~
xyzzy123
Not if you take it a little bit at a time.

~~~
stephengillie
A very little bit at a time - the amount of time between now and then, as now
is about to become then.

------
amelius
Perhaps somebody here can help me with a sideproject that I'm working on. I'm
trying to figure out the topology of a neural network that is capable of
detecting the location and orientation of a given object. Say, a wrench. I
don't want to use heatmaps (e.g. [1]) because they give just the location of
the object and not the orientation. So the problem is basically how to choose
the output quantities and how to encode them. The x and y coordinates of the
head of the wrench could be quantities, but how to encode them? Should I use
multiple output neurons per coordinate? And encoding the orientation is a
similar problem. Would it even make sense to decompose the output in this way?
Thanks in advance!

PS: More generally, is there a guide that explains how to robustly encode real
numbers as output of neurons? I've tried to search for it, but couldn't find
it.

[1] [https://github.com/heuritech/convnets-keras](https://github.com/heuritech/convnets-keras)
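
For what it's worth, one common trick (an illustration of one possible encoding, not an established consensus) is to regress normalized coordinates directly and to encode the orientation as (sin θ, cos θ), which sidesteps the wrap-around discontinuity at 0/2π:

```python
import math

def encode_pose(x, y, theta, width, height):
    """Targets for a regression head: normalized coords plus angle as (sin, cos).

    (sin, cos) avoids the discontinuity of a raw angle at 0/2*pi, so a plain
    L2 loss behaves sensibly near the wrap-around point.
    """
    return [x / width, y / height, math.sin(theta), math.cos(theta)]

def decode_pose(outputs, width, height):
    x_n, y_n, s, c = outputs
    # atan2 also renormalizes if the network's (s, c) drifts off the unit circle.
    return x_n * width, y_n * height, math.atan2(s, c)

x, y, theta = decode_pose(encode_pose(120.0, 45.0, 0.7, 640, 480), 640, 480)
```

With this encoding, four plain linear output neurons suffice; whether that beats a classification-over-angle-bins scheme is a judgment call.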

~~~
pizza
There are algorithms for stuff just like that in OpenCV. Maybe you could find
some inspiration or clarification by reading through the source code for
those algorithms?

~~~
amelius
Yes, that's a good idea, thanks!

But I was hoping for a more scientific answer. Like how do researchers
approach this problem typically? And is there a strong consensus in this area
among researchers?

It seems like such a general problem.

~~~
pizza
Hm, there are problems in the field of computer vision that might help
structure your cost function and training: pose estimation, the
perspective-n-point problem, and point-set registration.

------
dagenleg
So, in other words, learn to slap layers'n'shit together. I thought that's
what everyone is already doing in deep learning.

~~~
martin_gorner
Nope, look at the videos. I try to give as much background information as
possible within 3h. The goal, on the contrary, is to help you understand the
basics so that you can build on a solid foundation rather than slappin' layers
together! Not that slappin' layers'n'shit together is against my religion or
anything though - sounds like fun actually :-)

------
jpgvm
This is probably the most effective 3 hours I have spent trying to get my head
around TensorFlow (and NNs in general, somewhat). Hell, I even get what CNNs
and RNNs -are- now.

Fancy math is useful for explaining why it works.

But this sort of content is good for explaining to engineers -how- it works.
Which is ultimately how I need to understand things before the why is
interesting to me.

------
martin_gorner
I just published the outline of this course as a Twitter moment:
[https://twitter.com/martin_gorner/status/823527027357655041](https://twitter.com/martin_gorner/status/823527027357655041)

------
alfonsodev
I found that Siraj Raval has a great YouTube channel[1] about these topics,
also in the "without a Ph.D" style. He explains dense topics in a fun way
(maybe not for everyone). He also has practical videos [2] for building things
from scratch (in Python) to better understand the basic concepts.

[1]
[https://www.youtube.com/channel/UCWN3xxRkmTPmbKwht9FuE5A](https://www.youtube.com/channel/UCWN3xxRkmTPmbKwht9FuE5A)

[2]
[https://www.youtube.com/watch?v=h3l4qz76JhQ](https://www.youtube.com/watch?v=h3l4qz76JhQ)

------
jkestelyn
BTW, worth mentioning that Martin (the author of this tutorial) will present
it in person at Google Cloud NEXT [1] in SF on March 8.

[1]
[https://cloudnext.withgoogle.com/schedule#target=tensorflow-...](https://cloudnext.withgoogle.com/schedule#target=tensorflow-and-deep-learning-without-a-phd-part-1-f63963b7-1588-4669-9216-a397cc503cd3)

------
thamer
I watched a version of this course a few weeks ago and it has cleared up a lot
of things for me. Martin doesn't waste time on basic concepts and covers a lot
of ground in 3 hours. It's probably the tutorial I learned the most from so
far.

~~~
martin_gorner
thx!

------
agandy
I'm glad to see the shout out to
[http://colah.github.io/](http://colah.github.io/) (even if the linked post is
described as "monster stories")

~~~
martin_gorner
huh, totally unintentional! colah is a colleague. His blog posts are great and
were part of the source material for building these sessions.

------
SilentM68
Any versions of this course planned for R Programmers?

~~~
martin_gorner
Python is one of the simplest languages to learn. I do this course as a hands-
on lab with people who discover Python programming at the same time as they
discover neural networks. I ask them to read this "Python 3 in 15 min" primer
beforehand and they are good to go:
[https://learnxinyminutes.com/docs/python3/](https://learnxinyminutes.com/docs/python3/)

~~~
SilentM68
I appreciate the link. I haven't used Python in a couple of years, so it will
be a good refresher, and I'll look forward to the course.

Thx :)

------
iconjack
Without a Ph.D?! Really?

~~~
martin_gorner
This is of course a matter of opinion, but objectively speaking, the most
complex piece of math is a matrix multiply, which I re-explain anyway. This is
an end-of-high-school level of mathematics.
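
To make that concrete, here is the whole "end-of-high-school" operation in a few lines of plain Python (an illustrative sketch, not code from the course): a layer's output is just each neuron taking a weighted sum of the inputs.

```python
# A neural-network layer boils down to this: outputs = inputs x weights.
def matmul(A, B):
    """Multiply matrix A (p x q) by matrix B (q x r), as plain Python lists."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

X = [[1.0, 2.0]]           # one input example with 2 features
W = [[0.5, -1.0, 0.0],     # 2x3 weight matrix: 2 inputs feeding 3 neurons
     [0.25, 0.5, 1.0]]
Y = matmul(X, W)           # each neuron's weighted sum: [[1.0, 0.0, 2.0]]
```

Everything else in a basic network is adding a bias and applying a nonlinearity elementwise, which is arguably even simpler.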

