
Deep learning papers reading roadmap - kevindeasis
https://github.com/songrotek/Deep-Learning-Papers-Reading-Roadmap
======
annnnd
Missing from the list:
[http://neuralnetworksanddeeplearning.com/](http://neuralnetworksanddeeplearning.com/)

Great book for learning concepts and for getting a generic overview (but goes
deep enough that you can jump straight into implementation if you want). I
recommend it highly.

------
leblancfg
This downloads all the links in that page. Just save README.md to the folder
of your choice:

    sed -ne 's/.*\(http[^")]*\).*/\1/p' < README.md | xargs wget -U 'Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:49.0) Gecko/20100101 Firefox/49.0'
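
If you'd rather not lean on sed and wget, the same extract-and-download idea
can be sketched in Python. This is a hypothetical alternative, not from the
thread; the names `extract_links` and `download_all` are illustrative:

```python
import re
import urllib.request

# Roughly the same capture as the sed expression: an http(s) URL,
# terminated by whitespace, a quote, a ')' or a ']'.
URL_RE = re.compile(r'https?://[^\s")\]]+')

def extract_links(markdown_text):
    """Return unique http(s) URLs in order of first appearance."""
    seen = []
    for url in URL_RE.findall(markdown_text):
        if url not in seen:
            seen.append(url)
    return seen

def download_all(links, user_agent="Mozilla/5.0"):
    """Fetch each link, naming the file after the last path component."""
    for url in links:
        name = url.rstrip("/").rsplit("/", 1)[-1] or "index.html"
        req = urllib.request.Request(url, headers={"User-Agent": user_agent})
        with urllib.request.urlopen(req) as resp, open(name, "wb") as out:
            out.write(resp.read())

# Usage (assumes README.md is in the current directory):
#   with open("README.md") as fh:
#       download_all(extract_links(fh.read()))
```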

~~~
leblancfg
Turns out, there are a bunch of extensionless files in there, too.

    for file in *; do if [ -f "$file" ] && [ "${file:(-3)}" != "pdf" ]; then mv "$file" "${file}.pdf"; fi; done
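
A more robust variant of that rename is to check the `%PDF` magic bytes
instead of trusting the filename, since some of the downloaded files may not
be PDFs at all. A hedged Python sketch (function names are illustrative):

```python
import os

def is_pdf(path):
    """True if the file starts with the PDF magic bytes '%PDF'."""
    with open(path, "rb") as fh:
        return fh.read(4) == b"%PDF"

def add_pdf_extensions(directory="."):
    """Append .pdf to files that are PDFs but lack the extension."""
    renamed = []
    for name in sorted(os.listdir(directory)):
        path = os.path.join(directory, name)
        if os.path.isfile(path) and not name.endswith(".pdf") and is_pdf(path):
            os.rename(path, path + ".pdf")
            renamed.append(name)
    return renamed
```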

------
zk00006
Nice list, but there are too many papers like this and it is easy to get stuck
in theory. I would suggest grabbing some simple neural network code (Darknet
is great for that) and reading that first. If something does not make sense,
find the theory in the papers.

~~~
Scea91
This approach is dangerous in statistics and machine learning. Things often
work on your toy examples, but it may be for totally different reasons than
you think, and that can bite you later. Since it looks to you like it works,
you will never look into the necessary theory. With a 'theory first' approach
you might not get nice results as fast, but I would bet that your
understanding would be much deeper.

I am not saying that you need to read all the papers before coding a line,
but coding your way into it will leave you with shallower knowledge if you
don't follow up with the theory.

~~~
zk00006
In fact there is no correct answer to that and it depends on your current
state of knowledge and needs for your project. It also changes over time.

I would be careful about claiming that a theoretical approach will always give
you better understanding.

~~~
eftychis
I agree with the posts above. Diving straight into 'practice' in statistics
(and other fields, cryptography being the most notorious) leaves you open to
a great many pitfalls. In the best case you will merely be inefficient in
your approaches.

Your examples may work, a couple of test sets may give you high confidence,
and then you attempt to use it in the wild and everything falls apart.

At the same time, machine learning is largely about data cleaning,
bootstrapping, picking the right algorithm with minimum iteration, minimizing
your iteration cycle as much as possible, etc., which you don't gain until
you actually mess around and get your hands dirty. Plus there are little
implementation tidbits specific to each project.

------
yalogin
A bit off topic but what does one need to know/read before doing the Udacity
course on autonomous driving?

~~~
erikgaas
You don't really have to know anything. The current free course does not
have anything to do with neural nets; they use Monte Carlo methods, Kalman
filters, particle filters, PID control, etc. You should feel comfortable with
probability theory, and if you know some linear algebra, the Kalman filter
example is going to make a lot more sense.
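
For a taste of that material: the 1D Kalman filter taught in courses of this
kind boils down to alternating a measurement update (multiplying two
Gaussians) with a motion update (adding means and variances). A minimal
sketch, with illustrative names and values; not code from the course itself:

```python
def kalman_1d(measurements, meas_var, motions, motion_var, mu=0.0, var=1000.0):
    """Track a 1D state by alternating measurement and motion updates.

    mu/var    -- current belief (Gaussian mean and variance)
    meas_var  -- measurement noise variance
    motion_var -- process (motion) noise variance
    """
    for z, u in zip(measurements, motions):
        # Measurement update: the product of two Gaussians is a Gaussian
        # whose mean is a precision-weighted average.
        mu = (var * z + meas_var * mu) / (var + meas_var)
        var = 1.0 / (1.0 / var + 1.0 / meas_var)
        # Motion update: convolving Gaussians adds means and variances.
        mu = mu + u
        var = var + motion_var
    return mu, var
```

With measurements [5, 6, 7, 9, 10], motions [1, 1, 2, 1, 1], meas_var=4,
motion_var=2, and a vague prior (var=10000), the belief ends up at roughly
mu ≈ 11 and var ≈ 4: the uncertainty shrinks as evidence accumulates.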

------
amelius
What are the best video lectures out there, with emphasis on theory (not
coding)?

~~~
lis
I can highly recommend Hinton's course on Coursera:
[https://www.coursera.org/learn/neural-networks](https://www.coursera.org/learn/neural-networks)

It starts on October 31st; I took it a while ago and learned a lot. The
course gravitates towards the theoretical end of the spectrum. If you want
something that's a bit easier to get into, I can recommend the course by Hugo
Larochelle:

[http://info.usherbrooke.ca/hlarochelle/neural_networks/conte...](http://info.usherbrooke.ca/hlarochelle/neural_networks/content.html)

------
markovbling
Great, thanks!

I find the problem is actually how to select what content to study given a
time constraint: if you had 5 hours, or 20 hours, or 200 hours, what should
you read?

An exhaustive list is great, but it's an optimization problem: how do I
maximise understanding subject to a time constraint? Which resources do I
select to maximise learning in x hours/days/years?

~~~
AlexCoventry

> which resources do I select to maximise learning in x hours/days/years?
Pick a particular open-source implementation of something you're interested
in, say the Google image captioning technology, read the papers necessary to
understand that, and play with that implementation.

The main problem with the plan implied by the OP's bibliography is that it's
not practical at all. You'll end up with a catholic knowledge of the theory
and no idea about how to use it.

------
syphilis2
Is there a link to the book mentioned in item 1 (1.0 Book) as a PDF? The
closest I could find is
[http://www.deeplearningbook.org/](http://www.deeplearningbook.org/) which
says I cannot get a PDF of the book, though clearly one existed at some point.

~~~
lrwilke
Is this not it?
[https://github.com/HFTrader/DeepLearningBook/blob/master/Dee...](https://github.com/HFTrader/DeepLearningBook/blob/master/DeepLearningBook.pdf)

~~~
syphilis2
Yes, thank you. I didn't understand that the [pdf] was a separate link from
the broken webpage link.

------
jonnys1
I was SOOO looking for something like this!

------
ilaksh
I would skim these to select something basic and then try to experiment with a
real system with those documents as a guide/reference.

Try to actually learn one level at a time.

That is my approach right now. I have read a lot, but without being 100% on
the fundamentals it mostly goes over my head, so I am backing up. I plan to
try the Tensorflow examples as well, but I expect it will be pretty shaky
until my high-level practical surface knowledge can meet my fundamental
knowledge in the middle, if I can keep progressing.

