What, if anything, is wrong with the list of links provided?
What would you have recommended? And why does this list come up short?
What led you to believe the link would be more than "just" a list of learning classes?
You take issue with the fact that this list was compiled by someone getting their master's in Deep Learning - why? If it matters who gives the suggestions, might I ask who you are and what qualifications you possess to so flippantly discount someone's contribution?
> proves that there's massive hype about Neural Networks among programmers
Wouldn't the opposite be true if people learned more about the subject?
> I am very sorry for the negativity
I'd recommend the new Coursera University of Washington ML course (https://class.coursera.org/machlearning-001/lecture) as a solid foundation. You'll learn key ML concepts (bias vs. variance, Bayes' theorem, etc.), and the lecturer is simply the best I've seen.
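For reference, the standard textbook statements of the two concepts named above (nothing course-specific, just the usual forms):

```latex
% Bayes' theorem: posterior from likelihood and prior
P(\theta \mid D) = \frac{P(D \mid \theta)\,P(\theta)}{P(D)}

% Bias-variance decomposition of expected squared error for an
% estimator \hat{f}, with irreducible noise variance \sigma^2
\mathbb{E}\big[(y - \hat{f}(x))^2\big]
  = \mathrm{Bias}\big[\hat{f}(x)\big]^2 + \mathrm{Var}\big[\hat{f}(x)\big] + \sigma^2
```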
Following that, the Ian Goodfellow book (http://www.deeplearningbook.org/) is very good.
If you want to learn the foundations of Reinforcement Learning, I'd recommend Sutton & Barto: https://webdocs.cs.ualberta.ca/~sutton/book/ebook/the-book.h...
https://www.udacity.com/course/reinforcement-learning--ud600 is also good; it's not super modern, but it's easily accessible and covers a large part of classical RL - this helps if you want to read recent RL papers, because the terminology and ideas become more familiar.
I'm not sure Andrej Karpathy's blog post (linked in the article) is a good intro to RL - it starts with Policy Gradient, which is at the complex end of the spectrum of RL techniques. From other sources I've heard Policy Gradient is harder to get working on an arbitrary problem than e.g. Q-Learning (a minimal sketch of the latter is below), but don't quote me on that :)
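To make the contrast concrete, here's a minimal sketch of tabular Q-Learning on a toy 5-state chain I made up for illustration (the environment is hypothetical; only the update rule is the standard one from Sutton & Barto):

```python
import numpy as np

n_states, n_actions = 5, 2
alpha, gamma, epsilon = 0.1, 0.99, 0.3
Q = np.zeros((n_states, n_actions))
rng = np.random.default_rng(0)

def step(state, action):
    """Toy dynamics: action 1 moves right, action 0 moves left;
    reward 1 for reaching the rightmost state, which ends the episode."""
    next_state = max(0, min(n_states - 1, state + (1 if action == 1 else -1)))
    reward = 1.0 if next_state == n_states - 1 else 0.0
    return next_state, reward, next_state == n_states - 1

for episode in range(200):
    state, done = 0, False
    while not done:
        # epsilon-greedy action selection
        if rng.random() < epsilon:
            action = int(rng.integers(n_actions))
        else:
            action = int(np.argmax(Q[state]))
        next_state, reward, done = step(state, action)
        # Q-Learning update: bootstrap from the greedy next-state value
        target = reward + (0.0 if done else gamma * Q[next_state].max())
        Q[state, action] += alpha * (target - Q[state, action])
        state = next_state
```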
I also think you're taking this too personally.
If you want to read one main resource... the Goodfellow, Bengio, Courville book (available for free from http://www.deeplearningbook.org/) is an extremely comprehensive survey of the field. It contains essentially all the concepts and intuition needed for deep learning engineering (except reinforcement learning).
If you'd like to take courses... Pieter Abbeel and Wojciech Zaremba suggest the following course sequence:
- Linear Algebra — Stephen Boyd’s EE263 (Stanford)
- Neural Networks for Machine Learning — Geoff Hinton (Coursera)
- Neural Nets — Andrej Karpathy’s CS231N (Stanford)
- Advanced Robotics (the MDP / optimal control lectures) — Pieter Abbeel’s CS287 (Berkeley)
- Deep RL — John Schulman’s CS294-112 (Berkeley)
(Pieter also recommends the Cover & Thomas information theory and Nocedal & Wright nonlinear optimization books).
If you'd like to get your hands dirty... Ilya Sutskever recommends implementing simple MNIST classifiers, small convnets, reimplementing char-rnn, and then playing with a big convnet. Personally, I started out by picking Kaggle competitions (especially the "Knowledge" ones) and using those as a source of problems. Implementing agents for OpenAI Gym (or algorithms for the set of research problems we’ll be releasing soon) could also be a good starting place.
Quora link: https://www.quora.com/What-are-the-best-ways-to-pick-up-Deep...
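As a possible first step toward the Gym suggestion above, here's a sketch of a random-policy baseline on CartPole. It assumes the classic gym API (reset() returning an observation, step() returning four values); newer gym/gymnasium releases changed these signatures:

```python
import gym

env = gym.make("CartPole-v1")
for episode in range(5):
    obs = env.reset()
    total_reward, done = 0.0, False
    while not done:
        action = env.action_space.sample()  # random baseline to beat
        obs, reward, done, info = env.step(action)
        total_reward += reward
    print(f"episode {episode}: return {total_reward}")
env.close()
```

Beating this random baseline with even a simple learned policy is a nice first milestone.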
The compilation of links is good, and you present them very clearly. Wish I'd had this earlier; it would have saved me some time. But still, thanks for compiling this list!
And like another commenter said: comments that are negative for the sake of being negative are why HN has a negative reputation.
- Get a Master's or PhD
- Work at Google and wait for your turn to do their internal ML bootcamp
- Udacity Nanodegree Plus (I would love to hear from anyone who has gotten a job through this)
1. Finish a degree in ML and a couple of projects
2. Use this experience as evidence of enthusiasm when applying to be a support engineer on an ML project (working on infrastructure and all the surrounding development work an ML project requires).
3. Keep trying to move more and more into the ML side of things.
Does anybody know:
- How much work is required per week?
- What programming language is used?
Work per week: it varies between 6 and 12 hours, depending on the week, your background, and your willingness to dive into some of the demonstrations.
Prerequisites: first, you definitely need to set aside some quality time, otherwise you'll drop out (I've heard the completion rate is around 10%). I feel that having a bit of background in matrix computations and linear algebra helped. Some parts are easy, others are harder (but YMMV).
Programming language used: Octave. While it's nice to work at that level of abstraction, I found the feedback loop very slow when submitting exercises, so in the end I used https://github.com/MOxUnit/MOxUnit with http://entrproject.org/ to get a faster loop (anyone can email me at email@example.com for more details). This is especially useful when dealing with vectorization of computations; see the sketch below.
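For anyone unfamiliar with what vectorization buys you here, a quick sketch of the idea in numpy rather than Octave (the concept carries over directly; all names are illustrative, not from the course):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 20))   # 1000 examples, 20 features
theta = rng.standard_normal(20)       # model parameters

# Loop version: one dot product per example (slow in Octave and Python)
h_loop = np.array([X[i] @ theta for i in range(X.shape[0])])

# Vectorized version: a single matrix-vector product
h_vec = X @ theta

assert np.allclose(h_loop, h_vec)
```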
Overall I found the course great for me (coming from an ETL/programming background, with some solid maths experience at some point), and I'm learning quite a bit; it's a good introduction with a pragmatic viewpoint.
A couple of years back, there was also talk of pharmaceutical companies opening up their clinical-trial data. I'm not sure where things are with that effort; if anyone has any info on it, that would be good to know.