Edit: removed "inb4 downvotes".
Also, I apologize for this, but please don't say things like "Time for me to get downvoted to oblivion".
You've spoken your mind (and helpfully so) with the end of your comment (which is otherwise good).
Self-referencing how one expects comment voting to go is a behavior I wish people would refrain from. It makes the comment "about" itself rather than about its content. It primes readers based on a perception of how the community will react, which in turn manipulates voting on the comment. (<insert-discussion> voting systems on community forums. is voting itself a good system? </insert-discussion>).
If you want books as your learning resources, I would recommend searching via Hacker News Books. That site scrapes books from links shared in HN comments and ranks them.
The user story for search has been solved. What hasn't been solved, it sounds like, is feature discoverability.
Algolia has suggestion features built in that cannot be disabled (synonyms? autocorrect?), which can return content that does not match what the user wants when they need an exact search. This matters especially to developers, since our terminology often does not match ordinary English vocabulary (the language of HN). Try searching for the product name "logsene", to pick one example. Quoting terms, as Google supports, does not work reliably.
Earlier this year, Apple acquired Turi for $200 million. Turi was founded by Carlos Guestrin, one of the professors teaching the course.
We (Class Central) are also working on a six-part, Wirecutter-style guide to learning Data Science online. Here is part 1:
Feedback would be appreciated (on the format as well as content)!
Carlos and Emily do a great job diving deeper into the math behind different algorithms than most other online courses, without making the math too theoretical. I'm a grad student in engineering, so I wanted to understand not only how to run these algorithms but also how they work, and these courses were great for learning in a way that is mathematically rigorous yet still approachable.
The only criticism I've heard of this series is that it uses Turi/Dato/GraphLab instead of scikit-learn. I did the courses that exist so far using GraphLab, but I'm now redoing the assignments using scikit-learn so that I learn that toolkit as well.
I am in the same boat as you. I am currently doing Udacity's Machine Learning Nanodegree. But I think I would have felt lost if I hadn't done the first two courses of that Coursera Specialization.
Just started, but it seems that Pandas and scikit-learn are very similar to Dato/GraphLab from a usage perspective.
If you want to just try them out, I'd honestly recommend just going through the scikit-learn documentation. Almost all of the algorithms provide an example, and the API is pretty consistent across different ML algorithms, to the extent that it can be.
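To illustrate that consistency, here's a minimal sketch (assuming scikit-learn and a toy dataset generated with `make_classification`; the specific models chosen are just examples): every estimator exposes the same `fit`/`predict` interface, so swapping algorithms is a one-line change.

```python
# Sketch of scikit-learn's consistent estimator API: each model below is
# trained and evaluated with identical calls, despite being very different
# algorithms under the hood.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB

# A small synthetic binary-classification dataset for demonstration.
X, y = make_classification(n_samples=200, n_features=5, random_state=0)

for model in (LogisticRegression(), RandomForestClassifier(random_state=0), GaussianNB()):
    model.fit(X, y)            # same call for every algorithm
    preds = model.predict(X)   # same call for every algorithm
    print(type(model).__name__, "training accuracy:", (preds == y).mean())
```

The uniform interface is what makes "just go through the docs and try the examples" a workable way to explore the library.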
People learn differently: some prefer to get into the math right away; others will never be interested in it. I'm interested, but I tend to be more motivated when I've used the algorithms, started to learn how and why they perform well or poorly under various circumstances, and then dug into the mathematics specifically to find out why.
Also, I'm not going to be creating new ML algorithms. So, you know, that also influences my level of interest. I do care about the mathematics involved, because I want to genuinely understand why some outputs are available for random forests but not naive Bayes or logistic regression, and why performance and/or accuracy is great in some circumstances and not others, without relying on too much hand-waving. But if you want to actually develop and research novel ML algorithms, you'd need to get considerably deeper into the math.
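As a concrete example of model-specific outputs (a sketch assuming scikit-learn; the dataset and models are just illustrative): a fitted random forest exposes `feature_importances_`, while logistic regression exposes `coef_` instead, and Gaussian naive Bayes offers neither.

```python
# Sketch: which introspection outputs each model family provides after fitting.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB

X, y = make_classification(n_samples=100, n_features=4, random_state=0)

rf = RandomForestClassifier(random_state=0).fit(X, y)
lr = LogisticRegression().fit(X, y)
nb = GaussianNB().fit(X, y)

# Random forests compute impurity-based importances from their trees.
print(hasattr(rf, "feature_importances_"))  # True
# Linear models have per-feature weights instead.
print(hasattr(lr, "feature_importances_"))  # False
print(hasattr(lr, "coef_"))                 # True
# Naive Bayes stores per-class statistics, not a single importance vector.
print(hasattr(nb, "feature_importances_"))  # False
```

Understanding *why* each family produces (or can't produce) a given output is exactly where the math pays off.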
Ask HN: How to get started with machine learning?
For keeping up with the latest research, once you know what you are doing, reading papers on arXiv daily or weekly is a great way to stay current; nearly everything gets published there.
It is a remarkably high signal-to-noise community.
Excellent book for starting with NN and DL.