Eek. This type of extreme claim makes me even more skeptical than I already was of such a short overview. Each of the topics listed could easily fill 100 pages themselves...
All who needs to know? Hopefully not anyone actually using this in their job. Maybe it would be a useful reference for the definition of the term, but it's completely wild to suggest you'll have any understanding of SVM, NN, CV, or gradient descent after only 100 pages, much less all of them...
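That said, the bare mechanics of one of those topics genuinely fit in a few lines; it's the surrounding theory that fills the hundreds of pages. A hypothetical sketch of gradient descent minimizing f(x) = (x - 3)^2, written from scratch rather than taken from the book:

    # Gradient descent: repeatedly step against the gradient.
    # f(x) = (x - 3)^2, so f'(x) = 2 * (x - 3); the minimum is at x = 3.
    x = 0.0    # starting point
    lr = 0.1   # learning rate (step size)
    for _ in range(100):
        grad = 2 * (x - 3)   # analytic gradient at the current x
        x -= lr * grad       # move downhill
    print(x)   # approximately 3.0 after 100 steps

The snippet is the easy part; knowing why, when, and how well it works is what a real treatment has to cover.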
I'm curious to check it out. I find figuring out how long to make a text -- how much supplemental information to include -- an interesting problem. Most educational materials have too much for me, in an attempt to appeal to a wider audience, but you can't really learn from a cheat sheet or a cram book either. Not always because you can't understand the material, but because reading a bare list of facts is as dull as paste.
Not sure if this was posted by the author, but it wouldn't hurt to add TLS and a redirect. Doesn't seem like there's any sensitive info coming through, but TLS is so quick and easy with Let's Encrypt that it would be a nice addition.
This book is ideal for someone with no knowledge of ML. It is the equivalent of a 100-level course.
Given its mandate of 100 pages, it sacrifices depth for breadth - it covers most of its topics in a page or less. It has lots of paragraphs that end with "There are many advanced techniques that go beyond the scope of this book."
The best thing about the book is that it is fairly comprehensive and well-organized as an end-to-end overview. It has short, 1-2 paragraph chunks on the 80% of ML work that covers 99% of what ML is.
It of course doesn't go into detail, but it also doesn't have many topics missing, which means that if you make a mind map while you're reading, you'll have a great starting point to branch out from once you're done.
When you're done with this book, you should be able to speak intelligently about most aspects of machine learning development and design.
You should also be able to identify strengths and weaknesses in your understanding, based on what you came to the table with, and decide what to pursue further.
I think this book coupled with some kind of lab practical or a series of Kaggle contests and notebooks to apply what you learn in this book would be an excellent "ML 101" course.
I teach out of this book for a two-week bootcamp for new hires and converts to the data science practice at the consultancy I work at.
I typically recommend a few different books to everyone who finishes the bootcamp, based on a self-assessment they take. I recommend some books based on their strengths, so they can find a career path sooner, and some based on their weaknesses, so they can widen their cone of opportunity within ML.
In our consultancy, data science is done in Python and SQL (and PySpark, but I don't hand out books on that during bootcamp!), and ML delivery is a combination of math, software engineering, and architecture/product owner disciplines.
If you're strong in software engineering, I recommend Machine Learning Mastery with Python by Jason Brownlee as it's very hands-on in Python and helps you run code to "see" how ML works.
If you're weak in software engineering and Python, I recommend A Whirlwind Tour of Python by Jake VanderPlas, and its companion book, the Python Data Science Handbook.
If you're strong in architecting / product management, I recommend Building Machine Learning Powered Applications by Emmanuel Ameisen, since it approaches ML from an SDLC perspective, including scoping, design, development, testing, general software engineering best practices, collaboration, etc.
If you're weak in architecting / product management, I typically recommend User Story Mapping by Jeff Patton and Making Things Happen by Scott Berkun, which are both excellent how-tos with great examples to build on.
If you're strong in math, I recommend Understanding Machine Learning: From Theory to Algorithms by Shalev-Shwartz and Ben-David, as it has all the mathematics for ML and some pseudocode for implementation, which helps bridge the gap into actual software development (the book's title is very accurate!).
For someone who is weak in the math of ML, I recommend An Introduction to Statistical Learning by James, Witten, Hastie, and Tibshirani (along with the Python port of the code https://github.com/emredjan/ISL-python ), which I think does just enough hand-holding to move someone from "did high school math 20 years ago" to "I understand what these hyperparameters are optimizing for."
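To make that last point concrete, here's the kind of toy snippet I have in mind (illustrative only, with synthetic data; it's not from either book): ridge regression's alpha hyperparameter trades training fit against coefficient size.

    # Illustrative sketch: ridge's alpha shrinks coefficients toward zero.
    import numpy as np
    from sklearn.linear_model import Ridge

    rng = np.random.default_rng(0)
    X = rng.normal(size=(50, 5))
    true_w = np.array([1.0, 0.5, 0.0, 0.0, 0.0])   # made-up ground truth
    y = X @ true_w + rng.normal(scale=0.1, size=50)

    for alpha in (0.01, 1.0, 100.0):
        coef = Ridge(alpha=alpha).fit(X, y).coef_
        print(alpha, np.round(coef, 2))   # larger alpha -> smaller coefficients

Watching the coefficients shrink as alpha grows is exactly the "what is this hyperparameter optimizing for" moment I want people to have.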
Anyway, I've spent a lot of time reading and reviewing books about ML, and my key takeaway is that the ones that get you closer to writing actual code to solve actual problems for actual people are the ones to focus on.
I read this cover to cover. I did not think it was very useful for someone who has experience in the field. For a total beginner, it may be slightly useful.
What topics did you want the book to cover that it did not? What are the most irritating problems you face in your day-to-day work in the field that you would have wanted the book to address?
There are some other ones out there... maybe by Hochreiter or one of the giants? I'd take a look at it; it's a free PDF that's absolutely massive, and it covers all you need to know for the basics plus a few deeper areas of deep learning.
Or even better... just pick a project you like and start doing. It's like art: you'll learn nothing by studying alone, and everything by getting a feel for it and being excited about it. ;)
Learning anything involves a lot of experimentation, but in art the feel you get is very close to what you're actually doing, while in math and engineering the "feel" is something you learn to interpret with experience, by applying a theoretical point of view to numerous problems.
Even low-grade data science where cobbling together some grid-searched scikit-learn stuff suffices is more like fixing a damaged combustion engine than riding a bike.
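For concreteness, the "grid-searched scikit-learn stuff" I mean is roughly the following sketch (stock dataset and a made-up parameter grid, not any real project):

    # Exhaustively try hyperparameter combinations with cross-validation.
    from sklearn.datasets import load_iris
    from sklearn.model_selection import GridSearchCV
    from sklearn.svm import SVC

    X, y = load_iris(return_X_y=True)
    search = GridSearchCV(
        SVC(),
        param_grid={"C": [0.1, 1, 10], "gamma": ["scale", 0.01, 0.1]},
        cv=5,   # 5-fold cross-validation
    )
    search.fit(X, y)
    print(search.best_params_, search.best_score_)

Cobbling that together is easy; diagnosing it when the engine is damaged is the part that takes the theoretical point of view.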
People can be tricked into thinking they're getting more than they actually are. It's flattering to be told you can "get a feel for it" and emerge knowledgeable on the other side.
For one of the possible starters, I would go for the MIT-licensed
"Paradigms of Artificial Intelligence Programming"
by Peter Norvig.
https://github.com/norvig/paip-lisp
a) you get the extra benefit of playing with Lisp
b) it gets universal praise
c) as you guessed, it's free in the freedom sense
There are other resources specific to machine learning, but the above, from personal experience, was actually fun in the proper sense: it expanded my mind and knowledge, and provided a well thought-out learning experience.
PAIP is a great book, but I wouldn't call it ML. It is more about traditional AI research based on symbolic computing, in the Lisp and Prolog tradition. Modern ML is 90% based on statistical and optimization algorithms.
Given that neural networks seem to be approaching the limits of what they can do, it may be time for a return to symbolics.
NNs at the limit? Is that too controversial? The "black box" approach is having difficulty with the last 5%, is it not? Hence we do not have self-driving cars.
I recently read a great survey paper on AI. They interviewed AI field experts, and one of the problems they encountered was the multiplicity of often opposing viewpoints, with some experts flat-out refusing to discuss AI at all, as they only recognized machine learning.
I suppose for applied ML the approach suggested by one of the top comments would perhaps be the most viable - just start running code and see where it gets you.
In some ways I agree with you - Lisp does seem old school when all the ML people are using Python.
On the other hand the algos don't change that much so it's still relevant in that sense - and the book is about getting the reader to become a Jedi, rather than being a reference book on AI.
I think of this book as the type to give a beginner enough info to know what they don't know, or for a professional to read before a job interview to refresh their memory on topics they may be using less often.
(OP) A friend highly recommended this book to me and I was impressed by the testimonials on its homepage. So I thought HN might find it interesting, too. I have no connection to the author.
I've read this book recently and found it a very useful overview of ML. It is a bit theoretical, which is good for somebody who comfortably reads math notation. Also, the notation is pretty consistent, which makes connections between methods and algorithms stand out.
If your book is short, you have to be parsimonious in your explanations, touch the main results, go from the general to the particular, and avoid getting bogged down in inconsequential details. A great example is Landau's Mechanics.
If your book is long, you can afford to be more verbose, present more examples, and broaden the subject as much as you like. Even then there is usually a preferred order in which to present the material, but the length of your book gives you leniency.
This book is the first type, but even in the first chapter it just dives into a specific example using SVM without contextualizing the technique. It is like a 100-page calculus book solving an integral by parts on page 1.
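For readers who haven't seen the book, the kind of SVM example a first chapter can jump into looks roughly like this in scikit-learn (my own sketch with synthetic data, not the book's code):

    # A minimal SVM classifier, presented without the surrounding context.
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC

    X, y = make_classification(n_samples=200, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    clf = SVC(kernel="linear").fit(X_train, y_train)
    print(clf.score(X_test, y_test))   # accuracy on held-out data

Dropped on page 1 with no buildup, that is exactly the integration-by-parts problem.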