Book is here: https://arxiv.org/abs/1803.08823
My review: The authors provide a condensed summary of all central topics in machine learning. Topics include ML basics, ML theory, and optimization algorithms, as well as a detailed introduction to modern deep learning methods. Code examples and tutorials are provided as Jupyter notebooks for each chapter. The book uses three datasets (MNIST digit recognition, SUSY physics data, and a simulated nearest-neighbor Ising model) as running examples throughout, to help learners understand what different ML techniques can bring when analyzing the same problems from different perspectives.
The book has high bias (since it's written from a physics perspective), but low variance: assuming a physics background lets the authors write a very focused narrative that gets to the point and communicates three books' worth of information in 100 pages. This is somewhat of a repeat of the general physics-ML-explanations-for-the-win pattern established in Bishop's `Pattern Recognition and Machine Learning`.
The authors are wrong to label this book as useful only to people with a physics background; in fact, it will be useful to anyone who wants to learn modern ML. An estimator with high bias but high efficiency is always useful.
For all my Hacker News peeps who want to learn ML and/or DL: drop everything right now, go print this on the office printer, and sit outside with coffee for the next two weeks reading through the entire thing. Turn off the computer and phone. Stop checking HN for two weeks. Trust me, nothing better than this will come around on HN anytime soon.
 book pdf => https://arxiv.org/pdf/1803.08823
 jupyter notebooks zip => http://physics.bu.edu/~pankajm/ML-Notebooks/NotebooksforMLRe...
But it's certainly true that Deep Learning with its combination of mathy underpinnings and poorly understood behaviour is something that fits very well with a physicist's skillset.
I mean, I know that there's a bias-variance tradeoff in stats and ML, but what does it mean in the context of introduction to ML for physicists?
My guess is they mean they aren't going into as heavy detail on ML, which means the reader may lack some knowledge (high bias) but won't miss the forest for the trees (low variance).
Anyone else care to speculate?
By low variance, I think the authors mean a sharp, quality focus on the subjects they are going to talk about.
- The book has high bias (since it's written from a physics perspective), but low variance: assuming a physics background lets the authors write a very focused narrative that works for physicists (low variance on the subset of physicist readers).
- If we define the problem as choosing which topics t1, t2, ..., tn to cover in an ML book, the authors are saying that using the physicist's approach leads to consistently good results (i.e., not all over the place). It's something like an argument that physicists are sensible people, unlike CS theoreticians, industry practitioners, and mathematicians.
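Since we're all speculating about what "high bias, low variance" means here, a minimal NumPy sketch of the statistical tradeoff itself (the degrees, noise level, and trial count are my arbitrary choices, not from the paper): fit polynomials of low and high degree to repeated noisy samples of the same curve, and compare bias² vs. variance of the resulting predictions.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 30)
truth = np.sin(2 * np.pi * x)  # the underlying "true" curve

def bias_variance(degree, n_trials=200, noise=0.3):
    """Fit a polynomial of the given degree to many noisy samples of the
    same curve; return (mean squared bias, mean variance) of predictions."""
    preds = []
    for _ in range(n_trials):
        y = truth + rng.normal(0.0, noise, size=x.size)
        coeffs = np.polyfit(x, y, degree)
        preds.append(np.polyval(coeffs, x))
    preds = np.array(preds)
    bias2 = np.mean((preds.mean(axis=0) - truth) ** 2)
    var = np.mean(preds.var(axis=0))
    return bias2, var

for d in (1, 9):
    b2, v = bias_variance(d)
    print(f"degree {d}: bias^2 = {b2:.4f}, variance = {v:.4f}")
```

The degree-1 fit is "high bias, low variance": it systematically misses the sine shape, but gives nearly the same answer on every resampling. The degree-9 fit is the opposite. The book title is using the first regime as a metaphor: an opinionated physics framing misses some of ML, but delivers a consistent, focused account.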
Michael Hartl (Caltech Physicist) also does a killer job with Rails Tutorial. There's definitely a trend.