By yoquan - 7 hours ago
Book is here: https://arxiv.org/abs/1803.08823
My review: The authors provide a condensed summary of all the central topics in machine learning. Topics include ML basics, ML theory, and optimization algorithms, but also a detailed introduction to modern deep learning methods. Code examples and tutorials are provided as Jupyter notebooks for each chapter. The book uses three datasets (MNIST digit recognition, SUSY physics data, and a simulated nearest-neighbor Ising model) as running examples throughout, to help learners understand what different ML techniques can bring when analyzing the same problems from different perspectives.
The book has a high bias (since it's written from a physics perspective), but low variance: assuming a physics background lets the authors write a very focused narrative that gets to the point and communicates three books' worth of information in 100 pages. This is somewhat of a repeat of the general physics-ML-explanations-for-the-win pattern established in Bishop's `Pattern Recognition and Machine Learning`.
The authors are wrong to label this book as useful only to people with a physics background; in fact it will be useful to everyone who wants to learn modern ML. An estimator with high bias but high efficiency is always useful.
For all my Hacker News peeps who want to learn ML and/or DL: you need to drop everything right now, go print this on the office printer, and sit outside with coffee for the next two weeks and read through this entire thing. Turn off the computer and phone. Stop checking HN for two weeks. Trust me, nothing better than this will come around on HN anytime soon.
Book PDF => https://arxiv.org/pdf/1803.08823
Jupyter notebooks zip => http://physics.bu.edu/~pankajm/ML-Notebooks/NotebooksforMLRe...
ivan_ah - 2 hours ago
I mean, I know there's a bias-variance tradeoff in stats and ML, but what does it mean in the context of an introduction to ML for physicists?
My guess is they mean they aren't going into as much detail on ML, which means the reader may lack some knowledge (high bias) but won't miss the forest for the trees (low variance).
Anyone else care to speculate?
pure-awesome - 4 hours ago