Reading interesting posts on HN daily eventually inspired me to write my own, and posting my own writing to HN showed me there was an audience for the topics I wanted to write about.
I started blogging about Machine Learning in 2017, and posted some of my writing to HN. Many posts did not do well, but some made it to the front page, sometimes even to the top spot (see https://news.ycombinator.com/item?id=18147710 , https://news.ycombinator.com/item?id=16224346 and https://news.ycombinator.com/item?id=17257143).
As a consequence, my posts ended up getting over half a million reads, which was enough to put me on O'Reilly's radar. They reached out and asked me to write a book for them. I decided to write something that follows the theme of my blog posts, and focus on practical Machine Learning advice that isn't often covered in ML classes. The book took 18 months to write, and is now out. I couldn't have done it without HN.
You can find a longer description of the contents at https://www.mlpowered.com/book/.
Thank you HN!
One good use for web analytics eh.
I have a text file I’ve kept for 16 years that contains quotes from many sources and some of my own musings. I dream of one day using it to inspire myself to spout something.
I have some musings on GitHub too. Thoughts that I want to get out there but are too incoherent to become blog posts...
Took me 3-4 months to convert it into a book and I made ~10k with it.
Best $99 I spend every year.
I've read the TOC and it looks really interesting. Is the content accessible to the average software engineer who has never touched ML in their career? (like me)
I tried to explain the vast majority of ML concepts that I mention in the book. If you'd like to dive deeper into ML theory, I have a list of recommended books in the preface (included in the PDF preview).
I am a little bit skeptical about your running example, the "ML Editor": a model that helps you ask "good" questions, e.g. on StackOverflow.
Isn't that an extremely complicated problem, arguably even AI-hard? How do you plan to evaluate whether a question is "good" (and surely it's not the number of upvotes it gets)? Is there a working example of such an editor in action? I highly doubt this is currently possible.
Improving question quality could, however, be a product goal for StackOverflow, or for a company focused on writing tools (Grammarly, Textio, ...). The book describes a process for turning that vague product goal into a more tangible set of metrics, which leads to choosing an ML approach and iterating on it.
Eventually there is a finished prototype; you can find a GitHub link to it in the PDF preview. It definitely hasn't solved evaluating whether a question is "good", but it aims to provide a narrower set of recommendations (the first chapter actually dives into an approach for this).
However, I imagine it could be applied to refining an existing question? There certainly exist "obviously poor" questions on SO, and it's a good first step to make an otherwise poor question "look" like a good one: trivial things like formatting and misuse of the language. It won't capture other, higher-level attributes of a genuinely good question, but some poor questions are poor in just that respect: formatting and language, the "requires editing" queue.
Regarding "intrinsically poor" questions, on the other hand: if everyone used the described model, readers would face an increased cognitive load when distinguishing good questions from poor ones. Over time, the model's performance would also drop, as the "typical good question attributes" would appear in poor questions that wouldn't otherwise have them.
(Forgive me for trivialising the concept of the quality of a question)
It's still a very interesting problem for a book. It's well suited to demonstrating the model development process, and it's likely very relevant to the vast majority of readers (I imagine).
The book covers multiple aspects of that process, from choosing an ML approach that isn't too simple or too ambitious, to iterating on a model within the context of its final use case (i.e. rather than only optimizing for a metric, testing how the model helps with its end goal).
In my experience, I've found that it is often those challenges that make or break the quality of an ML product, so the book focuses on tools to make complex problems more tractable, and less risky.
If I can help with troubleshooting, send me an email at email@example.com
In the meantime I'll bump up the version in requirements.txt to solve the conflict.
If pip struggles to find a specific version, I'll usually remove the version pin from requirements.txt and try whichever version pip finds by running `pip install tensorflow`.
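Not from the book, just a minimal sketch of that workaround: a small helper that strips exact `==` pins from a requirements file so pip is free to resolve any available version. It assumes simple `name==version` lines (no `>=` ranges, extras, or environment markers).

```python
def relax_pins(requirements_text):
    """Drop exact '==' version pins so pip can pick any available version."""
    relaxed = []
    for line in requirements_text.splitlines():
        stripped = line.strip()
        # Keep comments and blank lines untouched
        if not stripped or stripped.startswith("#"):
            relaxed.append(line)
            continue
        # e.g. 'tensorflow==1.13.2' -> 'tensorflow'
        relaxed.append(stripped.split("==", 1)[0])
    return "\n".join(relaxed)

print(relax_pins("tensorflow==1.13.2\nnumpy==1.16.2"))
# tensorflow
# numpy
```

After relaxing the pins, running `pip install -r requirements.txt` lets pip's resolver choose versions, which is usually enough to get past a stale pin.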
I am going to purchase the book because I have never done any ML and I started working at a company that I think may benefit from applying ML to some of its products. I hope to use the book as a starting point for diving deeper into specifics of ML.
Does it matter to you where the book is purchased?
I hope you find the book helpful, please reach out if you have any questions or feedback.
Do you have a Slack or some other place to group together readers of this book for discussion? I'd love to pick other readers' (and your) brains as I go through this.
For now I do have an email address provided for any questions in the book, and am reachable on Twitter (mlpowered) and GitHub.