There's a WWDC presentation on this where they show the tools in action. The first part covers Core ML, and the tools portion starts about 21 minutes in (the whole video is around 40 minutes, and I found it well worth watching).
https://developer.apple.com/videos/play/wwdc2017/710/
Turns out generating a Core ML model is pretty easy too, even after training in Python. There's a simple two- or three-line conversion from Keras/sklearn to a Core ML model.
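A minimal sketch of what that conversion looks like, assuming the coremltools package (Apple's converter, `pip install coremltools`) and its scikit-learn converter API; the feature names "boxes" and "price" are made up for illustration, and the export step is skipped gracefully if coremltools isn't installed:

```python
# Train a toy sklearn model, then convert it to a Core ML model.
from sklearn.linear_model import LinearRegression

# Toy data: total price as a function of cereal-box count ($4.71/box).
X = [[1], [2], [3], [4]]
y = [4.71, 9.42, 14.13, 18.84]
model = LinearRegression().fit(X, y)

try:
    import coremltools
    # The advertised "2 or 3 line" conversion (coremltools' sklearn converter):
    mlmodel = coremltools.converters.sklearn.convert(
        model, input_features=["boxes"], output_feature_names="price")
    mlmodel.save("CerealPrice.mlmodel")  # drop this file into Xcode
except ImportError:
    print("coremltools not installed; skipping Core ML export")

print(round(model.predict([[5]])[0], 2))  # -> 23.55
```

Keras models convert the same way via `coremltools.converters.keras.convert`, and Xcode generates a Swift class from the saved `.mlmodel` automatically.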
I'm curious to learn how to think about the applicability of ML inside my app or server. I know I could hire an ML professional to do the thinking for me, but a bootstrapped startup can't really afford the time, money, and salesmanship required to locate and hire such a person.
What I'd love to see is something which says "hey! Here's ML technique X and these are the areas where X or something similar to X can be used."
Schools do this well when transmitting information to us - "here's multiplication. Use it to figure out how much 5 boxes of cereal cost if one costs $4.71" - and so on.
If companies are trying to mass-ify ML, they need to de-ivory-tower-ify the applicability of ML in everyday thinking too.
Take face recognition - easy as pie to try out and understand, but apart from the extremely limited use case of finding friends to tag on social media, what else can it be used for? Can it be used as a diagnostic tool in neurology or ophthalmology? Can face recognition be used in police sketches? No idea, and not many blogs exist that think about such things.
The other problem with ML is that it has no component parts that we can extract and use on their own. Every ML technique comes fully formed - image recognition is a complete API. Are there constituent parts inside image recognition that, when combined with constituent parts of (say) face recognition, become a better ML tool? Again, I have no idea, and no blogs discuss this either.
I want to use ML.
(Also, if you're going to link to your own blog, you should mention that maybe? Your HN username and your blog name are disjoint enough for the link not to be obvious.)
Maybe I can assist? I took a graduate course at Cornell called Applied Machine Learning [1] last year. The lecture notes are really great: they survey a huge list of machine learning algorithms and highlight useful applications for each.
If you don't yet have an understanding of regression vs. classification, maybe skim through online resources to get a high-level picture of machine learning first. Then you can dive into the lecture notes, which give you breadth and a little depth.
Also, the book Building Machine Learning Systems with Python [2] is amazing: it applies machine learning techniques in Python. I think it's the best resource on how to begin applying machine learning methods, and it was helpful when I was implementing algorithms for the class, like kNN for clustering and PCA reduction + log regression for face recognition.
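For concreteness, here's a minimal sketch of that "PCA reduction + log regression" recipe in scikit-learn. I'm using the bundled digits dataset as a stand-in for face images (the pipeline has the same shape; swap in `fetch_lfw_people` for actual faces), and the component count of 20 is an arbitrary illustrative choice:

```python
# PCA for dimensionality reduction, then logistic regression on the
# reduced features - the same recipe described above for face recognition.
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# Compress 64 raw pixel features down to 20 principal components,
# then classify in that reduced space.
clf = make_pipeline(PCA(n_components=20), LogisticRegression(max_iter=1000))
clf.fit(X_train, y_train)

accuracy = clf.score(X_test, y_test)
print(f"test accuracy: {accuracy:.2f}")
```

The point of the pipeline is exactly the "mix and match" idea: PCA and the classifier are independent parts that compose, and either can be swapped out.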
Learning enough to put your own spin on things by mixing and matching parts is really easy once you understand the mathematical underpinnings.
It's true that many API-level things are opaque about how they work, but their theoretical foundations are usually possible to pick up.
I guess a good way to do this is via an example. I had to build a system to hit the following criteria:
* The data is scarce
* The data is time series
* The data follows a state machine
* The data is noisy, and contains a signal that's unique across all training samples
Gluing together multiple domains of knowledge - from generative tactics for label propagation, to EM for state-machine convolutional decoding, to five or six others - gave the company something that worked, on a short timescale, and that scaled well.
Understanding where things glue together, and under what circumstances to apply that glue, is what's been most helpful to me.
A great place to get an intuition for what I mean is Gilles Louppe's PhD thesis on random forests, specifically sections like the one on the bias-variance tradeoff.
Perhaps consider self-studying the big ideas? We have a product manager who is self-studying to boost his ML street cred on the project he's currently managing. You only need a handful of books/courses to get 80%+ of the way there. The math isn't all that bad if you're mathematically inclined, and even if you're not, you just need to know symbolically what's going on. The implementations are already done (via sklearn, etc.).
Hmm, I can absolutely self-study, but I question whether I should self-study an entire branch of knowledge just to figure out how to take apart its component parts and reassemble them.
For example, if I told you today that you need to print me a Raspberry Pi, you could read up on CAD and on running a PCB printer, OR download the Pi's CAD files, ship them off to a PCB startup, and drop-ship the result to me.
I was looking for blogs that are functionally equivalent to the above example: just tell me how RNNs, clustering, mean reversion, and whatever other techniques fit together, and how they can be remixed.
E1: I'm totally using terms and ideas I have only a passing familiarity with. Here's a pinch of salt.