
Create ML - gok
https://developer.apple.com/documentation/create_ml
======
Someone1234
Because I lack imagination and/or don't understand ML well enough, could
someone give a couple of examples of what this could be used for? Particularly
at these scales, with (presumably?) a smaller initial data set.

~~~
rasmi
These are not revolutionary or even helpful ideas, but they can be convenient
or fun. For example:

Some medical applications (just examples; you would need quite a robust
model):

* Categorize moles on your skin as cancerous or not.

* Categorize cuts on your skin as infected or not.

* Have the user input certain characteristics (images, temperature data, etc) and give some preliminary diagnosis.

Less interesting/helpful:

* Detect photos of certain foods in your app (e.g. Twitter, Instagram, Yelp) and recommend relevant emoji or hashtags.

* For a note-taking or to-do app, classify notes into categories automatically for the user.

* Suggest actions in your app based on what the user types or does.

* Recommend solutions or help articles from the feedback form in your app. Tag the feedback with a certain sentiment to determine how quickly you should follow up on it.

* Detect your products in images/videos and tag them to make them searchable, and recommend relevant things to the user (e.g. in a beer tracking app [2], detect a certain kind of beer, suggest similar types).

If any of this sounds like it's been done before, it probably has, but it's
important to note this is done entirely on device (private!) and with custom
labels. The current alternative is to use TensorFlow Lite [1], which is a bit
more involved. I'm sure as the field develops we will see more creative (and
useful/helpful) applications.

[1]
[https://www.tensorflow.org/mobile/tflite/](https://www.tensorflow.org/mobile/tflite/)

[2] [https://untappd.com/](https://untappd.com/)

------
rasmi
This looks like a really easy way to integrate your own custom ML models into
your app. I imagine this would appeal to a lot of hobbyist developers,
especially given how simple it is.

The docs [1] also seem to imply they're using transfer learning from more
robust models for their image classifier: "Use at least 10 images per label
for the training set, but more is always better." and "Create ML leverages the
machine learning infrastructure built in to Apple products like Photos and
Siri."

[1]
[https://developer.apple.com/documentation/create_ml/creating...](https://developer.apple.com/documentation/create_ml/creating_an_image_classifier_model)
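
For reference, a minimal sketch of what that training flow looks like in a macOS Playground (the directory paths and label names here are hypothetical; the data just needs to be sorted into one subfolder per label):

```swift
import CreateML
import Foundation

// Training data is a folder with one subdirectory per label,
// e.g. Training/cancerous/*.jpg and Training/benign/*.jpg (hypothetical).
let trainingDir = URL(fileURLWithPath: "/path/to/Training")
let classifier = try MLImageClassifier(
    trainingData: .labeledDirectories(at: trainingDir)
)

// Evaluate against a held-out set laid out the same way.
let testingDir = URL(fileURLWithPath: "/path/to/Testing")
let metrics = classifier.evaluation(on: .labeledDirectories(at: testingDir))
print("classification error: \(metrics.classificationError)")

// Export a Core ML model you can drop into an Xcode project.
try classifier.write(to: URL(fileURLWithPath: "/path/to/SkinClassifier.mlmodel"))
```

The transfer-learning angle is why so few images can work: you're only fitting a small classifier on top of Apple's pre-trained feature extractor, not training a network from scratch.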

~~~
tomnipotent
This will appeal to any developer that wants to deploy machine learning models
on macOS or iOS and get the most out of the hardware (Metal w/Core ML).

> The docs [1] also seem to imply they're using transfer learning from more
> robust models for their image classifier

I read that as a recommendation for the minimum number of training samples per
label (e.g. 10 photos w/leopard tagged).

Apple's Turi Create [1] has also been getting some love these last few months
to add support for exporting to Core ML.

[1] [https://github.com/apple/turicreate](https://github.com/apple/turicreate)

------
therealmarv
Not long ago we got Swift for TensorFlow as a first-class citizen:
[https://medium.com/tensorflow/introducing-swift-for-tensorfl...](https://medium.com/tensorflow/introducing-swift-for-tensorflow-b75722c58df0)

------
thomasjoulin
Interestingly, it's available in Swift only, not Objective-C:
[https://twitter.com/jckarter/status/1003768142407987200](https://twitter.com/jckarter/status/1003768142407987200)

------
RosanaAnaDana
Boy that 'ML' logo sure is reminiscent of the TensorFlow logo..

------
nso95
How is this different from Core ML?

~~~
rasmi
Core ML is bring-your-own-model (or use the built-in classifiers, which have
generic labels). This is an easy way to train custom models (for example, on
images from your own product catalog).
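
Roughly: Create ML trains the model, Core ML runs it in your app. A hedged sketch of the consumption side, assuming a hypothetical `BeerClassifier.mlmodel` (trained with Create ML or anything else) has been added to the Xcode project, which generates the `BeerClassifier` class:

```swift
import CoreML
import Vision

// BeerClassifier is the class Xcode generates from a bundled .mlmodel
// file (hypothetical name); Core ML only runs it, it doesn't train it.
let model = try VNCoreMLModel(for: BeerClassifier().model)

let request = VNCoreMLRequest(model: model) { request, _ in
    guard let results = request.results as? [VNClassificationObservation],
          let top = results.first else { return }
    print("\(top.identifier) (\(top.confidence))")
}

let handler = VNImageRequestHandler(url: URL(fileURLWithPath: "/path/to/photo.jpg"))
try handler.perform([request])
```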

~~~
nso95
Thanks!

------
mromanuk
Can I use this with Swift for Linux?

~~~
dhritzkiv
No, as it relies on the frameworks built into (and SDKs targeting) macOS
10.14.

------
amelius
Can we please replace "Machine Learning" with "Pattern Matching", as that is
all it is ...

~~~
singularity2001
Image captioning [1], STT, deep Q-networks, etc. now have little to do with
"Pattern Matching".

[1]
[https://github.com/DeepRNN/image_captioning](https://github.com/DeepRNN/image_captioning)

~~~
singularity2001
Admittedly, though, the ridiculously primitive Apple classifiers can be
regarded as plain old "Pattern Matching".

------
oceanghost
Did Apple just commoditize machine learning? If it wasn't before...

~~~
mlthoughts2018
The big tech companies keep trying this (commoditizing ML models exposed via
APIs), but I am extremely skeptical that they will succeed with this approach.

The reason is simple. _All_ of the work lies with validating that a model
solves _your_ particular problem well enough to be cost-effective for your
customers or stakeholders to be satisfied. Machine learning is not a commodity
the same way cloud infrastructure is, because the model development and
validation aspects are inherently tailored to your extremely specific, one-off
data generating processes and performance characteristic requirements.

Even if you outsource the model itself, this still requires developing some
notion of acceptance testing for the model's performance in _your_ use case,
on the distribution of data that matter to _you_. And such acceptance testing
still requires high literacy in statistics and model evaluation (e.g., you
can't skip hiring that expensive machine learning engineer _even if you
outsource_, because then who, internally, is going to be able to tell you
whether the outsourced solution is a bunch of junk, and why, and what to do
about it?).

Short of turning the big tech ML offerings into flat-out consulting
arrangements, where you trade off having your own in-house,
application-specific and data-specific machine learning staff in order to
rent time from experts at the big cos, this model couldn't work.

Don't get me wrong: just as IBM somehow still finds ways to leech off
enterprise clients after all these years of not actually helping them, I am
sure these lines of business will generate enterprise consulting money.

I just think lots of people will be disappointed that it doesn't somehow mean
you're getting Google-ML-engineer-caliber attention or results spent on your
company's bespoke needs, and that the performance of generic, naive transfer
learning from big cookie-cutter models often ends up being way worse than you
expected.

I feel bad for people who end up suffering the amplified version of vendor
lock-in this could create.

------
Slackattack
I could not find any mention of where the model is being trained... I would
assume locally, but again, I can't find any mention of recommended hardware
for this.

------
uptownfunk
Things like this convince me that we will never get to AGI. I have a perhaps
untestable hypothesis that an AGI could only be built by an AGI, or that if
we create one, we won't entirely know how we created it in the first place.
(Much as we don't completely understand how to create life force, even though
we do create it, through the process of reproduction.) We may be able to build
components of it (like growing organs in a lab), but I am quite skeptical we
will be able to create "life" or, in this case, "intelligence".

~~~
seiferteric
In my uninformed opinion, ML is more akin to the subconscious parts of our
brain, like our ability to recognize people's faces. We don't learn how to do
this; it's there when we are born. It is a hardwired part of our brain that
was formed through evolution instead of learning and does not involve
higher-order thinking. It's obviously useful, and could/would form part of an
AGI system, but it's unlikely to be enough.

~~~
jsemrau
> to the subconscious parts of our brain

Not too bad of a definition, since the basis for ML is statistics built on
assumptions about what describes the world.

