
Learning from Simulated and Unsupervised Images through Adversarial Training - shreyassaxena
https://arxiv.org/abs/1612.07828
======
rkaplan
This paper is important for a few reasons:

\- Labeled data is very expensive. Historically, attempts to learn on synthetic
data have failed because ConvNets are very good at detecting small visual
artifacts in the synthetic data and using those for classification during
training. At test time on real data, those artifacts aren't present, so the
model fails. A technique that can beat the state of the art (admittedly on a
fairly narrow eye-gaze dataset, but still) by training only on labels from
synthetic data and testing on real data is important.

\- They present a useful new idea to improve GAN training: using a history of
"fake" images, rather than only the latest fakes from the generator. Ask
anyone who has tried to train a GAN: training is really unstable; each
network only cares about beating the latest version of its "opponent". They
show good improvements by saving many previous fake outputs to make the
discriminator more robust. This reminds me of Experience Replay from
DeepMind's RL work.
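The history trick described above can be sketched roughly as follows: keep a fixed-size pool of past generator outputs, and build each discriminator batch half from the newest fakes and half from the pool. This is a minimal illustration, not the paper's implementation; the class name, capacity, and mixing ratio are assumptions.

```python
import random

class ImageHistoryBuffer:
    """Fixed-size pool of past generator outputs.

    Half of each discriminator mini-batch comes from the latest fakes,
    the other half from older fakes, so the discriminator doesn't only
    chase the most recent generator (illustrative sketch only).
    """

    def __init__(self, capacity=128, seed=0):
        self.capacity = capacity
        self.buffer = []
        self.rng = random.Random(seed)

    def sample_and_update(self, new_fakes):
        """Return a mixed batch: half new fakes, half historical fakes."""
        half = len(new_fakes) // 2
        if len(self.buffer) >= half:
            old = self.rng.sample(self.buffer, half)
            batch = list(new_fakes[:half]) + old
        else:
            # Buffer not warm yet: fall back to only the new fakes.
            batch = list(new_fakes)
        # Store the new fakes, evicting random old entries once full.
        for img in new_fakes:
            if len(self.buffer) < self.capacity:
                self.buffer.append(img)
            else:
                self.buffer[self.rng.randrange(self.capacity)] = img
        return batch
```

The random eviction keeps the pool a rough sample of the generator's whole output history rather than a sliding window of its most recent behavior.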

\- It's a published paper from Apple! Great that they are starting to
contribute back to the research community.

------
brandonb
As context, this is Apple's first published AI research paper. After Russ
Salakhutdinov was hired, he promised that Apple would start publishing AI
papers in peer-reviewed conferences... and this is them making good on that
promise.

I work in AI (using HealthKit data, not as part of Apple), and my quick read
is that although this particular paper isn't especially groundbreaking, it's a
good omen for things to come. The idea they propose is a good twist on
Generative Adversarial Networks—which have been around either since 2014 or
the 1990's depending on who you ask—but you might see dozens of similar papers
at a conference like NIPS.

That said, Apple is using state-of-the-art techniques, submitting to peer-
reviewed conferences, publishing results on well-known baselines, AND
publishing openly, none of which were true of them even a couple of years ago.
This is real research.

So kudos to them. I'm really curious to see how they publish in core product
areas like Health or Siri -- they could make some major contributions to
research in addition to products.

------
spacehacker
FYI, the submission title has been changed from noting that this is Apple's
first publication on arXiv to the actual title of the paper.

It seems the technique presented in the paper was independently discovered by
Sixt et al.:
[https://arxiv.org/abs/1611.01331](https://arxiv.org/abs/1611.01331) (Nov. 4th
vs Nov. 15th)

------
kalid
I found the paper very well written. It's a testament to Andrew Ng's ML class
that you're able to follow a state-of-the-art paper (terms, equations,
conventions) after his lessons.

The notion of a "visual Turing test" was interesting (can humans distinguish
real from simulated images?). The sample sizes were fairly small (10
participants, 100 images each), which matters for the p-values (for all
their issues).

I wonder if there are volunteer participant efforts (similar to HealthKit) to
help automate these types of evaluations. You could even have a Recaptcha-
style service to show your simulations are realistic enough.

------
visarga
Yay! Apple publishing ML papers. The paper is pretty interesting in concept,
but I would have loved to see a greater variety of samples.

If we could make rendered images look more natural, wouldn't that be a huge
benefit for gaming? We could make games almost indistinguishable from real
video. Couple that with a VR headset and you have a pretty sweet system.

------
satyajeet23
This is the First ever Machine Learning Paper from !

~~~
detaro
\uF8FF?

EDIT: ah, that's where Apple puts their logo in their fonts... that codepoint
is in the Unicode private-use area, so don't expect it to show up anywhere
non-Apple.

------
shreyassaxena
OP here; not sure who changed the title. I thought it was relevant to the
paper.

Original title: First publication from Apple: ...

~~~
sctb
A moderator changed the title from “Apple's gift for christmas: An actual AI
paper”, which breaks the submission guidelines by editorializing:

> _Otherwise please use the original title, unless it is misleading or
> linkbait._

[https://news.ycombinator.com/newsguidelines.html](https://news.ycombinator.com/newsguidelines.html)

