Improved Techniques for Training GANs – OpenAI's first paper (arxiv.org)
126 points by gwulf on June 14, 2016 | 12 comments

So this is pretty interesting.

One of the great hopes of the current deep learning boom is that we will somehow develop unsupervised, or at least semi-supervised, techniques that can get close to the great results being seen with supervised learning.

Adversarial networks are one of the more likely routes to semi-supervised learning. There is also a lot of interesting work combining Bayesian optimization techniques with deep networks to develop one-shot learning[1][2]. Some of this was (very broadly) a response to the one-shot learning paper coming out of (I've forgotten!!) where the authors are famously doubtful about the utility of deep learning, and which showed somewhat competitive results on MNIST. (I can't remember who it was; there have been HN discussions about the group. Sorry!!)

Both OpenAI and DeepMind are following roughly similar paths here (no surprise really), and the results are looking really good.

[1] http://arxiv.org/abs/1603.05106

[2] http://arxiv.org/abs/1606.04080

> the one-shot learning paper coming out of (I've forgotten!!) where the authors are famously doubtful about the utility of deep learning

This? https://www.technologyreview.com/s/544376/this-ai-algorithm-...


That was it, although I see I've conflated the views of Gary Marcus (whom I think it is fair to characterize as anti-deep-learning[1]) with those of Lake, Salakhutdinov, and Tenenbaum, who wrote the paper.

[1] https://www.technologyreview.com/s/544606/can-this-man-make-...

The paper presents improved techniques for training Generative Adversarial Networks (GANs). Code is published here: https://github.com/openai/improved-gan (uses TensorFlow, Theano, Lasagne)
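One of the techniques in the paper is semi-supervised classification with a K+1-class discriminator: the K real classes plus an extra "generated" class. A rough numpy sketch of that classification head (the function names, shapes, and loss split here are my own reading of the idea, not taken from the released code):

```python
import numpy as np

def softmax(logits):
    # logits: (batch, K+1) -- K real classes plus one "generated" class
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def supervised_loss(logits, labels):
    """Cross-entropy for labeled real images (labels in 0..K-1)."""
    probs = softmax(logits)
    return -np.log(probs[np.arange(len(labels)), labels]).mean()

def unsupervised_loss(logits_real, logits_fake, K):
    """Unlabeled real images should land in any of the K real classes;
    generated images should land in class K (the fake class)."""
    p_real = softmax(logits_real)
    p_fake = softmax(logits_fake)
    loss_real = -np.log(1.0 - p_real[:, K]).mean()   # real -> not fake
    loss_fake = -np.log(p_fake[:, K]).mean()         # generated -> fake
    return loss_real + loss_fake
```

The nice property is that the unsupervised GAN objective and the supervised cross-entropy share the same discriminator, so unlabeled data improves the classifier.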

Nice work team!

When they find the visual Turing test results important enough to put in the abstract, it's a shame they only include tiny images in the paper :(

Those are the full-size images. CIFAR-10 is 32x32 colour images: https://www.cs.toronto.edu/~kriz/cifar.html

This is one of many examples of why claims of near- or super-human performance in AI papers need to be taken with a good amount of salt. CIFAR-10 is great for experimenting with your algorithms, but it's a horrible dataset for any kind of human-to-machine performance comparison.
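Worth noting that the paper's answer to noisy human evaluation is an automated metric, the Inception score: exp of the average KL divergence between a pretrained classifier's per-image predictions p(y|x) and its marginal p(y). A minimal numpy sketch of just the formula (the classifier probabilities here are stand-ins, not real Inception outputs):

```python
import numpy as np

def inception_score(pyx, eps=1e-12):
    """pyx: (N, K) class probabilities from a pretrained classifier.
    Score = exp(mean_x KL(p(y|x) || p(y))); higher means samples are
    both confidently classified and diverse across classes."""
    py = pyx.mean(axis=0, keepdims=True)  # marginal p(y) over samples
    kl = (pyx * (np.log(pyx + eps) - np.log(py + eps))).sum(axis=1)
    return float(np.exp(kl.mean()))
```

Uniform predictions give a score of 1; perfectly confident and perfectly diverse predictions over K classes give K.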

If they used another dataset with images of larger dimensions could they generate larger and less blurry images?

A DCGAN as usually implemented (e.g. Soumith's Torch DCGAN implementation) can produce arbitrarily large images by upscaling. The quality won't be good, though, unsurprisingly, because it was only trained on images of 32px or less. This also means it's hard to evaluate DCGAN improvements, because you're stuck squinting at 32px thumbnails trying to guess whether one blur looks more semantically meaningful than another blur.
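To illustrate the "arbitrarily large by upscaling" point: a DCGAN generator doubles the spatial resolution at each layer, so tacking on another stage mechanically yields a bigger image. A toy numpy sketch, using nearest-neighbour upsampling as a stand-in for the generator's strided transposed convolutions:

```python
import numpy as np

def upsample2x(x):
    # nearest-neighbour 2x upsampling; a stand-in for the strided
    # transposed convolutions a real DCGAN generator uses
    return x.repeat(2, axis=0).repeat(2, axis=1)

# start from a small spatial seed and double resolution per stage:
# 4 -> 8 -> 16 -> 32
img = np.random.randn(4, 4, 3)
for _ in range(3):
    img = upsample2x(img)
assert img.shape == (32, 32, 3)

# one extra stage gives a 64px output "for free" -- but the model was
# only ever trained on 32px data, so the added pixels carry no new detail
assert upsample2x(img).shape == (64, 64, 3)
```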

One nice thing about this paper is that, as I've been suggesting for a while, they up the input to 128px for the ImageNet thumbnails, and if you look at those, it immediately pops out that while the DCGAN has in fact successfully learned to construct vaguely dog-like images, the global structure has issues.

What is the license for the paper? I can see it's licensed for arXiv to distribute, but I cannot see any open-access or redistribution license beyond that.

Basically, can I redistribute this paper on my website? If so, under what license?

PS, great job

It would be nice if there were an explicit CC license, but arXiv has a perpetual license to distribute it, and you can link to arXiv. Why would you host the paper yourself?

Edit: The code has been published as well, under the MIT license. https://github.com/openai/improved-gan

Why? There could be many reasons, but mine is that I'm building a small arXiv alternative. Unfortunately, no license means closed, with all rights reserved... which I expected OpenAI to know and handle.
