One of the great hopes of the current deep learning boom is that we will somehow develop unsupervised, or at least semi-supervised, techniques that can come close to the great results being seen with supervised learning.
Adversarial networks are one of the more likely routes to semi-supervised learning. There is also a lot of interesting work in combining Bayesian optimization techniques with Deep Networks to develop one-shot learning. Some of this was (very broadly) in response to the one-shot learning paper coming out of (I've forgotten!!) where the authors are famously doubtful about the utility of Deep Learning, and showed somewhat competitive results on MNIST. (I can't remember who it was - there have been HN discussions about the group. Sorry!!)
Both OpenAI and DeepMind are following roughly similar paths here (no surprise really), and the results are looking really good.
Nice work team!
One nice thing about this paper is that, as I've been suggesting for a while, they up the input to 128px for the ImageNet thumbnails. If you look at those, it immediately pops out that while the DCGAN has in fact successfully learned to construct vaguely dog-like images, the global structure has issues.
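For anyone wondering why the jump from 64px to 128px is a meaningful change: in the standard DCGAN generator layout, each stride-2 transposed convolution doubles the spatial size, so 128px output needs one more upsampling stage than the usual 64px setup (and the global structure has to survive that extra stage). A minimal sketch of the arithmetic, assuming the common kernel=4/stride=2/padding=1 configuration (not this paper's exact architecture):

```python
# Hypothetical sketch, not the paper's code: spatial sizes produced by a
# stack of stride-2 transposed convolutions, the usual DCGAN upsampling path.
# With kernel=4, stride=2, padding=1 the output is exactly 2x the input:
#   out = (in - 1) * stride - 2 * padding + kernel
def conv_transpose_out(size, kernel=4, stride=2, padding=1):
    return (size - 1) * stride - 2 * padding + kernel

size = 4   # typical initial feature-map size after projecting the noise vector
stages = 0
while size < 128:
    size = conv_transpose_out(size)  # 4 -> 8 -> 16 -> 32 -> 64 -> 128
    stages += 1

print(stages, size)  # 5 upsampling stages for 128x128, vs 4 for 64x64
```

So the generator has to keep the image coherent through five doublings instead of four, which is presumably where the global-structure issues show up.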
Basically, can I redistribute this paper on my website? If so, under what license?
PS, great job
Edit: The code has been published as well, under the MIT license. https://github.com/openai/improved-gan