Well, if you're like me and not an expert in the field, you could be forgiven for not following a sentence like this:
>I originally developed this interface as a tool to explore the latent space of the introspective adversarial network, a novel hybridization of the VAE and GAN which powers this system.
I'm wondering: can a similar technique be used to zoom in on a photo? If you can detect features of the face (eyes, nose, mouth), and you have a library of high-resolution examples of those features, you could presumably fill in the missing details of a zoomed-in image using that library.
There are a few ways to go about this, e.g. using adversarial or deconvolutional networks. Super-resolution is a little tricky to evaluate, so good benchmarks are hard to come by, but Waifu2x is the best-named approach thus far.
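To make the "deconvolutional" option concrete: the core building block is a transposed convolution, which spreads each low-resolution pixel across a patch of the output via a (normally learned) kernel. Here's a minimal numpy sketch of that upsampling step in isolation — the function name and the hand-picked kernel are illustrative, not from any particular paper's implementation:

```python
import numpy as np

def transposed_conv2d(x, kernel, stride=2):
    """Upsample a 2D array x by scattering each input pixel, scaled by
    the kernel, onto a stride-spaced grid in the output. This is the
    'deconvolution' layer that super-resolution networks use as their
    learned upsampling step (here the kernel is fixed, not learned)."""
    h, w = x.shape
    kh, kw = kernel.shape
    out = np.zeros(((h - 1) * stride + kh, (w - 1) * stride + kw))
    for i in range(h):
        for j in range(w):
            # Each input pixel contributes a kernel-shaped patch;
            # overlapping patches sum, which smooths the result.
            out[i * stride:i * stride + kh,
                j * stride:j * stride + kw] += x[i, j] * kernel
    return out

# 2x upsample a tiny "image" with a bilinear-style 2x2 kernel.
low_res = np.array([[1.0, 2.0],
                    [3.0, 4.0]])
kernel = np.full((2, 2), 1.0)
high_res = transposed_conv2d(low_res, kernel, stride=2)
print(high_res.shape)  # (4, 4)
```

In an actual super-resolution network the kernel weights are trained (often stacked with regular conv layers, and with an adversarial loss in the GAN variants), so the "filling in" of plausible detail comes from the training data rather than a hand-written filter.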
That is really interesting. This type of enlargement is amazing for purely aesthetic consumption. The most impressive aspect is the ability to maintain plausible sharpness in the details, even when the fine details themselves cannot be faithfully reconstructed.
I am so looking forward to progress in this area, and expect it to be huge in desktop editing and in-camera processing, w/r/t both actual edits and tonal adjustment. I know much more about the Adobe Raw Engine than I'd ever want to, slowly and painfully acquired by editing 200K+ images - but I know that the right software would be able to select starting presets for me based on the image's characteristics.
Paper: http://arxiv.org/abs/1609.07093