Markov-generated infinite multi-resolution textures with a 'Neighborhood Causality' feature that looks like it removes some of the uncanny valley that the results in the article suffer from.
I find it interesting that the network seems to have trouble getting the global structure right. This is particularly clear when the source features a regular pattern that carries through the whole image. If you zoom in on a small enough region of one of the synthesized brick textures, it looks fine, but looking at the whole thing, it's clear that the network doesn't get that it needs to produce identical-looking bricks, with lines that match up and run parallel to each other, etc.
I wonder if this global structure gets lost in the pooling layers? I'm not sure how global constraints could be enforced across pooling. Part of the pooling layers' job is to provide translation invariance, after all.
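A toy numpy sketch of that intuition (mine, not from the paper): max pooling keeps the strongest activation in each window but throws away exactly where in the window it occurred, which is the kind of information you'd need to keep brick lines aligned.

    import numpy as np

    def max_pool_2x2(x):
        # 2x2 max pooling, stride 2: keep the strongest activation in
        # each block, discard exactly where in the block it occurred.
        h, w = x.shape
        return x.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

    a = np.zeros((4, 4)); a[0, 0] = 1.0  # feature at one position
    b = np.zeros((4, 4)); b[1, 1] = 1.0  # same feature, shifted a pixel
    print(np.array_equal(max_pool_2x2(a), max_pool_2x2(b)))  # True

Both inputs pool to the same output, so layers above the pooling can't tell the two alignments apart.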
I think the "where" dorsal stream (which is thought to be the missing piece in image recognition [1][2]) alone would not be able to fix it. What would still be missing, I think, is a network that learns to recognize patterns (i.e. repeating patterns and symmetries) in the "where" information.
I could also imagine that sequential information (i.e. videos) would help in the case of the liquid texture.
This group's most interesting work is the paper that introduced "style transfer"[1], which is what all those photos-painted-in-the-style-of-van-Gogh pictures[2] that went around a few months ago were using.
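For context, the core trick shared by the texture paper and style transfer is matching Gram matrices of CNN feature maps between the source and the generated image. A minimal PyTorch sketch of that loss (names and layer choice are mine):

    import torch

    def gram_matrix(feats):
        # feats: (channels, height, width) activations from one CNN layer.
        # The Gram matrix holds channel-by-channel correlations averaged
        # over all spatial positions -- spatial layout is discarded.
        c, h, w = feats.shape
        f = feats.reshape(c, h * w)
        return (f @ f.t()) / (c * h * w)

    def texture_loss(gen_feats, src_feats):
        # Sum of squared Gram differences across the chosen layers.
        return sum(((gram_matrix(g) - gram_matrix(s)) ** 2).sum()
                   for g, s in zip(gen_feats, src_feats))

The generated image is then optimized to drive this loss down; because the statistics are spatially averaged, that also relates to the lost global structure discussed above.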
This is an interesting approach, but it seems less effective than existing patch-based texture synthesis approaches like http://www.cc.gatech.edu/gvu/perception//projects/graphcutte... and not obviously better than even simpler pixel-based approaches. Using deep CNNs here doesn't appear to improve results over existing techniques.
I point this out because existing texture synthesis methods already work surprisingly well; to anyone not familiar with them, the results achieved here might look much harder to produce than they actually are.
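For anyone unfamiliar: the "pixel-based approaches" are non-parametric samplers in the Efros & Leung vein, which grow the output one pixel at a time by copying from the best-matching neighborhood in the source. A crude grayscale numpy sketch of the idea (mine, not a tuned implementation; border pixels are left as noise):

    import numpy as np

    def synthesize(src, out_size, k=9, seed=0):
        # Grow an out_size x out_size texture in scan order. Each pixel
        # is copied from the source position whose k x k neighborhood
        # best matches the already-generated pixels (SSD over the
        # causal part of the window).
        src = src.astype(float)
        rng = np.random.default_rng(seed)
        pad = k // 2
        out = rng.choice(src.ravel(), size=(out_size, out_size))
        # All k x k source neighborhoods and their center pixels.
        wins = np.lib.stride_tricks.sliding_window_view(src, (k, k))
        wins = wins.reshape(-1, k, k)
        centers = wins[:, pad, pad]
        # Causal mask: only pixels generated before the current one count.
        mask = np.ones((k, k))
        mask[pad, pad:] = 0.0
        mask[pad + 1:, :] = 0.0
        for i in range(pad, out_size - pad):
            for j in range(pad, out_size - pad):
                nb = out[i - pad:i + pad + 1, j - pad:j + pad + 1]
                ssd = (((wins - nb) * mask) ** 2).sum(axis=(1, 2))
                out[i, j] = centers[ssd.argmin()]
        return out

It's a brute-force O(source pixels x output pixels) loop, but even this naive version reproduces local texture structure remarkably well.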
For procedural game textures this seems like it has potential.
Adapt this to produce a feed-forward net running on a GPU that generates an infinite texture in any direction.
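One plausible shape for that (my sketch, not anything from the paper): train a fully convolutional generator against a texture loss such as the Gram loss upthread. Because it's all convolutions, a bigger noise field gives a bigger texture, and extending the noise field extends the texture in any direction (modulo small border effects).

    import torch
    import torch.nn as nn

    class TextureGenerator(nn.Module):
        # Fully convolutional, so it accepts any input size. Weights here
        # are untrained; in practice you'd fit them by minimizing a
        # texture loss against a reference image.
        def __init__(self, channels=64):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(),
                nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
                nn.Conv2d(channels, 3, 3, padding=1), nn.Sigmoid(),
            )

        def forward(self, noise):
            return self.net(noise)

    gen = TextureGenerator()
    tile = gen(torch.rand(1, 1, 256, 256))   # one 256x256 RGB texture
    big = gen(torch.rand(1, 1, 1024, 1024))  # same weights, bigger texture

A single forward pass is cheap compared to the per-image optimization in the paper, which is what would make real-time procedural game textures plausible.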
Some of the generated textures have a curious uncanny-valley feel: very nearly the same as the source, but the differences can look interestingly weird.
How does this affect copyright? As we move closer and closer to being able to take the essence of other people's pictures and automatically create our own out of them, we're probably going to fight over who actually owns the picture. Is there any precedent for such fights?
This stuff seems better:
https://www.think-cell.com/en/pdf/think-cell_article_siggrap...
https://graphics.stanford.edu/papers/texture-synthesis-sig00...
http://johanneskopf.de/publications/solid/