Gwern has applied this to an anime dataset
Cyril at Google has applied it to artwork
I posted this to raise awareness of what a talented group of researchers at Nvidia built over the course of two years: the latest state of the art for GANs. https://arxiv.org/pdf/1812.04948.pdf (https://github.com/NVlabs/stylegan)
Rani Horev wrote up a nice description of the architecture here. https://www.lyrn.ai/2018/12/26/a-style-based-generator-archi...
Feel free to experiment with the generations yourself in a Colab notebook I made.
I'm currently working on a project to map BERT embeddings of text descriptions of the faces directly to the latent space embedding (which is just a 512-dimensional vector). The goal is to control the image generation with sentences once the mapping network is trained. Will definitely post on Hacker News again if that succeeds. The future is now!
"One or more high-end NVIDIA GPUs with at least 11GB of DRAM. We recommend NVIDIA DGX-1 with 8 Tesla V100 GPUs."
I'm sure my wife will understand why I took out that second mortgage on our home... no problem.
How many faces are generated? This can't be real time.
The results were... eh... okay, at least on my ugly face.
But overall, I'm still tweaking. In the meantime, I've been focusing on static image analysis for aging research, but I hope to find better encoding schemes down the road.
> Turns out it can disentangle pretty much any set of data.
All the examples I have seen (including your links) are variants of face generation algorithms. Any ideas on how this could be useful beyond image generation in some style? Specifically for (data) science?
Sorry if this is a naive question.
By "variants of face generation algorithms" I mean any image generation really.
Aside from the original work, on Twitter, people have done Gothic cathedrals very well, graffiti very well, fonts very well, and WikiArt oil portraits not so well. On Danbooru2017 full anime images (linked in my thread), one person has... suggestive blobs but has only put 2-3 GPU-days into it and we aren't expecting much so early into training. skylion has been running StyleGAN on a whole-body anime character dataset he has, and the results overnight (on 4 Titans) are pretty impressive but he hasn't shared anything publicly yet.
(Who says we aren't compute-limited these days?!)
That is, until Graphcore delivers their IPU.
It's not that hard to do it yourself, but it's a really clean package, and it gives you nice CLI flags for most things, like the pooling strategy and which layer you want to get the activations from.
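For reference, the do-it-yourself version of those two knobs (layer choice and pooling strategy) is just indexing and a reduction over a stack of per-layer token activations. The array here is random stand-in data with BERT-base shapes, not real model output:

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for BERT activations: (layers, tokens, hidden_size).
hidden_states = rng.standard_normal((12, 9, 768))

def pool(states, layer=-2, strategy="mean"):
    """Pick one layer, then reduce across tokens to a sentence vector."""
    acts = states[layer]            # (tokens, hidden_size)
    if strategy == "mean":
        return acts.mean(axis=0)    # average over tokens
    if strategy == "max":
        return acts.max(axis=0)     # per-dimension max over tokens
    if strategy == "cls":
        return acts[0]              # first token's activation only
    raise ValueError(f"unknown strategy: {strategy}")

vec = pool(hidden_states, layer=-2, strategy="mean")
print(vec.shape)  # (768,)
```

The second-to-last layer and mean pooling are common defaults; the point is only that each "flag" maps to one line of array manipulation.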
I think this is a very dangerous game we are playing here but I guess it is going to be done.
then yes, it should be possible
"To qualify as a work of 'authorship' a work must be created by a human being":
https://www.copyright.gov/comp3/chap300/ch300-copyrightable-... [PDF], see section 313.2 "Works that lack human authorship"
Monkey selfie case:
On 23 April, the court issued its ruling in favor of Slater, finding that animals have no legal authority to hold copyright claims.
Copyright is (read the law!) a temporary monopoly granted for works meeting certain criteria, creativity being one of them. You'd hold copyright for the code you wrote to generate the "art". If you download somebody else's code (as this site uses Nvidia's), you lack the creative element.