
Why Bigger Isn’t Always Better with GANs and AI Art - Artnome
https://www.artnome.com/news/2018/11/14/helena-sarin-why-bigger-isnt-always-better-with-gans-and-ai-art
======
riebschlager
Great article! Thanks for the writeup, the resources, and for introducing me to
Helena Sarin.

I've been wanting to get into GANs, but I've really been turned off by the
samey-ness of the work I've been seeing. I'm afraid I'd spend a ton of time
getting everything set up only to end up creating more 512x512 JPEGs that look
just like what everyone else is making. Helena's work makes me want to give it
another shot, though.

To me, it really feels like AI-generated artwork is in that hype phase you see
with any new tech used in art. Would that work from Obvious have sold for
nearly half a million dollars if it weren't a GAN-generated work? Probably not.
I'm looking forward to the day an AI work gets that much attention because it's
compelling all on its own, not just because of how it was made.

------
visarga
For comparison, here's a web app that lets you play with BigGAN:

[https://ganbreeder.app/](https://ganbreeder.app/)

~~~
nnd
Appears to be down?

------
gwern
One way forward would be _retraining_ the official BigGAN models. You may not
have enough compute to train BigGAN from scratch, but you may have enough to
finetune/retrain it on a new dataset to get the best of both worlds.
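
A minimal sketch of what that finetuning could look like, assuming a PyTorch
reimplementation whose pretrained weights can actually be loaded (the module,
class names, and checkpoint files here are all hypothetical):

    import torch
    from torch.optim import Adam

    # Hypothetical PyTorch port of BigGAN with converted pretrained weights
    from biggan_pytorch import Generator, Discriminator

    G, D = Generator(), Discriminator()
    G.load_state_dict(torch.load("biggan_G.pt"))  # assumed converted weights
    D.load_state_dict(torch.load("biggan_D.pt"))

    # Finetuning wants much lower learning rates than from-scratch training
    opt_G = Adam(G.parameters(), lr=1e-5, betas=(0.0, 0.999))
    opt_D = Adam(D.parameters(), lr=4e-5, betas=(0.0, 0.999))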

There's an attempt at implementing BigGAN in PyTorch
([https://github.com/AaronLeong/BigGAN-pytorch](https://github.com/AaronLeong/BigGAN-pytorch)) -
anyone know how to load & retrain a TF model in it
([https://github.com/AaronLeong/BigGAN-pytorch/issues/2](https://github.com/AaronLeong/BigGAN-pytorch/issues/2))? :)
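
For the loading half, the usual trick is to dump the TF variables and match
them by name and shape onto the PyTorch module's state dict; a rough sketch,
assuming the released weights were available as an ordinary TF checkpoint (the
variable name below is made up, and the name mapping is the hard,
model-specific part):

    import tensorflow as tf
    import torch

    ckpt = "biggan/model.ckpt"  # assumed checkpoint path

    # List (name, shape) pairs to match against the PyTorch parameters
    for name, shape in tf.train.list_variables(ckpt):
        print(name, shape)

    # Load one TF variable as a numpy array and convert it; TF stores conv
    # kernels as HWIO while PyTorch expects OIHW, hence the permute
    w = tf.train.load_variable(ckpt, "Generator/conv_0/kernel")  # made-up name
    w_torch = torch.from_numpy(w).permute(3, 2, 0, 1).contiguous()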

------
habitue
Main point: bigger GANs like BigGAN don't let you train on your own images
because they're so expensive to train. So everyone is creating art from the
same pretrained model (and it'll presumably all look somewhat similar).

~~~
gwern
> bigger GANs like BigGAN don't let you train on your own images because
> they're so expensive to train.

I don't think we know that. They're super-expensive to train from scratch, but
there have been few experiments investigating GAN transfer learning. In my own
very simple personal experiments, when I trained a ProGAN on one anime
character's faces and then retrained it on a second anime character's faces
(Asuka -> Holo), it adapted quickly and successfully, so I suspect the
released BigGAN models might be retrainable with a tiny fraction of the
resources. Training on your own photos should be doable.
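
The retraining itself would just be ordinary GAN training resumed from the
pretrained weights, pointed at the new dataset; a sketch with a generic
non-saturating loss, given a pretrained G/D and their optimizers as in the
finetuning sketch above (the dataset path and latent size are placeholders):

    import torch
    import torch.nn.functional as F
    from torch.utils.data import DataLoader
    from torchvision import datasets, transforms

    # Only the data changes; the networks keep their pretrained weights
    tfm = transforms.Compose([transforms.Resize(128),
                              transforms.CenterCrop(128),
                              transforms.ToTensor()])
    data = datasets.ImageFolder("my_photos/", transform=tfm)
    loader = DataLoader(data, batch_size=16, shuffle=True)

    for real, _ in loader:
        z = torch.randn(real.size(0), 128)  # placeholder latent size
        fake = G(z)

        # Discriminator step (non-saturating logistic loss)
        d_loss = (F.softplus(-D(real)).mean()
                  + F.softplus(D(fake.detach())).mean())
        opt_D.zero_grad(); d_loss.backward(); opt_D.step()

        # Generator step
        g_loss = F.softplus(-D(fake)).mean()
        opt_G.zero_grad(); g_loss.backward(); opt_G.step()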

The real reason everyone is using the BigGAN models rather than retraining
them is that, well, there's no way to retrain them! The BigGAN source wasn't
released, and the only implementation so far doesn't support loading them and
retraining. So you can't do it, no matter how much you want to, unless you're
so good at TF/PyTorch that you can program it yourself. Which apparently no
one has done so far.

