
Imaginary worlds dreamed by BigGAN - sytelus
http://aiweirdness.com/post/178619746932/imaginary-worlds-dreamed-by-biggan
======
minimaxir
You can now demo BigGAN in an official Colaboratory notebook (backed by a GPU)
to create your own AI-generated nightmares:
[https://colab.research.google.com/github/tensorflow/hub/blob...](https://colab.research.google.com/github/tensorflow/hub/blob/master/examples/colab/biggan_generation_with_tf_hub.ipynb)

Examples from the notebook by the article author:
[https://twitter.com/JanelleCShane/status/1062067001504321536](https://twitter.com/JanelleCShane/status/1062067001504321536)

~~~
saganus
Can someone explain to a layman what the truncation parameter does?

~~~
Fr0styMatt88
My understanding is that the 'truncation' parameter makes the output of BigGAN
more closely resemble the training data set.

Moving the truncation towards 0 will give you more realistic but less varied
images, while moving it towards 1 will give you more varied but potentially
more nonsensical images.
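For the curious, the truncation trick can be sketched in a few lines of numpy:
sample the latent vector from a normal distribution, resample anything beyond
two standard deviations, and scale the result by the truncation value. The
function name and the two-sigma cutoff here are my own illustration (the
official Colab does something similar with scipy's truncnorm):

```python
import numpy as np

def sample_truncated_z(batch_size, dim, truncation=0.5, seed=None):
    """Sample latent vectors from a truncated, scaled normal distribution.

    Entries of a standard normal that fall outside [-2, 2] are resampled,
    then everything is multiplied by `truncation`. A smaller truncation
    pulls samples toward the mean of the latent distribution (more typical,
    realistic images); a larger one admits rarer latents (more variety,
    more weirdness).
    """
    rng = np.random.RandomState(seed)
    z = rng.randn(batch_size, dim)
    # Resample any entries beyond two standard deviations until none remain.
    while True:
        mask = np.abs(z) > 2.0
        if not mask.any():
            break
        z[mask] = rng.randn(mask.sum())
    return truncation * z
```

The generator is then called with this `z` instead of an unrestricted sample.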

~~~
saganus
Aha!

That makes sense. Thanks!

------
gear54rus
> Combine sugar and sugar and sprinkle with the sugar.

Seems about right as a description of current food lol

On a more serious note, I wonder if our great-grandchildren will look at these
programs from blog posts and see them as some kind of "PRINT HELLO; GOTO 10"
for AI: ancient artifacts showing the birth of a new paradigm of automation.

~~~
dieterrams
> Combine sugar and sugar and sprinkle with the sugar.

What is this quote from?

~~~
classichasclass
An earlier entry from the same blog on generated recipes. My wife and I were
laughing so hard at these we got some weird looks from the waiter. The use of
asterisks is priceless in that one.

~~~
gear54rus
Strangely enough, on mobile all those posts were merged into one big page, so
I didn't even realize it was not the one linked to.

------
aportnoy
Funny how they mess with your brain: the illusion is that you are looking at
something familiar. The texture is "easy" to look at, but the brain fails to
recognize a known object.

------
CommieBobDole
Interesting article, but the subject of the article (the amazing images) is
only shown in tiny thumbnails, with no links to the full-size images. That
seems to sort of defeat the purpose.

~~~
drodgers
Even the largest images produced by the network are only 512px x 512px and
most are smaller — adding more pixels to these kinds of networks is fairly
expensive and not at all worth it if you're training thousands of them over
and over again while doing research.

------
aquarin
One day this will generate graphics in games. Imagine Doom with synthetic
monsters controlled by your biggest fear.

~~~
therein
Or No Man's Sky with actually infinitely many unique planets.

~~~
fapjacks
Or as I like to say, Skyrim with infinite content.

------
gear54rus
Read some other entries on that blog; the recipes are something to look at:
[http://aiweirdness.com/post/176589646292/its-time-for-cooking-with-neural-networks](http://aiweirdness.com/post/176589646292/its-time-for-cooking-with-neural-networks)

More of the same:
[https://www.reddit.com/r/SubredditSimulator/](https://www.reddit.com/r/SubredditSimulator/)

~~~
minimaxir
SubredditSimulator uses Markov chains to generate text, not a neural network.

I have made my own subreddit using the same RNN tool as that post:
[https://reddit.com/r/SubredditNN](https://reddit.com/r/SubredditNN)
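For anyone unfamiliar with the difference: a Markov chain just records which
words follow which in the training text and walks those counts at random,
with no learned representation at all. A toy word-level sketch (the names
are illustrative, not SubredditSimulator's actual code):

```python
import random
from collections import defaultdict

def build_chain(text):
    """Map each word to the list of words observed to follow it."""
    words = text.split()
    chain = defaultdict(list)
    for a, b in zip(words, words[1:]):
        chain[a].append(b)
    return chain

def generate(chain, start, length=10, seed=None):
    """Walk the chain: repeatedly pick a random observed successor."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        followers = chain.get(out[-1])
        if not followers:
            break  # dead end: the last word never had a successor
        out.append(rng.choice(followers))
    return " ".join(out)
```

An RNN, by contrast, learns a continuous representation of context, which is
why its output tends to hold together over longer spans than a Markov chain's.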

~~~
gear54rus
Have you tried partnering with SubredditSimulator so your posts are displayed
there too? I imagine they would get more exposure and therefore better feedback.

~~~
minimaxir
I believe development on SubredditSimulator is stalled now that the dev is no
longer working at Reddit / is building a Reddit competitor.

~~~
gear54rus
Makes it more likely for him to agree :)

------
Nasrudith
The train picture reminds me of some of my botched attempts at creating
perspective by combining two differing horizon line sub-images into one. I
suppose that sort of rendering abstraction is the same in terms of
accidentally producing weirdness, even though different processes were used.

------
LeicaLatte
How machines hallucinate.

[https://twitter.com/quasimondo/status/1063449767617937410](https://twitter.com/quasimondo/status/1063449767617937410)

------
mkagenius
We should have a reverse image search built into ImageNet, so anyone who
doubts the results can look for similar images in the training set. Like
Google's reverse image search.

------
bufferoverflow
RIP stock photography. And possibly videography too.

~~~
mkagenius
Very doubtful; the results just seem to be combinations of existing
backgrounds and existing features (noses, ears, fur, etc.).

The different poses you would get naturally are much more complicated and
diverse than what these networks seem to be generating. All in all, these
networks aren't generating anything new.

~~~
visarga
The images you saw in the article were hand picked to be weird. Most of them
are photorealistic and are not copies of any training photo. If you think it's
so easy, paint a photorealistic image in Photoshop by hand (no direct copying,
but you can look at other images all you want) and let's see how it compares.

Cropped a few more BigGAN samples for reference:
[https://imgur.com/a/GIQSo4I](https://imgur.com/a/GIQSo4I)

Or how about ProGAN for faces?
[https://www.youtube.com/watch?v=36lE9tV9vm0](https://www.youtube.com/watch?v=36lE9tV9vm0)
Does that look like a simple copy? It's a 'walk in the latent space' of faces.

Also, take a look at this tool (TL-GAN)
[https://www.youtube.com/watch?v=O1by05eX424](https://www.youtube.com/watch?v=O1by05eX424)

That comes with an online demo:
[https://www.kaggle.com/summitkwan/tl-gan-demo](https://www.kaggle.com/summitkwan/tl-gan-demo)

~~~
mkagenius
> photorealistic image in Photoshop by hand

If you are competing with a human, then I am sure a professional (if not me)
can easily beat the best of what this network produces, and not just at
512x512 pixels but at 100 times more pixels. The network has a long way to go
to catch up with humans.

> Cropped a few more BigGAN samples for reference:
> [https://imgur.com/a/GIQSo4I](https://imgur.com/a/GIQSo4I)

I don't know, but to me all of the images in the imgur link seem unrealistic.
The head, torso, or legs are messed up in most (all?) of them.

> how about ProGAN for faces?
> [https://www.youtube.com/watch?v=36lE9tV9vm0](https://www.youtube.com/watch?v=36lE9tV9vm0)
> Does that look like a simple copy? It's a 'walk in the latent space' of
> faces.

No, I am not saying it's a simple copy; the copy is great. The fact is that
it's a copy. An imperfect one.

> Also, take a look at this tool (TL-GAN)
> [https://www.youtube.com/watch?v=O1by05eX424](https://www.youtube.com/watch?v=O1by05eX424)

Sure, human faces come in all shapes and sizes, so it's tricky to judge the
effectiveness, since every modification is a possible real person.

~~~
visarga
> Sure, human faces come in all shapes and sizes, so it's tricky to judge the
> effectiveness, since every modification is a possible real person.

What I wanted to show you is that you can walk through the 'latent space' of
faces and generate intermediary images between any two images, thus they are
not simple copies.
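A 'walk in the latent space' is just interpolation between latent vectors
before they are fed to the generator: every intermediate vector decodes to a
plausible in-between image, which a lookup table of memorized photos could
not do. A minimal numpy sketch (the function name is my own):

```python
import numpy as np

def latent_walk(z_start, z_end, steps=8):
    """Linearly interpolate between two latent vectors.

    Returns an array of `steps` vectors moving from z_start to z_end.
    Feeding each one to the generator yields a smooth morph between the
    two endpoint images.
    """
    alphas = np.linspace(0.0, 1.0, steps)[:, None]
    return (1.0 - alphas) * z_start + alphas * z_end
```

(Spherical interpolation is often preferred in practice, since it stays
closer to the region where Gaussian latents actually live, but the idea is
the same.)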

------
red75prime
That's what a purely holistic vision of the world looks like: the notion of a
"collection of parts" is not included.

------
beaconstudios
sweet, a cursed image generator.

