
StyleGAN2 - rolux
https://github.com/NVlabs/stylegan2
======
Veedrac
I set up this super simple ‘Which Face Is Real?’
([http://www.whichfaceisreal.com/](http://www.whichfaceisreal.com/)) style
challenge. Click the row to show the answers. You might need to zoom out.

[https://veedrac.github.io/stylegan2-real-or-fake/game.html](https://veedrac.github.io/stylegan2-real-or-fake/game.html)

There's a harder version as well, where the image is zoomed in.

[https://veedrac.github.io/stylegan2-real-or-fake/game_croppe...](https://veedrac.github.io/stylegan2-real-or-fake/game_cropped.html?x)

I get 100% reliably with the first link (game.html), and got 4/5 on the
cropped version (game_cropped.html) so far.

~~~
ImminentFate
On your site I can consistently get 100% by looking at the backgrounds since
they generate in somewhat inorganic patterns.

~~~
nuccy
Also, after watching the video from the StyleGAN2 team
[https://drive.google.com/file/d/1f_gbKW6FUUHKkUxciJ_lQx29mCq...](https://drive.google.com/file/d/1f_gbKW6FUUHKkUxciJ_lQx29mCq_fSBy/view)
I now know that the original StyleGAN, whose images are apparently used for
this "game", produces faces with "water droplet" and phase artifacts, so I was
able to spot a few fakes just by looking for those.

~~~
Veedrac
Only whichfaceisreal.com uses the original StyleGAN. The github.io links use
StyleGAN2.

------
gwd
Only watched the video, but one of the interesting things is the potential
method to tell a generated image from a real one: namely, if you take a
generated image, it's possible to find parameters which will generate exactly
the same image. But if you take a real image, it's generally _not_ possible to
get exactly the same image, but only a similar one.

The exact point in the video:

[https://youtu.be/c-NJtV9Jvp0?t=208](https://youtu.be/c-NJtV9Jvp0?t=208)
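The idea can be sketched with a toy stand-in. Everything below is illustrative: the "generator" is just a random linear map, and `project()` is plain gradient descent on reconstruction error, not StyleGAN2's actual projector (which optimizes in W space with LPIPS loss and noise regularization). The point is only that outputs of the generator can be reconstructed exactly, while arbitrary "real" inputs cannot.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "generator": a fixed linear map from an 8-dim latent to a 64-dim "image".
# Its outputs span only an 8-dim subspace of image space.
W = rng.normal(size=(64, 8)) / np.sqrt(64)

def generate(z):
    return W @ z

def project(x, steps=500, lr=0.5):
    """Find the latent whose output best matches x, by gradient descent
    on the squared reconstruction error 0.5 * ||W z - x||^2."""
    z = np.zeros(8)
    for _ in range(steps):
        grad = W.T @ (W @ z - x)
        z -= lr * grad
    return z

def reconstruction_error(x):
    return np.linalg.norm(generate(project(x)) - x)

fake = generate(rng.normal(size=8))  # lies exactly on the generator's range
real = rng.normal(size=64)           # almost surely off-range

print(reconstruction_error(fake))    # ~0: a latent reproduces it exactly
print(reconstruction_error(real))    # noticeably larger: only approximated
```

Thresholding that residual gives a crude fake-vs-real classifier, which is essentially what the video describes.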

~~~
gwd
Also, looks like
[https://thispersondoesnotexist.com/](https://thispersondoesnotexist.com/) has
been updated to use the new generator.

~~~
mzs
Phew, I looked at three and they all had toothy smiles where the teeth grew
out of the lips, and one had a floating tooth.

------
resiros
The demo in the official video is mind-blowing:
[https://www.youtube.com/watch?v=c-NJtV9Jvp0](https://www.youtube.com/watch?v=c-NJtV9Jvp0)
I wonder when we will see full movies made with deep learning that are
indistinguishable from real ones.

~~~
globuous
The part where they get multiple angles from the same generated face is insane.

------
alexcnwy
The part of the video showing the location bias in phase artifacts (straight
teeth on angled faces) is really interesting and very clear in retrospect if
you look at StyleGAN v1 outputs.

Their “new method for finding the latent code that reproduces a given image”
is really interesting and I’m curious to see if it plays a role in the new $1
million Kaggle DeepFakes Detection competition.

It feels like we’re almost out of the uncanny valley. It’s interesting to
place this in context and think about where this technology will be a few
years from now - see this Tweet by Ian Goodfellow on 4.5 years of GAN progress
for face generation:
[https://twitter.com/goodfellow_ian/status/108497359623614464...](https://twitter.com/goodfellow_ian/status/1084973596236144640?lang=en)

------
anonfunction
I'm surprised to see Nvidia hosting [1] the pre-trained networks on Google
Drive, which has already been blocked for going over the quota:

> Google Drive download quota exceeded -- please try again later

1\. [https://github.com/NVlabs/stylegan2#using-pre-trained-networ...](https://github.com/NVlabs/stylegan2#using-pre-trained-networks)

------
anyzen
A bit off-topic - the license [0] is interesting. IIUC, if anyone who is using
this code decides to sue NVidia, the grants are revoked, and they can sue back
for copyright infringement?

Also, it's interesting that even such a "short" license contains trivial
mistakes: section 2.2 is missing, though it is referenced from 3.4 and
3.6 - I wonder what it was...

[0]
[https://nvlabs.github.io/stylegan2/license.html](https://nvlabs.github.io/stylegan2/license.html)

~~~
supermatt
Patent grants.

It's a butchered Amazon Software License:
[https://aws.amazon.com/asl/](https://aws.amazon.com/asl/)

~~~
bonoboTP
I've never understood why it's allowed to give up "suing rights" in contracts.
It is in the interest of the public that any law infringement gets
investigated and the infringers punished.

In principle, a lawsuit is just asking a neutral party to judge whether the
law was indeed broken where I suspect it was. Ideally, this is not an
inherently hostile action that should be met with any negative consequences.

I know criminal and civil law are different beasts, but still the situation is
analogous to renting out a room to someone in exchange for them promising not
to report me to the police if I beat them up, else I can kick them out without
notice.

It should be an inalienable right of anyone to report/sue for any wrongdoing
against them. It should not be conditional on losing some (any) beneficial
things.

"I agree I will not sue you even if I later find out that you did something
illegal against me" should not be legal to be in a contract.

~~~
JoeAltmaier
Giving up such rights is often accompanied by alternative ways of arbitrating
disagreement. That's reasonable, as it avoids 50% of the cost (lawyers) in
such cases.

~~~
bonoboTP
I dream of a world where I can present my complaints in plain English and have
them considered by a court, without having to pay a fortune to lawyers.

It's somehow an accepted state of affairs that even if you're in the right,
you need some cunning lawyers who will twist words in the right way and build
a strong narrative of why you are right, otherwise tough luck.

Justice should not be up for purchase.

------
tiborsaas
Imagine when these faces start talking and tracking objects with their eyes,
with a perfectly synthesized voice, all generated in real time.

------
gdubs
Of course we’ll hit a wall at some point, but when this repo dropped the other
night and I saw the rotating faces in the video, it made me realize that in
the future, VR experiences might be generated with nets rather than modeled
with traditional CG.

------
sails
Any good resources for using GANs to generate synthetic tabular data?

------
narsk
ctfu at the car images. I made a Twitter bot to tweet them out with fake
make/model names:
[https://twitter.com/bustletonauto](https://twitter.com/bustletonauto)

------
nalllar
urgh, custom CUDA ops now.

The original StyleGAN worked on AMD cards; this one won't work without porting those ops.

):

------
jdkdnfndnfjd
It makes me feel ill to see computers doing things like this. AI Dungeon was
difficult to stomach as well. GANs were invented on a whim by a single person.
Nobody thought it would work when applied to this kind of problem. It came out
of nowhere. Pretty soon someone will try something on a higher order task and
it’s going to work. We are opening Pandora’s box and I’m not sure that we
should do that.

~~~
allenbrunson
okay, i'll be the one to stick my neck out ...

i read a few of the AI Dungeon transcripts. i think it's worthless. you type
in a command, like you would with an actual infocom-style adventure game, and
it spits out a bunch of flowery language and gobbledygook, the likes of which
you can get in abundance from any self-help guru. there is no state, no way to
win, no way to lose. to compare it to actual adventure games is ludicrous.

likewise, i have yet to see anything having to do with StyleGAN photo
manipulation that goes very far beyond the level of a parlor trick.

this stuff is going to appeal to the same people who love cryptocurrencies,
and will have about the same level of real-world effect.

