Artificial Intelligence Creates Realistic Pictures of People (openculture.com)



I love it. I listen to my dad's stories, and not that long ago you could up and move states or countries and leave your past behind. Then the Internet happened and there was no escaping your past. Speeding tickets could find you. Arrest records were kept online and your identity could catch up to you in an instant. I hope more of this AI stuff takes off and we get some of that deniability and anonymity back. I truly do think the Internet was an awesome landscape when people could express themselves openly and feel they were truly anonymous. These fake images may give us back some of that sense that anything around us could be real or fake.


The idea that improving technology will make it impossible to 'leave your past behind' is a key plot point in the 1989 SF book 'The Boat of a Million Years' by Poul Anderson [0]

<spoiler>

The story concerns people who are effectively immortal barring death by massive injury. They don't age and often face ostracism or worse by superstitious or resentful neighbours / friends / children. They frequently (re)use the obvious solution - moving on and assuming a new identity - many times over 2000 years of different episodes. However, this becomes untenable and they eventually reveal their identity to a population that is itself becoming long-lived due to improved medical technology.

It's a great read and one of those that illustrates the fact that good SF writers can often write extremely good historical novels. Many of its historical episodes are superb interpretations of periods of history that nowadays aren't particularly well known, e.g. in the post-Roman Near East.

[0] https://en.wikipedia.org/wiki/The_Boat_of_a_Million_Years


Oh, there is a movie along similar lines called "The Man from Earth" where the protagonist claims to have survived for 15 or so millennia. To keep others from realizing that he doesn't age, he moves every decade or so...


> e.g. in the post-Roman Near East.

Oh, that's the big one in terms of environments underserved by historical fiction. Any chance we can get Stephenson to get obsessed with it?


I was just thinking something in that vein last night: I was watching Stanley Kubrick's Barry Lyndon and early on there's a part where the main character deserts from the British Army by stealing the horse, uniform, and identity papers of an officer. The identity papers were the interesting part, because it was before photographs, so presumably the only biometric information they contained could have been height, hair color, etc. Once we got photographs, they must have had a huge chilling effect on that sort of thing, much like the spectre of facial recognition that we're facing today, but it's one that's already come and gone. (And by "gone" I mean "wedged itself so firmly in the status quo that we never think about it".)


40-year spoiler: Barry is caught because although his papers pass muster with his Prussian escort, his story does not. After conversation over beers, Captain Potzdorf sniffs out Lyndon's BS stories quite easily. It would be interesting to know what "identity papers" consisted of in 1760.

Biometrics are one thing, but they are not the only thing.

Funny -- I watched Barry Lyndon last night, too.


Funny thing, I was this close to re-watching Barry Lyndon with my gf last night too, but instead we chose The Incredibles 2, and today I also read this very interesting article from The Economist on the concept of identity: https://www.economist.com/christmas-specials/2018/12/18/esta...


That is one wide-ranging, apparently well-researched article. I'm not a fan of The Economist, but I think I need to revisit my prejudice. Thank you.


Louis XVI, trying to escape the revolution, was recognized by his image on a form of paper money. In effect, he had sent wanted posters all over the country.

https://en.m.wikipedia.org/wiki/Flight_to_Varennes


I mean, don't get me wrong, I think our society is becoming more and more observed and cataloged as time goes on, which isn't the best thing when coupled with growing distrust of government behavior, but the examples you give really don't help the cause there, comrade. Your point really sounds like "I'm glad I'm able to create a more believable fake identity online so I can commit crimes and not get busted!"


Yeah. We need to protect human rights. We don’t need mechanisms to subvert just means of accountability. It’s true that the same tools may cut both ways, but let’s keep in perspective what ends are of collective value.


This philosophy is plainly false given the ongoing criminalization of victimless behavior. Almost all drug possession is still heavily penalized for no collective benefit. Meanwhile the crooks on Wall Street continue unhindered, with no sign that accountability will ever come.

Especially alarming is the push to tie Internet activity to real-world identities, under the guise of post-facto accountability. The Internet is a medium which purposely eschewed ambient enforcement in favor of end-to-end intelligence, and yet we're continually seeing (many successful) pushes to drag us back to that legacy post-facto natural-language imperfect-enforcement regime that benefits those in power.


Well I AM glad to be able to create a more believable fake identity online. Being able to commit crimes and not get busted is just one of the many perks that come with it.


Especially when the crimes were created to secure the power of the state at the expense of individual rights, e.g. to ensure the supremacy of one class over another. Government is not synonymous with justice.


I honestly can't tell if this is satirical or not, but I hope so.


Let's just clear up my stance here:

"I want to be able to create a fake identity online..." Fine. Great. Have at it! Privacy and anonymity, what marvels in this modern age!

"...in order to evade the law while I commit crimes, or escape the law after my past crimes." Ehhhhh...this is a rough bit folks. Listen, you want to pitch full on class war, us vs the government "there is no just man working for the injustice of tyranny" sort of shit, that's fine. It's just that's the sort of shit that gets the government involved MUCH faster. I just feel a bit of subtlety might be better in the earlier goings.

I understand that a knife can be used to make food or kill a man but you don't see most knife manufacturers advertising how quickly they can gut somebody for good reason. That's all I'm trying to say.


But your stance is hyperbolic because the original poster didn't say they wanted anonymity so they could commit crimes, they said they appreciated the option to "leave their past behind". Presumably if one is "in the system" they've already been caught and punished for their crime. I totally understand the desire to be able to move forward without the cloud of your past looming over you, and to be honest, I imagine the ability to do so may increase someone's chances of living the remainder of their life in an upstanding manner. Also, as you must know, a disturbingly large proportion of "ex-cons" have trumped up charges on their record or were even completely innocent as our modern judicial systems prioritize efficiency over truth.


IDK, is it? "Speeding tickets can catch up to you..." Those only "catch up to you" if you're not paying them. IANAL, but I'm not sure there are many situations where having paid speeding tickets is affecting your life that greatly (unless you're trying to commit insurance fraud by misrepresenting your past).

Again, I get the idea that there are people who are failed by the system and the way they're forced to right this wrong is by pretending to be someone new. I'm not even opposed to the idea! But OP is doing a terrible job selling the idea to people who aren't on board by framing it that way. I don't even think you need to be pointedly obtuse to construe his statement the way I did. If you're going to say "this tool is helpful because the system itself is problematic and possibly corrupted to the point that in the near future the best solution is to circumvent the system rather than reforming it", you need to work pretty hard to pitch that to a mainstream crowd.

Edited addendum: While OP didn't say they want the ability to commit crimes, it took about 20 minutes before a different poster literally said he did, so the intent/desire is out there and pretty obvious.


Sure, I get where you're coming from, and I'm not even positive I disagree with your underlying stance. I do think "getting out of some speeding tickets" is pretty tame on the list of heinous crimes one could appear to condone. The OP is clearly citing wild stories their dad has told them about escaping one's (his?) past and then putting forth their own lament for the erosion of anonymity and deniability.

The fact is, the original poster wrote:

> I hope more of this AI stuff takes off and we get some of that deniability and anonymity back. I truly do think the Internet was an awesome landscape when people could express themselves openly and feel they were truly anonymous.

which you paraphrased as:

> "I'm glad I'm able to create a more believable fake identity online so I can commit crimes and not get busted!"

and

> "I want to be able to create a fake identity online... in order to evade the law while I commit crimes, or escape the law after my past crimes."

which is clearly a hyperbolic extension of the original argument. Also, the only person I see literally ascribing this intent to the OP is you, so I can only assume you are arguing in bad faith.


It really wasn't! I'll fully admit I may have gone a bit ad absurdum but that's not bad faith. A lack of other people willing to jump in on this doesn't mean anything other than people aren't jumping in.

I'll admit that I may have been inferring something that wasn't there for OP. I still feel the points I'm bringing up are valid, but attempting to place these feelings on OP involves a bit of a stretch of his phrasing.


>Edited addendum: While OP didn't say they want the ability to commit crimes, it took about 20 minutes before a different poster literally said he did, so the intent/desire is out there and pretty obvious.

Moate, I checked your profile, but I don't see your name or address. I shudder to imagine the obvious intent/desire that implies. I question whether or not society will stand once the reign of terror you are planning has run its course.


Yeesh. Pretty far leap from my observation that someone literally stated they welcome this technology because it allows them to commit crimes (They said this, it's in the thread, go look it up) and my lack of identifying information.

I'll take it that I may have misconstrued the OP intent. Might have clutched my pearls a bit too quickly. However...

I think it's naive to think that certain technology doesn't enable criminal behavior. Cash currency is one such technology. It's much more difficult to trace than online money, and is thus preferred by certain criminal elements. It's also perfectly reasonable to say that we shouldn't outlaw cash because it's used by criminals. There are tons of legitimate reasons why people would want and prefer cash. Don't throw the baby out with the bathwater and all that.

I accept that crimes will be committed. But it's a horrible marketing choice to list it as a selling point/advantage to your technology. I'm fine with this facial amalgam stuff. I think being anon is great! But maybe hyping the "Finally I can do all this illegal shit I've been held back from due to evil oversight" isn't the right choice?


The article is about synthesizing images of nonexistent people, not making fakes of real people. Imaginary people have no effect on your anonymity.


Except when you upload a fake image as your profile picture. Facial detection algorithms are some of the most refined ML algorithms today. There are already plenty of social media sites that check profile pictures for the presence of a face, and Facebook goes as far as figuring out which face is yours.


I had the same concern as you. But I figured it would take space travel to escape our past and our identities. But this AI does offer us alternatives until space travel becomes a reality.


Can anyone tell me if it's reasonable to think this technology might substantially replace or augment movie CGI in the future? I have no domain experience with CGI or graphical simulation, but I have a passing interest as an occasional admirer of what shows up on /r/simulated.

Does modern CGI make use of machine learning like this? If not, what does it have in common aside from significant use of GPUs? What are the computationally intensive parts of CGI, and what are the heavily manual and labor-intensive parts of the CGI development workflow?


Yes, this sort of tech is likely to make its way into CGI. Some relevant resources if you're interested:

1. This post shows how GANs were used to replicate the Princess Leia scene in Rogue One. The scene in the movie required a team of special effects artists, whereas this required access to lots of images of Carrie Fisher, a GPU, and this technology: https://io9.gizmodo.com/this-video-uses-the-power-of-deepfak...

2. I wrote this popular post on this topic (first page on Google for "deepfakes") that explains how it works, and where it can go for commercial applications: https://hackernoon.com/exploring-deepfakes-20c9947c22d9

3. If you prefer video, I gave a humorous talk on this topic recently. It is designed for lay people, and has lots of examples: https://www.youtube.com/watch?v=wajS0XHzfpU&feature=youtu.be...


It depends on how broad of a brush you're painting with when you say "this technology." Neural-network-based or neural-network-enhanced rendering techniques are certainly going to get applied to movie CGI. This specific adversarial network only does stills, so it won't. A lot of the recent "wow" research is directly generating images, which I don't think will scale up to realistic scenes as soon as anything moves and rotates, but there's other research in hybridizing 3D model rendering with AI[1] that seems like it could get there. And maybe the adversarial network model in the article works with those techniques too.

Also, it's important to consider control when talking about CGI or other creative endeavors. Generating a perfectly realistic scene is useless without being able to control the details of the scene. So the completely end-to-end magic demos aren't nearly as useful as things that generate off of armatures or performance transfer.

[1] https://www.theverge.com/2018/12/3/18121198/ai-generated-vid...


Keep in mind that these are stills. Once the motion starts, humans detect the not-quite-realness easily. Even the 2D video creations by AI aren’t stitched correctly, let alone getting all the details of an animated 3D person right.


They are getting closer, but I do agree that it is still not quite there. It's pretty common practice now at the beginning of each film production to do a full image capture of all of the main talent. It's a backup in case something untoward happens, so that the film can still be finished using their 3D likeness.


You mean for an Oliver Reed/Gladiator scenario?


More commonly for postproduction fixes (think stunts and inserting talent likeness, relighting/reframing a shot, inserting reflections when CG-ing the background of a window, etc.), so you can fix sequences without having to recall talent, which can produce availability delays and additional expense beyond what it takes to fix digitally.


Even in those stills (cherry-picked for sure) it's easy to spot NN artifacts; it's just that most people on HN have trained themselves to spot "classic" CGI problems, which are local. Check the general geometry and long-distance order; that's what's problematic for NNs because of their nature. Even using photos as references, NNs can't avoid shifting ears or gaze or warping faces because there's no concept of the underlying 3D object there, just a bunch of flat transformations.



Lip-synching is quite different from full body motion. And what you have posted is actually using original real source video for most of the pixels if I understand correctly.


I was mainly responding to the parent's comment:

> Keep in mind that these are stills. Once the motion starts, humans detect the not-quite-realness easily. Even the 2D video creations by AI aren’t stitched correctly, let alone getting all the details of an animated 3D person right.

I agree this isn't end-to-end video synthesis, but the end results are still very impressive and realistic. I can easily imagine this type of technology being a problem, because video evidence will be devalued and mean nothing.

They basically just animate Obama's lips using an RNN and then composite the output onto a target video of Obama to change the appearance of what he is saying. Yes, they aren't generating all of Obama's head yet, but this result in itself is impressive. Also, I can't wait to see where GANs take us 10-20 years down the road. Don't forget they are still in their infancy, relatively speaking.


It used to be the case but even with not too recent CGI tech they can make ~alright main characters (see SW ep. VII). With the most recent siggraph tricks I'm sure most of the population won't notice (if not abused).


Good CGI tools in the hands of a talented human can make for good results.


Oh yeah, don't misread me, I'm not asking for fancier tech. Movies != CGI, far from that. I cry when I compare old movies that had flesh to the hyper-hectic pixel fest of today...


Note that isn't tabula rasa generation. All these faces are generated from reference images. See this timestamped video for example: https://youtu.be/kSLJriaOumA?t=80. Current deep generative tech requires a carefully designed dataset of images with controlled variation. For the faces above, a dataset of celebrity faces was carefully constructed from a larger wild dataset, so as to not have any rare features, and the eyes and noses were aligned, making the dataset digestible by a GAN. This particular method then also uses multiple style images which it takes certain features from to mix up and create the mixture face.

The main problem with this tech is that the quality cannot be reliably controlled. Whatever it generates pretty much has to be passed through a human checker/curator. You can't just serve up generations to the audience without checking what they are. So, imo, this tech would be very useful for the concept art stage of the creative process, being able to just create new things, or mix up existing things, but only to create an intermediate artifact, which would then be used as inspiration for the artists to create the final art.

This is also devoid of any form of higher-level reasoning. It doesn't know concepts a human artist would know, such as object permanence, intuitive physics, etc. Something as simple as a ball falling down should squish a certain way, and when it hits the ground should squash another way. It also doesn't have artistic sensibilities like anticipation, etc., that have been laid out in the 12 principles of animation, and what's more, there's almost no way to communicate them to these models, short of just creating a huge dataset which exhibits those principles, and hoping, or rather praying, it learns them.


I've been tweeting about this a bit lately. [1] No doubt in my mind that soon it will be possible to take in a movie script and, a couple hundred dollars of electricity later, output a full movie. Think of the implications, not just for entertainment but for society (eg. politics & criminal justice) in general.

[1] https://twitter.com/ericfaccer/status/1077463342761951232


> Think of the implications, not just for entertainment but for society (eg. politics & criminal justice) in general.

Take the Reuters feed, render as Fox News, then apply a Wes Anderson style transfer. Maybe exploring all those permutations will be more interesting than creating deceptive fakes?


NVIDIA created an entire CGI city with Machine Learning

"Research at NVIDIA: The First Interactive AI Rendered Virtual World " - https://www.youtube.com/watch?v=ayPqjPekn7g


Have you seen Aquaman yet? Studios are already starting to replace popular actors in the movies with younger/better looking CGI versions of themselves. As the tech gets cheaper, you should expect a lot more of this in future blockbuster movies.


> younger/better

Or, a bit earlier (star wars?), not-dead versions of themselves.


These are only images. Theoretically it is possible. But I guess we are still years away even with exponential improvement in the field.


I'm not the expert you're looking for, but surely we are not far from being able to generate movies with bespoke content for every person.

Performance art itself is about to be upturned.

AI can already string bad poetry together; surely before long it can do a 5-act play, vary the characters a bit, jumble up the situations, create realistic 3D models, and put the whole thing together into your very own Breaking Bad that caters to your particular biases.

It can then take your feedback and do spinoffs, parodies, and so forth forever. You'll never be able to get up.


Surely? Doing this in any meaningful way requires so much insight into the human mind and culture.


Well, I'm just saying it looks like we're close, just based on the pieces being prototyped already. There are people in this thread posting photoreal fake videos, text generators exist at least as satire sites (the POMO generator), and it's not like you couldn't make a tree of story elements.

Someone is gonna glue those pieces together and make it a business.


I think we’re nowhere near close. Yeah we can generate an image, but even AI music tends to get pretty boring pretty quickly, and is far from having the human element. A movie or 5-act play? You’re being quite optimistic.


I've read somewhere that there already are publishers machine-generating genre fiction (crime stories) that is only manually polished by humans after generation. However, the distance between a shitty genre novel and Dostoevsky is not to be underestimated - maybe the only thing that connects them is that both are texts with a plot and characters.


This reminds me of an old story of how the Japanese idol band AKB48 added a new member in 2011 who turned out to be a computer composite. Just think what they could do with technology today and in the future:

https://www.youtube.com/watch?v=piZ2TkdK4Dw

https://en.wikipedia.org/wiki/Aimi_Eguchi


Or Hatsune Miku, who doesn't even exist but performs concerts as a holographic character on stage, with synthesized vocals: https://en.wikipedia.org/wiki/Hatsune_Miku


I take for granted that in the not-so-distant future we'll see replicas of dead actors/musicians performing alone or alongside real ones. Having John Lennon and Kurt Cobain perform together, or John Wayne himself commanding the USS Enterprise (the starship, not the carrier), is just a few years and some lawyers away. Dead-star franchises will probably love that, though, so the incentive to perfect the technology will very likely bring us to a point where almost all artists will be machine generated because they cost a fraction of real ones.



Somewhat related, there's also a Japanese character/synth/software package called Hatsune Miku that is, flat out, an animated hologram whose voice is just a voice synthesizer.

It is currently on tour...


You posted this while I was looking up the link to post. Else I would have replied to you


This morning is getting stranger with every post or comment I read.


William Gibson foresaw that one ages ago.


Heck, all the theorists of the hyperreal, too, including Baudrillard and Postman:

https://en.wikipedia.org/wiki/Hyperreality

"Hyperreality is seen as a condition in which what is real and what is fiction are seamlessly blended together so that there is no clear distinction between where one ends and the other begins.[1] It allows the co-mingling of physical reality with virtual reality (VR) and human intelligence with artificial intelligence (AI).[1] Individuals may find themselves, for different reasons, more in tune or involved with the hyperreal world and less with the physical real world."


One day, we're going to have Google Hangouts or phone calls with "people" that look, act, and talk like they're real, but are actually AI. I dread when scammers will get their hands on this tech. I already have enough trouble getting 5 automated voice-scam calls a day; imagine getting a call from an AI that acts and talks like a real person.


That's when we'll switch to whitelisting.


Most younger people (< 30 years old) already do this anyway and refuse to answer phone calls from numbers that are not already in their contact list. I know I do and I'm not young anymore.


Unfortunately that stops being an option when your parents become elderly and missing an emergency call from an unknown number has unpleasant repercussions you would like to avoid.


Set your voicemail message to tell them to leave a message or to send a text.


I dread when law enforcement get their hands on it. This will take planting evidence on a "suspect" up an order of magnitude.


I’m eagerly awaiting the Show HN for this.


That will be the day when you have your personal evidence recording device on 24/7 so you can prove your alibi.


And in court they can claim the alibi is a high-fidelity fabrication.


A possible protection could be: a person could emit, every N minutes, a high-bit cryptographic checksum of the previous checksum plus the last N minutes of recorded video.

Send that value to the 'forever' public registry in the cloud, AKA some combo of Twitter, Facebook, GMail, FastMail, AliMail, etc.

To be fabricated, it would have to be done in something approaching real time. Not absolute safety, but a seemingly easy step in that direction.
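
Something like this rough sketch in Python (the "video" chunks and the publish step are just placeholders, not a real API):

    import hashlib, time

    def chain_checksum(prev_digest: bytes, video_chunk: bytes) -> bytes:
        # Each value commits to the previous checksum plus the latest chunk,
        # so every published digest covers the whole recording so far.
        return hashlib.sha256(prev_digest + video_chunk).digest()

    def publish(digest: bytes) -> None:
        # Stand-in for posting to a public, append-only registry
        # (Twitter, email, a notary service, a blockchain, ...).
        print(int(time.time()), digest.hex())

    prev = b"\x00" * 32                      # genesis value
    for chunk in (b"minutes 0-5", b"minutes 5-10", b"minutes 10-15"):
        prev = chain_checksum(prev, chunk)   # in reality: raw bytes of the last N minutes
        publish(prev)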


Sounds like an actual use for a blockchain! One could even commingle others’ checksum streams in order to make it even more difficult to rewrite history.


That would help prove you were with such and such person or not with them.


I don't see how the addition of cryptography to image fabrication detection would be useful at all since it just tells you 'X key with Y data'. Yes the data may have come from this camera but what is stopping anyone from feeding in arbitrary data to be signed or extracting the key and applying it some place else?

Attribution doesn't offer any proof in itself - which has us back to square one.

Vast data-hoarding might be able to spot inconsistencies within supposed data - the equivalent of thousands of witnesses at a UN press conference proving that world leaders did not in fact take off their clothes and start dancing like strippers before the world.

But if everything produces similarly consistent data from their cams - including metadata - one could hypothetically get thousands of feeds from differing points of view displaying the infamous 22xx UN press conference lewd dance incident, corroborated by a blend of camera device keys. In the end it would come down to tautologies of trust.


But surely then they wouldn't be immune to claims that their evidence was a high-fidelity fabrication as well?


Not immune, but law enforcement generally has trust-by-default from the general population, whereas everyone they accuse has distrust-by-default.

I don’t know how to solve that. Not even in principle, as even a hypothetical future A.I. jury, judge, lawyer, and cop would have the “what does ‘good evidence’ mean?” problem in addition to the incentive alignment problem that already makes many of us question the justice of our laws.


Law enforcement is generally given the benefit of the doubt. They're professionals.


Didn't Google already create that? A fake "real" caller who can respond to what you are saying.



> But still, "you can’t doctor any image in any way you like with the same fidelity. There are also serious constraints when it comes to expertise and time. It took Nvidia’s researchers a week training their model on eight Tesla GPUs to create these faces."

That doesn't sound like a high hurdle at all. As soon as the algorithm is known, anyone with money to burn will be able to fire up a bunch of AWS GPU instances and create authentic-looking photos of people doing something they actually didn't.

If you have some relevant knowledge and want a job that lets you travel all over the place at no cost to yourself, now is the time to become a forensic expert with a specialty in detecting AI-doctored evidence. Yes, there are already people who make a living testifying in court that a document is typeset in a font that didn't exist when it was purportedly printed, etc.


>That doesn't sound like a high hurdle at all. As soon as the algorithm is known, anyone with money to burn will be able to fire up a bunch of AWS GPU instances

Or their non-profitable mining rigs.



Placeholder doc for where the code+data will be hosted: http://stylegan.xyz/code

Explainer video for the paper: https://www.youtube.com/watch?v=kSLJriaOumA


Ok, now imaging a paperless future, with plenty of surveillance devices and imaging who much can count computing powers to create fake evidence of pretty anything for tons of different purpose including, but not limited to:

- orient people behavior with nearly unverifiable and absolutely plausible news;

- do illegal activities as a large powerful corporation and plausible denying anything with forged proofs no one can really verify (think about what happen inside megafactories);

- false accuse with skyrocketing high effectiveness anyone that disturb big&powerful;

- essentially centralize even more actual society because yes, we automate, but we do not automate simply with open tech in a free market on n-th competing players and open core knowledge publicly granted by public universities but with VERY few super-giant transnational player with universities reduced at Ford-model workers "production factories".

Imaging how easy we can invent a kind of "widespread illness" just to sell a dummy medicine that does nothing but cost big money so give big money to it's vendor. Imaging how easy we can depict a country, or even a small area of a nation as awful just to move people around. All things can be verified, in theory, but if all tools are made by few big&powerful and are closed black boxes our effective "verification power" it became REALLY limited and so it became our ability to communicate since being everywhere more and more "remote workers" with less physical social contacts and no really free means of communication... We do not even need censorship; it suffice suffocating unwanted news under a stream of other news and silently delete them after a certain amount of time. It's super-easy spread FUD against any unwanted news etc.

Nothing new under the sun of course, only at an unprecedented level with too few that can do too much, without the need of a certain support chain like ancient dictators still need.


I think you probably mean "imagine" rather than "imaging". We are talking about fake images and the imaging of imaginary people, so your comment is a bit confusing.


My English is really poor.

The meaning of my comment is essentially: try to image how easy will became forging false proofs for subjects with enough computational resources and skill. Add to the scenario the actual trend toward few and few subjects with knowledge and infrastructure that act as a "platform of our society"...

Hope it's more clear now.


I believe the best of these generated faces will have a 90% match in the training dataset.

If you go through the generated faces you can see all of them have different backgrounds. These are not generated from scratch, only picked from memory to match the requirements.

Each face has totally different hair and hairstyle.

Have they published the training dataset?


The previous version of this, published a year ago, was using the public CelebA dataset; and no, the vast majority of generated faces were different from the training dataset. You could even change latent variables for continuous "morphs" between various synthetic faces, each stage being a unique face. Read up on how GANs work first if you have doubts.
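
For the curious, the "morph" trick is just interpolation between latent vectors; a minimal sketch, assuming you have some trained generator G (e.g. a PyTorch module mapping a latent vector to an image) - the names and latent size here are assumptions, not the paper's actual API:

    import torch

    def morph(G, z_a, z_b, steps=8):
        # Walk a straight line in latent space between two random vectors;
        # every intermediate point decodes to a plausible, unique face.
        frames = []
        for t in torch.linspace(0.0, 1.0, steps):
            z = (1.0 - t) * z_a + t * z_b
            with torch.no_grad():
                frames.append(G(z.unsqueeze(0)))   # one synthetic image per step
        return frames

    # Usage (assumed 512-dim latent):
    # z_a, z_b = torch.randn(512), torch.randn(512)
    # images = morph(G, z_a, z_b)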


I have a rough idea of how it works but I am still skeptical.

Faces of kids especially seem unbelievable as generated. Their backgrounds, clothes, hair, and facial expressions all seem very unique to that picture. I will be going through the dataset to find the matches once they release it.


It can be demonstrated that GANs learn to model certain image contents with separate input variables like the presence of certain objects or 3D rotations [0] all by themselves, so there is definitely more going on than simply learning to paint some image patches at certain locations and smoothly interpolating them. An image patch reveals itself directly in the training data, so the image patch could simply be stored in the network weights, but a 3D rotation or the presence of a certain class of objects cannot be learned by simply copying image patches into the weights. A 3D rotation requires at the very least a computation of foreshortening, occlusion perhaps based on a depth map. An object detector requires at the very least a feature hierarchy, perhaps binding computations that relate different object parts to the whole object.

Neural networks basically learn to implement nearly arbitrary computations (up to a certain circuit depth) to produce the desired output, so you can also think of deep learning as program mining of a certain program space that is reachable by the adaptive functions in a neural network. Stephen Wolfram has written about that in his blog and talked about it in one of his podcasts. So it's basically magic much like evolution, and it will probably destroy us because it's simply too powerful.

[0] https://twitter.com/phillip_isola/status/1066567846711476224


I thought it was CelebA-HQ (https://drive.google.com/drive/folders/0B4qLcYyJmiz0TXY1NG02...) but the article states that they have made a better dataset, which isn't published yet but is promised for January at https://docs.google.com/document/d/1SDbnM1nxLZNuwD8fQkIigUve...


Per: https://docs.google.com/document/d/1SDbnM1nxLZNuwD8fQkIigUve... They're doing so in January, along with the source code.


Asking the important questions


Here is my best guess at the first usage: if I marry this man/woman, what will my kids look like (approximately)? I wonder if there will be a Tinder app for that.


Sadly, the first use will likely be creating lots of more realistic shill accounts on Twitter.


Isn’t that what the majority of str8 daters (more so women?) do unconsciously already when using Tinder, OkCupid, etc? Like wow they are attractive.... I’d have his kid.

Maybe an upcoming dating app will allow you to connect your profile pics to those who have swiped right. From there this app creates images of what your offspring would look like.


Wow! Another incredible consequence of this tech is that it will actually make blockchain-based confirmation of unadulterated digital assets a valuable and needed service as the amount of well-crafted fakes soars exponentially. A service that I didn't see having a big amount of market potential now seems essential.


This point has come up a lot in earlier discussions; the short summary of it is that signatures/blockchains/whatever can't provide anything useful in this regard.

I really don't want to repeat all of this here, but in essence: (1) you'd need all (and I mean all, not most; so all the currently existing devices need to be replaced) of the devices that can capture video (and thus can sign/confirm the thing that they captured) to be fully outside of user control; if 0.01% of devices are jailbroken, it fails; (2) you'd need all the manufacturers of such devices (which put the appropriate secrets in the devices) and all the supply chain in between to be 100% trustworthy, since if a manufacturer gives 100 phones to a spy agency that'll happily sign anything as "unadulterated", then it fails; (3) you'd need to figure out a magic way to prevent the analog loophole, as with specialized expensive optics you could project arbitrary pixels onto the camera sensor so that the camera would "believe" that it's seeing those exact pixels in reality.

A blockchain would allow you to securely assert things like "this data was seen and signed by device #1234/(or approved by user id #456) no later than this moment", and not more than that. It can't really prove that this data reflects reality somehow; a physical device that has the technical ability to take a picture of my face and add a "it's unadulterated" tag to it can be used (with appropriate effort) to take a fake AI-generated picture and add the same "it's unadulterated" tag.
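
To make that last point concrete, a toy sketch in Python (using the 'cryptography' package; the key and image bytes are made up): the signature only binds bytes to a device key, it says nothing about whether those bytes came from a real scene.

    from cryptography.hazmat.primitives.asymmetric import ed25519

    device_key = ed25519.Ed25519PrivateKey.generate()   # imagine this baked into the camera

    def attest(image_bytes: bytes) -> bytes:
        # The device signs whatever bytes reach this function -- a genuine
        # sensor readout or a GAN image fed through the analog loophole.
        return device_key.sign(image_bytes)

    real_photo = b"...real sensor data..."
    fake_photo = b"...GAN output..."
    sig_real = attest(real_photo)
    sig_fake = attest(fake_photo)    # just as "unadulterated" as the real one

    # Both verify against the device's public key:
    device_key.public_key().verify(sig_fake, fake_photo)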


Great points,

I still think this is coming despite all of the reasonable limitations you listed. Our society will desire some form of authenticity and I can imagine an alliance of companies coming together to provide that.


Why block chain instead of a public key?


Well, I was thinking it may be easier to establish and verify chain of ownership and attribution via blockchain vs just public key.


KYC will become even more webcam based until the tech catches up


I remember when super celebrity [redacted] had her 100K stolen. You know, the package of one hundred thousand photos a celebrity has taken so that movie studios can inject their likeness without their presence?

It was funny when the internet was making her say and do dumb stuff. She took it all in pretty good spirit. I think it was the porn, especially the age-deflation stuff, that finally caused her to retire from the public.


I think the sleeper in this article is at the end where they are applying the same technique to inanimate objects. Imagine what this could do to inspire designs and help people that have a good eye but aren’t able to pull it all together in the form of a finished product. With the degree of automation we’re heading towards, the era of bespoke commodity goods is going to be a money maker.


Seems like this would replace creative designers with a "creative black box algorithm".


Aside from fake news, I can't help but wonder: what is the probability that the “generated” faces do not closely resemble people IRL?

An immediate use case that jumped to mind was DIY stock photos or even movies. But, with the above question still hanging, I'm not sure how comfortable I'd be using this.


Well, take a random human from the population: what is the probability he doesn't resemble anyone else alive? This is not really a mathematical question, since "resembles" is undefined and it really depends on the biological nature of human morphology, but I'm pretty sure that except for pathologically abnormal cases the answer will always be close to zero.


Face generation is a frequent subject of study in computer vision. One way to make sure the generated faces are not just copy-pastes of some face in the training set is to measure low-level visual similarity between the generated images and the training set, and check that no generated face is "too close" to any of the actual faces used to train the model.
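
A crude version of that check - nearest-neighbor distance between each generated image and the whole training set - might look like this (a sketch with random arrays standing in for images; in practice you'd compare in a learned feature space rather than raw pixels):

    import numpy as np

    def nearest_training_distance(generated, training):
        # generated: (G, D) flattened generated images; training: (N, D)
        # flattened training images. Returns each generated image's L2 distance
        # to its closest training image; suspiciously small values suggest memorization.
        dists = np.linalg.norm(generated[:, None, :] - training[None, :, :], axis=-1)
        return dists.min(axis=1)

    gen = np.random.rand(4, 64 * 64)      # toy stand-ins for real image data
    train = np.random.rand(100, 64 * 64)
    print(nearest_training_distance(gen, train))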


Some strange things about the 'generated' faces: one face is nearly identical at (1,1) and (1,5), just with glasses added. Another is that male + female generates male images; you'd expect more androgynous images to show up.


Slack should change their random profile pics to these


Imagine when people first started telling stories 50k years ago, how it must have felt to listeners at the time: "but these people you speak of don't exist!"


We can assume (but not prove) that people back then were deeply animistic. You could talk to them about the soul of stones and they wouldn't flinch.


It's an interesting, even if highly speculative, thought experiment: would there be much animism before storytelling? I don't think that the answer is clear.

(now I'm off on a tangent reading https://en.m.wikipedia.org/wiki/Deception_in_animals )


I would assume that our drive to see intent even in happenstance is older than language. But yeah, highly speculative. Sapir & Whorf would disagree :-)


Well, OK, people vary - there are people today who see spirits within everything and others who take everything literally & don't get jokes or metaphors.


I legit registered aistockphotos.com when I first saw that paper.


Is it ethical to make porn pics out of generated images?


Great, more false hopes on dating apps for me ha! They'll look more real than all of the women using snapchat filters on their profile photos.


You joke, but one of the applications for this will almost certainly be to generate fake dating app profiles that look more realistic. The plague of false profiles looks set to get worse.


No, I was being quite serious. Instead of ripping low-res Eastern European model photos, now they can just churn out the 'ideal woman' for any number of cultures/sub-cultures.

It won't stop at dating apps either, they'll be able to build all sorts of fake social media accounts that can generate new photo content over and over to back up a persona/legend and/or a narrative for all sorts of nefarious reasons.

You'll have OGAs, private groups like PIAs, hacker/carder groups, etc., all doing this. Hell, you'll be able to have some 12-year-old sitting in his bedroom generating private-shoot content of a dozen women for whatever the most trending genres on Pornhub are and selling private sets for months or years without anyone catching on.

I need to start some sort of neo-amishesque group and run away from tech.


It would be hilarious if dating site scammers developing chat bots end up being the first ones to pass the Turing Test.


In this day and age there's no running away from tech...


Can the same or similar techniques be used to distinguish real from generated?

Also, might this destroy the usefulness of photographic evidence?


NVidia's demo video is staggering.


If this kills dating apps, I think speed-dating meetups will skyrocket.


Dupe: https://news.ycombinator.com/item?id=18675371 (video link in comments of dupe)


Startup idea: Suspect sketches using GANs


More likely it will end up as the opposite: fabricate a plausible accusation based on available evidence against a specific person.


If it can run through evidence arbitrarily, wouldn't that imply better general utility as a detective than a framer? "Given the available evidence and remotely plausible suspect pool, x is a way better match than y."


I saw this a while ago and noticed how none of the darker skinned individuals have nappy hair.

Not sure if I feel it's offensive




Let’s make it


Artificial Intelligence Creates Realistic Photos of People, None of Whom Exist... yet. Let's wait a couple years until "they" start creating these people, the next humans on Earth :)



