* Person: https://thispersondoesnotexist.com/
* Waifu: https://www.thiswaifudoesnotexist.net/
* SO Question: https://stackroboflow.com/
* Startup: https://thisstartupdoesnotexist.com/
* Resume: https://thisresumedoesnotexist.com/
On Archive.org: https://web.archive.org/web/20190220071550/https://thisairbn...
“It is so sad to say that this manga has never been seen by any anime fans in the real world and this is an issue that must be addressed. Please make anime movies about me. Please make anime about me. Please make anime about your beautiful cat. Please make anime movies about me. Please make anime about your cute cat. I wish you the best of luck in your life.
Please make anime about me. Please make anime about my cute cute kitten.”
Maybe it's just a prank, but Gwern's description makes it sound like an accidental creation of the text generator. https://www.gwern.net/TWDNE
The images & snippets are 100% uncurated & unedited by a human, other than as described on my page (primarily: I feed in a long prompt to give it keywords to work with, and I post-process the samples to drop text at '<endoftext>' tokens to keep them on a single topic, rather than, as the default generation process does, forcing the start of a new topic to fill out the character count).
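That post-processing step amounts to truncating each sample at the first end-of-text delimiter. A minimal sketch (the function name is made up for illustration; note that GPT-2's delimiter token is actually spelled `<|endoftext|>`):

```python
def truncate_at_eot(sample: str, eot: str = "<|endoftext|>") -> str:
    """Keep only the text before the first end-of-text token,
    so a sample stays on a single topic."""
    return sample.split(eot, 1)[0].rstrip()
```

If the delimiter never appears, the sample is returned unchanged.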
One issue I notice in your video of interactive face editing is what I'd call "crosstalk" (there's probably a more formal term for this): you alter a specific slider, and beyond the targeted modification, the entire face, shirt, and background change slightly. Is there any work on eliminating such crosstalk, so one could e.g. stitch together a believable video from such dynamically modified faces?
If you mess with the style noise only and leave the latent vector alone, the glitchiness should be less, but you also lose a lot of possible control. Training a more powerful nonlinear classifier like a random forest should be able to compensate for the remaining nonlinearity in the latents, I think, but then the actual modification of the latent vector is harder, of course, since you're no longer simply tweaking one variable but many simultaneously, trying to hold the other features constant. One could try to brute-force it by doing black-box optimization against the nonlinear classifier... but I haven't seen anyone try that.
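A minimal sketch of that brute-force idea, assuming we already have some attribute classifier over latents (here a made-up linear stand-in, not a real StyleGAN classifier): random-search hill climbing that pushes one attribute score up while penalizing drift in the others.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(5, 16))  # hypothetical linear attribute heads over a 16-dim latent

def attributes(z):
    # stand-in for a classifier mapping latent vector -> attribute scores
    return W @ z

def edit_latent(z, target=0, step=0.05, iters=200, hold_weight=10.0):
    """Black-box search: raise attributes(z)[target] while penalizing
    changes to every other attribute relative to the starting point."""
    base = attributes(z)

    def loss(z_new):
        a = attributes(z_new)
        off_target = np.delete(a - base, target)  # drift in the held attributes
        return -a[target] + hold_weight * np.sum(off_target ** 2)

    best, best_loss = z.copy(), loss(z)
    for _ in range(iters):
        cand = best + step * rng.normal(size=z.shape)
        cand_loss = loss(cand)
        if cand_loss < best_loss:  # accept only strict improvements
            best, best_loss = cand, cand_loss
    return best

z0 = rng.normal(size=16)
z1 = edit_latent(z0)
```

With a real nonlinear classifier you would swap `attributes` for its predict function; the penalty term is what tries to suppress the crosstalk discussed above.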
One-shot training examples at 4:23
> "Don't tell me where I've been": US man who claims to have "never left the country"
Not only is it a convincing sentence, it's humorous too.
"The law is always on your side. You're not in danger. You do have the right to hate the dog. It's not the end of the world. It's like a cat that got a kitten and is biting its owner. Your dog is not safe. I am glad you have a dog, but it's not the end of the world. If you want to kill your dog, go ahead. It's not going to be easy, but you can do it. I promise."
Quite coherent indeed. The comments are too; it even has the concept of replies.
I'd expect a bot to be more nonsensical in stupid ways instead of funny ways.
Not a compliment to the quality of the writing. But definitely not unconvincing. Add some ads that make it impossible to scroll without freezing Chrome and you're good to go.
> You might not think that a mumford brush would be a good filter for an Insta story.
They are definitely leaking some of their training data. Many of the names in the article are real people (which is concerning).
Working on synthetic data generation myself, this is not at all surprising. It's also why we are basically living in a "post-truth" world...
What do you do when anything can be synthetically created?
Any publications on what technologies are used for this?
If you want much more detailed documentation, I wrote up in detail how to train & generate text with the original GPT-2 models using nshepperd's codebase: https://www.gwern.net/GPT-2
minimaxir also has an actively maintained codebase which I believe has powered some of the GPT-2 projects you might've seen recently, like Talk to Transformer: https://github.com/minimaxir/gpt-2-simple
> About This Blog
> This was created by the Content Marketing agency Frac.tl as a demonstration
This is just an ad.
It’s not the same as trying to understand sentences with bad grammar; at least those texts represent a human trying to express an idea, so you know that with enough effort you can eventually come to understand what they are trying to say, and maybe score a dopamine hit.
In this case, no amount of effort will bring meaning to AI-generated gibberish.