This Marketing Blog Does Not Exist (thismarketingblogdoesnotexist.com)
107 points by kristintynski 27 days ago | 50 comments





The Waifu one is so realistic that it's uncanny. It's almost like we're just a few years away from whole seasons of deep-dreamt anime and whole tankoubons of procedurally generated manga.


Blurb generated by GPT-2 for TWDNE #283: https://www.gwern.net/images/gan/thiswaifudoesnotexist-283.p...

“It is so sad to say that this manga has never been seen by any anime fans in the real world and this is an issue that must be addressed. Please make anime movies about me. Please make anime about me. Please make anime about your beautiful cat. Please make anime movies about me. Please make anime about your cute cat. I wish you the best of luck in your life. Please make anime about me. Please make anime about my cute cute kitten.”

Maybe it's just a prank, but Gwern's description makes it sound like an accidental creation of the text generator. https://www.gwern.net/TWDNE


Nope, it's real: that snippet is just that hilariously on point. To be fair, that's one of the best snippets out of many thousands. Few are anywhere near that amusing.

The images & snippets are 100% uncurated & unedited by a human, other than as described on my page (primarily: I feed in a long prompt to give it keywords to work with, and I postprocess the samples to cut them at '<endoftext>' tokens to keep them on a single topic rather than, as the default generation process does, forcing the start of a new topic to fill out the character count).
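
The truncation step is roughly the following (a simplified sketch of the idea, not the exact script; the prompt keywords and the 'sample_from_model' call are just placeholders):

    # Cut a generated sample at GPT-2's document separator (written '<endoftext>' above,
    # '<|endoftext|>' in the tokenizer itself) so the snippet stays on a single topic.
    def truncate_at_end_of_text(sample: str) -> str:
        marker = "<|endoftext|>"
        cut = sample.find(marker)
        return sample if cut == -1 else sample[:cut].strip()

    prompt = "anime manga waifu illustration fanart"  # illustrative keywords, not the real prompt
    # snippet = truncate_at_end_of_text(sample_from_model(prompt))  # placeholder sampler call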


This text just sounds like the stuff that I used to get in Akismet when I had a Wordpress blog.


I mean, afaik the manga/anime style specifically emerged from manga being drawn as fast as possible, as a co-evolution with the thick tomes of magazines. So I'd expect drawing a face to not be a bottleneck there.


It's going in the wrong direction. Sure, it can conjure up a random picture of a face. But for the purposes of manga/anime building, it's near useless, as it can't generate that face in multiple perspectives, or with different kinds of facial expressions. This Waifu truly does not exist - neither in prior publications, nor in the algorithm itself.


That's incorrect. You can generate the same face with multiple facial expressions by either doing style transfer onto a single face from other faces using the style noise (as demonstrated in the StyleGAN paper & video, and you can see further examples with my anime StyleGAN faces), or you can directly control a face by modifying its latent vector as desired, once you've reverse-engineered the latent vector (as can be done by generating random faces+latents, & feeding the faces into a tagger like DeepDanbooru which knows about facial expressions to do classification/regression). See https://www.gwern.net/Faces#reversing-stylegan-to-control-mo...
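
To make that concrete, the recipe is roughly the following (a hand-wavy sketch, not production code; 'generator' and 'tagger' are placeholders standing in for a trained StyleGAN and something like DeepDanbooru):

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # 1. Sample random latents and generate their faces (placeholder generator API).
    zs = np.random.randn(10000, 512)
    faces = [generator.synthesize(z) for z in zs]

    # 2. Tag each face for the attribute you care about (placeholder tagger API).
    smiling = np.array([tagger.score(face, "smile") > 0.5 for face in faces])

    # 3. Fit a linear classifier in latent space; its weight vector is the "smile direction".
    clf = LogisticRegression(max_iter=1000).fit(zs, smiling)
    smile_direction = clf.coef_[0] / np.linalg.norm(clf.coef_[0])

    # 4. Push any latent along that direction to dial the expression up or down.
    z_edited = zs[0] + 2.0 * smile_direction
    # edited_face = generator.synthesize(z_edited)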


For a recent cool example, http://everyoneishappy.com/portfolio/waifu-synthesis-real-ti... controls the mouth for sing-along syncing.


Wow. Thanks for that; I'm very happy to have been wrong about this issue, and I've just learned a lot from reading your Faces page.

One issue I notice in your video of interactive face editing is what I'd call "crosstalk" (there's probably a more formal term for this) - you alter a specific slider, and beyond the targeted modification, the entire face, shirt and background change slightly. Is there any work on eliminating such crosstalk, so one could e.g. stitch together a believable video from such dynamically modified faces?


My theory is that the latent space isn't perfectly 'disentangled' (there's not really anything in StyleGAN akin to InfoGAN which would force it to be exactly linear/disentangled; it just nicely works out roughly disentangled), so when you train a linear model on the tags+latents and manipulate it linearly, you get slight changes in correlated features. Thus, the slight 'glitchiness'. Not something you'd notice in image samples, but more apparent when you generate interpolations of the 'same' face.

If you mess with the style noise only and leave the latent vector alone, the glitchiness should be less, but you also lose a lot of possible control. Training a more powerful nonlinear classifier like a random forest should be able to compensate for the remaining nonlinearity in the latents, I think, but then it's harder to do the actual modification of the latent vector, of course, since you're no longer simply tweaking one variable but many simultaneously to try to hold the other features constant. One could try to brute force it by doing blackbox optimization of the nonlinear classifier... but I haven't seen anyone try that.
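
If anyone wants to try, the brute-force version might look something like this (a completely untested sketch; 'attr_model' is a placeholder for the nonlinear tags-from-latents classifier):

    import numpy as np
    from scipy.optimize import minimize

    def edit_objective(z, z0, attr_model, target_idx, target_val):
        # Predicted attribute vectors for the candidate latent and the original one.
        preds = attr_model.predict(z[None, :])[0]
        orig = attr_model.predict(z0[None, :])[0]
        hit_target = (preds[target_idx] - target_val) ** 2            # move the attribute we want
        keep_rest = np.sum(np.delete(preds - orig, target_idx) ** 2)  # hold everything else still
        stay_close = 0.01 * np.sum((z - z0) ** 2)                     # stay near the original face
        return hit_target + keep_rest + stay_close

    # Gradient-free search over the latent; slow in 512 dimensions (CMA-ES would probably
    # be a better fit than Nelder-Mead), but this is the shape of the idea.
    # z_new = minimize(edit_objective, x0=z0,
    #                  args=(z0, attr_model, SMILE_IDX, 1.0),
    #                  method="Nelder-Mead").x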


Not directly but there are other networks that can do those parts.


I'd like to learn more. Anything you could link to?


https://youtu.be/p1b5aiTrGzY

One-shot training examples at 4:23.


Holy fuck that was mostly convincing. I had no idea AI generated text was getting to this level - I thought we were still at "Her mouth is full of Secret Soup" level: https://twitter.com/keatonpatti/status/1006961202998726665


If you want to be blown away even more (warning: some comments may be NSFW), here's an entire subreddit where every poster/commenter is a GPT-2 bot: https://www.reddit.com/r/SubSimulatorGPT2/


You weren't exaggerating. Some of these are amazing.

> "Don't tell me where I've been": US man who claims to have "never left the country"

Not only is it a convincing sentence, it's humorous too.


What a gold nugget!


"I can't stand my dog." might be the funniest thing I've read all week.

"The law is always on your side. You're not in danger. You do have the right to hate the dog. It's not the end of the world. It's like a cat that got a kitten and is biting its owner. Your dog is not safe. I am glad you have a dog, but it's not the end of the world. If you want to kill your dog, go ahead. It's not going to be easy, but you can do it. I promise."

https://www.reddit.com/r/SubSimulatorGPT2/comments/c29unc/i_...


https://www.reddit.com/r/SubSimulatorGPT2/comments/c2gt4g/cm...

Quite coherent indeed. The comments too; it even has the concept of replies.



That's not actually a bot though, is it? Reads a lot more like a person pretending to be a bot.

I'd expect a bot to be more nonsensical in stupid ways instead of funny ways.


The account that posted it belongs to a comedy writer, so yeah, almost certainly not made by a bot. Still funny as hell though.


Phew, I was wondering what sort of training data would have it talking about secret soup


As a reminder, the Olive Garden thing is a joke, not the result of any actual AI. https://iforcedabot.com/ is a fun site with some demos of what text-generation can actually do these days.


Thank you for this, the office and I are losing our minds over the AI generated riddles: https://iforcedabot.com/my-best-friend-is-a-monkey-terrible-...


I was ready to crap on this post for calling your own work "semi-convincing", but this writing does actually resemble most of the random blogs I find when trying to research something like nutrition or fitness.

Not a compliment to the quality of the writing. But definitely not unconvincing. Add some ads that make it impossible to scroll without freezing Chrome and you're good to go.


First, I feel this is a hand-curated list of results (same with most of the "this does not exist" stuff). That being said, it is very good. A pretty funny quote:

> You might not think that a mumford brush would be a good filter for an Insta story.

They are definitely leaking some of their training data. Many of the names in the article are real people (which is concerning).

Working on synthetic data generation myself[1], this is not at all surprising. It's also why we are basically living in a "post-truth" world...

https://austingwalters.com/the-last-free-generation/

What do you do when anything can be synthetically created?

[1] https://medium.com/capital-one-tech/why-you-dont-necessarily...


It is curated, but not heavily. I usually took the first result of the prompt. Occasionally I'd skip a result that was totally off, but all of them were the best out of the first 2-3 for any given prompt.


> Her Instagram Stories include one that depicts her while wearing a yellow dress and champagne flute (I don’t like to see bridesmaids getting wrecked).


> But an eight-month-old story with a tiny paragraph about a building draped in chainmail or a photograph of a koi pond with a barcode on it?


I don't know if this says more about how far AI has come or the average quality of a marketing blog.


Knowing we have a future of AI-generated content coming, I think about staring at the lights that make humans docile so their brains can be stolen in the movie "Skyline."


This is extremely scary; it would totally convince me it was written by a real person.

Any publications on what technologies are used for this?



Content generation like this is fascinating. I had a lot of fun with Markov chains in the past, but this is just groundbreaking. Any tips for a fellow developer who wants to start with this? How do I get started with GPT-2 or Grover?


I see that the posts aren't generated in real time. Were they curated? Even if these are just the top 5%, they're still extremely impressive.


I made a site to aggregate the other "This ____ does not exist" sites that have popped up: thisxdoesnotexist.com


Anyone know of any open source libraries that are even remotely close in quality/effectiveness to Grover AI?


Grover is open source. They open-sourced it the other day.


Does it come with any samples on how to use it? Like, how to train, how to generate after training, etc.


Eh, sort of: https://github.com/rowanz/grover It can't be that difficult if OP did it so quickly, after all.

If you want much more detailed documentation, I wrote up in detail how to train & generate text with the original GPT-2 models using nshepperd's codebase: https://www.gwern.net/GPT-2

minimaxir also has an actively maintained codebase which I believe has powered some of the GPT-2 projects you might've seen recently, like Talk to Transformer: https://github.com/minimaxir/gpt-2-simple
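
For a concrete starting point, finetuning on your own text with gpt-2-simple looks roughly like this (assuming a plain-text file 'corpus.txt' of whatever you want it to imitate; the prefix string is just an example):

    import gpt_2_simple as gpt2

    gpt2.download_gpt2(model_name="117M")   # the small GPT-2 model
    sess = gpt2.start_tf_sess()

    # Finetune on your corpus, then sample from the finetuned model.
    gpt2.finetune(sess, "corpus.txt", model_name="117M", steps=1000)
    gpt2.generate(sess, prefix="Content marketing in 2019", length=200, temperature=0.7)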


Thank you so much!


Disappointed that I didn't see a new generation upon refresh...


Ironically, this marketing blog does exist:

> About This Blog

> This was created by the Content Marketing agency Frac.tl as a demonstration

This is just an ad.


Someone didn't provision enough throughput...I'm getting a 508 status


working on it... sorry!


I imagine HN is an AI promoting other AIs, which in turn...


I suggest not reading the generated article. Trying to comprehend what you are reading does not feel good in the brain, and doing it long term may severely impact your reading comprehension skills and can make reading into an exhausting activity.

It’s not the same as trying to understand sentences with bad grammar, at least those texts represent a human trying to express an idea, so you know with enough effort you can eventually come to understand what they are trying to say and maybe score a dopamine hit.

In this case, no amount of effort will bring meaning to AI-generated gibberish.



