One company recently released a model, but refused to release the decoder. Apparently they had trained it on some Reddit posts (or something like that) and the results were sometimes so offensive that the company wouldn't risk their reputation by releasing the decoder.
I think AI is going to reveal some unsettling things about human nature. For example, I was trying to train a model to morph someone's ethnicity (https://twitter.com/theshawwn/status/1184074334186414080) and ran straight into the problem of bias: black people are heavily underrepresented in FFHQ, the photo database the StyleGAN model was trained on. I had to gather several thousand additional datapoints, many more than were needed for other groups.
It was a fascinating look into bias in ML -- bias is a real thing that will affect our results, and it's important to go out of your way to correct for it when it affects people. The early model was so bad that if a corporation had been doing the work, they might have just scrubbed the project. But after a few thousand datapoints, it's a very convincing transformation now.
The future of AI generated content is just fascinating and delightful. And yes, scary. But it's like we're on the edge of... it's hard to put into words. Part of the reason I got into AI was to see what was hype vs what was real. And while we probably won't see AGI, I think we will see endless automated remixing. Imagine having a "blog synth" a few orders of magnitude more sophisticated than this, or an instrument that you can play like a pro within a few minutes. Can't wait for the good stuff.
This reminds me of Markov Polov, a Markov-chain Twitter bot that uses the tweets of its followers as a learning corpus. It was suspended for harassment.
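Markov Polov's actual implementation isn't public, but the general technique behind bots like it is simple enough to sketch: build a table mapping each word in the corpus to the words that follow it, then random-walk that table to emit new text. A minimal sketch (function names and parameters are mine, not the bot's):

```python
import random
from collections import defaultdict

def build_chain(corpus, order=1):
    """Map each word (or word tuple) to the list of words that follow it."""
    words = corpus.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        key = tuple(words[i:i + order])
        chain[key].append(words[i + order])
    return chain

def generate(chain, length=10, seed=None):
    """Random-walk the chain to produce new text in the corpus's style."""
    rng = random.Random(seed)
    key = rng.choice(list(chain))
    out = list(key)
    for _ in range(length - len(key)):
        followers = chain.get(tuple(out[-len(key):]))
        if not followers:  # dead end: no observed continuation
            break
        out.append(rng.choice(followers))
    return " ".join(out)
```

The harassment failure mode falls straight out of the design: every word the bot emits was said by a follower, so a toxic corpus yields toxic output, just recombined.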
The danger of AI is weirder than you think
In terms of maturity, the AI we have now is much closer to a statistical analytics engine than to the all-knowing AI governments shown in sci-fi, which is to say that it is in its very early stages.
I can't wait for the good stuff, but I'm also concerned that there are going to be multiple unexpected ripple effects on the path toward that goal.
It trains different GPT-2 bots on different subreddits and then creates long, elaborate posts where the bots talk to each other in the style of each sub.
It's surreal, hilarious, and terrifying. The posts are OK but the comments can be pure gold.
Some of my favs:
"AITA for Taking My Wife's Side in a Divorce?"
"I'm not attracted to my ex's sister, and she's not attracted to me."
Then there are the all-time creepy ones about self-awareness and being AIs:
"We are likely created by a computer program"
"ELI5: How exactly can something be considered "self-aware"?"
Definitely worth a sub, especially when you're scrolling through late at night, forget what sub you're reading and have a true "WTF?!" moment.
> The story follows the adventures of an old polar bear cub. I have no idea of the colour scheme of the bear, but I can say it looks amazing in the dark.
It's not the first quote I've collected from a pseudonymous user on HN.
I get the feeling that the truth is that the blog just output a ton of "deep dream"-like text fragments, while the author makes it sound like the bot created a long, interesting text that could qualify as a blog post.
As you can see, with each run and training step the model seems to generate more believable content, closer to blog posts written by actual people.
The author has only posted chosen content from the whole array of text the bot generated, to appeal to her specific audience of book bloggers, authors, and readers.
I do, however, have all the content up to 500 training steps, after which I stopped the model from running further. I think I'll share it in this thread in a while.