Now, if it were 50/50, I'd have said they're just as coherent. But it's 13:1, which suggests to me that there's a bias here. I think the authors intentionally selected the quotes that make the least sense out of context to be the Goop quotes, and cherry-picked GPT-2 quotes that happen to make the most sense without any context. This is supported by the fact that I only had to go through 14 quotes before they started repeating.
If that's the case, and I suspect it is, it's not really dishonest per se, but it is at least sensationalist and potentially misleading. It's asking you to draw conclusions by having you participate in an experiment where it has its thumb on the scales.
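For what it's worth, a score that lopsided is very hard to get by chance. A quick back-of-the-envelope check in Python (assuming "13:1" means 13 correct out of 14, and that a reader facing truly indistinguishable quotes is effectively flipping a fair coin on each one):

```python
# How likely is a 13-or-better score out of 14 if each guess
# were a fair coin flip (i.e. the quotes were indistinguishable)?
from math import comb

n, k = 14, 13  # 14 quotes seen, 13 identified correctly (assumed)
p_tail = sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n
print(f"P(>= {k}/{n} by chance) = {p_tail:.5f}")  # ~0.00092
```

So under the "equally coherent" hypothesis a result like that shows up well under 0.1% of the time — consistent with the cherry-picking suspicion, or with Goop quotes just being genuinely easy to spot.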
I wouldn’t spend too much time reading into it or critiquing its honesty. It's clear the author is just making fun of Goop through the medium of AI. Making it out to be anything more than that would be like critiquing The Onion for not providing sources.
I fine-tuned the model (OpenAI's GPT-2) using Max Woolf's gpt-2-simple, on articles scraped from Goop's "Wellness" section. I generated predictions by feeding it a few words from the opening of actual Goop sentences (not sentences it was trained on) and seeing what it spat out.
There aren't many quotes (something like 25) in it right now, but I can add more easily if people have fun with it.
I am working on a new text generation package which should be even simpler to use.
They've gone way beyond "lack of scientific proof" many times, including dangerous advice like "vaginal steaming". https://www.independent.co.uk/life-style/health-and-families...
EDIT: All good now :)
The way you packaged it can be used to label data to improve ML models. I am thinking of such a link being sent out to numerous people to crowdsource labeling. Even if they answer one question, if a few million people answer it once, that's a few million responses to help train the model.
Is it using any notion of common sense?
Voila, content indistinguishable in coherence from the original. Bring your own sanitizing wipes.