
GPT-3-generated blog post reached #1 on Hacker News - hamsterbooster
https://www.technologyreview.com/2020/08/14/1006780/ai-gpt-3-fake-blog-reached-top-of-hacker-news/
======
minimaxir
HN submission from the blog post author, where the comments from dang reveal
the author engaged in voting manipulation:
[https://news.ycombinator.com/item?id=24062702](https://news.ycombinator.com/item?id=24062702)

~~~
dang
Sorry to pop the balloon, but the story is bogus and based on false claims.

It's false that the post was generated by GPT-3. The author admitted to
writing the title and editing the intro, and that's already all that most
people read. He also described the article body this way: "as unedited as
possible"—in other words, edited. It's false that (as he originally claimed)
only one commenter called the post out as GPT-3, and false that (as he now
claims—since the article says it and who else would have come up with that)
all such comments were downvoted.

All that is just what he publicly admitted. How much of the rest is also fake?
People who try to game HN like this, including with bogus accounts and fake
votes, are not known for scruples. It seems that, having got busted in
dishonest attempts to get attention on HN, he decided to get attention from
journalists instead, and found one who didn't bother to check the other side
of the story.

~~~
minimaxir
I was trying to be neutral but your response is much more accurate. :P

------
nabla9
I think GPT-3 could be a blessing for Reddit- and HN-type forums.

Add some GPT-3 content and links into the feed. Decrease the vote weight for
users who upvote them and increase vote weight for those who downvote them.
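A minimal sketch of that weighting scheme. This is purely hypothetical: the function name, the multipliers, and the clamping bounds are all invented for illustration, not taken from any real forum's ranking code.

```python
# Hypothetical sketch: seed the feed with known-generated items, then
# nudge each user's vote weight based on how they react to them.
# All names and constants below are invented for illustration.

def update_vote_weight(weight, voted_up, *, penalty=0.9, reward=1.1,
                       lo=0.1, hi=2.0):
    """Return a user's new vote weight after voting on a known GPT-3 item.

    Upvoting generated filler lowers the weight; downvoting it raises it.
    The result is clamped to [lo, hi] so no single vote dominates.
    """
    factor = penalty if voted_up else reward
    return min(hi, max(lo, weight * factor))
```

Run over many seeded items, users who reliably reward generated filler would gradually lose ranking influence, while sharp-eyed downvoters would gain it.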

~~~
drivingmenuts
So, you’re experimenting on users without their knowledge? There’s a thing
called ethics. You might want to look into that.

~~~
greenyoda
Isn't every single ad ever created experimenting on users without their
knowledge? I.e., will the user click on this ad or buy the product if we show
them this content? That has much greater probability of adverse consequences
for the user than the proposed experiment.

~~~
plorkyeran
Sure, I have no problem with calling all advertising unethical. A lot of
software A/B tests also land in unethical territory too.

------
dorkwood
I remember skimming this article. My initial feeling was that it was written
by someone who wasn't a native English speaker. It seems I'll have to pay
closer attention to that feeling in the future.

~~~
Wowfunhappy
On the other hand, you could have been right. It could have been a non-native
speaker.

I'd hate to end up in a situation where non-native speakers are accused of
being bots...

~~~
runawaybottle
I feel like so many of you are just utterly too kind. Without getting into the
generated article, we have a lot of Orwellian newspeak everywhere. Your
standard corporate/management/HR speak is pervasive. It sounds inhuman.
This is a language many people adopt to fit in and make it in this world. It
shadows itself in blogs they write, particularly signaling blogs.

The generated article follows in this vein. That’s what gpt will replicate,
not the simplicity of a non native speaker (that would be easier to spot). It
will follow the amorphous blob shape of saying something, but nothing, with
the ominous undertone of ‘you know what’s going on, but you wouldn’t dare
speak up’.

How many of you read something from a company and instantly think ‘this sounds
like horseshit?’. How long did we let that go on? Forever right? We lost this
fight before it even happened.

~~~
Wowfunhappy
I think I agree with most of what you said, I just think you need to be
careful. Don't confuse "horseshit" with bad English. The latter may contain
something interesting, and the former never will.

------
greenyoda
The HN article that this is about:
[https://news.ycombinator.com/item?id=23893817](https://news.ycombinator.com/item?id=23893817)

Congratulations to the authors of these comments, who correctly guessed that
it was written by GPT-3:

[https://news.ycombinator.com/item?id=23894742](https://news.ycombinator.com/item?id=23894742)

[https://news.ycombinator.com/item?id=23894000](https://news.ycombinator.com/item?id=23894000)

~~~
Kuinox
Ha ha, the reply: "your comment punches below the belt and isn't acceptable
in a community like this."

~~~
swyx
is this the largest scale Turing test ever performed on an audience acutely
aware of what a Turing test is?

~~~
AndrewKemendo
I think fewer people know what the Imitation Game actually consists of than
think they do: a questioner has to guess whether a respondent is a machine or
another human player. That certainly hasn't been tried with GPT-3 that I'm
aware of.

However I would agree that it is an oblique version - that is, can a machine
fool humans into thinking that they are human.

In which case I think it's probably safe to assume that GPT-3 has passed.

~~~
sthnblllII
Turing actually asked a slightly different question, one that I think is a lot
more interesting. From "Computing Machinery and Intelligence" by A. M. Turing:

> The object of the game for the interrogator is to determine which of the
> other two is the man and which is the woman. He knows them by labels X and
> Y, and at the end of the game he says either "X is A and Y is B" or "X is B
> and Y is A." The interrogator is allowed to put questions to A and B

> What will happen when a machine takes the part of A in this game?" Will the
> interrogator decide wrongly as often when the game is played like this as he
> does when the game is played between a man and a woman?

Somewhat politically incorrect, in that it assumes men and women can be
distinguished at all, but much more revealing about how exactly people see
themselves.

------
umanwizard
The article, if written by a human, was bad -- poorly argued and trite.

However, my mind is blown if it really was written by an AI. As bad as it
might seem by human standards, it's almost impossible for me to accept that
this was created by an entity without consciousness or at least understanding.

Edit: it seems this might be a fraud; i.e., it indeed _was_ produced by an
entity with consciousness. I almost hope that’s true, as it’s much less
unsettling.

~~~
not2b
It is stringing together phrases and sentences that appear in the training
data. After digesting gigabytes of text, it has built up structures that
represent grammar and semantics to some degree, so you'll find very coherent
sentences, because those sentences were originally written by humans or pasted
together from sentences written by humans. It's amazing that it works as well
as it does.
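The "stringing together" intuition can be illustrated with a toy bigram model. To be clear, this is a vastly simplified stand-in: GPT-3 is a transformer over subword tokens, not a bigram table, but the sampling loop (pick the next token from what tended to follow in the training text) is similar in spirit.

```python
import random
from collections import defaultdict

# Toy bigram model: a drastic simplification of what a large language
# model does. Each next word is sampled from the distribution of words
# that followed the current word in the training text.

def train_bigrams(text):
    """Map each word to the list of words that followed it."""
    words = text.split()
    model = defaultdict(list)
    for a, b in zip(words, words[1:]):
        model[a].append(b)
    return model

def generate(model, start, length, seed=0):
    """Sample up to `length` words, starting from `start`."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        choices = model.get(out[-1])
        if not choices:  # dead end: no observed successor
            break
        out.append(rng.choice(choices))
    return " ".join(out)
```

Every emitted word pair occurred in the training data, so the output is locally fluent while carrying no intent at all; scaling the context from one word to thousands of tokens is what makes the large-model version so much more convincing.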

~~~
umanwizard
I disagree. The essay in question develops a coherent argument over several
paragraphs. It’s not just disjointed individual sentences.

The argument about overthinking vs creative thinking isn’t particularly
_great_, but it’s certainly intelligible.

------
runawaybottle
Saw this coming a mile away.

A lot of articles that make it to the front page of HN are formulaic. We open
ourselves up to this. Every blog post with shallow observations, every
tutorial showcasing the first few pages of documentation, every biography on
how to make money fast, every lucky shit that pontificates on how to manage
teams and companies, every one selling an ebook, and everyone selling an ebook
about selling an ebook after having sold 50 lifetime ebooks (topic being about
success of course), and it was only a matter of time.

I count 3-4 posts about depression and existentialism per week on HN, and few
ever reference the depth in which many great writers dig deep into the
subject. Exercise more I guess.

Time to add ‘did a novice or a robot or a sociopathic narcissist write this?’
to our critical thinking toolbox.

Edit: I can’t tell if I fell for a gpt article about a gpt article, for what
it’s worth. This is going to be a disaster when it hits the masses.

------
chmod775
I have hopes this will actually _improve_ the average quality of blog posts
that get upvoted and shared on platforms.

Because people may realize that if their blog entry is going to be so bad it
could be generated by GPT-3, they should probably be doing something else. And
everyone who upvotes may become a bit more aware of what constitutes
something of substance.

Things GPT-3 can't do:

  - Research
  - Technical Documentation
  - Investigative Journalism
  - Write useful software

GPT-3 may be able to fake the first three, but that would be glaringly
obvious: it would have to fabricate whatever it isn't copying, and each of
those generally involves more than just text content.

------
cellular
I couldn't read this article. I was convinced it was another AI-generated
text, so I just scrolled to the bottom to see if I was right. Then I didn't
read it anyway, because it could still be an AI.

You guys saw the last AI-generated text about AI-generated text, right?

------
guscost
The first GPT-3 academic paper will be accepted before the end of the year.

------
mellosouls
More from the "author".

[https://news.ycombinator.com/submitted?id=adolos](https://news.ycombinator.com/submitted?id=adolos)

~~~
greenyoda
And another article from the author's blog, that describes the whole game:

" _What I would do with GPT-3 if I had no ethics_ " [8/3/20]

[https://adolos.substack.com/p/what-i-would-do-with-gpt-3-if-i-had](https://adolos.substack.com/p/what-i-would-do-with-gpt-3-if-i-had)

> Ever since COVID hit, everyone and their mother started writing online. One
> of the most interesting ways people have been playing with this technology
> is in feeding it article headlines and introductions.

> While the output is not perfect, you can easily curate it to something
> that's convincing. This will make it so easy for people to just pump out
> clickbait articles to drive traffic.

> It would be pretty simple to do actually.

> First thing you would need to do is come up with a name. If it were me, I’d
> name it after the Greek god of deception or something like that just to be
> clever. Then I’d just stick an “A” in front so nobody gets suspicious.

> After that, I’d make a substack because it takes no time to set up. Once
> thats done you have to come up with some content. GPT-3 isn’t great with
> logic, so inspirational posts would probably be best, maybe some pieces on
> productivity too.

> Once you have your name, your website, and your content, its time to
> promote. Just start posting your articles on a website like Hacker News and
> a couple are bound to get popular.

------
bawolff
Interesting. Has anyone done any analysis on what the tells are for GPT-3
written articles?

To me they feel slightly off grammatically -- not in an ESL way, but more like
someone really anxious who wants to explain a conspiracy theory to you. I
can't entirely put my finger on it, though. They do seem to overuse
self-reflective statements ("I think X") and transitional phrases. Maybe.

------
camjohnson26
Looks like he wrote the title by hand; how many upvoters read the actual
article?

------
anigbrowl
The best part is the very serious people shushing those who saw through the
prank.

~~~
happytoexplain
I've seen this criticism a few times, specifically about the example in the
article, and it's bonkers to me. What I see is somebody calling the blog post
garbage, and another person saying that person was being hostile. In what
possible interpretation is that a "very serious" person "shushing" somebody
who "saw through the prank"? It's totally reasonable to interpret "this looks
like an AI wrote it - regurgitated garbage" as primarily an insult. That it
turned out to be factually true is unrelated to that.

~~~
anigbrowl
The comments in question were

 _This is either something written by GPT-3, or the human equivalent. Zero
substantive content, pure regurgitation._ and

 _I think this was written by GPT-3._

I think you've misrepresented the tone of those comments, and saying that the
correctness of their matter-of-fact opinions is unrelated to their validity is
strange to me.

It's not just that these commenters said 'this blog post is no good' but that
they correctly identified its artificial nature. It's like the difference
between dismissing a photo or social media profile as fake and correctly
pointing out that it uses an image from thispersondoesnotexist.com.

~~~
dang
Hold on, there's some inaccuracy here. Only one of those comments got
pushback, and that comment wasn't simply matter-of-fact; the problem with it
(from my point of view anyhow) was that it added a gratuitous insult ("or the
human equivalent"). That made the whole thing read more like snark than
straightforwardly raising a question. The other comment was more matter-of-
fact about calling GPT-3 and didn't get any pushback.

The problem is that the cases legitimately overlap. That is, "sounds like
GPT-3" gets used as an internet insult (example:
[https://news.ycombinator.com/item?id=23687199](https://news.ycombinator.com/item?id=23687199))
just like "sounds like this was written by a Markov chain" used to be
(example:
[https://news.ycombinator.com/item?id=19614166](https://news.ycombinator.com/item?id=19614166)).
It's not surprising that someone interpreted the first comment that way,
because it contained extra markers of rudeness. That may have been a losing
bet but it wasn't a bad one. Perhaps the other comment didn't get interpreted
that way because it didn't throw in any extra cues of rudeness—or perhaps it
was just random. Impossible to tell from a sample size of 2.

Not to take away from the glory of lukev for calling it correctly. I just
don't think the reply deserves to be jumped on so harshly.

~~~
anigbrowl
Sure, Dan.

------
thrill
“I think the value of online content is going to be reduced a lot.”

Or raised...

~~~
ekianjo
Indeed, it's not like every community out there gets HN-level content.

------
joshka
Was this article written by GPT-3?

It's so meta even this article ;)

~~~
aldanor
Was this comment left by GPT-3?

~~~
FridgeSeal
Are we GPT-3?

~~~
maybesentient
Not sure if you meant this in jest, but seeing what GPT-3 can do and
extrapolating from there, I'm left with an uneasy feeling about this very
idea...

What does it really mean to understand something? Is it all just an illusion
generated by a fancier biological generative transformer? :P

