
Ask HN: Should algorithmic news like GPT-2 and clickbait be banned? - burtonator
I've been thinking about this a lot lately, but algorithms like GPT-2 scare the hell out of me:

https://openai.com/blog/better-language-models/

(if you're out of the loop on that one)

I'm excited about it in some ways. It might mean we can build real models from content and build real understanding, but the potential for Google spam and fake news is too frightening.

I think I'm generally OK with A/B testing.

I'm actually a bad writer and sometimes my titles are rough. If you A/B test a bunch of them via something like Mailchimp you can come up with a better title, and of course I don't have nefarious intentions.

Maybe a middle path could be to implement some sort of public key system / validation for actual humans.

This would also solve the fake news problem we currently have, but it would also mean people need to actually use keys responsibly.

The idea being that every blog post, email, etc. you send off would need to be signed.

Google could then flat out DROP any content created by GPT-2, since it wouldn't be validated by a chain of trust that goes back to a human.

The reason I'm leaning towards making it illegal is that humans are just too naive.

I have family members who have sent out images that are clearly photoshopped because they aren't technically literate enough to understand that they're fake.

When something like GPT-2 can write 5-10 pages of content that looks completely legitimate, I think we might be in a world of hurt.
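Roughly, what I'm imagining is something like the following toy Python sketch. (HMAC here is only a stand-in for real public-key signatures such as Ed25519, and the `HUMAN_KEYS` registry is hypothetical -- a real registrar would hold public keys only, never secrets.)

```python
import hashlib
import hmac

# Hypothetical registry: key IDs that a registrar has verified belong
# to actual humans. A real system would store public keys here, not secrets.
HUMAN_KEYS = {"alice": b"alice-secret-key"}

def sign_post(key_id, secret, body):
    """Attach a signature so the post can be traced back to a key holder."""
    sig = hmac.new(secret, body.encode(), hashlib.sha256).hexdigest()
    return {"key_id": key_id, "body": body, "sig": sig}

def verify_post(post):
    """Accept a post only if its signature chains back to a verified human key."""
    secret = HUMAN_KEYS.get(post["key_id"])
    if secret is None:
        return False  # unknown signer: a search engine could just DROP this
    expected = hmac.new(secret, post["body"].encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, post["sig"])
```

Unsigned or tampered content simply fails `verify_post` and never enters the index.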
======
enkiv2
The thing about GPT-2 is that it's only a marginal improvement over a markov
chain, and much worse than paying somebody on mechanical turk a quarter of a
cent per hour to write lies. It doesn't _actually_ hang together -- it only
looks like it might if you are skimming rather than paying attention.
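(For reference, the kind of word-level Markov chain I mean is only a few lines of Python -- a toy sketch, where the `order` parameter and sample text are just illustrative:)

```python
import random
from collections import defaultdict

def build_chain(text, order=1):
    """Map each run of `order` words to the words observed to follow it."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        key = tuple(words[i:i + order])
        chain[key].append(words[i + order])
    return chain

def generate(chain, length=10, seed=0):
    """Walk the chain from a random starting key, emitting one word at a time."""
    rng = random.Random(seed)
    key = rng.choice(list(chain))
    out = list(key)
    for _ in range(length):
        followers = chain.get(tuple(out[-len(key):]))
        if not followers:
            break  # dead end: this key was never seen mid-sentence
        out.append(rng.choice(followers))
    return " ".join(out)
```

Output from this skims as plausible locally but has no global coherence at all -- which is the bar GPT-2 is being compared against.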

The appropriate way to counteract GPT-2 having any effect on politics is to
promote social norms around careful reading, and this solution also works
against all other forms of grey propaganda. ("If you go home with somebody &
they don't read critically, don't sleep with them" is the new "if you go home
with somebody & they don't have books, don't sleep with them".)

Technical solutions aren't really viable, and miss the point. Most human-
generated content on the web isn't amenable to trust chains, because it's
generated anonymously or pseudonymously by people who don't really have the
technical chops to understand cryptography. It's trivial to write a text
document and put it on a web server, & it should remain trivial to do that.
The onus of epistemic hygiene falls on individuals and communities.

> I have family members who have sent out images that are clearly photoshopped
> because they aren't technically literate enough to understand that they're
> fake.

Mock them mercilessly and they will quickly learn how to distinguish shooped
photos from real ones. It's not a deep technical skill, but a shallow
recognition of tell-tale signs -- a skill that is easily learned passively, if
social pressures exist to encourage learning it.

------
zzo38computer
I think that they should not make so many things illegal. However, people
who use GPT-2 and similar tools to make fake news should be given a bad
reputation if the news isn't accurate. (People should also be given a bad
reputation if they just make up stuff and call it proper news, even if they
do not use a computer to do so.)

------
verdverm
What's to stop me from copying the output of gpt2, as a human, into the chain
of trust?

What about the first amendment?

~~~
burtonator
> What's to stop me from copying the output of gpt2, as a human, into the
> chain of trust?

You would lose trust. Humans can only hit a certain volume...
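Something like this heuristic, as a toy sketch (the `MAX_POSTS_PER_DAY` threshold is made up for illustration):

```python
from collections import Counter

# Hypothetical cap: more signed output per day than a human plausibly
# writes by hand.
MAX_POSTS_PER_DAY = 50

def flag_suspicious_keys(posts):
    """posts: iterable of (key_id, day) pairs, one per signed post.

    Returns the key IDs that exceeded the cap on any single day --
    candidates for losing trust in the chain.
    """
    counts = Counter(posts)
    return {key for (key, day), n in counts.items() if n > MAX_POSTS_PER_DAY}
```

A key that signs machine-scale volume stands out immediately, even if each individual post passes as human-written.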

> What about the first amendment?

It's not absolute. You can't yell fire in a crowded theater, and SCOTUS has
ruled many times that free speech has limits.

