Ask HN: Should algorithmic news like GPT-2 and clickbait be banned?
8 points by burtonator 5 days ago | 4 comments
I've been thinking about this a lot lately but algorithms like GPT-2 scare the hell out of me:

https://openai.com/blog/better-language-models/

(if you're out of the loop on that one)

I'm excited about it in some ways. It might mean we can build real models from content and build real understanding, but the potential for Google spam and fake news is too frightening.

I think I'm generally OK with A/B testing.

I'm actually a bad writer and sometimes my titles are rough. If you A/B test a bunch of them via something like Mailchimp you can land on a better title, and of course I don't have nefarious intentions.

Maybe a middle path could be to implement some sort of public key system / validation for actual humans.

This would also solve the fake news problem we currently have, but it would mean people need to actually use keys responsibly.

The idea being that every blog post, email, etc you send off needs to be signed.

Google could just flat out DROP any content created by GPT-2, since it isn't validated by a chain of trust that leads back to a human.
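The sign-and-verify step being proposed could be sketched roughly like this. To keep the sketch self-contained it uses a symmetric HMAC as a stand-in; a real chain of trust would need asymmetric signatures (e.g. Ed25519 or PGP) so that verifiers never hold the signing key. The function names and key here are made up for illustration.

```python
import hmac
import hashlib

def sign(content: bytes, key: bytes) -> str:
    """Produce a signature over the content (HMAC stand-in for a real signature)."""
    return hmac.new(key, content, hashlib.sha256).hexdigest()

def verify(content: bytes, signature: str, key: bytes) -> bool:
    """Check that the signature matches the content, in constant time."""
    return hmac.compare_digest(sign(content, key), signature)

# A "signed blog post" is then just the content plus its signature;
# a crawler could drop anything whose signature fails to verify.
author_key = b"hypothetical-author-key"
post = b"My blog post, written by an actual human."
sig = sign(post, author_key)
```

With asymmetric keys the verification side would only need the author's public key, which is what would let a search engine check provenance without trusting the publisher.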

The reason I'm leaning towards making it illegal is that humans are just too naive.

I have family members who have sent out images that are clearly photoshopped because they're not technically literate enough to understand that they're fake.

When something like GPT-2 can write 5-10 pages of content that looks completely legitimate, I think we might be in a world of hurt.






The thing about GPT-2 is that it's only a marginal improvement over a markov chain, and much worse than paying somebody on mechanical turk a quarter of a cent per hour to write lies. It doesn't actually hang together -- it only looks like it might if you are skimming rather than paying attention.
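For context on the comparison, a word-level Markov chain generator is only a few lines: it records which words follow which, then walks those transitions at random. This is a generic illustration, not anything from GPT-2 itself.

```python
import random
from collections import defaultdict

def build_chain(text: str) -> dict:
    """Map each word to the list of words that follow it in the corpus."""
    words = text.split()
    chain = defaultdict(list)
    for cur, nxt in zip(words, words[1:]):
        chain[cur].append(nxt)
    return chain

def generate(chain: dict, start: str, steps: int, seed: int = 0) -> str:
    """Walk the chain for up to `steps` transitions, picking successors at random."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(steps):
        successors = chain.get(out[-1])
        if not successors:
            break
        out.append(rng.choice(successors))
    return " ".join(out)
```

Output from a chain like this is locally plausible but loses the thread after a few words, which is the failure mode the comment says careful reading catches.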

The appropriate way to counteract GPT-2 having any effect on politics is to promote social norms around careful reading, and this solution also works against all other forms of grey propaganda. ("If you go home with somebody & they don't read critically, don't sleep with them" is the new "if you go home with somebody & they don't have books, don't sleep with them".)

Technical solutions aren't really viable, and miss the point. Most human-generated content on the web isn't amenable to trust chains, because it's generated anonymously or pseudonymously by people who don't really have the technical chops to understand cryptography. It's trivial to write a text document and put it on a web server, & it should remain trivial to do that. The onus of epistemic hygiene falls on individuals and communities.

> I have family members who have sent out images that are clearly photoshopped because they're not technically literate enough to understand that they're fake.

Mock them mercilessly and they will quickly learn how to distinguish shooped photos from real ones. It's not a deep technical skill, but a shallow recognition of tell-tale signs -- a skill that is easily learned passively, if social pressures exist to encourage learning it.


I don't think so many things should be made illegal. However, people who use things like GPT-2 to make fake news should get a bad reputation if the news isn't accurate. (They should also get a bad reputation if they just make up stuff and call it proper news, even if they don't use a computer to do so.)

What's to stop me from copying the output of gpt2, as a human, into the chain of trust?

What about the first amendment?


> What's to stop me from copying the output of gpt2, as a human, into the chain of trust?

You would lose trust. Humans can only hit a certain volume...

> What about the first amendment?

It's not absolute. You can't yell "fire" in a crowded theater, and SCOTUS has ruled many times that free speech has limits.



