Hacker News
Press freedom means controlling the language of AI (niemanlab.org)
20 points by thm on Sept 29, 2023 | hide | past | favorite | 28 comments


A truly free press controls its language from start to finish. It knows where its words come from, how to choose and defend them, and the power that comes from using them on the public’s behalf. But GenAI language has no such commitment to the truth, eloquence, or the public interest.

Trust in journalism is at historic lows. “Trust us, not the new technology that will put us out of power,” seems like an exceptionally out of touch approach to take on this topic.

I keep coming back to the printing press and the Reformation, because I think we are experiencing a similar phenomenon with the Internet broadly and generative AI specifically. And if there are any lessons to be learned from that era, it's that insisting on your established authority is not a winning move. A "Counter-Reformation," i.e. reforms that take the criticisms into account, might be.


That is a very odd historical analog to pick, but deliciously ironic in that you apparently (?) missed the bit in the OP about "few powerful" interests controlling the labeling of data, and thus the 'vocabulary of discourse' of generative AI. It is ironic because Luther too had "few powerful" parties behind him (various princes, iirc) who also wanted to unseat the Vatican and thus amplify their own power.

So sure, "journalism" is at a low prestige level because a "few powerful" interests -own- all mass media, including the press, and we have a handful of "press services."

The key issue here is the troubling "few powerful," regardless of their declared 'denomination,' so to speak. Don't you agree?


I don’t disagree with your point about those things being a problem, but I think GP’s interpretation of this article is correct:

> This power goes to the core of journalism’s public service, namely its capacity and obligation to artfully, eloquently, and intentionally use language to create and debate the ground truths that anchor shared social realities.

They are upset at being potentially replaced as one of the “few powerful” who get to create fundamental truths. These authors seem to really think controlling thought is the job, which seems pretty anti-freedom to me.


For sure you could draw an analogy between Luther with his princely backers and the fact that generative AI is currently dominated by a handful of corporations that (mostly) want to remove the power of traditional media. But just as the printing press was a global event that went way beyond the Reformation, I think generative AI will eventually be too big and widespread to be contained by a small number of organizations.

Edit: just to add another point, I don’t think the loss of trust in media has come from “the other side” attacking it. It’s come from the media’s own actions.


Re your edit, if you re-read my op you should find we’re in agreement regarding the press’s standing and its causes.

Numerous assumptions are embedded in your optimistic analogy between the printing press and machine learning systems, including continued access to general-purpose computing devices (for the unwashed) and to data (and we can work our way up from just there to far-flung concerns).

What is actually happening is that capital is no longer satisfied with owning the presses; it wants to own the language. That erasure of journalism as a distinct concern (regardless of its du jour state), via erosion of control over language and over what counts as "correct," is the actual issue.

Were these machines benevolent cyber prophets discerning 'veritable truths' about the human condition, such that their conditioning of language and acceptable discourse were a clear benefit, there would be no concern. But they are not, and again, a reminder that the same people who own the press own the AI shops (class-wise).


I understand what you’re saying, but I’m not sure I agree that’s what the future will be, and what the impact of the internet and generative AI will be. I think we’ll see a lot more of so-called “fringe” news, independent news, and so forth. People ultimately trust people more than machines, and so while generative AI might replace the traditional media, I don’t think it’ll replace the Joe Rogans of the world.


> the core of journalism’s public service, namely its capacity and obligation to artfully, eloquently, and intentionally use language to create and debate the ground truths that anchor shared social realities

Excuse me but wtf is this because it sure as hell isn’t what journalism means to everyone who isn’t a journalist.


One expects this to be satire, but it is stated entirely in earnest:

Freedom is control.

People have always been able to make false or misleading statements, at scale, with either false attribution or none at all. This is a matter of degree, not of kind. The primary difference in method is that now these statements can come from any source rather than only an approved mouthpiece.

Why should generative AI compel us to pervert the meaning of the word "freedom"?


Freedom has always had a control aspect. Controlling the things and powers that can inflict harm and would otherwise limit your freedom to go about your daily life, for example, is pretty normal.

Whether or not AI created misinformation will fall into a special control category is a different discussion.


Interesting. To me, it's the ability to fully contemplate reality and the ability to choose paths in that reality if it doesn't harm others. There's certainly control and power aspects involved in this when it plays out.


Definitely. In most societies you don't get the pure "if it doesn't harm others" but already something more like "if it isn't too likely to harm others too much" (with all sorts of intricacies and local flavors).


The "don't harm others" is very complex, multi layered, contextual and so on. This is why looser restrictions on the discussion, debate and even thought are needed.

Note, I said "others", I think there should be the ability to make choices that harm ourselves. We benefit from taking bad paths and learning from them. I don't know how else one can truly grow.


> We benefit from taking bad paths and learning from them.

Up to a point. Nothing is simple.

Systematic net-negative transactions are worth inhibiting if not prohibiting.

For instance, predatory lenders that knowingly put disadvantaged people in moments of dire need on a long road of financial slavery, and inevitable bankruptcy, are a cancer.


If I offer you a sentence for your article, and you accept it and include it, your authorial independence is undiminished. If someone prevented you from including that sentence, your independence is diminished, even if more of the article is in your own words. The same is true if the sentence is offered by a software algorithm.

The freedom to accept or not the words of a {stochastic parrot, emerging intelligence} is a new degree of freedom of the press rather than a regression.


I watched Simulant last night (mediocre movie about human-like AI robots taking over the world).

One of the "laws" in the fake world in the movie was something like "AI must only be used when it's being supervised and doing things at the direction of a human"

I thought this was actually an interesting way to think about it.

E.g., if it becomes possible to tell ChatGPT to "spin up a website in the format of NYTimes.com and post 15 new articles relating to recent events each day. Publish articles without human review," I think prompts like that would really degrade the internet, especially if we also pair them with prompts like "Create Twitter accounts, fake engagement for 6 months or until you reach 5000 followers, then start promoting my NY Times clone using these accounts without getting banned from Twitter."

That sort of prompt 1) performs an initial action, which is fine, but 2) also creates a repeating action that continues the AI's use into the future in an unsupervised manner, in a way that its output is used verbatim, unreviewed by a human.

I think this whole AI ban discussion needs to be a lot more nuanced. We can't just throw "freedom of speech" at the argument and call it a day.

Honestly, I just don't want to live in a world where 10 years from now my coworkers are able to write a prompt "I'm taking a day off tomorrow without telling anyone so simulate me on Slack. Join daily standup at 9am using an undetectable deep fake. Answer the questions in the style of everyone else. Analyze my Github activity to update my coworkers."


> Honestly, I just don't want to live in a world where 10 years from now my coworkers are able to write a prompt "I'm taking a day off tomorrow without telling anyone so simulate me on Slack. Join daily standup at 9am using an undetectable deep fake. Answer the questions in the style of everyone else. Analyze my Github activity to update my coworkers."

Honestly, that world sounds amazing.

Imagine how productive you could be at programming if you could say:

"I'm going to be doing some deep thinking and coding on a hard problem so simulate me on Slack. Join daily standup at 9am using an undetectable deep fake. Answer the questions in the style of everyone else. Analyze my Github activity to update my coworkers."


Point taken. But the point I was trying to make is that deception isn’t cool. AI lowers the bar for deception.


Reminder to everyone: The freedom of the press is not for journalists or some other protected class. It is for every single person who wants to share their opinion with others broadly.

Journalists are not special and do not have more rights. The freedom of the press belongs to everyone.

Many times media outlets will try to give themselves a special protected status and make you think they have freedoms that you do not. That is false. There is no credentialing needed for the First Amendment.


You are right, except it's really the freedom to produce and disseminate information. The word "press," a mere technicality of a bygone era, is now used to manipulate society into believing that it's the media business, or the journalists' profession, that is crucial for democracy, rather than the actual freedom itself.


Well, we all know what's going on here: some journos would like to have a legal moat protecting their jobs from being eaten by tech. Branding it a "freedom" issue is 100% manipulative, and this may hint at why trust in media is not particularly high.


> But GenAI language has no such commitment to the truth, eloquence, or the public interest.

The implication that modern news outlets have a commitment to truth, eloquence, or the public's interest is mesmerizing.


Yea. ChatGPT, while certainly imperfect and unreliable, consistently gives me less bias and distortion than 99% of the news. (For now, the topics are different, of course.)


This stance is both too philosophical and tenuous to be credible. The picture it paints is ideological and doesn't reflect reality. If you read the Times or such landmark journals, yes you will find both journalistic values and proper English. For the rest of the press, textual adequacy has gone through the drain long ago, and you'll find either hot garbage written in the spirit of decorating an advertisement brochure, or pompous logorrhea from people who love the smell of their own farts.

Like any profession that involves manipulating text, journalism will certainly be impacted by generative AI. But I find it truly fascinating that, to defend their profession and its value, the author would advance their superior mastery of English and wordsmithing. The real value of real journalism comes from its values: the capacity to understand what happens, to drill through a topic, to not crack under intimidation, to dare to ask the right questions, and to report back in a factual and useful manner. Without going into the reasons or casting blame, these have been in shambles for a while; generative AI won't change any of that.


down the drain - seeing as we are discussing the finer points of the English language.

See also 'through the roof'


> A truly free press controls its language from start to finish.

I don’t agree with the author's definition of a free press being a constrained press. Seems like the opposite of a free press in my view.

A truly free press lets anyone publish any random babble with the most poorly chosen words.


The article was tl;dr for me, but I have to note the irony of them using an AI generated image for the hero. It's a great image to go along with the content, but maybe undermines their stated concerns a little?


I think the article is baloney, but it really doesn't matter. If these authors are only now, near the end of 2023, voicing concern, they're moving far too slowly to have a say in how this technology gets regulated.


To me, this seems like Canute demanding that the sea not rise, because he doesn't like wet feet.

OK, you want truth, eloquence, and public interest (dry feet)? Great: you need that for which higher ground, or a dyke, is an apt metaphor.

In particular:

> at their best, journalism’s words emerge out of public service, unimpeachable reporting, self-reflexive news judgment, eloquent storytelling, rigorous editing, and timely publication.

"[P]ublic service, unimpeachable reporting, self-reflexive news judgment": these are things many of us already doubt in the media. Rightly or wrongly, and for long before the phrase "fake news" was coined.

"[E]loquent storytelling, rigorous editing, and timely publication": all the things that GenAI is really good at. At least, on the "news article" scale — don't bother trying to use it to make a novel, it's not eloquent on long-form content, at least not yet.

Worse, the first proposed solution:

> First, taking a page from the Writers and Screen Actors Guilds, and aligning with some newsroom unions, journalists could find their collective voice on GenAI. Indeed, we’ve started to see some halting but hopeful efforts at collective action.

To the extent that it's capable of timely publication of eloquent storytelling, it's already an enormous megaphone for any and all propaganda. It doesn't need any trained human journalists who care about the craft. By the volume, it is the ocean and striking unions are Canute (yes, I did have this mental image first and then fit everything else in my comment around it).

WGA has a slightly easier time of this, for now, but only because it's not great at coherent film-length plots. Yet.

The second point is more valid. In particular:

> Hidden beneath GenAI outputs is a vast ocean of data with histories and politics that make GenAI anything but neutral or objective. Who categorized and labeled the dataset, what is overrepresented in or absent from a dataset, how often does the system fail and who suffers most from those mistakes?

This is all 100% true and completely valid.

It's not limited to AI, though. Rather, it sounds like my GCSE history teacher 24 years ago explaining the value and limits of both primary and secondary sources when researching the past, and it even starts to touch upon the fundamental issues of the philosophy of knowledge, cf. the Münchhausen trilemma.





