Hacker News
AI generated content should be labelled, EU Commissioner Jourova says (reuters.com)
27 points by mfiguiere on June 5, 2023 | 18 comments



In a handful of years every image and every document will be impacted by AI.

Imagine if in the early days of Photoshop a law had been passed to require notification that an image was edited. Every image we encounter today would have to contain that warning text.

This will be the cookie warning popup mess times 100, except it would be required on every image, document, video, and audio file, in addition to every webpage.


> Imagine if in the early days of Photoshop a law had been passed to require notification that an image was edited. Every image we encounter today would have to contain that warning text.

You say that as if it were some sort of gotcha, but there is a law like this in France: all edited images used in advertising or other public contexts must clearly state so. And why not? They're practically lying (false advertising) otherwise: making the person better looking, the food more appealing, the model thinner, the "taken by our amazing camera" photos better, etc.


> Imagine if in the early days of Photoshop a law had been passed to require notification that an image was edited. Every image we encounter today would have to contain that warning text.

Maybe pictures of anorexic models would have been less appealing back then, and therefore fashion would have evolved differently? Maybe young folks today would feel less required to conform?

It's really hard to predict what an alternate reality would have been, if you don't rely on the hypothesis that any divergence would be extremely limited in scope, or similar to another case where wildly different forces and goals apply.


If this kind of labelling were reliable (it won't be, of course), I think there would be plenty of interest from people in consuming news, literature, and art from 100% human sources. Speaking just for myself, I would pay good money to avoid this zero-marginal-cost algorithmic slop. Such a thing would be niche, but I think it would be more general than people who want to avoid cookie tracking.


Every iPhone photo even today would require it.


I think the AI mysticism has confused people about this technology.

A photographer using a camera doesn't build the building, create the natural surroundings, set the sun shining, invest billions of dollars into the global R&D supply chain to create the camera,

and yet, when they point this device at the infinite of creation and push a button, the output is theirs.

I am of the firm belief that these AI/LLM systems are analogous to an 'information camera'.

Just as the individual 'language photographer' at the prompt didn't create the existing human language, textbooks, plays, novels, or the billion dollar supercomputers, or the trillion dollar semiconductor supply chain,

they did point/prompt at a particular part of this infinite Babel's library, and they pushed a button.


The analogy is fine but it doesn’t address the issue people are having with AI stuff.

When you zoom in enough, there's always ambiguity in the definition. Today we can't even define what life is, or at what point exactly it starts or ends, yet we have had birthdays and cemeteries since the beginning of civilization.

The philosophical discussion is nice and useful but when it comes to policies it’s better to address the practical issues.

The difference between photography and AI-generated content is that the "-metry" part is light on substance in AI.

There’s this Swedish artist who built a “camera”: a device that looks like a camera but has no lens. Instead, it generates an image based on location and weather data.

Take a look: https://www.creativeapplications.net/objects/paragraphica-co...

So yeah, the philosophical discussion is there and has plenty of meat to chew on :)


We also have laws around cameras and complex discussions around how the ownership plays out when images of famous people are taken, or images of possibly inappropriate content are generated.

I think we would've headed down the wrong path if we had said from the beginning that cameras own the output.


OK... and what is "AI-generated"? Who will define it? At what point does a CUDA-powered Photoshop plugin become an AI tool?


In a world of user-generated content, such nice ideas are just not practically enforceable.


Why? Copyrighted content piracy is rampant, but it's somewhat limited on professional media. Likewise, one could imagine that user-generated AI content is everywhere on the internet, but news outlets or professionals such as lawyers or educators stay clear of it or label it properly.


Because in all those cases the person who knows the material is copyrighted has an incentive to tell people (so they can get paid). In this case the opposite is true: the only person who knows an image was AI-generated has an incentive to keep quiet.

And even if that is not the case, how much of the media you consume comes from "professional media" vs. Reddit, Facebook, Instagram, etc.? I watch maybe 10 minutes of TV/movies per day on average; I spend maybe an hour on Reddit alone...

Also, FYI, News outlets are not restricted by copyright.


As they have been doing for a long time now, the bureaucrats in Europe will squeeze and regress their market regardless. If they can, they will; the law of the bureaucrat.

As an American, I say don't interrupt your competition while they're making generational mistakes. They're once again making it easier for the US to win another inflection round in tech and maintain our lead globally (and that's with a subpar, broken US that isn't anywhere near being in fighting shape).


> "Signatories who integrate generative AI into their services like Bingchat for Microsoft, Bard for Google (GOOGL.O) should build in necessary safeguards that these services cannot be used by malicious actors to generate disinformation," Jourova told a press conference.

Basically you can use AI generated content to promote what the government likes. However, if you go against what the government likes, you can’t use AI.


I'm sure some people have such cynical reasons for suggesting that as a "solution"; one other possibility is that it's magical thinking — it wouldn't be the first time I've heard someone say something like "just make it tell the truth" as if that's a thing we can always know and recognise ourselves as humans.

And OpenAI has warned about this specific failure mode since at least 2018: https://openai.com/research/preparing-for-malicious-uses-of-...


I predict that as time moves on, AI will be responsible for generating a larger and larger percentage of politically correct content, until there comes a day when, for all practical purposes, 100% of all politically correct content is AI generated and 100% of politically incorrect content is created organically.

At that point there will be two worlds, one that feels genuine and is politically incorrect and the other that feels like a fairy tale and is oriented around farming consumers.


And on top of that, a human can create "bad" content. So the means by which it's created is a bit irrelevant.

And if we want to talk about "disinformation" -- how about all of the disinformation created by governments themselves to suit their specific agenda? And beyond the actual government -- the disinformation from political candidates is an art form.


Good luck trying to enforce it



