
What if a real person reads a script that was created with an LLM? Does that count? Should it?



The blog post specifically mentions that using AI to help write the script does not require labeling the video.


Sorry, I wasn't entirely clear: I was specifically responding to the GP comment referencing the EU AI Act (rather than making a new top-level comment about the original blog post and Google's specific policy), which pointed out:

> Besides, AI-generated text published with the purpose to inform the public on matters of public interest must be labelled as artificially generated. This also applies to audio and video content constituting deep fakes

Clearly "AI-generated text" doesn't apply to YouTube videos.

But it is interesting that if you use an LLM to generate text and present that text to users, you need to inform them it was AI-generated (per the act), yet if a real person reads it aloud, apparently you don't (per the policy)?

This seems like a weird distinction to me. Should the audience be informed that a series of words was LLM-generated or not? If so, why does it matter whether those words are delivered as text or read aloud?



