Hacker News
YouTube lets you request removal of AI content that simulates your face or voice (techcrunch.com)
95 points by gmays 66 days ago | 50 comments



This risks eventually degenerating into a mostly-indiscriminate process like DMCA takedowns, since it's costly to verify likeness, intent, and all that. They'll probably ironically end up using AI for verification, which could be defeated with enough skill or organization.


It will definitely be abused: someone can use an AI-generated likeness of a target to take down the target's original content, likely for an extortion racket or to silence political speech. Large media companies like Telemundo were targeting random YouTubers and falsely claiming copyright against original content even 10 years ago.


But they are giving you an option, at least.


It's even more blatant: by current loose definitions of AI (which would place the original Palm Pilot's Graffiti text input as the first AI product to reach market), virtually any manipulation of a voice can be argued to count.


Why can't I just monetize and steal all the ad revenue if they use my face or voice, just like copyright holders can?


Would you want to? If there was a deepfake of you saying gnarly stuff, being racist and selling dubious vitamins, would you say “that’s OK, trash my reputation, just give me some money”?


You're joking, right? Most of us don't HAVE reputations to trash. Reputation or a meal?


If you don’t have a reputation, why would there be AI generated content exploiting your likeness?


I've begun hearing Joe Rogan's likeness in TEMU-like YouTube advertising... how would he monetize that, then? They're obviously not products endorsed by the real Joe.


Unfortunately this will simply be abused to take down videos that aren't AI generated, but which individuals consider disfavourable to their reputation. YouTube has made it clear they're not interested in providing a way to mediate conflicting claims in a non-Kafkaesque manner with the way they handle copyright claims.

I understand YouTube is in a very difficult position with that system, so I'm not even expressing anger, but I'm not hopeful that they're going to actually wade into mediating these kinds of claims sensibly either. You need to build institutional skills and processes and a culture of commitment to doing the hard yards on resolving these kinds of difficult issues, and YouTube has stayed away from doing that this long...


This is heading nowhere.

Online identity verification is NOT a solved problem. We will see fake takedown requests in no time.


>Online identity verification is NOT a solved problem

There are a lot of neo-banks in Europe. When I created an account with one (mostly because I was sick of how bad my old bank's online banking was), the process was basically: register with the eID feature of my national ID card, then hop on a video call with an employee for a few minutes to verify your data and identity. After five minutes I had an account.

Maybe an FSB or CIA agent can fake this, but it basically kills 99.99% of scammers and abusers. It's solved. The only reason social media platforms don't do it is that they love their fake numbers. And if you have exactly one account, you're a hell of a lot less likely to file fake takedown requests.


YouTube/Google is NOT going to pay anyone to do verifications like you described.


It's mostly solved.

Most sites, e.g. Meta and Twitter, will simply ask you to provide government-issued identification to prove your online identity, usually uploaded along with a recent selfie so they can match the two.
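For illustration only (this is a rough sketch, not how Meta or Twitter actually implement it), the selfie-to-ID matching step could look something like this with the open-source face_recognition library; the file names and the helper function are made up for the example:

    # Hypothetical sketch of the "match a selfie to a government ID photo" step.
    # Real platforms use proprietary pipelines plus liveness checks and document
    # forensics; this only shows the embedding-comparison idea.
    import face_recognition

    def selfie_matches_id(id_photo_path: str, selfie_path: str, tolerance: float = 0.6) -> bool:
        """Return True if the face on the ID photo and the selfie look like the same person."""
        id_image = face_recognition.load_image_file(id_photo_path)
        selfie_image = face_recognition.load_image_file(selfie_path)

        id_encodings = face_recognition.face_encodings(id_image)
        selfie_encodings = face_recognition.face_encodings(selfie_image)
        if not id_encodings or not selfie_encodings:
            return False  # no detectable face in one of the images

        # Compare 128-d face embeddings; lower distance means more similar.
        distance = face_recognition.face_distance([id_encodings[0]], selfie_encodings[0])[0]
        return distance <= tolerance

    if __name__ == "__main__":
        print(selfie_matches_id("id_card.jpg", "selfie.jpg"))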


But verification of identity isn't the problem I foresee. It's inappropriate takedowns, where the identity in question claims a video is fake or AI-generated when it isn't. For example, videos critiquing a particular online personality, whose usual reaction today is to file DMCA takedowns against those videos (for example: https://www.youtube.com/watch?v=QcEgjZoKEi0 ). I don't see how this new system makes provisions for appeals to prevent abuse (from either side), but I can tell this will be a whole new can of worms.


Not only that, what about videos that are neither AI generated nor actually the claimant?

If one voice actress sounds like another, and the other has hired a copyright-trolling firm to send takedowns against anything it can find, that firm is now interfering with the first actress's livelihood by issuing fraudulent takedowns.

You need a way to prove who it really is, and "the copyright trolling firm has the driver's license of the other actress" doesn't do that.


Uber and Lyft require this as well, and it’s easily thwarted.


> We will see fake takedown requests in no time.

Right. From the start, it was clear to anyone with foresight that this technology would result in both the spread of bullshit and the casting of doubt on legitimate content as well.


Some sites register their URLs with DMCA directly to help protect them. Not sure if this is useful, but maybe others do.


You can't "register URLs with DMCA" (a US federal law). Whatever service the sites registered with would be third-party and unofficial.


Do they mean "registered" as in they've appointed a DMCA agent?


I thought this was common knowledge.

There's a company called DMCA.com which provides a site badge service tied to a URL.

We know a law can't be a company; someone or something has to help with it.

"IF YOUR CONTENT IS STOLEN WHILE PROTECTED WITH OUR BADGES, WE WILL DO A TAKEDOWN FOR NO CHARGE*"

https://www.dmca.com/

It’s easy to ask what someone means instead of dismissing them.


Like the current system, this one is not going to help the average user at all. This is for people (or other entities) with money and power.


Precisely: it's Content ID by another name, designed to keep studios, record labels, and agents off their back.


Every person in the world has a doppelganger out there somewhere. Now we can all meet them through the magic of takedown requests.


This is actually a genius move by Google/YouTube. Video hosting is expensive, so YouTube is transitioning to a new model where the primary form of engagement is sending each other takedown requests and watching fake SpaceX scam streams.


product idea: YT Takedowns premium™

it'll auto reject any requests sent to you with AI bullshit and send anyone you want unlimited takedowns


This is the package that major media companies on YouTube already use.


We know it’s gonna be abused by identity trolls, political movements, business competitors, big and small.

Good. And good timing.

People have learned over the years that ads are about sales, not products, and that they are annoying. Now they will learn that speech is about what you ought to think, not what is true, and that it's full of BS. As a saying from my local ethnicity goes: "kup suz buk suz" - "many words, crap words." Sudden wisdom from a smaller culture.

AI will finally expose our built-in vulnerability and make it explicit. It has been the default for millennia, so you probably don't even think about it, but now think about it: listening to voices long enough makes one align with what they say. Imagine your door lock giving up if someone pulls at the handle long enough. Or your dog growling at you after listening to a series of podcasts. Or you changing your views because Celeb Smart said something five thousand times. It's been working all this time in favor of the non-closing mouths; all they needed was your attention and enough time.

This had a useful (probably decisive) effect in primitive groups because it enabled easy synchronization. But we are clearly, fundamentally dysfunctional in the internet era.


This should be opt-out by default. Why do I even need to request it?


How would that work? Give Google reference pictures so that it can train a model on them?


Google requests consent forms for all identified humans in the uploaded video; otherwise their faces are auto-blurred.
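Purely hypothetical, but the auto-blur step itself is cheap; a minimal sketch with OpenCV's stock Haar cascade detector (the file name "upload.mp4" and the function are invented for the example, and this is nothing Google has announced):

    # Hypothetical sketch: blur every detected face in a frame.
    # Uses OpenCV's bundled Haar cascade; a real system would need a far better
    # detector, tracking across frames, and a consent lookup per identified person.
    import cv2

    face_detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    )

    def blur_faces(frame):
        """Return a copy of the frame with all detected faces Gaussian-blurred."""
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = face_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        out = frame.copy()
        for (x, y, w, h) in faces:
            out[y:y + h, x:x + w] = cv2.GaussianBlur(out[y:y + h, x:x + w], (51, 51), 0)
        return out

    # Usage: grab the first frame of an upload and write a blurred copy.
    cap = cv2.VideoCapture("upload.mp4")
    ok, frame = cap.read()
    if ok:
        cv2.imwrite("first_frame_blurred.jpg", blur_faces(frame))
    cap.release()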


So the video creator has to obtain consent forms from all the people in their video? How is Google going to verify consent forms from people whose faces aren't in their database? What about voice? This line of reasoning is turning dystopian quite fast.


And political satire?

Is political satire allowed?


[flagged]


> abuse perfectly legal free speech.

People don't remember that the free speech clause only applies to the government, not to private entities. You don't get to claim free speech in a Wendy's.


We have enough problems with gratuitous DMCA notices, and there are enough bad-faith actors willing to abuse this reporting system.

Whether or not the actual content is AI-generated and defamatory is secondary.


> Newly released documents show that the White House has played a major role in censoring Americans on social media. Email exchanges between Rob Flaherty, the White House’s director of digital media, and social-media executives prove the companies put Covid censorship policies in place in response to relentless, coercive pressure from the White House —not voluntarily. The emails emerged Jan. 6 in the discovery phase of Missouri v. Biden, a free-speech case brought by the attorneys general of Missouri and Louisiana....

https://www.congress.gov/118/meeting/house/115561/documents/...


Exactly. There's open collusion between the DHS, other government organs, and these major platforms.

The veil has been pierced. There is no difference between either YouTube or the US government directly censoring you.


Impersonation for monetary gain, e.g. ad revenue, is a crime in most countries.

YouTube is well within its rights to block this type of content.


If putting videos or audio recordings of yourself online exposes you to impersonation that can destroy your livelihood, how does that not chill your freedom of speech?


Well, let's be clear - that does not make you unable to speak. Telling the world the truth, or even lies for that matter, about someone's life has nothing to do with their ability to speak freely. If everyone in the world then stops wanting to hire them, that's still not a violation of free speech - even if done falsely. Only under some narrow interpretations is it even illegal (and very hard to prove).


It very well can degrade your ability to speak meaningfully.


Explain how, without defining “meaningfully” as “on a platform i think i deserve to be on, that isn’t owned by me”.


> Only in some narrow interpretations is it even illegal (and very hard to prove).

If you made everyone in the world not want to hire someone via impersonation, then that's very blatantly libel. How would it be hard to prove?


Libel is generally notoriously difficult to prove. You need to prove they acted with negligence, caused a direct harm, and intended to cause a direct harm. Posting a video online and having some random person watch it and decide to fire someone is tangibly different than sending that video directly to their employer. That broken link could easily be the difference that gets the civil case lost.

In practice, libel cases are hard. It might be easier if it’s a deepfake, but it’s still not easy.


Why did we go from everyone in the world not wanting to hire them, to a single person not wanting to hire them?


To fulfill the requirements for a civil libel claim you need to point to a specific harm. I suspect it would be difficult to prove that no one wants to hire you and that they all decided that on the basis of said libel.


Why would you need to prove "all"? If it's anywhere near that widespread it should be easy to demonstrate a ton of people not hiring based on false evidence. And it's hard to argue it wasn't malicious or negligent if it's that widespread.


Tracking down >1 people who all rejected you and proving they all rejected you because of a video is strictly more difficult than tracking down 1. In addition, a video being popular doesn’t have anything to do with the intent of the person who posted it.

I’m not an expert though. I only mean to say it would be difficult to prove, just like all libel cases are. I don’t think there’s anything inherent to a deepfake case that would make it an easy slam dunk for civil libel. I could be wrong though


> Tracking down >1 people who all rejected you and proving they all rejected you because of a video is strictly more difficult than tracking down 1.

If it affected millions of people or more then it's really easy to find examples.

> In addition, a video being popular doesn’t have anything to do with the intent of the person who posted it.

The impersonation already says a lot.



