This risks eventually degenerating into a mostly-indiscriminate process like DMCA takedowns, since it's costly to verify likeness, intent, and all that. They'll probably ironically end up using AI for verification, which could be defeated with enough skill or organization.
It will definitely be abused: someone can use an AI-generated likeness of a target to take down the target's original content, probably for an extortion racket or to silence political speech. Large media companies like Telemundo were targeting random YouTubers and falsely claiming copyright against original content even 10 years ago.
It's even more blatant: by current loose definitions of AI (which place the original Palm Pilot's Graffiti text input as the first AI product to reach market), virtually any manipulation of a voice can be argued to count.
Would you want to? If there was a deepfake of you saying gnarly stuff, being racist and selling dubious vitamins, would you say “that’s OK, trash my reputation, just give me some money”?
Unfortunately this will simply be abused to take down videos that aren't AI generated, but which individuals consider unfavourable to their reputation. Judging by the way they handle copyright claims, YouTube has made it clear they're not interested in providing a way to mediate conflicting claims in a non-Kafkaesque manner.
I understand YouTube is in a very difficult position with that system, so I'm not even expressing anger, but I'm not hopeful that they're going to actually wade into mediating these kinds of claims sensibly either. You need to build institutional skills and processes and a culture of commitment to doing the hard yards on resolving these kinds of difficult issues, and YouTube has stayed away from doing that this long...
> Online identity verification is NOT a solved problem
There are a lot of neo-banks in Europe. When I created an account with one (mostly because I was sick of how bad my old bank's online banking was), the process was basically: register with the eID feature of my national ID card, then hop on a video call with an employee for a few minutes to verify your data and identity. After five minutes I had an account.
Maybe an FSB or CIA agent can fake this, but it basically kills 99.99% of scammers or abusers. It's solved. The only reason social media platforms don't do it is that they love their fake numbers. And if you have one account and that's it, you're a hell of a lot less likely to file fake takedown requests.
Most sites, e.g. Meta and Twitter, will simply ask you to provide government-issued identification to prove your online identity, usually uploaded with a recent selfie so they can match it.
But verification of identity isn't the problem I foresee. It's inappropriate takedowns, where the identity in question claims the video is fake or AI generated when it isn't. For example, videos critiquing a particular online personality, whose usual reaction today is to issue DMCA takedowns against those videos (for example: https://www.youtube.com/watch?v=QcEgjZoKEi0 ). I don't see how this new system makes provisions for appeals to prevent abuse (from either side), but I can tell this will be a whole new can of worms.
Not only that, what about videos that are neither AI generated nor actually the claimant?
If a voice actress sounds like another actress, but the other has deployed a copyright trolling firm to send takedowns for anything they can find, they're now interfering with the livelihood of that actress by issuing fraudulent takedowns.
You need a way to prove who it really is, and "the copyright trolling firm has the driver's license of the other actress" doesn't do that.
Right. From the start, it was clear to anyone with foresight that this technology would result in both the spread of bullshit and the casting of doubt on legitimate content as well.
This is actually a genius move by Google/YouTube. Video hosting is expensive, so YouTube is transitioning to a new model where the primary form of engagement is sending each other takedown requests and watching fake SpaceX scam streams.
We know it’s gonna be abused by identity trolls, political movements, business competitors, big and small.
Good. And good timing.
People learned through years that ads are about sales, not products, and they are annoying. Now they will learn that speech is about what you ought to think, not what is true, and it's full of bs. As a saying in my local culture goes: "kup suz buk suz", "many words, crap words". Sudden wisdom from a smaller culture.
AI will finally expose our built-in vulnerability and make it explicit. It has been the default for millennia, so you probably don't even think about it, but now think about it: listening to voices for long enough brings one into alignment with what they say. Imagine your door lock giving up if someone pulls at the handle long enough. Or your dog growling at you after listening to a series of podcasts. Or you changing your views because Celeb Smart said something for the five-hundredth time. It's been working all this time in favor of non-closing mouths; all they needed was your attention and enough time.
This had a useful (probably decisive) effect in primitive groups, because it enabled easy synchronization. But we are clearly, fundamentally dysfunctional in the internet era.
So the video creator has to obtain consent forms from all the people in their video? How is Google going to verify consent forms from people whose faces aren't in their database? What about voices? This line of reasoning is turning dystopian quite fast.
> Newly released documents show that the White House has played a major role in censoring Americans on social media. Email exchanges between Rob Flaherty, the White House's director of digital media, and social-media executives prove the companies put Covid censorship policies in place in response to relentless, coercive pressure from the White House—not voluntarily. The emails emerged Jan. 6 in the discovery phase of Missouri v. Biden, a free-speech case brought by the attorneys general of Missouri and Louisiana....
If putting videos or audio recordings of yourself online exposes you to impersonation that can destroy your livelihood, how does that not chill your freedom of speech?
Well, let's be clear: that does not make you unable to speak. Telling the world the truth (or even lies, for that matter) about someone's life has nothing to do with their ability to speak freely. If everyone in the world then stops wanting to hire them, that's still not a violation of free speech, even if done falsely. Only under some narrow interpretations is it even illegal (and very hard to prove).
Libel is notoriously difficult to prove. You need to show the poster acted with negligence, caused a direct harm, and intended to cause that harm. Posting a video online and having some random person watch it and decide to fire someone is tangibly different from sending that video directly to their employer. That broken link could easily be the difference that loses the civil case.
In practice, libel cases are hard. It might be easier if it’s a deepfake, but it’s still not easy.
To fulfill the requirements for a civil libel claim, you need to point to a specific harm. I suspect it would be difficult to prove that no one wants to hire you and that they all decided that on the basis of said libel.
Why would you need to prove "all"? If it's anywhere near that widespread it should be easy to demonstrate a ton of people not hiring based on false evidence. And it's hard to argue it wasn't malicious or negligent if it's that widespread.
Tracking down >1 people who all rejected you and proving they all rejected you because of a video is strictly more difficult than tracking down 1. In addition, a video being popular doesn’t have anything to do with the intent of the person who posted it.
I'm not an expert though. I only mean to say it would be difficult to prove, just like all libel cases are. I don't think there's anything inherent to a deepfake case that would make it an easy slam dunk for civil libel. I could be wrong though.