
I’ve said it here before and I’ll say it again:

Audio and video evidence isn't admissible because it's audio or video (and may be inadmissible nonetheless). It's admissible because someone testifies under oath that they have personal knowledge of its provenance. The burden is on the party introducing the evidence to show that it's reliable, and the question of whether or not it is indeed reliable is a factual one for a judge or jury to answer. It's not assumed to be "true" or to accurately reflect reality just because it's a purported photo, video, or audio recording.




This works in a courtroom. This won't work for a politician who retweets a faked video that then incites real violence. And that, at least for now, is where we truly need to be concerned.


> This works in a courtroom.

Does it though?

Suppose there is a theft at a company. The police go to the company's security team and get the surveillance footage. Presumably admissible.

If it was an inside job, the surveillance footage could be a deepfake showing someone else committing the crime. Or maybe it's real surveillance footage. Without some way to distinguish the two, how do you know?


The interaction of our current legal system with state-of-the-art deepfakes seems terrifying.

To rip from current headlines: https://www.justice.gov/usao-wdpa/pr/erie-man-charged-arson-...

(Leaving aside the merits of the case / opinions, and just using it as an example)

".. the [Facebook Live & coffee shop] videos depict a male – with distinctive hair to the middle of his back wearing a white mask, white shirt, light blue jean jacket, black pants with a red and white striped pattern down the side and red shoes - setting a fire inside of Ember + Forge."

"A review of additional Facebook public video footage from the area of State Street near City Hall in Erie on the evening of May 30, 2020, shows the same individual without the mask but wearing identical clothing and shoes. The subject’s face is fully visible in this video footage."

And that's a federal arson charge (min 5 years, max 20 years prison).


And a witness might lie. How will we manage?


Frequently by incarcerating innocent people.


The onus of responsibility is still on the person doing the violence. Contrary to the fearmongering, I'd say deepfakes will just make people not believe things they read on the Internet (again), especially as the technology becomes more widespread.


Where the ultimate onus of responsibility falls doesn't matter much when you've got a bunch of Rohingya hanging from trees because some politician retweeted a fake video.


If you follow it through, that line of argument makes no sense. If the video weren't fake, would that somehow make the murder of Rohingya acceptable? Of course not.

Ultimately you end up back at the beginning: holding people, and not pieces of information, accountable for their actions. That's an issue with civil society, not social media.


Deepfakes have the potential to exacerbate existing issues in civil society. If your husband has just been murdered because of a riot instigated by a deepfake, it's not a great comfort to know that it's not really the "fault" of the deepfake but instead the "fault" of a broken civil society.

Plus, deepfakes have the potential to create much more noxious videos than real life. Real-life videos depict real people, flaws and complexities included; deepfakes will be constructed to depict whatever representation of the target is most likely to incite the inchoate rage of the mob. It's rare that the former happens to be exactly the latter.


Well, the world does not operate based on what is comforting. I thought that was obvious even before COVID-19, but people keep missing this.

If accepting the comfortable illusion of someone easy to punish is what we're doing, why not blame the person who confirmed your husband's death? Without them you would still have some remote hope of survival!

The first step of solving a problem is recognizing the actual problem - what we find comforting is only a distraction for rationalization purposes. Better to recognize that microscopic organisms caused the crop failure instead of burning an old woman as a witch.


Burning a woman as a witch doesn't build any kind of incentive structure to prevent future crop failures. Punishing politicians who incite violence creates the obvious incentive for politicians not to incite violence, which results in less violence. This applies both to deepfakes and to calls to violence in general.

Politicians don't have a particular right to remain in office despite inciting violence, even if it gives them the sads if they face repercussions.


> If the video weren't fake, would that somehow make the murder of Rohingya acceptable?

Your answer misses the point. What is at stake is evident: one can now provoke riots and deaths from thin air (and bits).


Genocides happen because of propaganda. The planners and financiers and propagandists don't tend to kill with their own hands. But they command massive acts of violence.

Just one example: A radio station (RTLM) in Rwanda laid the groundwork of genocide by dehumanizing Tutsis, among other things by referring to them as "cockroaches" repeatedly. The station was deemed instrumental in the resulting murders of at least half a million people.

It's not fear mongering to look at the facts of history and see that propaganda is a primary way that violence scales. Deep fakes are another tool which will certainly be used towards these same ends. What propagandist would not want to use such a tool?


> It's not fear mongering to look at the facts of history and see that propaganda is a primary way that violence scales.

Propaganda is a primary way that human action and organization, whether violent or not, for good or ill, scales.


So people will believe what they choose to believe. A populist's wet dream. Truly nothing to be afraid of!


We already have plenty of politicians who misquote or lie to sow division and therefore violence. What's new here?


The argument would be that we have social antibodies to typical politician lies, but we've not developed the same antibodies to deepfakes. Since political deepfakes are inevitable IMO, the question is how can we accelerate the development of these needed social antibodies.


It doesn't work right now due to the partisan divide in America. Spreading misinformation to help your party's cause isn't currently looked down on - so, honestly, having deepfakes out there might not hurt much and might at least help reduce the weight that blatantly doctored videos carry with the general populace.


I appreciate you saying this; I hadn't known that. It may therefore be OK in a court of law, but what about the court of public opinion? I could see videos like this easily eroding our trust in the judicial system even further.


It will probably take a generation to get used to this, but as has always happened with new technology throughout the centuries, people born into this new normal will know how not to be fooled.


Will they be able to? Widespread cynicism and distrust feels more likely.


But the jury is made up of humans. Should we be confident that a jury of 12 (in the US) can accept that really convincing deepfakes are not to be trusted? Especially when this form of evidence has been trustworthy for a decent chunk of time.


If no one can testify about the video’s provenance, then it simply won’t be admitted into evidence. If someone commits perjury in order to get the video admitted, the jurors will be the least of the problems.


People commit perjury all the time.



