Hacker News
Criminals are using deepfakes to impersonate CEOs (fastcompany.com)
30 points by onetimemanytime on July 20, 2019 | 13 comments



Twenty years ago people would fall for a convincing email, or a social media profile with a name and photo. Eventually people started to adjust their natural assumptions about what's trustworthy.

I think the same will happen here. People will learn not to trust audio and video for anything important. Cryptographic trust will become the only strong trust.

I think corporations and governments won't have trouble making this transition. There will be stragglers that learn the lesson the hard way, but everyone will move forward eventually.

What's more concerning is the implication for news and crime, where there inherently can't be a carefully built system of authentication. Videos recorded from phones will cease to mean anything. Maybe all media will stop being admissible in court.

We had a world like that around 150 years ago; trust was purely based on people. There was no general way to capture proof of events. We'll regress to that, but maybe the world won't stop.


People still fall for "convincing" emails and social media profiles with a name and a photo. I work tech support at my university, and we regularly have to deal with faculty clicking on things they shouldn't, or not clicking on things they should. I also do some work for a local non-profit and a month ago two employees came uncomfortably close to costing us thousands.

Cryptographic trust won't help much either unless it is packaged and presented in the right way to the end user - it's basically advanced black magic to most people. I would love to have cryptographic verification set up for the non-profit's communication so that they can have some guarantee of legitimacy, but first I have to figure out how to make it fit into their workflow without requiring them to actually take any action.
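To make that concrete, here's a minimal sketch of what "cryptographic verification of communications" could look like under the hood, assuming the sender and recipient already share a secret key distributed out of band (all names here are illustrative, not any real organization's setup):

```python
# Hypothetical sketch: authenticate internal messages with an HMAC.
# Assumption: SHARED_KEY was exchanged securely in advance.
import hmac
import hashlib

SHARED_KEY = b"example-shared-secret"  # illustrative only

def sign(message: bytes) -> str:
    """Return a hex tag the recipient can check against the message."""
    return hmac.new(SHARED_KEY, message, hashlib.sha256).hexdigest()

def verify(message: bytes, tag: str) -> bool:
    """Constant-time comparison to avoid timing side channels."""
    return hmac.compare_digest(sign(message), tag)

msg = b"Please wire $10,000 to account 1234"
tag = sign(msg)
assert verify(msg, tag)              # legitimate message passes
assert not verify(b"tampered", tag)  # altered message fails
```

The hard part, as the comment says, isn't this code - it's hiding all of it behind the user's existing workflow so no one ever has to think about keys or tags.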

It's important to keep in mind that we have some distorted sampling. I would guess that most if not all of the people on HN are technical to some degree, so we may tend to have a bit of a bubble. We are a pathetically, uncomfortably small minority.

Never, ever, ever rely on humans doing the right thing if you can help it, whether in the form of an individual, a corporation, or a government.


I did say "started to adjust their natural assumptions". Clearly some people haven't yet; perhaps for some people who didn't grow up with the internet, it's not really possible. But we as a society are adapting, is the point.


You vastly underestimate the difference between non-evidence (which held back civilization plenty) and fake evidence. People will easily believe lies.


I'm not saying it's a good thing; it's a clear downgrade. I'm just saying, civilization functioned.


This feels like Icarus' wings melting. Tech gave us so much, but maybe this shows that it can go too far. Perhaps the Great Filter theory, the idea that at a certain point a civilization's technology becomes its own demise, has some validity.

Banning certain AI work is going to be a great joke, kind of like Prohibition was in the '20s. It's not like a nuke, which requires a lot of infrastructure and manpower; these AI attacks need only a computer from Best Buy. Hell, building the box yourself is far cheaper and more efficient. Unless we all agree that all computers should be tracked like guns, plus monitored, I don't think there's a good solution out there.

This deepfake problem hasn't even scratched the surface of what's to come.


The extreme response is for companies to stop putting out programmable silicon. Everything on the cloud, only executing signed binaries.


I wonder when this gets so bad that the only thing you can trust (apart from face-to-face interaction) is a digital signature. BTW, last I heard, Microsoft is building a digital identity system based on the Bitcoin blockchain. They may become invaluable for businesses again. I'm starting to have respect for their strategic planning. Hopefully it won't lead to criminal abuse of their dominant market position again.


At first I cringed when reading about blockchain in this context, but could it have true value here? Some kind of global trust store?
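The core mechanism such a trust store would rely on is just an append-only hash chain: each entry commits to everything before it, so tampering with history is detectable. A toy sketch (illustrative names, not any real blockchain API):

```python
# Toy append-only hash chain: the core idea behind a blockchain-style
# trust store, with consensus and networking deliberately omitted.
import hashlib
import json

def entry_hash(entry: dict) -> str:
    """Deterministic hash of an entry's contents."""
    return hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()

def append(chain: list, payload: str) -> None:
    """Add an entry that commits to the previous entry's hash."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    entry = {"prev": prev, "payload": payload}
    entry["hash"] = entry_hash({"prev": prev, "payload": payload})
    chain.append(entry)

def is_valid(chain: list) -> bool:
    """Recompute every hash; any rewrite of history breaks the chain."""
    prev = "0" * 64
    for e in chain:
        if e["prev"] != prev:
            return False
        if e["hash"] != entry_hash({"prev": e["prev"], "payload": e["payload"]}):
            return False
        prev = e["hash"]
    return True

chain = []
append(chain, "alice registered key K1")
append(chain, "bob registered key K2")
assert is_valid(chain)
chain[0]["payload"] = "mallory registered key K1"  # tamper with history
assert not is_valid(chain)
```

A real global trust store would still need to solve the hard parts this skips: who gets to append, and why anyone trusts the first entry.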


The Fast Company article is plagiarized and stripped down from Axios:

https://www.axios.com/the-coming-deepfakes-threat-to-busines...


I really enjoyed Stephenson's latest take on this in Fall. A tech billionaire got sick of people believing fake news, so he created and then open-sourced a bunch of tools for generating an artificial moral panic about any person, place, or topic. The objective was to make it so only blockchain-backed identities would be trusted (established by another interesting concept the book introduces: holographic identities, "holograph" in the ancient sense of the word).

What it led to instead was the creation of a no-man's-land called "Ameristan" stretching across the middle of red-state America, where people were plugged into a feed of what one character described as "seriously weird shit": incomprehensible memes that self-replicated and led to the creation of a wild-ass version of Christianity.

Anyway, it makes me wonder how one could go about implementing this sort of proof of identity based on all sorts of different factors that, combined, are nearly impossible to spoof.


Good ol' MSM trying to get ahead of incriminating videos of powerful, well-known people doing extremely evil things...


Praxis





