Deepfake used to attack activist couple shows new disinformation frontier (reuters.com)
246 points by aaron695 on July 15, 2020 | 88 comments



The article is likely a submarine[1] for the deep fake detection company that it mentions (Cyabra).

[1]: http://paulgraham.com/submarine.html


Thanks for posting this. It's timely for me.

I've been stuck in a loop with my father for a while now where he forwards me a certain sort of propaganda. I feel like I've struggled to break through on a key point: when I say something smells like propaganda, it doesn't mean I think it is false--it means I think it's part of an organized campaign to build narratives and consensus.

In my case, the topic is highly charged. Reapproaching it with something low-stakes like this is at least worth a try.


Skilled propaganda isn't about making up stories; it is about amplifying and positioning existing truths. Nothing has to be a lie, and it affects the receiver as much as it impacts the whole information supply chain.

This piece might encourage someone else to research a different story, and thus it amplifies the goals of the original propagandist. Many folks in a psyop/propaganda campaign are unaware of any sort of coercion. If the receiver recognizes it as propaganda, the effort has failed.

It is an exciting time for sociologists: the disinformation war occurring right now is global in scope. If the Cold War was WW3, we are currently in WW4.


This is what frustrates me so much with people who seem to think any article that doesn't lie is totally fine and can never be problematic. There seems to be a belief that you can't deceive unless you lie.

The very best propaganda doesn't lie, it simply is very selective about what truth it shares. By carefully choosing your facts, using loaded language, and by properly framing your statements, you can make almost any argument without lying.


This resonates with me. I used to have similar arguments about forwarded messages with my late father, who was an electronics engineer and really sharp in his prime.

Intuitively it'd seem that the greater dissemination of information with social media and messaging apps would make people smarter and promote critical thinking, but in reality they seem to have the opposite effect. I still keep wondering whether it was the environment or his age that was the primary factor. Do we become more gullible and less discerning as we grow older? Perhaps I'll be much worse when I'm as old as he was.


> Intuitively it'd seem that the greater dissemination of information with social media and messaging apps would make people smarter and promote critical thinking

I naively believed this too. It's wrong. The Internet and the various apps, sites, and social media things running on it aren't an information dissemination system; they are a data dissemination system. They take bytes and get them in front of many humans. The system itself cares not one whit whether the data it transmits has any connection to reality.

The result is that the Internet functions as a mostly indiscriminate amplifier of its inputs. Sadly, it is much easier to input lies than truth. First of all, creating lies takes much less effort, since no measurement or fact checking is required. I could tell you a hundred lies about what the neighbor next door is doing in the time it took you to go over there and ring the doorbell. Worse, my lies can be crafted to take full advantage of all of the human biases and flaws in the social media systems. I can make lies that are emotionally loaded, play into stereotypical narratives, or come with evocative photos. Meanwhile, you, foolish truth-teller, are restricted to only the narratives that actually happened and the imagery that you actually took of the event.

When you think of London in the Industrial Revolution, do you picture all of the bright shiny products of industrialization? The newly affordable textiles? Or is it more an image of a city covered in filthy coal soot? It feels like we're in the latter state right now: buried under the pollution of disinformation.

We need to think of "information pollution" and need an "information environmentalism" movement if we're going to survive the Information Age.


These concerns, in turn, resonate with me.

I think my father is still sharp, and I have several competing quotidian explanations (projections?) of what's going on.

But, it's impossible to silence the nagging worry that there's a chemical/mental imbalance brewing, or that this will be one of those things I'll look back on some day as an early sign of something.


Does your father also have the MSNBC brain cancer?


Makes sense. This old set of articles probably still applies:

https://www.crikey.com.au/topic/spinning-the-media/


Cool website, didn't know this one...thanks for sharing!


Could you please expand on what you mean? From the context I think you're saying it's a type of guerrilla marketing but the link doesn't seem to align with that. Nor does the article because the academics really do seem to have been falsely accused by the fake student/journalist mentioned in the article.


There's something ironic about planted articles from a fake person being planted by a PR firm.


Interesting that PG had help from Aaron Swartz in writing the article (2005).


That's not a deepfake, it's a computer generated person. Deepfakes are when someone superimposes the face of someone onto a video of someone else.


I was thinking the same thing. Is there a word for a computer-generated person? The whole article felt off to me because of what I interpreted as a misunderstanding of what a deepfake is.


> is there a word for a computer generated person?

Maybe "infomorph"?


This should be the answer, even if there is a more accepted term already in use. The etymology is transparent, and it describes the concept in question effectively without risking confusion with adjacent concepts.

Also it reminds me of Animorphs!


Sadly, the name has apparently been taken by another concept. For now.


I would suggest "compuppet"


That sounds so close to a Pokemon name... :)


The opposite will happen. I doubt the call for precision wins out.

Reuters and other media will stretch "deepfake" to mean more things, and it will become a category of related frauds.

I have to wonder when the meaning change occurred. It's hard to tell whether they got it from an interview or whether a misunderstanding introduced it into the article.

>A generative adversarial network is the name given to dueling computer programs that run through a process of trial and error, according to Hao Li, chief executive and cofounder of Pinscreen, a startup that builds AI avatars.

>One program, the generator, sequentially fires out millions of attempts at a face; the second program, the discriminator, tries to sniff out whether the first program’s face is a fake. If the discriminator can’t tell, Li said, a deepfake is produced.

https://graphics.reuters.com/CYBER-DEEPFAKE/ACTIVIST/nmovajg...
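
For the curious, here's a minimal sketch of that generator/discriminator duel in PyTorch, on toy 1-D Gaussian data rather than faces (illustrative only: a face model like StyleGAN is vastly bigger, but the adversarial loop has the same shape):

  import torch
  import torch.nn as nn

  def real_batch(n):
      # "Real" data the generator must learn to mimic: samples from N(4, 1.25).
      return torch.randn(n, 1) * 1.25 + 4.0

  generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
  discriminator = nn.Sequential(
      nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

  g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
  d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
  bce = nn.BCELoss()

  for step in range(2000):
      # Discriminator: label real samples 1, generated samples 0.
      real = real_batch(64)
      fake = generator(torch.randn(64, 8)).detach()
      d_loss = (bce(discriminator(real), torch.ones(64, 1))
                + bce(discriminator(fake), torch.zeros(64, 1)))
      d_opt.zero_grad(); d_loss.backward(); d_opt.step()

      # Generator: try to make the discriminator call its fakes real.
      fooled = discriminator(generator(torch.randn(64, 8)))
      g_loss = bce(fooled, torch.ones(64, 1))
      g_opt.zero_grad(); g_loss.backward(); g_opt.step()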


CGI?


An avatar; placeholder for identity.


It's a good one too! In our experience, "avatar" is commonly used in relation to information warfare operations and is comparable to sockpuppets: deeply socially engineered identities built to skew public opinion.


Good luck convincing society of that before “deep fake” takes off as a term.


Media outlets are already using deepfake as a general term for photographic manipulation. You might recall that video of Joe Biden making lewd gestures on Trump's Twitter. That was reported as being a deepfake but was in fact made with a generic pinch-distortion effect.

To expand on the parent comment's point, this is most likely using some variant of StyleGAN2, which has plenty of endpoints on the Internet making it easy for someone to just download a face for free.

However, they all have this uncanny StyleGAN glare. And there are still artifacts. Quality's improving though. Scrubbing it of EXIF data doesn't really deter detection.
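
On the EXIF point, checking what metadata an image actually carries takes a few lines of Pillow (a minimal sketch; the filename is hypothetical, and absence of EXIF proves nothing on its own):

  from PIL import Image

  img = Image.open("suspect_profile_photo.jpg")
  exif = img.getexif()
  if not exif:
      # GAN images served over the web typically carry no camera metadata.
      print("No EXIF data found.")
  else:
      for tag_id, value in exif.items():
          print(tag_id, value)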

Go ahead, try posting some StyleGAN avatars to Facebook or something. Your account will get flagged immediately for doing so. There's already robust interest in "deepfake" detection, and there are methods to fingerprint generators, or to adversarially attack machine-learning models with data poisoning, so that outputs carry a reliable dead giveaway. Kind of like putting a dye pack in your bills.

What is NOT helping is how media outlets drum up cheap panic over the "end of truth" or insinuate that the barrier for disinformation campaigns is getting lower. Both statements can be valid, but I think we are teaching the wrong lesson by telling people they can't trust anything, rather than helping them establish what trust means to them in the first place.


While you're technically right, the term has stuck to the image and video manipulation aspects over the last 24 months, to simplify the terminology for consumers and readers. And finding a cool name for the image frames that are generated through GANs (generative adversarial networks) is a great idea...

Probably something as simple, and catchy, as deepfakes (a term ultimately traced back to the original Reddit user, if I remember correctly).


Oliver's face was probably created with https://thispersondoesnotexist.com/

Surprised that publications don't do more research as anyone can create a social media profile that looks legit these days.

Trolls have been here for years and they'll continue to find new ways to exploit weaknesses.
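
As an illustration of how low the barrier is, a sketch of grabbing such a face, assuming the site still serves a freshly generated image on each request (the User-Agent header is a guess at what the host tolerates):

  import requests

  resp = requests.get(
      "https://thispersondoesnotexist.com/",
      headers={"User-Agent": "Mozilla/5.0"},  # some hosts reject bare clients
      timeout=10,
  )
  resp.raise_for_status()
  with open("fake_face.jpg", "wb") as f:
      f.write(resp.content)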


That's not a deep fake, that's just a regular fake


Right – I thought from the headline they had concocted a video of the activists doing or saying something repugnant, which does seem like a problem that’s fast approaching.

This, on the other hand, is a case of a fake online profile being used to say terrible things about the activist couple. The only notable thing is the profile photo was computer generated.


I mean I bet they even just used thispersondoesnotexist.com until they got a picture that they thought would work.


Yeah, I agree.

Similar to several accounts created on Twitter to spread misinformation.


Has anyone ever experimented with totally sanitizing identities when reading news? For example, if there were a browser plugin that would cleanse pictures, associations, academic credentials, and especially names from news, would that make a difference in the way that we as humans process information?

Deepfakes create an "adjacent identity" to trick people into aligning their opinions with someone who is in the same group. But, if we were aware of this, and removed the identity to strictly review the information, it might change the way that information is received.

This would only work if a majority did this, so it would never work, nevermind.
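
For what it's worth, a rough sketch of the sanitizing idea using spaCy's named-entity recognizer to blank out person and organization names; treat it as a toy, not a browser plugin:

  import spacy

  # Requires: pip install spacy && python -m spacy download en_core_web_sm
  nlp = spacy.load("en_core_web_sm")

  def sanitize(text):
      doc = nlp(text)
      # Replace entities right-to-left so character offsets stay valid.
      for ent in reversed(doc.ents):
          if ent.label_ in ("PERSON", "ORG"):
              text = text[:ent.start_char] + "[REDACTED]" + text[ent.end_char:]
      return text

  print(sanitize("Oliver Taylor wrote an article in The Algemeiner."))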


Reputation matters.

There are differing views on how much to showcase authors (or reporters) names, and some organs (The Economist, Hacker News) intentionally hide or de-emphasize these.

Almost all questions of identity actually revolve around trust in one of its various guises: trust, credit, accountability, entitlement, and the making or receiving of payment. In creating or relating (mediating) information, the persistence of authorship attribution provides a bundling handle for trust. An author might file numerous stories per year, month, week, day, etc. Assessing and assigning trust to each story is expensive. Bylines, editors, and publications accrue positive or negative associations with time. No, trust is not fully consistent or transitive, but as an efficiency heuristic, rolling up and bundling reputation offers powerful gains, and numerous checks.

Even a pseudonymous source (you're reading one now) can accrue a certain reputation.


"News" is usually a hybrid of celebrity gossip, ephemeral sports/weather, PR / political narratives, and journalism. You seem to be under the impression that everyone wants to "read news" like they are investigative journalists.

Yes, the crowd on HN is more likely to do this, but I suspect it is very overwhelming for the average news reader to correlate so many data points.


> “The distortion and inconsistencies in the background are a tell-tale sign of a synthesized image, as are a few glitches around his neck and collar,” said digital image forensics pioneer Hany Farid, who teaches at the University of California, Berkeley.

and:

> Artist Mario Klingemann, who regularly uses deepfakes in his work, said the photo “has all the hallmarks.”

As an amateur photographer who only* knows enough Photoshop to quickly enhance images, I find these two statements completely unconvincing. This type of he-said-she-said news article makes me uncomfortable. TTBD: Truth To Be Determined.

* Edit: I know nothing about deep fakes but I expect "fake faces" to have "hallmarks" in the face.


What do you think is unconvincing about them and in what way do you believe that expertise in Photoshop has any relation to this technology?

At best, Photoshop can play a role in covering the tracks of evidence or artifacts from this much more sophisticated approach to faking identities.

It’s not to say the developers and scientists at Adobe are lesser, it’s that it’s not the same tool or problem that’s being solved.

Put crudely, Photoshop can let you draw a mustache on someone's photo. This is about inventing a photo that never existed before.


> in what way do you believe that expertise in Photoshop has any relation to this technology?

They weren't expressing expertise in Photoshop. They were saying that basic use of it is the extent of their expertise.

> What do you think is unconvincing about them

Them being the 'experts' - they did nothing to convince other than say "I'm an expert in this thing."


> It’s not to say the developers and scientists at Adobe are lesser

Adobe research scientists are crazy-strong in the area of deep/neural graphics[1]. Perhaps we should disentangle Adobe Research from Photoshop?

[1] https://research.adobe.com/publications/


The article continues into the link at the bottom, which goes into great detail: https://graphics.reuters.com/CYBER-DEEPFAKE/ACTIVIST/nmovajg...


That is more helpful, thank you. I'm still unconvinced since the discrepancies described are easily explained with optical depth-of-field as well as resizing and sharpening algorithms; the basic photography kind of stuff I'm familiar with.


I'm also not convinced. The fake is convincing enough for most people, especially given that a picture like that is unlikely ever to be displayed at more than 100px.


The lede is buried: the "deepfake" wrote for multiple newspapers:

  In an article in U.S. Jewish newspaper The Algemeiner, Taylor had accused Masri and his wife, Palestinian rights campaigner Ryvka Barnard, of being “known terrorist sympathizers.” 
So there's a real person behind the account who simply wants to stay anonymous; in this case, traditionally the newspaper bears the responsibility for the smear.


That doesn't follow. My first guess was that those articles were just part of the identity's cover, written by a group that was running "Taylor".


The lower left (his right) section of the collar has a bit of "crunchy pixels." Otherwise, it looks pretty damn realistic.

This appears to be a "thispersondoesnotexist.com" image. Is that site considered "deepfake"?

I always assumed "deepfake" to mean altering existing images to add actual people's likenesses (like the classic "celebrity porn" images). In particular, applied to video.

This type of thing is likely to become de rigueur for astroturf and scam accounts. Disturbing, but not particularly newsworthy.

Before, people would just scrape images from stock photo sites, or even from some poor sap's Facebook photos. Doing it this way simply makes it less likely to be uncovered.


> This type of thing is likely to become de rigueur for astroturf and scam accounts. Disturbing, but not particularly newsworthy.

It's newsworthy if the fake account's posts are used to assassinate the character of real people and the fake persona is pointed out for being fake:

> Reuters was alerted to Taylor by London academic Mazen Masri, who drew international attention in late 2018 when he helped launch an Israeli lawsuit against the surveillance company NSO on behalf of alleged Mexican victims of the company’s phone hacking technology.


Look at his rightmost tooth.


Also the neck seems off, as if the person is tipped way back in a chair but facing forward and smiling.


I am not sure this is a new frontier at all. This is a case of no one being who they say they are on the internet. Frankly, given the mood in the US these days, it seems wise not to share your opinion too openly (or leave your contact information).

Since it is not a deepfake (it may very well be an enhanced pic for all we know), the question becomes: what is the purpose behind this author's post? Did they just learn about deepfakes and want to capitalize on it? Did they want to propagate a specific point (and if so, which one)?

I miss the days when I took articles at face value (ignorance is bliss). Now I can't help myself and analyze even former bastions like Reuters. It is sad to me.


Deep fakes are going to start becoming part of our news, our entertainment, and our justice systems. "Fake news" a few years ago was just cherry-picked data and skewed graphs; now the probability that any given piece of media is completely fabricated is high enough to consciously consider.


I’d argue they’ve been here for a while and we just called it “cinema”. Remember all the superimposed historical scenes from Forrest Gump?

The technology got easier, everyone became a publisher, and now we have a more prickly name for it.


1. As others have noted, that's not a deepfake.

2. This looks like any other sock puppet account, but the benefit of using a generated person is that they won't accidentally cross paths with the real one and get outed.

This is not an innovation, just a marginally more effective approach to obfuscating your trail.


If this guy has no reputation then who cares if he physically exists?

I blame social media. It comes with the implication that it somehow makes sense to care what total strangers think of things.


Imagine what might happen if someone were, say, to carefully craft an online persona (or several) over the course of a decade or more, complete with substantial recognition on numerous online platforms, news and academic citations, interviews, and public campaigns.

All based on created or assumed identifiers and imagery.

I am of course referring to myself. Though there may be others.


The only news here seems to be that newspapers will publish articles with verifying who is writing them.


> The only news here seems to be that newspapers will publish articles with verifying who is writing them.

Perhaps you meant, "_without_ verifying who is writing them"?


Yes.


Our world isn't ready for any "ethical" AI, especially any facial recognition nonsense. This AI nonsense is just gonna get worse.


There is nothing "intelligent" about any of this. These are dumb algorithms doing what they're told. That said, there is everything "artificial" about this. Perhaps we can shorten "AI" to merely "A?"


I typed this already on the duplicate submission but here it goes.

Anytime I see things about GANs, deepfakes and bots, it reminds me of the videogame Deus Ex: Human Revolution (2011).

The game is set in the not-so-distant future and revolves around technology, ethics and humanity.

It touches on various concepts like AI, biomedical advancements, hacking, weaponry and civil rights.

The particularly profound bit that I relate to situations like this revolves around a TV news broadcaster called Eliza Cassan.

---Spoilers Ahead!---

Eliza works for one of the most powerful media and news agencies, called Picus.

She is a celebrity, and with her ability to get the latest news and coverage she is able to direct the conversation on the various political and ethical issues I mentioned earlier.

As you progress through the game and break into the Picus News HQ, you come to realise that no employees have met or dealt with Eliza Cassan in person.

This is because she is revealed to be an AI-powered hologram, composed of what I would assume are desirable human aesthetics.

It is not established who controls Eliza, but you can tell she has a degree of autonomy yet is ultimately made to operate for the benefit of her operators.

Her AI is also intrusive and appears to be able to mine data from around the world (cameras, hacked devices), allowing her unprecedented global access, which she uses to analyse and direct conversations for her viewers.

A thoroughly good game which gives people on both sides of the technology debate a taste of what our future "may" be like.

https://deusex.fandom.com/wiki/Eliza_Cassan


S1mone is about an actress who quits a movie and is replaced by a CGI puppet, not completely dissimilar from situations in the production of Back to the Future Part II and All the Money in the World. The main character of S1mone also conceals the fact that she isn't real.

https://en.wikipedia.org/wiki/Simone_(2002_film)

https://en.wikipedia.org/wiki/All_the_Money_in_the_World


Imagine a video showing the leader of a country discussing a pre-emptive nuclear strike on another nuclear-armed country, only the content of the video is not real but an ultra-realistic deepfake.

Not sure how accessible the Israeli tool used in the article for detecting manipulation is.

There is an urgent need gap for accessible tools for detecting deepfakes[1] (sorry for the plug; it's my problem-validation platform. If your startup has such a tool, or you know of any open-source tool, feel free to comment there).

[1]https://needgap.com/problems/21-deep-fake-video-detection-fa...


Already happened, and it wasn't even a fake:

https://en.wikipedia.org/wiki/We_begin_bombing_in_five_minut...


Maybe Oliver Taylor is a dog?

https://en.wikipedia.org/wiki/On_the_Internet,_nobody_knows_...

The cartoon is from 1993


On one hand, I have a very small hope that in the future all video evidence will be treated as potentially fake and unreliable. On the other, it will be a horror show with real criminals. Our future is a bleak one.


I really don't see why you hope for that since, as you point out, once deepfakes are good enough (which might already be the case) we can't even trust our own eyes anymore.

Plato's cave allegory is going to be a very real, practical concern for people in a few decades. There won't be any record, any evidence, that couldn't trivially be falsified at a large scale. You can have AIs generate serious-looking theses written by serious-looking people with serious-looking credentials from a serious-looking university in a real-looking city, all of it faked at scale. Terrifying indeed.


There are going to be a lot of people who share that hope, given the rapidly shifting window of permissible thought and deed, combined with technology's unblinking eye. I always thought that bit rot would act in such a manner, because it is getting harder every year to faithfully preserve internet events... but this might do the job sooner.


Exactly. To clarify, my first sentence was about a state in which this so-called "deep fake" tech is already present and used, but courts and judges haven't caught up yet and treat any photo/video evidence as real by default.


There is no video in the article, just a still image.


I'm positive this photo is computer generated. Those GAN "photos" all have a certain look: the background contains no concrete objects, even out-of-focus ones, but is usually not a solid-color backdrop. It's usually something that looks like a normal background until you look closer and realize it doesn't make sense. There are also problems with the details of this face, if you look closely enough. This one's rightmost tooth has an odd blurred boundary with the one next to it. The shirt also looks odd.


Is he also plainly missing the left ear? It looks like it's covered by the face, but I doubt it would be entirely invisible. One thing that usually works well to detect (women's) GAN-generated faces is earrings: they're usually different from one side to the other.


Somewhat off topic / tangent: How long until courtrooms no longer accept video footage as evidence due to the prevalence of fake videos? i.e. police body cams, store security camera, videos of powerful people engaged in inappropriate activity, etc. What implications might this have?


Courtrooms will always accept video so long as the proper foundation can be provided and the authenticity can be proved. Even the best edits and deepfakes are rather simple to expose. If anything, as the techniques become more advanced, we would see a rise in forensic media analysts (I made that phrase up) who would assess the reliability of the media. Many people are prone to lying when it benefits them, but we are never going to stop letting people testify in court because of that. The same goes for video and other editable media.


That 'heatmap' looks suspiciously like the blurred brightness channel of the image.
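
That's easy to test yourself: extract the luminance channel, blur it, and eyeball it against the published heatmap. A minimal Pillow sketch with hypothetical filenames:

  from PIL import Image, ImageFilter

  img = Image.open("oliver_taylor_profile.jpg")
  luma = img.convert("L")  # brightness (luminance) channel
  blurred = luma.filter(ImageFilter.GaussianBlur(radius=8))
  blurred.save("blurred_brightness.png")  # compare with the "heatmap"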


I suggest watching the British TV miniseries "The Capture", which is about this problem.


Hi all - Dan from Cyabra here. Glad to see the conversation this sparked among the community.

I'm here if you have any questions... we're genuinely trying to bring transparency to a complex battlefield of falsehoods.


If you think this is a normal fake, see this for why it is not: https://news.ycombinator.com/item?id=23844236

GAN images perhaps are not called deepfakes, I guess.

GAN faces have not been seen in the wild except ones taken from https://thispersondoesnotexist.com/

If this one is not taken from https://thispersondoesnotexist.com/ then it's a really big deal; that is what would be interesting, if you can prove it either way.


One assumes that this has been going on since print media has existed; it's just that the barriers to entry are getting lower (easier to concoct fake personalities, easier to get stories spread by paying for clicks). See also, for example, https://www.thedailybeast.com/right-wing-media-outlets-duped...

I would be stunned if there weren't thousands or perhaps millions of "people" on Twitter that are just mercenary bots farming timeline history and meow-meow beans, waiting to be activated. It's just too easy NOT to do, right?


"One assumes that this has been going on since print media has existed"

For an interesting look in to this, see The Commissar Vanishes: The falsification of images in Stalin's Russia.

https://www.indexoncensorship.org/2017/08/commissar-vanishes...

https://www.amazon.com/Commissar-Vanishes-Falsification-Phot...


IRL also - apart from BLM, which was completely organic and without coordination.


As the cost of doing complex operations gets distilled into just a button, who really wins?


Whoever steers the botnets or gets paid for the bandwidth.


NSO... Why am I not surprised??


Classic.


> Deepfakes like Taylor are dangerous because they can help build “a totally untraceable identity,” said Dan Brahmy, whose Israel-based startup Cyabra specializes in detecting such images.

Dangerous for the government, good for the citizens.


Nope, because the citizens will be bombarded with fakes to influence their voting behaviour, corrupting the democratic system.


On the internet, by random people? That really isn't new, and was possible without deep fakes, you can use an anime avatar on Twitter if you want to, or just stay an egg.

In newspapers? There's a trivial solution: check that your authors actually exist and are who they say they are.


People have been influenced since the dawn of time. Devaluing an internet stranger's opinion is a good thing.



