
Deepfake used to attack activist couple shows new disinformation frontier - aaron695
https://www.reuters.com/article/us-cyber-deepfake-activist/deepfake-used-to-attack-activist-couple-shows-new-disinformation-frontier-idUSKCN24G15E
======
woadwarrior01
The article is likely a submarine[1] for the deep fake detection company that
it mentions (Cyabra).

[1]:
[http://paulgraham.com/submarine.html](http://paulgraham.com/submarine.html)

~~~
abathur
Thanks for posting this. It's timely for me.

I've been stuck in a loop with my father for a while now where he forwards me
a certain sort of propaganda. I feel like I've struggled to break through on a
key point: when I say something smells like propaganda, it doesn't mean I
think it is _false_ -- it means I think it's part of an organized campaign to
build narratives and consensus.

In my case, the topic is highly charged. Reapproaching it with something low-
stakes like this is at least worth a try.

~~~
woadwarrior01
This resonates with me. I used to have similar arguments about forwarded
messages with my late father, who was an electronics engineer and really
sharp in his prime.

Intuitively it'd seem that the greater dissemination of information with
social media and messaging apps would make people smarter and promote critical
thinking, but in reality they seem to have the opposite effect. I still keep
wondering whether it was the environment or his age that was the primary
factor. Do we become more gullible and less discerning as we grow older?
Perhaps I'll be much worse when I'm as old as he was.

~~~
munificent
_> Intuitively it'd seem that the greater dissemination of information with
social media and messaging apps would make people smarter and promote critical
thinking_

I naively believed this too. It's wrong. The Internet and the various apps,
sites, and social media things running on it aren't an _information_
dissemination system, they are a _data_ dissemination system. They take bytes
and get them in front of many humans. The system itself cares not one whit as
to whether the data it transmits has any connection to reality.

The result is that the Internet functions as a mostly indiscriminate amplifier
of its inputs. Sadly, it is _much_ easier to input lies than truth. First of
all, creating lies takes much less effort since no measurement or fact
checking is required. I could tell you a hundred lies about what the neighbor
next door is doing in the time it took you to go over there and ring the
doorbell. Worse, my lies can be crafted to take full advantage of all of the
human biases and flaws in the social media systems. I can make lies that are
emotionally loaded, play into stereotypical narratives, or have evocative
photos. Meanwhile, you, the foolish truth-teller, are restricted to only the
narratives that actually happened and the imagery that you actually took of
the event.

When you think of London in the Industrial Revolution, do you picture all of
the bright shiny products of industrialization? The newly affordable textiles?
Or is it more an image of a city covered in filthy coal soot? It feels like we're
in the latter state right now: buried under the pollution of disinformation.

We need to think of "information pollution" and need an "information
environmentalism" movement if we're going to survive the Information Age.

------
rthomas6
That's not a deepfake, it's a computer generated person. Deepfakes are when
someone superimposes the face of someone onto a video of someone else.

~~~
werber
I was thinking the same thing. Is there a word for a computer generated
person? The whole article felt off to me because of what I interpreted as a
misunderstanding of what a deepfake is.

~~~
phpnode
> is there a word for a computer generated person?

Maybe "infomorph"?

~~~
AndrewUnmuted
This should be the answer, even if there is a more accepted term already in
use. The etymology is undeniable, and it very effectively describes the
concept in question without risking confusion with adjacent concepts.

Also it reminds me of Animorphs!

~~~
RealityVoid
Sadly, the name is apparently already taken by another concept. For now.

------
yveys
Oliver's face was probably created with
[https://thispersondoesnotexist.com/](https://thispersondoesnotexist.com/)

Surprised that publications don't do more research, as anyone can create a
social media profile that looks legit these days.

Trolls have been here for years and they'll continue to find new ways to
exploit weaknesses.

------
tantalor
That's not a deep fake, that's just a regular fake

~~~
hairofadog
Right – I thought from the headline they had concocted a video of the
activists doing or saying something repugnant, which does seem like a problem
that’s fast approaching.

This, on the other hand, is a case of a fake online profile being used to say
terrible things about the activist couple. The only notable thing is the
profile photo was computer generated.

~~~
dwighttk
I mean I bet they even just used thispersondoesnotexist.com until they got a
picture that they thought would work.

------
xrd
Has anyone ever experimented with totally sanitizing identities when reading
news? For example, if there were a browser plugin that would cleanse pictures,
associations, academic credentials, and especially names from news, would that
make a difference in the way that we as humans process information?

Deepfakes create an "adjacent identity" to trick people into aligning their
opinions with someone who is in the same group. But, if we were aware of this,
and removed the identity to strictly review the information, it might change
the way that information is received.

This would only work if a majority did this, so it would never work,
nevermind.
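For what it's worth, here's a toy sketch of that sanitizer idea (entirely my
own assumption of how it might look; a real plugin would need proper
named-entity recognition, while this naive regex just redacts runs of
capitalized words):

```python
# Toy "identity sanitizer": strip likely personal names from text before
# reading it, so the information has to stand on its own. A real browser
# plugin would need proper NER; this heuristic just replaces sequences of
# two or more capitalized words with a [NAME] placeholder.
import re

def sanitize_names(text):
    """Replace runs of two or more capitalized words with [NAME]."""
    return re.sub(r"\b(?:[A-Z][a-z]+\s+){1,}[A-Z][a-z]+\b", "[NAME]", text)

print(sanitize_names("According to Jane Smith, the policy failed."))
# -> "According to [NAME], the policy failed."
```

Even this crude version shows how different a story reads once the
identity cues are gone.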

~~~
dredmorbius
Reputation matters.

There are differing views on how much to showcase authors' (or reporters')
names, and some organs (_The Economist_, _Hacker News_) intentionally hide
or de-emphasize these.

Almost all questions of _identity_ actually revolve around _trust_ in one of
its various guises: trust, credit, accountability, entitlement, and the making
or receiving of payment. In creating or relating (mediating) information, the
persistence of authorship attribution provides a bundling handle for trust. An
author might file numerous stories per year, month, week, day, etc. Assessing
and assigning trust to each story is expensive. Bylines, editors, and
publications accrue positive or negative associations with time. No, trust is
not fully consistent or transitive, but as an efficiency heuristic, rolling up
and bundling reputation offers powerful gains, and numerous checks.

Even a pseudonymous source (you're reading one now) can accrue a certain
reputation.

------
sradman
> “The distortion and inconsistencies in the background are a tell-tale sign
> of a synthesized image, as are a few glitches around his neck and collar,”
> said digital image forensics pioneer Hany Farid, who teaches at the
> University of California, Berkeley.

and:

> Artist Mario Klingemann, who regularly uses deepfakes in his work, said the
> photo “has all the hallmarks.”

As an amateur photographer who only* knows enough Photoshop to quickly enhance
images, I find these two statements completely unconvincing. This type of he-
said-she-said news article makes me uncomfortable. TTBD: Truth To Be
Determined.

* Edit: I know nothing about deep fakes but I expect "fake faces" to have "hallmarks" in the face.

~~~
WhitneyLand
What do you think is unconvincing about them and in what way do you believe
that expertise in Photoshop has any relation to this technology?

At best Photoshop can play a role in covering tracks of evidence or artifacts
of this much more sophisticated approach to faking identities.

It’s not to say the developers and scientists at Adobe are lesser, it’s that
it’s not the same tool or problem that’s being solved.

Put crudely Photoshop can let you draw a mustache on someone’s photo. This is
about inventing a photo that never existed before.

~~~
interestica
> in what way do you believe that expertise in Photoshop has any relation to
> this technology?

They weren't expressing expertise in Photoshop. They were saying that basic
use of it is the extent of their expertise.

> What do you think is unconvincing about them

Them being the 'experts' - they did nothing to convince other than say "I'm
an expert in this thing."

------
slim
the lede is buried, the "deepfake" wrote for multiple newspapers:

    In an article in U.S. Jewish newspaper The Algemeiner, Taylor had accused Masri and his wife, Palestinian rights campaigner Ryvka Barnard, of being “known terrorist sympathizers.”

so there's a real person behind the account who simply wants to stay
anonymous; in this case, traditionally, the newspaper bears the
responsibility for the smear.

~~~
andrewflnr
That doesn't follow. My first guess was that those articles were just part of
the identity's cover, and were written by a group that was running "Taylor".

------
ChrisMarshallNY
The lower left (his right) section of the collar has a bit of "crunchy
pixels." Otherwise, it looks pretty damn realistic.

This appears to be a "thispersondoesnotexist.com" image. Is that site
considered "deepfake"?

I always assumed "deepfake" to mean altering existing images to add actual
people's likenesses (like the classic "celebrity porn" images). In particular,
applied to video.

This type of thing is likely to become _de rigueur_ for astroturf and scam
accounts. Disturbing, but not particularly newsworthy.

Before, people would just scrape images from stock photo sites, or even from
some poor sap's Facebook photos. Doing it this way simply makes it less likely
to be uncovered.

~~~
rthomas6
Look at his rightmost tooth.

~~~
itronitron
Also the neck seems off, as if the person is tipped way back in a chair but
facing forward and smiling.

------
A4ET8a8uTh0
I am not sure this is a new frontier at all. This is a case of no one being
who they say they are on the internet. Frankly, given the mood in the US
these days, it seems wise not to share your opinion too openly (or leave
your contact information).

Since it is not a deepfake (it may very well be an enhanced pic for all we
know), the question becomes: what is the purpose behind this author's post?
Did they just learn about deepfakes and want to capitalize on it? Did they
want to propagate a specific point (and if so, which one)?

I miss the days when I took articles at face value (ignorance is bliss). Now
I can't help myself and analyze former bastions like Reuters. It is sad to me.

------
AtHeartEngineer
Deep fakes are going to start becoming part of our news, our entertainment,
and our justice systems. "Fake news" a few years ago was just cherry-picked
data and skewed graphs; now the probability of any piece of media being
completely fabricated is high enough that we have to consciously consider it.

~~~
dvtrn
I’d argue they’ve been here for a while and we just called it “cinema”.
Remember all the superimposed historical scenes from Forrest Gump?

The technology got easier, everyone became a publisher, and now we have a more
prickly name for it.

------
caseysoftware
1. As others have noted, that's not a deepfake.

2. This looks like any other sock puppet account _but_ the benefit of using a
generated person is that they won't accidentally cross paths with the real one
and get outed.

This is not an innovation, just a marginally more effective approach to
obfuscating your trail.

------
upofadown
If this guy has no reputation then who cares if he physically exists?

I blame social media. It comes with the implication that it somehow makes
sense to care what total strangers think of things.

~~~
dredmorbius
Imagine what might happen if someone were, say, to carefully craft an online
persona (or several) over the course of a decade or more, complete with
substantial recognition on numerous online platforms, news and academic
citations, interviews, and public campaigns.

All based on created or assumed identifiers and imagery.

I am of course referring to myself. Though there may be others.

------
h3rsko
The only news here seems to be that newspapers will publish articles with
verifying who is writing them.

~~~
brianzelip
> The only news here seems to be that newspapers will publish articles with
> verifying who is writing them.

Perhaps you meant, "_without_ verifying who is writing them"?

~~~
h3rsko
Yes.

------
creativeCak3
Our world isn't ready for any "ethical" AI, especially any facial recognition
nonsense. This AI nonsense is just gonna get worse.

~~~
nathanaldensr
There is nothing "intelligent" about any of this. These are dumb algorithms
doing what they're told. That said, there is _everything_ "artificial" about
this. Perhaps we can shorten "AI" to merely "A?"

------
mothsonasloth
I typed this already on the duplicate submission but here it goes.

Anytime I see things about GANs, deepfakes and bots, it reminds me of the
videogame Deus Ex: Human Revolution (2011).

The game is set in the not so distant future and revolves around technology,
ethics and humanity.

It touches on various concepts like AI, biomedical advancements, hacking,
weaponry and civil rights.

The particularly profound bit that I relate to situations like this is around
a TV news broadcaster called Eliza Cassan.

_---Spoilers Ahead!---_

Eliza works for one of the most powerful media and news agencies, called
Picus.

She is a celebrity, and with her ability to get the latest news and coverage
she is able to direct the conversation on the various political and ethical
issues that I mentioned earlier.

As you progress through the game and break into the Picus News HQ, you come to
realise that no employees have met or dealt with Eliza Cassan in person.

This is because she is revealed to be an AI powered hologram composed from
what I would assume are desirable human aesthetics.

It is not established who controls Eliza; you can tell she has a degree of
autonomy but is ultimately controlled to operate for the benefit of her
operators.

Her AI is also intrusive and appears to be able to mine data from around the
world (cameras, hacked devices), allowing her unprecedented global access,
which she uses to analyse and direct conversation for her viewers.

A thoroughly good game which gives both techno-optimists and techno-pessimists
a taste of what our future "may" be like.

[https://deusex.fandom.com/wiki/Eliza_Cassan](https://deusex.fandom.com/wiki/Eliza_Cassan)

~~~
basch
S1mone is about an actress who quits a movie and is replaced by a CGI puppet,
not completely dissimilar from situations in the production of Back to the
Future 2 and All The Money in the World. The main character of S1mone also
conceals the fact that she isn't real.

[https://en.wikipedia.org/wiki/Simone_(2002_film)](https://en.wikipedia.org/wiki/Simone_\(2002_film\))

[https://en.wikipedia.org/wiki/All_the_Money_in_the_World](https://en.wikipedia.org/wiki/All_the_Money_in_the_World)

------
Abishek_Muthian
Imagine a video showing the leader of a country discussing a pre-emptive
nuclear strike on another nuclear-armed country, only the content of the
video is not real but an ultra-realistic deepfake.

Not sure how accessible the Israeli tool used for detection of manipulation
in the article is.

There is an urgent need gap for accessible tools for detecting deep
fakes[1] (sorry for the plug, it's my problem validation platform; if your
startup has such a tool or you know of any open-source tool, feel free to
comment there).

[1] [https://needgap.com/problems/21-deep-fake-video-detection-fakenews-machinelearning](https://needgap.com/problems/21-deep-fake-video-detection-fakenews-machinelearning)

~~~
082349872349872
already happened, and it wasn't even a fake:

[https://en.wikipedia.org/wiki/We_begin_bombing_in_five_minut...](https://en.wikipedia.org/wiki/We_begin_bombing_in_five_minutes)

------
bryanlarsen
Maybe Oliver Taylor is a dog?

[https://en.wikipedia.org/wiki/On_the_Internet,_nobody_knows_...](https://en.wikipedia.org/wiki/On_the_Internet,_nobody_knows_you%27re_a_dog)

The cartoon is from 1993

------
Yizahi
On one hand - I have a very small hope that in the future all video evidence
will be treated as potentially fake and unreliable. On the other - it will be
a horror show with real criminals. Our future is a bleak one.

~~~
simias
I really don't see why you hope for that since, as you point out, once
deepfakes are good enough (which might already be the case) we can't even
trust our own eyes anymore.

Plato's cave allegory is going to be a very real, practical concern for people
in a few decades. There won't be any record, any evidence that couldn't
trivially be falsified at a large scale. You can have AIs generate serious-
looking theses written by serious-looking people with serious-looking
credentials from a serious-looking university in a real-looking city, and all
of it be faked at scale. Terrifying indeed.

~~~
tal8d
There are going to be a lot of people who share that hope, given the rapidly
shifting window of permissible thought and deed - combined with technology's
unblinking eye. I always thought that bit rot would act in such a manner,
because it is getting harder every year to faithfully preserve internet
events... but this might do the job sooner.

------
rthomas6
I'm positive this photo is computer generated. Those GAN "photos" all have a
certain look: the background contains no concrete objects, even out-of-focus
ones, but is usually not a solid color backdrop. It's usually something that
looks like a normal background until you look closer and realize it doesn't
make sense. There are also problems with the details of this face, if you look
closely enough. This one's rightmost tooth has an odd blurred boundary with
the one next to it. The shirt also looks odd.
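You can even turn the "incoherent background" tell into a crude number. This
is just a toy heuristic I'm assuming, not a real detector: GAN portraits tend
to have smooth, structureless borders, so the fraction of strong-gradient
pixels in the image's border frame can hint at how "empty" the background is.

```python
# Toy heuristic for the "smooth GAN background" tell: measure the fraction
# of strong-gradient (edge-like) pixels in the border frame of a grayscale
# image. Real photos usually have some background structure; GAN portraits
# often score low. This is illustrative only, not a reliable detector.
import numpy as np

def border_edge_density(img, border=0.15, threshold=0.1):
    """Fraction of strong-gradient pixels in the border region of a
    2-D float image with values in [0, 1]."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    h, w = img.shape
    bh, bw = int(h * border), int(w * border)
    mask = np.ones((h, w), dtype=bool)
    mask[bh:h - bh, bw:w - bw] = False  # keep only the border frame
    return float((mag[mask] > threshold).mean())

# A flat synthetic "background" scores zero; a noisy one scores high.
flat = np.zeros((64, 64))
noisy = np.random.default_rng(0).random((64, 64))
print(border_edge_density(flat), border_edge_density(noisy))
```

On a real suspect image you'd load it as a grayscale array first; the point
is only that the tell people eyeball is, in principle, measurable.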

~~~
Udik
Is he also plainly missing his left ear? It looks like it's covered by the
face, but I doubt it would be entirely invisible. One thing that usually works
well to detect (women's) GAN-generated faces is earrings: they're usually
different from one side to the other.

------
LinuxBender
Somewhat off topic / tangent: How long until courtrooms no longer accept video
footage as evidence due to the prevalence of fake videos? i.e. police body
cams, store security camera, videos of powerful people engaged in
inappropriate activity, etc. What implications might this have?

~~~
Fezzik
Courtrooms will always accept video so long as the proper foundation can be
provided and the authenticity can be proved. Even the best edits and deepfakes
are rather simple to explain. If anything, as the techniques become more
advanced, we would see a rise in forensic media analysts (I made that phrase
up) who would assess the reliability of the media. Many people are prone to
lying when it benefits them, but we are never going to stop letting people
testify in court because of that. The same goes for video and other editable
media.

------
baxter001
That 'heatmap' looks suspiciously like the blurred brightness channel of the
image.
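That suspicion is easy to test yourself. A minimal sketch, assuming the
published heatmap really is nothing more than blurred brightness (the file
name below is a placeholder, not from the article):

```python
# Reproduce "blurred brightness channel": take an image's luminance
# (grayscale) channel and Gaussian-blur it. Comparing this side by side
# with a published "analysis heatmap" shows whether the heatmap carries
# any information beyond plain brightness.
from PIL import Image, ImageFilter

def blurred_luminance(img, radius=8):
    """Grayscale (luminance) channel of an image, Gaussian-blurred."""
    return img.convert("L").filter(ImageFilter.GaussianBlur(radius))

# Demo on a synthetic image; for the article's photo you would pass
# Image.open("suspect.jpg") instead (hypothetical filename).
demo = Image.new("RGB", (64, 64), (200, 50, 50))
heat = blurred_luminance(demo)
print(heat.mode, heat.size)
```

If the two images line up, the "heatmap" told you nothing a grayscale
conversion wouldn't.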

------
Yizahi
I suggest watching a British mini-TV show "The Capture", about this problem.

------
Cyabra
Hi all - Dan from Cyabra here. Glad to see the conversation this sparked among
the community.

I'm here if you've got any questions... we're genuinely trying to bring
transparency in a complex battlefield of falsehoods.

------
aaron695
If you think this is a normal fake, see this for why it is not -
[https://news.ycombinator.com/item?id=23844236](https://news.ycombinator.com/item?id=23844236)

GAN images perhaps are not called Deepfakes, I guess.

GAN-generated faces have not been seen in the wild except ones taken from
[https://thispersondoesnotexist.com/](https://thispersondoesnotexist.com/)

If this is not taken from
[https://thispersondoesnotexist.com/](https://thispersondoesnotexist.com/),
then it's a really big deal; this is what would be interesting, if you can
prove it either way.

------
verytrivial
One assumes that this has been going on since print media has existed; it's
just that the barriers to entry are getting lower (easier to concoct fake
personalities, easier to get stories spread by paying for clicks). See also,
for example, [https://www.thedailybeast.com/right-wing-media-outlets-duped-by-a-middle-east-propaganda-campaign](https://www.thedailybeast.com/right-wing-media-outlets-duped-by-a-middle-east-propaganda-campaign)

I would be _stunned_ if there weren't thousands or perhaps millions of
"people" on Twitter that are just mercenary bots farming timeline history and
meow-meow beans, waiting to be activated. It's just too easy NOT to do, right?

~~~
pmoriarty
_" One assumes that this has been going on since print media has existed"_

For an interesting look into this, see _The Commissar Vanishes: The
Falsification of Images in Stalin's Russia_.

[https://www.indexoncensorship.org/2017/08/commissar-vanishes/](https://www.indexoncensorship.org/2017/08/commissar-vanishes/)

[https://www.amazon.com/Commissar-Vanishes-Falsification-Photographs-Stalins/dp/1849762511](https://www.amazon.com/Commissar-Vanishes-Falsification-Photographs-Stalins/dp/1849762511)

------
sova
As the cost of doing complex operations gets distilled into just a button, who
really wins?

~~~
thephyber
Whoever steers the botnets or gets paid for the bandwidth.

------
panpanna
NSO... Why am I not surprised??

------
FelipeAraujo88
Classic.

------
LudwigNagasena
> Deepfakes like Taylor are dangerous because they can help build “a totally
> untraceable identity,” said Dan Brahmy, whose Israel-based startup Cyabra
> specializes in detecting such images.

Dangerous for the government, good for the citizens.

~~~
Cthulhu_
Nope, because the citizens will be bombarded with fakes to influence their
voting behaviour, corrupting the democratic system.

~~~
luckylion
On the internet, by random people? That really isn't new, and was possible
without deep fakes, you can use an anime avatar on Twitter if you want to, or
just stay an egg.

In newspapers? There's a trivial solution: check that your authors actually
exist and are who they say they are.

