
US House of Representatives Hearing on the Dangers of Deepfakes and AI - MintChocoisEw
https://lionbridge.ai/articles/deepfakes-a-threat-to-individuals-and-national-security/
======
parksy
Beyond geopolitical concerns - as the tech becomes more widespread and easier
to use, to the point where teenagers start using it to punk one another, no
one will believe anything they see, hear or read ever again without an
extremely robust digital provenance system (and who built that system? are
they to be trusted? does it rely on intrusive measures? etc)

It's great that the developers and researchers are taking pause - this is
tech that could be used positively and creatively in so many areas - and with
any new tech comes pain from misuse.

If history is anything to go by, legislation is likely to catch up slightly
too late to stop the cultural impact, be overly broad, drag in a lot of
relatively harmless outliers, undermine the positive use-cases, and fail to
deter dedicated high-stakes bad actors.

~~~
pgt
Personal SSL certificates? I don't see why original media can't be signed by
the creator. We'll see measures similar to those for entering passwords into
non-SSL web pages:

"Warning: this video is unsigned and claims to originate from NYTimes.com."
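A rough sketch of that flow (hedged: this uses an HMAC with a shared key as a stand-in for a real public-key signature, and `PUBLISHER_KEY` is hypothetical; a deployed scheme would use something like Ed25519 with the publisher's certificate, so anyone could verify without being able to forge):

```python
import hashlib
import hmac

# Hypothetical publisher key; in practice this would be a private key,
# with verification done against the publisher's public certificate.
PUBLISHER_KEY = b"nytimes-signing-key"

def sign(media: bytes, key: bytes) -> str:
    """Produce a signature over the raw media bytes."""
    return hmac.new(key, media, hashlib.sha256).hexdigest()

def check(media: bytes, signature: str, key: bytes) -> str:
    """Return a browser-style verdict on the media's claimed origin."""
    if hmac.compare_digest(sign(media, key), signature):
        return "Signed: origin verified"
    return "Warning: this video is unsigned or its signature does not match"

video = b"...video bytes..."
sig = sign(video, PUBLISHER_KEY)
print(check(video, sig, PUBLISHER_KEY))              # prints "Signed: origin verified"
print(check(video + b"tamper", sig, PUBLISHER_KEY))  # prints the warning
```

The HMAC stand-in only illustrates the tamper-detection flow; the trust questions raised downthread (who holds the keys, who vouches for them) are exactly what it does not solve.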

~~~
faissaloo
Because that still relies on trust.

~~~
FungalRaincloud
Trust, to me, is not the problem. You can build trust. Known-good certificates
can be distributed physically, and require signed messages for replacement.
Or, we can develop schemes for distribution digitally via validated channels.
For example, each worker at a company has a particular known-good digital
presence, verified by their own public key, and distribution happens with them
as the source, essentially creating an expanding ring of trust to the key
being distributed. Violating such a ring of trust is not going to be easy, if
it is well enough built.

There are two issues I do see, though, and they're kind of the same issue.
Right now, we have this concept of a central store of public certificates. It
makes it easy for you to get a certificate for a particular entity, but it
also makes the central store a target. If you can compromise a central store
(or a machine that is attempting to access said central store), you probably
have the resources to at least redirect the user to your own site and leave
them none the wiser, and you probably have the resources to man-in-the-middle
their connection entirely and just snoop your heart out. So central stores of
trust are a bit of an issue, and the ways around that are non-trivial to set
up. A good example is probably Keybase, which lets you certify your various
online presences with your private key. So if someone wants to replace your
information on Keybase with their own, and they have the resources to do so,
now they also have to compromise all the places you've distributed that key
to. Or, they have to compromise one of those centralized stores of
trust....

The big issue with centralized stores of trust is that they build blind trust.
That's the big issue with humans in general, though. We don't want to question
what we're watching. And we probably don't want to be bothered with validating
that the "trusted source" of the certificate used to sign this content is
actually _trusted_. It's just too much mental overhead. We want it to be
automatic. We want central stores of trust, because it's just _easier_. The
work is going to be convincing people that _easier_ is dangerous, in this
case. Or, it's going to be to convince software companies to build in
inconvenient technology and not make it trivial to turn off.

~~~
intended
“Easier” is the whole point of the society you live in.

To be fair, the point of society is trust. It’s a way to trust information and
ensure the species is safe.

The whole point of using markets and capitalism is that they generate more
trustworthy results than top-down driven systems.

Until this mess, which makes it seem like a central authority will be better
than a system that leaves nodes on the tree open for manipulation.

Essentially, we had a distributed decision-making society. Now we’ve found a
hack that breaks that society’s structure. The cost for such a society to
manage verification is absurdly high: every person will have to spend
non-trivial effort to verify that they are not being manipulated.

In contrast, central decision-making societies like China will simply avoid
that cost and be more competitive, beating out western democratic systems.

------
iamnothere
It feels like we're having the 1990s debate on "weapons-grade" encryption all
over again. Yes, some software has the potential for abuse. But there's no
reasonable way to stop it from being written and used. New technology springs
up to counter whatever harms arise, society adapts, and we move on.

~~~
dannyw
Don’t expect it to turn out the same this time. Back then, technology was seen
as full of promise and potential. Now, thanks to a number of actors, Facebook
chief among them, our industry is no longer seen in the same light.

~~~
iamnothere
My point still stands:

> But there's no reasonable way to stop it from being written and used.

Can't stop encryption, can't stop P2P file sharing, can't stop deep fakes
either.

------
ipnon
Imagine someone calling your grandparents on the phone with your deepfaked
voice crying and saying you've been kidnapped and need $10,000 to come back
home.

~~~
Merad
Deepfake voices would twist the knife even deeper, but they aren't necessary.
This kind of scam already hits seniors all the time:

https://www.chicagotribune.com/opinion/commentary/ct-perspec-phone-scam-fraud-fake-lawyer-call-seniors-0122-20190117-story.html

https://www.aarp.org/money/scams-fraud/info-2018/grandparent-scam-scenarios.html

https://www.consumer.ftc.gov/articles/0204-family-emergency-scams

------
jl6
What are the top ten “bombshell” audio or video recordings from the 21st
century so far which ignited action because we believed them (probably
rightly) to be truth, which might these days (or soon) be plausibly denied as
deep fakes, leading to inaction?

~~~
save_ferris
I don’t think this falls into that category, but the first example that comes
to mind was the 1938 “war of the worlds” radio broadcast[0].

The dramatization was written as a news report and didn’t identify itself as
fictional until well into the broadcast. Angry callers reported mobs taking to
the streets, but the scale of the reaction has been debated.

0: https://en.m.wikipedia.org/wiki/The_War_of_the_Worlds_(radio_drama)

~~~
craftyguy
That was from the 20th century, not the 21st century.

------
Merrill
Media that is not cryptographically signed and authenticated by inclusion of
the signature in a trusted register or blockchain cannot be trusted to be
authentic.

~~~
danieldk
That's true, but it also gives plausible deniability for accidentally taped
evidence (e.g. a mic left on before/after a press conference). The person
making the newsworthy statement could always claim it is a deep fake.

Of course, if reliable sources were present at the occasion, they could
jointly sign a recording, but under other circumstances it could be fake.

Another potential problem is producing fake confessions in e.g. criminal
cases. The person who allegedly confesses would not sign the recordings, and
the police's signatures are worthless if they are fabricated.

~~~
maxheadroom
> _Another potential problem is producing fake confessions in e.g. criminal
> cases. The person who allegedly confesses would not sign the recordings, and
> the police's signatures are worthless if they are fabricated._

To look at it from the dystopian, draconian point of view, deep fakes created
and used by the local justice system[s] could be signed by a police officer
(whose intent is "solving the case, no matter what").

No matter how many witnesses and experts you throw at it, the court will
almost _always_ believe the officer (because of unjustly granted
infallibility), and that deep fake could cost someone their life.

Just as other tools have been used for good and bad, on both sides of law
enforcement, the probability is that this will be equally abused; especially
in areas where " _south of the Mason-Dixon line_ " and " _the south will rise
again_ " are still parts of everyday parlance.

------
dillonmckay
“Citron is currently writing a model statute that would address wrongful
impersonation and cover some deepfake content.”

Existing libel laws would cover this, no?

~~~
TakakiTohno
Hmm, I think it depends on a lot of things. Public officials have few
protections against satirical parodies, so there could be grey areas when it
comes to using deepfakes to depict officials doing stupid things. Articles on
the satirical site theonion.com are often libelous, but since it's a comedy
site it is allowed.

~~~
wbl
Great, we're going to give Trump the ability to throw Alec Baldwin in jail.

~~~
mc32
Or you can have the daily beast doxx you for doctoring a parody video... of a
politician

------
dclowd9901
This might be a good opportunity for camera manufacturers to introduce a
feature to sign video with some sort of authentication. If a video is true and
real, then it could be authenticated by Canon, Sony, Apple, etc. Any video
without this authentication should be suspect.

------
koboll
Photoshop is more than _30_ years old. And in three decades, the impact of
manipulated images on people's reputations hasn't been quite as devastating as
people once feared.

Why won't it be the same this time? It's much easier to fool a human than to
fool tools designed to detect manipulation, and that has limited fake photos'
impact severely. As long as we have ways to detect fakes, and media
responsible enough to do the cursory investigation required to verify
authenticity, I don't see why manipulated videos pose any greater risk than
manipulated photos.

~~~
mushufasa
> In three decades, the impact of manipulated images on people's reputations
> hasn't been quite as devastating as people once feared.

As the cost of producing fakes approaches zero, the noise-to-signal ratio
increases until people distrust everything they see and hear. I'd argue we are
already on that path and have already begun feeling the consequences. (Read
about the Russian "Internet Research Agency" and its role in tampering with
the 2016 US election.) Deep fakes accelerate that trend.

> As long as we have ways to detect fakes, and media responsible enough to do
> the cursory investigation required to verify authenticity...

Media is becoming less likely to do even cursory investigation
(fragmentation, understaffing, lack of revenue). And a GAN is specifically
designed to fool detection; they're only getting stronger.

~~~
JetSpiegel
> As the cost of producing fakes approaches zero, the noise-to-signal ratio
> increases until people distrust everything they see and hear.

This is a bald-faced lie. The cost of producing the fake is nearly
irrelevant; what matters is the reputation of the broadcaster. This has been
so since time immemorial and did not change with Photoshop.

Many journalistic revelations stand even though they could be trivially
faked. The Snowden documents would cost trivial amounts of money to fake
(it's a bunch of PowerPoints), but people believe them because the US
government issued a warrant for his capture. The metadata is much harder to
fake.

------
polyomino
This page has an invisible Facebook like button on top of the text

------
evanb
I've been thinking about this some recently. Is some scheme like the following
remotely plausible?

- Have a piece of hardware in-frame, hooked to the operating cameras, that
can display hashes.

- In each frame, show the hash of the previous frame (low latency would be
required, as would transmission of the full-quality original).

- Publish the original seed and the final hash (and, since the original is
broadcast, the whole chain of hashes can be verified).

~~~
sooper
The fakes can do this too, though...

And how do you validate what is displayed on TV? Who will validate most videos
they see?

------
jaimex2
We've had fake news and media for years. People wisen up; even if deep fakes
come about, reputable media outlets will call them out.

If you want to do something more without hurting free speech, improve
existing slander and defamation laws to specifically target deep fakes.

Hold the platforms accountable the same way copyright has; it's worked
wonders.

~~~
tstrimple
> We've had fake news and media for years. People wisen up

I haven't seen much evidence of this. Now we've got a significant portion of
the population that calls every news report conflicting with its view of the
world "fake news," then turns around and consumes media which has been
measurably shown to be among the most inaccurate available. All of this
behavior is supported by the President of the United States. When will people
wisen up?

~~~
afterburner
The time scale for wisening up is measured in years, probably 5-10 years
minimum, maybe more. People and societies take a while to clue in and adapt.
It will happen, though. Mass social media use is a relatively new thing.

Deepfakes won't be any more challenging to deal with than the struggle for
accurate information in the pre-TV, or even pre-print, days. People will
learn to look for reputable sources, and propaganda will keep trying to
trick people in new ways.

~~~
hn_throwaway_99
I don't see any evidence of this, at all. I see a large portion of the
populace gleefully indulging in their own self delusion, even when the ability
to verify or disprove content is so easily available. To the parent's point,
"fake news" has just become a convenient word to shout when you don't want to
hear or believe something.

------
intended
This particular issue is “vexing,” to put it mildly. It’s nightmare fuel, to
be honest, because when you start trying to think of solutions to deep fakes
and fake news, you very _very_ quickly run up against the assumed norms of
our enlightenment-era ideals: free speech, expression, and government
control.

Because it’s clear that there’s no private solution to what is essentially an
evolutionary war between carnivores (malicious humans) and herbivores (people
who don’t want to be manipulated).

Right now, the Chinese total-control model is the model that’s working, and
while on HN we may find it abhorrent, people in high places are being forced
to make pragmatic choices. For them, a Chinese-styled information system will
win out.

To prevent this, I think it’s increasingly time to re-examine our norms on
information and expression.

In general, taking a step back: human society (markets, media, reporters,
books, news) is effectively a giant solution to finding out what is “true”
and what is “something else”.

We are excellent at solving these problems: we build families (parents are
decision makers and know what information and implications make sense), clans
(what is ideal for this group of people with similar genes I can trust),
businesses (contracted people with relatively aligned interests), and more.
You get the drift.

So the question resolves to: how do we organize ourselves to verify
information, to at least keep parity with the verification rates of the
pre-internet era?

The key difference is the mass production of information/content. The rate of
content generation outstrips the ability to verify.

This last part will not change. We will always lose, as an information
society, if we get into a pure verification war with computer-generated
information.

This leads to the first conclusion:

1) Clear measures to control the rate of generation of fake data.

To be blunt: this means jail. A punitive and clear response to stop this
behavior, across borders.

Here’s where a decent chunk of HN will be aghast. This is what I meant by our
values coming into conflict with the necessities of the solution.

Which brings us to problem 2/ addendum to solution 1.

2) who watches the watchers?

If government has the power to punish people for “fake news,” how do we know
that’s not misused against “inconvenient news”?

Well, the weak solution that presents itself is a firewalled agency, with a
guaranteed ability to be funded, tasked with seeking out and identifying
manipulation and the spread of faked information.

Hopefully, this agency won’t crumble on day 1 under the inherent
contradictions of its role and responsibilities.

Some structure of this sort, gives us a pathway through the near future to
deal with this issue.

Hopefully, it can be used to buy the time to solve the issue we are facing.

The irony of advocating for a ministry of truth in order to save the truth is
not lost on me.

But unless action is taken to stop propaganda and mass-produced information
creation, it is a guarantee that our old human ways of assessing information
will fail.

The only option left on the table will be dystopia, or some bizarre world
where nothing and everything is true.

~~~
gruez
>Punitive and clear response to stop this behavior, across borders.

Good luck punishing "fake data" from China/Russia.

~~~
intended
You develop similar capacities as them and then agree to not use it.

~~~
JetSpiegel
What, you think Putin is blackmailable? He is already accused in the West of
being a murderer; what kind of leverage can you add beyond that?

