
AI-generated fake content could unleash a virtual arms race - kristintynski
https://venturebeat.com/2019/11/11/ai-generated-fake-content-could-unleash-a-virtual-arms-race/
======
echelon
These deep fake articles are becoming a meme. They mostly seem alarmist, and
yet they're not authored by people actually in the industry.

Deep fakes automate what deep pockets and state actors could already do with
Photoshop and other professional tools. The world isn't going to become a
scary place because the barrier to entry got lower and the technology has been
democratized. People are smart. Fakes will be detectable through entropy
measures, corroboration, common sense, etc.
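
As a toy illustration of the "entropy measures" idea (my sketch, not the parent's method): compare the Shannon entropy of regions of an image, since synthesized or retouched patches can have statistics that differ from natural sensor noise. The regions and numbers below are synthetic and illustrative only, not a validated detector.

```python
import math
import random
from collections import Counter

def shannon_entropy(values):
    """Shannon entropy (in bits) of a sequence of discrete values."""
    counts = Counter(values)
    total = len(values)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Synthetic example: a noisy "natural" region next to a suspiciously flat one,
# as stand-ins for camera noise vs. an over-smooth generated patch.
random.seed(0)
noisy_region = [random.randrange(256) for _ in range(512)]
flat_region = [128] * 512

print(shannon_entropy(noisy_region) > shannon_entropy(flat_region))  # True
```

A real detector would of course combine many such signals with corroboration, as the comment suggests; a single statistic is trivially gamed.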

FWIW, I've been working on real time voice to voice style transfer.

[https://drive.google.com/open?id=1zRvJEGJjTpKvvzel-J0agh3fKB...](https://drive.google.com/open?id=1zRvJEGJjTpKvvzel-J0agh3fKBn9aqGy)

There are already a few other (non-real time) players in this field.

I'm hoping to spin this up as a small social app or filter and sell it so I
can fund my capital-intensive filmmaking startup.

I think this tech _should_ be widely available. Not only will it make people
think and question more, but it'll be fun too.

It's also amusing (and terrifying) to see all the anti-1st Amendment
legislation aimed at combating deep fakes. The truth is that there is nothing
to fear except our freedoms being taken away.

~~~
ipython
I don’t think you should dismiss these concerns so quickly. It sounds like you
have experience in this field. Perhaps that would make it easier for you to
spot potential fakes? What about your grandma? How would she fare?

And besides, the end game isn’t to fool everyone into believing a fake. No,
the more insidious goal is to flood the zone with enough dis- and
misinformation to overload our ability to filter it. It’s like gaslighting at
scale: at some point the flow is so voluminous and of such dubious quality
that you stop being able to process information, and stop believing any of it.

~~~
bransonf
> What about your grandma? How would she fare?

Grandma’s still falling for the phone and mail scams. No amount of legislation
is going to fix the reality of technological illiteracy among the oldest
adults.

Deepfakes might fool some of today’s adults who don’t quite understand, but we
are raising a generation that has turned into a meme: “Everything you read on
the internet is true” -Abraham Lincoln

I think the real silver lining here is that the internet is an alternate
reality. Many of us refuse to believe that, but social media has created
manufactured people. The only solution is to bring people back to the real
world. The people are real here. Their opinion, no matter how controversial,
comes from a real mouth, and the face you see is the one they were born with.

If anyone forms their worldview based entirely on things they read on the
internet, they probably would be just as susceptible to our real world forms
of propaganda/gaslighting/whatever you want to call it.

~~~
skybrian
This essentially means the web is too difficult for some users and they need
something else, like maybe an app store. Maybe some company will win big by
providing a safer (or apparently safer) alternative?

Previous examples: Gmail had a better spam filter. Apple and Google did a
better (though not perfect) job of protecting users from arbitrary code
execution, as did the web itself, way back when.

This doesn't happen all that often, but if it succeeds, power users will scoff
at how nerfed the new thing is.

I'm reminded of an old story [1] about an early game for children:

> I found myself unable to reconcile the idea of a virtual world, where kids
> would run around, play with objects, and chat with each other without
> someone saying or doing something that might upset another. Even in 1996, we
> knew that text-filters are no good at solving this kind of problem, so I
> asked for a clarification: "I’m confused. What standard should we use to
> decide if a message would be a problem for Disney?"

> The response was one I will never forget: "Disney’s standard is quite clear:
> No kid will be harassed, even if they don’t know they are being harassed."

But maybe text filters will be better if you throw enough machine learning at
the problem?

[1] [http://habitatchronicles.com/2007/03/the-untold-history-of-toontowns-speedchat-or-blockchattm-from-disney-finally-arrives/](http://habitatchronicles.com/2007/03/the-untold-history-of-toontowns-speedchat-or-blockchattm-from-disney-finally-arrives/)
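
To make "throw enough machine learning at the problem" concrete, here is about the simplest possible text filter: a naive Bayes classifier over word counts (my illustration, not anything Disney used). The training sentences and labels are made up; a real moderation filter would need vastly more data and care.

```python
import math
from collections import Counter

class NaiveBayes:
    """Word-count naive Bayes with Laplace smoothing; labels are 0/1."""

    def __init__(self):
        self.word_counts = {0: Counter(), 1: Counter()}
        self.class_counts = Counter()

    def train(self, text, label):
        self.class_counts[label] += 1
        self.word_counts[label].update(text.lower().split())

    def predict(self, text):
        total_docs = sum(self.class_counts.values())
        vocab = len(self.word_counts[0]) + len(self.word_counts[1])
        scores = {}
        for label in (0, 1):
            total = sum(self.word_counts[label].values())
            score = math.log(self.class_counts[label] / total_docs)
            for w in text.lower().split():
                # +1 smoothing so unseen words don't zero out the score
                score += math.log((self.word_counts[label][w] + 1) / (total + vocab))
            scores[label] = score
        return max(scores, key=scores.get)

nb = NaiveBayes()
nb.train("you are great, nice work", 0)   # 0 = acceptable
nb.train("have a wonderful day", 0)
nb.train("you are stupid and ugly", 1)    # 1 = harassing
nb.train("nobody likes you, stupid", 1)
print(nb.predict("what a stupid idea"))   # prints 1
```

Which also shows the problem: bag-of-words filters can't meet a standard like "no kid will be harassed, even if they don't know they are being harassed", since that requires understanding, not word statistics.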

~~~
bostik
> _This essentially means the web is too difficult for some users and they
> need something else_

I think you are on the right track, but not going all the way. The bigger
issue here is that media literacy is _incredibly_ hard. You need a wide body
of knowledge, essentially an educated[ß] mind, and an almost unhealthy
skepticism against absolutely everything you read, see or hear.

As a shortcut, a good first approximation is to be a cynic. Assume everyone
is pushing their own agenda, and that even at best you can only see half of
it.

(If you are asking yourself what agenda _I_ am pushing with this post, well
done. You're off to a good start.)

ß: The ability to question information, conduct research, cross-check the
results of research, and have the mental agility to identify your own biases -
these are not natural tendencies, but learned traits. We can lump them all
under the "educated" label, even if that's not the optimal term.

~~~
skybrian
Yes, it is hard. But I think it's not just education, but epistemic humility.
We have no direct knowledge of what's going on in other parts of the world.
The past is often not recorded accurately, the future often unpredictable. So
our default assumption should often be that we don't know what's going on.

Highly educated people in the grip of an ideology can dream up conclusions far
beyond the limited and unreliable evidence we get from media consumption. They
are often rewarded for this.

And one of these ideologies is the myth of rugged individualism (or competent
adulthood), the idea that each person can and should figure out what's going
on by themselves. It's obviously not true of children and the elderly, but
most of us outsource a lot of our thinking. Living in modern civilization
inherently means having a lot of trust and dependency on others.

The ideals of media literacy are simply unrealistic for most people. It's not
clear what the alternative is, though.

------
blunte
This pretty much describes the end of the internet as we know it. Even before
AI-generated "content", the internet's signal-to-noise ratio had been
declining over time.

It is already the case that for many everyday searches I do, I am forced to be
very creative in my search phrase in hopes of filtering out the garbage sites
that manage to dominate the first results page.

Watching less tech-savvy people (such as elderly family members) use computers
is enlightening and frightening. They either cannot tell real content from
fake content, or worse, they are satisfied with what they get from obviously
suspicious sites.

Maybe my concerns about polluted websites are less relevant, considering the
general population gets more of its "information" from within Facebook rather
than even going to a search engine (of which they use the default for their
browser!).

~~~
seibelj
New companies and technologies will be invented to solve this problem. Every
problem has a solution. You are falling into the same trap that has caught
humans since the dawn of man. The printing press, the car, the internet, and
now “deep fakes” will cause hand-wringing but will not destroy us. Just give
it time.

~~~
glenstein
>The printing press, the car, the internet

These all came with real tradeoffs and we've just accepted them. The printing
press and the internet, in their own ways, sped up the world and shortened
attention spans. Cars changed cities. The benefits have been there, but we've
engaged with or ignored the harms posed by changes in different ways, and the
same unconscious trade is going to happen again.

------
Abishek_Muthian
Considering that video and audio are accepted as evidence in most courts
without any independent verification, I'm seriously worried about the
implications of deep fakes for justice.

There is an urgent need gap[1] for detection of deep fakes.

[1]: [https://needgap.com/problems/21-deep-fake-video-detection-fakenews-machinelearning](https://needgap.com/problems/21-deep-fake-video-detection-fakenews-machinelearning)

~~~
bostik
Risky Business did a really good interview on the subject early last year[0].
The legal profession is already aware of the potential problems.

Me? I welcome the future where audio and video evidence are just another piece
of evidence.

0: [https://risky.biz/RB489/](https://risky.biz/RB489/)

------
lordgrenville
We've had fake photographs for decades and it hasn't seemed to make a big
difference in politics. But I think that's because in the past you had
gatekeepers, like the editors and factcheckers of "respectable" publications,
who would ascertain the legitimacy of a picture before using it. They'd make
mistakes sometimes, but got it right 99% of the time.

Now news spreads horizontally through social media and group chats. It's
common to see, say, a clip purportedly of police brutality right now in
country X, which is actually 7 years old and from country Y. Someone will
correct it, someone will dispute the correction, whatever - the damage is
done. So I don't think deepfakes will move the needle much. The real damage is
the end of gatekeeping, and that's already happened.

~~~
QuantumGood
We haven't had high-velocity media for decades, though, and text is easier
than a photograph to make extremely false while still attracting believers.
You can't create a complete narrative through photos alone; you need
associated information.

------
YarickR2
Well, this probably means the end of unsigned content: every line of text,
every article, etc. should be / will be signed by a living person's key, or it
will be heavily penalized in search engine output. Governments will run
keystores with citizens' keys, and content signatures will be checked against
those keystores to ensure content authenticity (or the lack thereof). Time to
reopen GPG, I guess.
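
The sign-and-check-against-a-keystore flow can be sketched roughly as follows. This is a deliberate simplification: a real system would use asymmetric signatures (GPG or Ed25519) so verifiers hold only public keys; HMAC with a shared secret stands in here only because it ships in the Python standard library, and the "keystore" is just a dict with a made-up author and key.

```python
import hmac
import hashlib

# Hypothetical keystore mapping an author to a signing key. In a real scheme
# this would hold public keys; private keys would stay with the authors.
keystore = {"alice": b"alice-secret-key"}

def sign(author, content):
    """Sign content with the author's key (HMAC-SHA256 as a stand-in)."""
    return hmac.new(keystore[author], content.encode(), hashlib.sha256).hexdigest()

def verify(author, content, signature):
    """Check a signature against the keystore; unknown authors fail."""
    if author not in keystore:
        return False
    expected = hmac.new(keystore[author], content.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

article = "Every article will be signed."
sig = sign("alice", article)
print(verify("alice", article, sig))        # prints True
print(verify("alice", article + "!", sig))  # prints False: content altered
```

Note that this proves only who signed the content, not that it is true, which is exactly the limit of the proposal above.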

------
joe_the_user
I was experimenting with this stuff, and you can too, here [1]. It's kind of
impressive but not convincing. The main impression it gives is that it doesn't
know which subjects affect which objects, what one kind of relation implies
about another relation, and so forth. Still, it produces a sequence of words
with a consistent "feel", which is impressive.

However, I would still only find its text convincing for producing ... a
marketing blog, since such things seem like a contentless stream of buzzwords
to begin with. If anything, it gives a certain idea of how marketing speech
requires only a stream of words with a certain feeling, but no real logic.

[1] [https://talktotransformer.com/](https://talktotransformer.com/)

~~~
jeffshek
I built [https://writeup.ai](https://writeup.ai) to help with that, but while
it helps, it still feels like it's missing "something" at times.

~~~
joe_the_user
The thing is, I think language over the longer term is about actually
communicating a structure of the world, in a way that requires knowledge of
the world. It's just that over a shorter span, a good portion of language
isn't about that communication, but about a certain coloring of it. Which is
to say, I think this lacks more than it seems at first blush.

------
achow
OTOH: I'm pretty excited that these technologies are maturing so that they can
be harnessed to empower ordinary people, or workers in enterprises, to turn
their content into beautiful, simple, and effective stories.

One example: Pentagon's slide decks.
[https://archive.org/details/MilitaryIndustrialPowerpointComp...](https://archive.org/details/MilitaryIndustrialPowerpointComplex)

------
QuantumGood
The effects of an ever-higher velocity of fake news aren't clear, but there is
no "solution".

Real news going unbelieved and fake news being believed has been an unsolved
problem for a long time. For example, the history of medical advances shows
doctors refusing to believe exceptionally solid science in many cases.

There are a number of quotes about progress along the lines of "First they say
it's impossible, then they fight it, then they say they believed it all
along".

This is a people problem and a media velocity problem going back to the famous
quote "A lie travels around the globe while the truth is putting on its
shoes."

You can't stop people from believing a lie after it has been released.
Removing the lie doesn't help. "Reputable" sources not repeating the lie
doesn't help.

------
this_was_posted
We shouldn't talk too much about our skepticism that this will become
problematic. Otherwise, once it does become problematic, malicious actors can
use AI to generate believable skeptic text of their own and drown out real
concerns with virtual trolls.

------
shams93
This was true long before AI. Writing and journalism have always been
weaponized. The opposite could turn out to be true, in that it's easier to
recognize automated fake news than well-crafted, hand-done human deception.

------
jon_akimbo
People very concerned about this should spend some time reading ${opposing
political group} social media. As you'll discover, people will believe what
suits them. Veracity is of remarkably little interest to a remarkably high
percentage of the population. Most people, and this is not an exaggeration,
would sooner kill/die than change their mind. And if that's true, then
consider the mental acrobatics individuals are willing to go through before
they even reach that point.

------
zahrc
I have personally yet to be convinced by AI-generated media content (articles,
videos, photos). Maybe it's the bias of knowing they are AI-generated, but to
me it's equivalent to buying a cheap knockoff iPhone from China: it'll work if
you don't really think about it, or don't know the difference.

We have to step up education and teach media awareness in school, while giving
badly researched and generally toxic content the cold shoulder.

------
hertzdog
I try to take a different direction. Let's suppose some AI-generated content
is better than human-created content (IMHO we're pretty much there). Let's go
further: maybe in the future we will trust again only some "trusted sources"
(newspapers? HN?) while everything else will be not taken into account because
the quality will be low (like some comments saying the source is not in the
industry...).

~~~
account73466
>> maybe in the future we will trust again only some "trusted sources"
(newspapers? HN?) while everything else will be not taken into account because
the quality will be low (like some comments saying the source is not in the
industry...)

Do you realize that current conversational NNs are better at making comments
than you?

~~~
hertzdog
Yes. That’s the point :)

------
greggman2
I often wonder if Ranker, Thrillist, Collider, and Vulture are all AI-based.
They seem to show up in every search.

------
nightnight
These are all tech demos without strong use cases yet. Machine-generated
content, content spinning, etc. are black-hat tactics that have been employed
for decades to game Google. They work (just look at what crap ranks high), but
are they the foundation for huge new industries? No.

------
100011
I am going to take the contrary opinion here. AI-generated fake content will
inflate away the informational value transmitted by whatever it is trying to
fake. It's the same with 'deep fakes': they'll just destroy trust in video.

------
seddin
I might be wrong, but on some social networks, such as Reddit, many comments
or shared links seem too weird, as if they were not real, and some posts that
get reposted always end up with the same comments or similar words.

------
r0h1t4sh
Looks like this would be the new form of spam we will have to fight.

------
daxfohl
How do we know this article was not generated by a bot?

------
unityByFreedom
Doubtful. It's easier to photoshop fake content and we haven't seen that get
out of control.

------
EGreg
Wow, that AI-generated blog text actually made sense! The best I have ever
seen. How did they do it?

------
HocusLocus
munching virtual popcorn

