
GPT-2 is not as dangerous as OpenAI thought it might be - luu
https://nostalgebraist.tumblr.com/post/187579086034/it-seems-pretty-clear-to-me-by-now-that-gpt-2-is
======
Bartweiss
I think this is a pretty compelling claim. The published full-model GPT-2 samples
are significantly better than the initial release, but pretty similar to the
output of the 774M model. They're disconcerting inasmuch as they're a leap forward
for AI writing, and they have remarkably strong tone consistency. But I see
two major weaknesses in GPT-2 that make it hard to imagine effective
malicious use.

First, the output is inconsistent. The scary-good examples are scattered among
a lot of duds, and it semi-regularly breaks down into either general
incoherence or distinctively robotic loops. You couldn't turn it loose to
populate a website or argue on forums without any human oversight, so it at
most represents a cost reduction in the output of BS, not a qualitative shift.

Second, it's absolutely crippled by length. Antecedents often go missing after
about a paragraph. Nouns (especially proper nouns) that are likely in one role
are likely in numerous roles, so stories will pull in figures like Obama as
subject, object, and commentator on the same issue. Even stylistically, the
tight guidelines of news ledes slacken within a few paragraphs, resulting in a
loss of focus and a rising likelihood of loops and gibberish.

GPT-2 is absolutely an impressive breakthrough in machine writing; I don't
mean to disparage that. But as far as its potential for deceit or trolling
goes, it's not particularly threatening. Quantity of output is rarely the
limiting factor on impact there, and GPT-2 doesn't offer enough in
sophisticated tone to make up for what it loses in basic coherence.

(For anyone wondering "why is this sourced to a random tumblr?":
nostalgebraist is some flavor of AI professional who has played with GPT-2 to
produce some pretty interesting results, and has written some other useful
essays, like an explanation of Google's Transformer architecture
([https://nostalgebraist.tumblr.com/post/185326092369/the-transformer-explained](https://nostalgebraist.tumblr.com/post/185326092369/the-transformer-explained)).)

~~~
jcims
>You couldn't turn it loose to populate a website or argue on forums without
any human oversight, so it at most represents a cost reduction in the output
of BS, not a qualitative shift.

It might be getting closer than you think:
[https://old.reddit.com/r/SubSimulatorGPT2/](https://old.reddit.com/r/SubSimulatorGPT2/)
(Note: can be NSFW; the meta sub has best-of highlights.)

Still a lot of complete nonsense, but this is just a hobby project (not mine)

~~~
taneq
When you combine the output available now with the fact that a surprising
number of humans on the internet would probably fail a written Turing test,
it's definitely useful at least for denial-of-service style mischief.

~~~
c0bb
It seems to be supervisable at this point, at least for shorter responses. If
you're trying to astroturf or otherwise maliciously use message boards/forums,
rather than giving it free rein you could present an operator with the
prompting text and some arbitrary number of generated replies to choose from.
That still entails some legwork, but it will likely yield more believable
responses more often.

Sure, this is only economical, if at all, when replying to shorter prompts,
but those are likely to be the majority on most forums and message boards.
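That operator-in-the-loop workflow can be sketched in a few lines of Python. This is a toy, not a real GPT-2 pipeline: `generate_replies` is a hypothetical stand-in that returns canned strings where a real system would sample model continuations, and the `choose` callback is a trivial policy standing in for a human reading candidates and clicking one.

```python
import random


def generate_replies(prompt, n=3, seed=0):
    """Hypothetical stand-in for a GPT-2 sampler.

    A real implementation would sample n continuations of `prompt`
    from the model; here we draw from canned text so the sketch runs.
    """
    canned = [
        "Totally agree, this matches my experience.",
        "Source? That claim doesn't hold up at all.",
        "This is why nobody takes these forums seriously anymore.",
        "Came here to say exactly this.",
    ]
    rng = random.Random(seed)
    return rng.sample(canned, n)


def operator_pick(prompt, candidates, choose):
    """Present candidates to an operator; `choose` returns an index."""
    return candidates[choose(prompt, candidates)]


# Usage: the "operator" here is a trivial policy that picks the
# shortest candidate; a real operator would read and click.
candidates = generate_replies("These graphics look terrible.", n=3)
reply = operator_pick(
    "These graphics look terrible.",
    candidates,
    choose=lambda p, cs: min(range(len(cs)), key=lambda i: len(cs[i])),
)
print(reply)
```

The point of the design is that the human never writes anything; they only filter, which is much faster than composing and is exactly the cost reduction described above.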

~~~
jcims
Would be good for phishing too.

------
nradov
The whole notion of GPT-2 being "dangerous" was ridiculous to begin with.
OpenAI does some impressive technical work but they're a little too impressed
with themselves and appear somewhat detached from reality.

~~~
throwaway2048
I think the danger of neural network bots overrunning social media with
garbage is entirely valid.

~~~
sharemywin
It seems pretty clear to me by now that GPT-2 is not as dangerous as OpenAI
thought (or claimed to think) it might be. (It may yet be as deadly as OpenAI
believed it was, but we won't know until it's actually released). That's not
even mentioning that it's entirely likely that it will become the first
machine learning platform to win a Turing Award.

What do you think? Are you ready to get involved in the GPT-2 debate? Or do
you already have your own AI platforms and don't like GPT-1? Or do you have
questions and would like the community to shed light on them? Please join us
in IRC at #gpt:
[http://gstreamer.freenode.net/?channel=gpt](http://gstreamer.freenode.net/?channel=gpt)

Please refer to the #GPT4 thread at Wikipedia.

GPG key fingerprint: A58F D3A2 CAF7 2E0D D4A3 8CD1 C721 5FB8 2F8E 15C4

Links

~~~
nradov
What's the point of having a debate? GPT is proprietary IP belonging to a
private company. They can release it or not as they please, regardless of
whether their stated reasons are valid.

~~~
archgoon
I believe that sharemywin's post is satire. It is at least, intended to appear
to be gpt-generated.

------
minimaxir
In my opinion, the rationale that the original GPT-2 is too dangerous to
release is not good, for two reasons: a small GPT-2 model finetuned on a
targeted domain dataset
([https://minimaxir.com/2019/09/howto-gpt2/](https://minimaxir.com/2019/09/howto-gpt2/))
_already_ gives better results than a large default GPT-2 model for targeted
news, and despite independent researchers creating GPT-2 clones at a fraction
of the cost available to adversarial organizations, there hasn't been any
evidence of mass-produced fake news in the _six months_ since GPT-2 was
open-sourced and the paper was released.

GPT-2 will likely be used more for mass-producing crazy erotica
([https://twitter.com/Fred_Delicious/status/1166783214750445573](https://twitter.com/Fred_Delicious/status/1166783214750445573)
[NSFW text]) than fake news.

~~~
HNLurker2
>GPT-2 will likely be used more for mass-producing crazy erotica

We should, at all costs, not release this to the masses

------
alexmlamb
Fake news concern isn't the right place to focus. News-style content is low-
volume, has a structured sharing model (i.e. people usually share the
articles), is long-form, and requires fairly high precision. This is where
humans excel and where automated language models are weakest.

On the other hand, comments on sites like Facebook/Reddit/4chan are perfect
for language model bots. The content is high-volume, semi-anonymous, usually
shared without an explicit network, and can be extremely low precision.

So if you had a bot that could get onto any discussion network used for
planning protests, for example, and spam it with destructive and divisive fake
comments, it could actually make organizing pretty hard. And while that places
some demands on the language model, many comments are just a few sentences
long. And the content doesn't need to be very precise; it just needs to be
convincing enough to be distracting.

I also think that the worst abusers are likely to be current power-players
like Google/Facebook/Chinese Gov rather than small actors.

------
atemerev
I have played with OpenGPT-2 for a few hours, testing its capabilities for
generating politically-motivated texts in particular.

Indeed, it fails at large sample sizes, but it is _quite_ capable of replying
to tweets and writing e.g. comments on Hacker News, Reddit or elsewhere.

With some human assistance, it can produce believable text fragments that can
be later copypasted and reassembled into genuine articles or blog posts. If
your full-time job is shitposting and/or generating politically divisive
content by megabytes, it increases your productivity by orders of magnitude.

~~~
jamesrom
This, 100% this. It's the massive productivity gains, multiplied by the ever-
increasing ability to measure virality/persuasiveness, that make this
combination dangerous.

~~~
rahidz
The biggest problem here is that, like so many of these "smart machines",
OpenGPT-2 is a self-correcting machine that does not care about real-world
context.

...

That, of course, is exactly what GPT-2 wants you to think ;)

[https://i.imgur.com/lFkB3gI.png](https://i.imgur.com/lFkB3gI.png)

~~~
atemerev
You have officially won this thread.

------
tw1010
Meh, all this "we can't release it because it's dangerous" business is just
marketing

------
anilakar
I know I'm treading on dangerous waters here considering Sam Altman's
involvement, but I'm gonna burn some karma regardless: Is there anything open
about OpenAI or is it just a name?

~~~
juped
The mission statement reads, in part:

> We’re hoping to grow OpenAI into such an institution. As a non-profit, our
> aim is to build value for everyone rather than shareholders. Researchers
> will be strongly encouraged to publish their work, whether as papers, blog
> posts, or code, and our patents (if any) will be shared with the world.
> We’ll freely collaborate with others across many institutions and expect to
> work with companies to research and deploy new technologies.

I have no idea if they live up to this but "GPT-2 is too dangerous to release"
crackpottery suggests they at least sometimes don't.

~~~
Miraste
That mission statement has been abandoned. OpenAI restructured as a for-profit
corporation several months ago.

------
kdavis
Plot twist, the article was written by GPT-2

~~~
snazz
I gave the first sentence to
[https://talktotransformer.com](https://talktotransformer.com) and it returned
this:

It seems pretty clear to me by now that GPT-2 is not as dangerous as OpenAI
thought (or claimed to think) it might be. Generating fake news using GPT-2 is
more difficult than the original "propaganda" model, but as we have seen it
can be very effective.

The problem with GPT-1, and all AI projects that rely on it, is that they have
completely misunderstood what human beings are like. Humans like to look and
feel like they are human. It's how we have evolved. While the OpenAI AI team
(and probably the GPT-1 team as well) believe they understand the human brain,
they completely lack empathy when it comes to real humans. A fake news system
that gets to the truth of a story based solely on "what it would feel like to
be" a human being is no more human than a machine trying to "feel" what it is
like.

As such, the idea that the OpenAI AI team believes their AI will be able to
"see" what a human might see is a complete and utter failure. The truth is
that we don't understand what

~~~
bitL
...and that's just GPT-2_medium.

~~~
slavik81
I actually thought snazz had forgotten to include the generated snippet.
However, I didn't understand the reasoning in the second generated paragraph
and stopped reading. At that point, I still thought they were human (and
wrong).

I suppose my question is why it matters who wrote it. I've always been taught
that an argument should be judged on its own merits, and from that perspective
nothing changes.

~~~
comex
Well, if the blog post actually had been written by GPT-2, its very existence
would be a counterexample to one of its main claims, that GPT-2 isn’t really
good enough to generate convincing long-form nonfiction.

Also, that part of the argument is not purely a priori, instead being
supported by a wide range of factual claims about specific limitations of
GPT-2. If the post had been written by GPT-2, those claims would probably be
false, since GPT-2 is not designed to differentiate truth from plausible-
sounding fiction. And false claims would invalidate the whole argument.
Assuming a human author, on the other hand, the claims are probably true. They
_could_ be false if the author was either misinformed or lying, but those
possibilities are subjectively unlikely.

~~~
slavik81
> Assuming a human author, on the other hand, the claims are probably true.

Maybe this is where we differ? I don't agree. People are mistaken all the
time. Your hypothetical even assumes an untrustworthy human is directing the
algorithm.

To me, the most convincing point in favour of the truth of the claims in the
article is that nobody has contested them. The claims appear to be easily
falsifiable, so the more scrutiny they withstand, the more trust they deserve.

------
WhitneyLand
What's eye-opening is the need for articles like this to seriously
investigate and characterize the distinction from human writing, and then for
comment threads like this one to discuss it.

Beyond all the AI hype, there are notably more scenarios where the question
has to be seriously asked and a detailed answer is informative.

It reminds me of the .com bubble in a way. There is too much hype,
misunderstanding of the real current state of things, and bad predictions
about the future.

On the flip side, however, something huge is simultaneously happening; it's
real. The real non-academic milestones of progress are just smaller, more
numerous, and more subtle than the headlines.

In this case, it's not whether it's dangerous that's interesting as much as
the subtle improvements made and how the march continues forward by degree.

------
awinter-py
fwiw I pumped 'GPT-2 is not as dangerous as' into writeup.ai and got:

> GPT-2 is not as dangerous as the older version. The second issue is that the
> newer version of the MSA- 1 does not support the new EFI bootloader. This
> means that the old MSA- 1 cannot be used to boot the newer MSA-2. The MSA-2
> is a newer version of the MSA-1.

Talktotransformer.com thinks:

> GPT-2 is not as dangerous as erythromycin and sulfamethoxazole and should be
> used with caution during pregnancy.

While this tool is not yet smart enough to comment on this debate, a future
version of it might be.

Another good prompt is 'A good pasta sauce begins with'

~~~
jonathankoren
"hello" generates...

hello_world:

from funcs import funcs_builder from funcs_spec import funcs_spec from
funcs_unit import Unit import ui import os import sys import sys.stdin import
time import json from scopes import Context , QueryBuilder , StructContext ,
ContextHandler , QueryBuilderType , ContextType , StringContext def add_from (
builder , context_id ): """ Create a query builder that expects a builder
object. """ # Create the context that contains the custom query builder if
Context ( builders_context ) is None : context_type = StructContext ( builder
) . type . get () # Define the builder for this context in context_builder (
context_id ): context = builder . get_context ( context_id ) # Create the
query builder for the query builder in context . query_builder ( context ,
context_id ): return builder return "hello_world" class CustomSchema (
QueryBuilder ): querysource = { 'base' : 'world' , 'world_scopes' : [],
'query_builder' : 'hello_world' , } query_builder = CustomSchema () def add_to

------
codezero
I'm probably not qualified to say this so take it with a grain of salt, but
I'd be surprised if OpenAI is concerned about a lone hobbyist, and is much
more concerned about nation states with access to much more hardware,
software, and experts who can do things that no single person or small group
of people could do.

~~~
mattnewport
If you have the resources of a nation state then you already have the
resources to push your agenda through traditional media. It's not clear what
benefit this type of AI generated content would have over the traditional
approach.

~~~
rm_-rf_slash
Asymmetric information warfare. Flood every channel with misleading and
contradicting information beyond the capacity for fact-checking. Make entirely
fake social media threads to steer perceptions towards a predetermined
direction. Create exaggerated stereotypes of both sides of a debate to inflame
and harden opinions.

All of this is already being done right now. The difference is that a robust
language model would allow this to be done with far fewer humans involved.
Fewer people to raise their hands about how far down the ethical rabbit hole
they are willing to go. And a single AI model could react to news much more
quickly than a team of people who have to sleep at some point.

~~~
whatshisface
That's not asymmetric, any nation could do it. Also, the Reddit farms wouldn't
have to sleep, because the writers could work in shifts.

~~~
cardiffspaceman
The human writers are constrained by what they think they can get away with.
The automated writers don't even work that way. You can see from the examples
used in discussions here, the more limited the domain of discourse the more
convincing the samples are (except the hello world one that looked like a
mash-up of every programming tutorial ever written). I think this thing could
do a great job making fake announcements to air travelers about flight delays.

I think the synthetic texts are already convincing enough that the natural
desire to impart meaning to writings/utterances will go a long way toward
convincing forum-user "victims" that something is being said that needs to be
thought about. And here in this discussion we know that the fakes are fakes
(except for one comment which is unlabeled and I'm not sure about); in an
attack on a forum, the fakes won't be labeled.

~~~
rm_-rf_slash
The less said, the easier it is for a language model to approximate a useful
thread comment for the purposes of mass propaganda.

“These graphics look terrible. I will never play this game.”

“$Candidate is a corporate shill and everyone knows it.”

“I can’t wait for $Artist’s next album! They’re sooooo good!”

Doesn’t need to be an extensive, well thought out comment to drive thought and
discourse. GPT2 is good enough for that.

------
buboard
My understanding was that the main issue is this would make spam too hard to
detect.

There is also the question of whether we are overanalyzing the output of the
model. The model pretty much spits out garbage, but our brains are hell-bent
on finding patterns in that stream, because that's what brains do. A simple
analysis would probably show that it's no more meaningful than random words.

------
Animats
Like other chatterbots, it doesn't really know what it's talking about, and
coherence between sentences is poor. There's no internal model beyond likely
words.
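A minimal sketch of that failure mode: a toy bigram model (far cruder than GPT-2, which conditions on a long context window, but illustrative of "likely words, no internal model") picks a plausible next word given only the previous one. Every adjacent pair is locally coherent, yet the whole drifts and loops, because nothing tracks what has already been said.

```python
import random
from collections import defaultdict

# Toy bigram "language model": each word predicts a likely next word,
# with no state beyond the previous word.
corpus = ("the senator said the bill would pass and the senator "
          "said the vote would fail and the bill would pass").split()

# Count which words follow which: bigrams["the"] -> ["senator", "bill", ...]
bigrams = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev].append(nxt)


def generate(start, length, seed=0):
    """Chain likely next words with no memory of the larger structure."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(length - 1):
        options = bigrams.get(words[-1])
        if not options:
            break
        words.append(rng.choice(options))  # pick a likely next word
    return " ".join(words)


# Each adjacent pair is plausible, but the whole drifts and loops:
print(generate("the", 15))
```

Run it a few times with different seeds: the output reads like a news lede for a clause or two, then contradicts itself (the bill both passes and fails) or cycles, which is a caricature of exactly the loop-and-gibberish behavior described elsewhere in this thread.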

The next stage is something that takes an outline and cranks out a paper or
speech to fit. There are specialized systems like that for sports reporting.

------
Improvotter
Could we talk about how horrible Tumblr's "privacy form" is? It is literally
impossible to decline.

------
spyder
You can see how GPT-2 works on a social media site in the "SubSimulatorGPT2"
subreddit, where every post and comment is generated by it (though probably
just a smaller model). Sometimes it's funny, sometimes it's scary:

[https://www.reddit.com/r/SubSimulatorGPT2/](https://www.reddit.com/r/SubSimulatorGPT2/)

------
jamesrom
Anyone claiming GPT-2 is not dangerous is very uninformed about humans.

If you can generate text, and find a way to measure its persuasive
effectiveness (which is not hard in 2019), it will be used to push an agenda.
And given enough time, it will do so with hypnotic levels of persuasion.

The claim that it's not dangerous absolutely misses the mark.

------
tomweingarten
Did anyone else read that whole article waiting for the reveal that it was
generated by GPT-2?

Ok fine... I skimmed it

------
designium
Just a quick question: which graphics card should I use to run the 774M
version? I tried running the 355M version on an RTX 2070 and it ran out of
memory.

~~~
bitL
Haven't tested myself, but the minimum for current SOTA NLP models is 1-2x
Titan RTX.

------
vessenes
This is an article that the author is going to regret in five to ten years,
sadly.

Basic claims:

1. Long-form text coherence problems in GPT-2 mean it's not useful for
creating propaganda.

2. No propaganda has been noticed in the wild since GPT-2 was announced, hence
GPT-2 and its near successors are not dangerous.

3. Existing alt-right propaganda (presumed written by humans) is already
plenty effective; who needs better-written propaganda?

Maybe it's enough just to write down the logical premises, but I'll say what I
think (and we can check back here in 10 years and see who's more correct; I
hope it's the author) -- anything that changes the cost of information
creation and dissemination is fundamentally a highly powerful force.

In A16Z terms, software looks like it's going to eat writing, at least certain
forms of writing. We know from numerous examples that this means radically
faster innovation and cheaper scale, and that wealth creation and destruction
are likely not far behind.

Compounding the problem for threat assessment is the toupée fallacy -- only
obviously poorly generated text is noticeable as "AI generated". I would urge
anyone thinking about these things to flat-out disregard any statements that
AI-generated text is not in the wild, or that if it is, it is not effective.
You literally have no way of knowing whether you have read AI-generated text --
in fact, the examples curated from existing models suggest curated text can be
high enough quality that it is likely being used online in some way today.

It's not going to get harder to generate text that reads well. It's not going
to get more expensive. It's not going to get slower. Architectures that tune
text creation to create clickbait titles with underlying goals are going to
get worked on and thought about and tested.

Whether it will be more typically a nation state tool, corporate tool or
instead the equivalent of a molotov cocktail - cheap digital force extension
for renegades or infoterrorists -- this isn't clear yet. But my money would be
on all three, sadly.

To solutions - comparatively little science-fiction thought work has been done
about this that I'm aware of. Vernor Vinge speculated about broadscale
disinformation as a public service in Rainbows End -- the Friends of Privacy
-- and at some level projects like Xanadu and, more recently, advertisement-
attribution tech startups (blockchain or not) are all working on this from
different angles.

To my mind, these questions come from a part of the Internet's architecture -
text isn't generally signed or strongly attributed, and if it were, we don't
really have a solid global identity infrastructure - hence a lot of worrying
and hand wringing.

I'm trying to invest in stuff that works on this identity layer specifically,
but honestly it's an immense problem with few compelling stories about how
things could change.

~~~
repolfx
_To my mind, these questions come from a part of the Internet's architecture
- text isn't generally signed or strongly attributed, and if it were, we don't
really have a solid global identity infrastructure - hence a lot of worrying
and hand wringing._

You're way overthinking this. Any website with an SSL cert is a "channel" and
if it doesn't publish AI generated text, then people can go there to get pure
human generated nonsense instead of machine generated nonsense. No new
infrastructures are required: in fact we already have many such websites, like
newspapers or blogs.

Also, really, listen to yourself. "Infoterrorist"? What is an infoterrorist?
You're making up meaningless new words on the fly to try and create a general
sense of unease in your reader. Indeed you're trying to make people fear
speech, which could itself be described as alt-left propaganda. Does it
matter? No not really. You're just a guy posting on the internet, as am I.
Spambots have existed since the start of the internet. The content is what
matters, not where it came from.

------
ryanmercer
I disagree. You can have software spit out these meh articles, then pay a room
full of people with the writing ability of an average 15-year-old to go line
by line rewriting the content in their own 'voice', and you can easily churn
out massive amounts of content based on whatever you trained it on.

For over a decade, individuals have used software to take already-written
content and change it just enough to fool search engines for SEO purposes;
it's been an effective tactic despite the articles often being largely
unintelligible to a human. If you can make something moderately intelligible
to a human about a given area and have some minimum-wage employee
'personalize' it, BAM: you can turn a room of 10, 20, or 100 people into a
hardcore content machine.

I've suggested to Altman before that, given a sufficient body of work, you
could churn out fiction in the style of famous deceased authors by having the
machine do the bulk of the work then having a small team go in and edit the
work to make it fully coherent and an enjoyable read.

With someone like me, that uses their name as their username virtually
everywhere, you could sufficiently train the machine on my reddit and blog
alone to imitate me on social media platforms. It could learn my writing
style, my habits of using 'heh' and 'haha' way too much on
reddit/twitter/facebook and you suddenly create Bizarro Ryan that you can
create new social media accounts for and start tossing in some hate speech in
an anti-me campaign. While this wouldn't do much to me, to a
celebrity/politician/expert in a field it could absolutely ruin their career,
even if later proven to have been faked because popular opinion will still
associate that person with that undesirable behavior.

While pursuing this technology would be amazing for creating new literary
works from people like Verne, Heinlein, Burroughs (your favorite authors
here), I can weaponize it RIGHT NOW.

This is the problem with entities like OpenAI: they're all "AI is great, AI
is good, AI is our future savior, yay AI," but are any of them asking "well,
here are the 16 ways I can think of, off the top of my head, to wildly
exploit this technology for personal/corporate/government gain"? AI doesn't
have to be SkyNet or robotic killing drones to be exploited; an individual can
benefit considerably (and cause considerable hardship for an entity) with
stuff like this. Who at places like OpenAI is asking these questions? Where
are the employees/advisors/consultants who look at each project and offer
real-time feedback on ways to abuse the project in its current and near-future
states?

Maybe they have someone but, methinks they don't.

~~~
retsibsi
> given a sufficient body of work, you could churn out fiction in the style of
> famous deceased authors by having the machine do the bulk of the work then
> having a small team go in and edit the work to make it fully coherent and an
> enjoyable read.

This is interesting but very dubious in my opinion. The current state of the
art tech seems to be good at low-level stuff, like stylistic mimicry and
maintaining (relative) coherence at the sentence level (and sometimes the
paragraph level). It seems weaker at higher-level coherence, and I've seen no
evidence that it would be capable of creating a book-length, or even short-
story-length, work with a plot that made any sense (let alone a compelling
one) or characters that are plausible (let alone interesting). If it does fail
at those things, what are you supposed to do with the okay-in-isolation
fragments that it spits out? You'd be lucky if they could be stitched together
into anything worthwhile, even with a lot of human effort.

> With someone like me, that uses their name as their username virtually
> everywhere, you could sufficiently train the machine on my reddit and blog
> alone to imitate me on social media platforms. It could learn my writing
> style, my habits of using 'heh' and 'haha' way too much on
> reddit/twitter/facebook and you suddenly create Bizarro Ryan that you can
> create new social media accounts for and start tossing in some hate speech
> in an anti-me campaign. While this wouldn't do much to me, to a
> celebrity/politician/expert in a field it could absolutely ruin their
> career, even if later proven to have been faked because popular opinion will
> still associate that person with that undesirable behavior.

If someone wanted to target an individual, or a small number of people,
couldn't they already do this manually? And if they wanted to target a huge
number of people, surely they would very quickly burn the credibility of the
platforms they hijacked.

~~~
ryanmercer
>If someone wanted to target an individual, or a small number of people,
couldn't they already do this manually? And if they wanted to target a huge
number of people, surely they would very quickly burn the credibility of the
platforms they hijacked.

Yes, but you can add incredible amounts of credibility to claims if you've
used AI to create a bunch of deep faked images of completely artificial
people, populated social media profiles, had AI create photos of these
individuals together in random settings, create a network of these accounts
that follow each other as well as real people/are friends with each other and
real people, and organically feed claims out.

This sort of AI use, for faking images/video/audio/text, makes all of this
much easier to do with more believability, and makes it possible to scale it
considerably, whether for personal use or for hire.

You can already go on various darknet markets and hire various harassment
services.

You can already go on various websites and order blackhat SEO that uses very
'dumb' software to generate content to spam to obviously fake social media
accounts, blog posts, etc for SEO purposes - there are dozens and dozens of
VPS services that rent you a VPS with gobs and gobs of commercial software
pre-installed (with valid licenses) specifically intended for these uses and
if you'd rather just farm it out there are hundreds of providers on these
forums that sell packages where you provide minimal information and in days or
weeks they deliver you a list of all of the links of content they've created
and posted.

With stuff like GPT-2 you suddenly get more coherent sentences, tweets, short
blog posts, reviews, etc. that you've trained on a specific demographic in the
native language, not written by an English-as-a-third-language individual and
then reworded by software to pass Copyscape protection. Pair it with
deepfaked images/video that you then add popular social media filters to, and
you can suddenly create much more believable social media presences that don't
scream 'BOT', because it isn't a picture of an attractive woman in a bikini
with the name Donald Smith that's only friends with women in bikinis with
names like "Abdul Yussef", "Greg Brady", "Stephanie Greg", and "Tiffany
London", the kind you constantly see sending people friend requests on fb or
following you on twitter/instagram because you used #love in a post.

Software applications like this make the process much easier, with a higher
level of believability. Humans, without knowing it, are often decent at
detecting bullshit when they read a review or a comment: inconsistent slang or
regional phrasing, or grammar that feels wrong but not necessarily artificial
(English as a second language for a German speaker, for example, where it
might be something like "What are you called?" instead of "What is your
name?", or, more subtly, "What do you call it?" instead of "What's it
called?"). All of that can be defeated with AI trained on tweets/blog
posts/instagram posts scraped from 18-23 year old middle-class women, or 30-60
year old white male gun owners, or 21-45 year old British working-class males.

The whole point of AI is to make some tasks easier by automating them; when
you're dealing with AI that mimics images/video/speech, you're just making it
far easier for individuals who already employ these tactics manually (or with
'dumb' software) to scale up and increase efficacy.

------
Havoc
It was a great PR stunt though

------
paggle
“It is less good at larger-scale structure, like maintaining a consistent
topic or (especially) making a structured argument with sub-parts larger than
a few sentences.”

Have you surfed Facebook recently? Shockingly few people are good at these
either. The quality of actual human writing on social media is so bad that
today’s neural networks can be of equal quality, which is _very dangerous_ if
used to make certain ideas seem more widely held than they are. After all, our
morals/ethics are basically set by what we observe in the world around us
(compare your views on torture, public execution, slavery, etc. to what your
views would be if a baby with your exact DNA had been born in Ancient Babylon).

