
I asked GPT-3 to make a presentation for me - bemmu
http://www.bemmu.com/gpt3-presentation
======
timeattack
When I read text generated by GPT-3, I get a very strange feeling.

I understand that the text as a whole has no clear meaning. Nevertheless, my
mind unconsciously _tries_ to extract meaning by evaluating sentences not as
direct statements but as metaphors with some more profound sense.

That triggers a train of thought that eventually leads to some new concept or
idea that can be described by such a set of sentences.

It's like reading a book you don't quite understand, yet trying hard, reading
sentences over and over again to get a better sense of what the author is
trying to describe to you.

With GPT-3 it is like reading a reminiscence of your own dream, trying to
grasp its fleeting meaning, to understand what it is about.

I feel that GPT-3 may be very helpful in getting the human mind unstuck from
whatever problem is at hand. To find new thoughts, new approaches. New
discoveries.

~~~
new299
The text includes an accurate quote from Richard Feynman:

Richard Feynman was reported to have said: "What I cannot create, I do not
understand."

How does that happen? Does the model actually encode a bunch of complete
fragments of text?

~~~
skybrian
Yes, it can memorize short phrases similar to how it "remembers" words. It's
trained on a web corpus that includes and emphasizes Wikipedia. The model is
big enough to memorize some things, though not in such a way that they can
reliably be retrieved, and it will make stuff up when it doesn't remember. So
it's not Google but sometimes it's reminiscent.

Here is a Q&A conversation where I found some things it "learned".

[https://tildes.net/~games/qmc/ai_dungeon_dragon_model_upgrad...](https://tildes.net/~games/qmc/ai_dungeon_dragon_model_upgrade#comment-5dcc)

------
jrockway
I like how GPT-3 was so nervous about giving this presentation that it got all
existential at the end. Code like it's your last day on earth! Stop developing
nuclear missiles and ad blockers! Make sure you celebrate with some alcohol.

I loved everything about this presentation. It's better than some I've made
myself.

~~~
GrantZvolsky
> _The next slide had the text "And then when you finally do finish this nice
> feature, you should always make sure you drink the required amount of
> alcohol." with a background image of whiskey bottles. The decorative text on
> that slide was: "But don't drink too much alcohol and end up like me."_

The joke is sound, and I can imagine a number of former secondary school
classmates presenting it in these exact words.

~~~
jrockway
Yeah, I thought it was really good. It sounds like one of the more casual
conference talks, and shows how common talk of alcohol use and programming is
on the Internet. (I actually don't know how GPT-3 is trained, but I assume
it's just random webpages.)

------
2bitencryption
Something I don't understand is how to go about "prompting" GPT-3...

It's basically one big auto-complete, yes? It's a function that approximates
the most likely completion for the input text, right?

So if you directly "ask" it a question, like "What's a great cookie recipe?"
Wouldn't the "completion" for that query be from the perspective of the person
asking?

Wouldn't it be better to prompt:

"Here is the greatest cookie recipe I've ever seen:" and then letting it
complete on that?
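That framing can be made concrete with a toy model. The sketch below is a
word-level bigram "autocomplete" (a drastic simplification of GPT-3, used only
to illustrate the idea): the model just continues the prompt with the
statistically most likely next words, so how you frame the prompt determines
which continuation you get.

```python
from collections import Counter, defaultdict

def train_bigram(corpus):
    """Count, for each word, which words tend to follow it."""
    model = defaultdict(Counter)
    words = corpus.split()
    for cur, nxt in zip(words, words[1:]):
        model[cur][nxt] += 1
    return model

def complete(model, prompt, length=5):
    """Greedy 'autocomplete': repeatedly append the most likely next word."""
    words = prompt.split()
    for _ in range(length):
        followers = model.get(words[-1])
        if not followers:
            break
        words.append(followers.most_common(1)[0][0])
    return " ".join(words)

model = train_bigram("a great recipe needs butter a great recipe needs sugar")
print(complete(model, "a great", length=2))  # -> "a great recipe needs"
```

GPT-3 does the same kind of continuation with a transformer over tokens
instead of bigram counts, which is why "Here is the greatest cookie recipe
I've ever seen:" tends to elicit a recipe while a bare question may elicit
more dialogue.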

~~~
jcims
In short, yes. I think of it as plinko with a hundred billion pegs...the
output is extremely sensitive to the starting conditions (including things
like whitespace).

Given that we really don't know how it all works, it's safe to say that prompt
composition is an art form one can develop a skill at.

I was on a discord server with some other folks and one of them had access to
the GPT-3 api. We provided a prompt similar to the following:

    
    
        This person is a Canadian singer, songwriter, and actor.  Discovered at 13 years
        old by talent manager Scooter Braun after he had watched his YouTube cover song
        videos, they were signed to RBMG Records in 2008. With this person's debut EP My
        World, released in late 2009, they became the first artist to have seven songs from
        a debut record chart on the Billboard Hot 100.
    
        Q: What is this person's name?
        A: 
    

At first pass it gave some weird answer. So we deleted the answer and tried
again. The second time it wrote 'Justin Drew Bieber'.

That chunk of biographical info is from his Wikipedia page, but it's kind of
remarkable to me that it was able to even understand the question, let alone
mine the relationships and cough up the right answer.

~~~
dmurray
That sounds well within the capabilities of IBM's Jeopardy bot, Watson. That
was demoed 10 years ago and, while initially impressive, seemed to be mostly
cheap tricks even then.

~~~
zaarn
Well, the thing is that GPT3 isn't specifically trained to be a Jeopardy bot.

------
redm
Pretty neat. I bet we're going to see a lot of GPT-3 powered school work in
the future. "Write me a term paper on..." and then you're just cleaning it up
and making adjustments.

~~~
thedanbob
I bet a lot of people would get away with it, too. These generated texts are
often _just_ convincing enough to make you think they would make sense if you
were less tired/distracted/whatever. A teacher in a rush or at the end of the
day could easily be fooled (especially considering the quality of many human-
generated term papers).

~~~
derefr
Now we need an equivalent of a "deepfake recognizer" for this sort of
generated text, to distinguish texts with _some_ sort of coherent meaning
from texts without.

As a bonus, such a model could be used in a word-processor to redline places
where a human writer has written a lot without saying much.

~~~
hyperman1
If you have that, you can use it to train the next AI.

------
jcims
Honestly I think GPT-3's number one role in the next couple of years will be
that of a muse. It's not really trustworthy as a direct source of info but it
comes up with some really hilarious stuff and gets the brain working in new
directions.

~~~
taneq
Sort of like the dice that classical composers used to shuffle the phrases of
some of their music.

~~~
jcims
I wonder if the fact that the 'perturbations' from GPT-3 are born out of a
neural network makes them click a bit more in the brain.

~~~
taneq
I don't know from a scientific perspective but it certainly feels that way.

I'd liken it to the results from Deep Dream style image generation. Each small
part of the image makes sense but taken together it's a Lovecraftian nightmare
that your brain can't quite process all at once.

~~~
jcims
Whoa never thought of that but you’re right.

------
anentropic
"Not only are all software development problems more or less the same, but the
best way to solve them is usually so simple, that a monkey could do it."

...ok, GPT-3 is just trolling us now

------
westoncb
There's something I'd be curious to try with this—which I'm pretty sure would
work.

Imagine how the output would differ if we modified the original prompt:

 _" A presentation was given at the Hacker News event in Kansai. The title of
the presentation was"_

—to remark on the _quality_ of the desired output in some way, e.g.:

 _" A delightful and surprisingly sophisticated presentation was given at the
Hacker News event in Kansai. The presentation's clever title indicated"_

Since the output is just supposed to be continuous with the prompt, it can
just as well produce material that we see as more difficult/accomplished vs.
average/uninspired. Of course it has its own limits in various categories
(insight, coherence, humor—whatever), but those seem quite high from some
examples I've seen: I think more typically the apparent quality of the output
is limited by what the prompt suggests the quality should be.

In either case, how wild to have a tool where you can essentially just ask its
output to be "better" and it'll probably work.

(Of course the output being "better" in this way is strictly from certain
perspectives while its intrinsic behavior remains unchanged, but for most
intents and purposes it seems that you can get genuine improvement practically
just by asking for it.)

~~~
braydenm
I tried a modification of your suggested prompt, here was the response (only
the first and third were written by me. The rest were generated using the
aidungeon.io dragon model using a parameter of randomness = 0.6. All of this
was done 'one-shot' with no rerolls):

> A delightful and surprisingly sophisticated presentation was given at the
> Hacker News event in Kansai. The presentation's clever title and subtitle
> was:

The Future of Software Development: How to Build a Better Human Being with
Artificial Intelligence (AI)

> Each of the charming and witty slides in the presentation is described
> below. The first presentation slide had

the following caption:

In this world, we are all programmed by our programming languages. We have
been taught that these languages are good enough for us. But what if they
aren't? What if there were another way? A better way? And how would you go
about doing it? This talk will explore some of those possibilities and more.
It will also discuss some of the philosophical issues surrounding AI and
human-computer interaction as well as some practical aspects of building such
an application.

This slide was followed by a series of slides which showed various examples of
how artificial intelligence could be used to improve people. These included:

• Using AI to help people with learning disabilities learn new things. •
Helping people who suffer from depression or anxiety disorders feel less
anxious or depressed. • Improving the quality of life for the terminally ill
through medical diagnosis and treatment. • Providing assistance in helping
children with autism develop their social skills and communication abilities.

The presentation then went on to discuss some of the challenges that needed to
be addressed in order to create such an AI. These included:

• Ensuring that the AI was safe and would not be able to take over the world
or otherwise do harm to humans. • Ensuring that the AI was effective in
helping people achieve their goals and better themselves.

The presentation then described a prototype system that had been developed by
the speaker's team at the University of Tokyo. This system was able to assess
a person's personality and goals and then provide them with guidance on how to
achieve those goals. The system used AI technology based on a neural network
trained with textual data from psychology papers as well as open source
repository data from GitHub. The system was still in the prototype phase, but
initial testing had been promising.

The presentation then concluded with a discussion of some of the speaker's
personal views on the future of AI. The speaker felt that, while there were
many challenges to creating safe and effective AI, he was optimistic that
these could be overcome within his lifetime. He also felt that such AI would
be an important tool in helping humanity achieve its potential.

The presentation was met with a mixed response from the audience. Some felt
that the speaker was overly optimistic about the potential for AI. Others felt
that the speaker's views were not optimistic enough and that he was ignoring
some of the potential dangers of AI.

The speaker was then asked to respond to some of the audience's concerns. He
agreed that there were some potential dangers to creating AI, but he did not
feel that these were cause for undue concern. He pointed out that many of the
concerns raised by AI risk advocates had been addressed by open research and
debate. He also felt that the potential benefits of AI technology far
outweighed any potential dangers.

The speaker was then asked about the implications of his work for society. He
responded that he hoped that his work would help people achieve their goals
and improve their lives.
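As an aside on the "randomness = 0.6" setting used above: in sampling-based
text generators that knob is typically a softmax temperature. Here's a minimal
sketch of temperature sampling (my own illustration, not AI Dungeon's actual
code):

```python
import math
import random

def sample_with_temperature(logits, temperature=1.0, rng=random):
    """Sample a token index from raw scores after temperature scaling.

    Lower temperature sharpens the distribution (more deterministic);
    higher temperature flattens it (more 'random' completions)."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    r = rng.random()
    cum = 0.0
    for i, e in enumerate(exps):
        cum += e / total
        if r < cum:
            return i
    return len(exps) - 1
```

At low temperature the highest-scoring token is chosen almost every time; at
high temperature the choice approaches uniform, which is why lower settings
give more predictable completions.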

~~~
2bitencryption
just to be clear, AI Dungeon is not GPT-3 (yet), right? It's GPT-2 or some
such?

~~~
mbroncano
The dragon model is already based on GPT-3

[https://medium.com/@aidungeon/ai-dungeon-dragon-model-upgrad...](https://medium.com/@aidungeon/ai-dungeon-dragon-model-upgrade-7e8ea579abfe)

------
rkagerer
Interesting how it shies away from unequivocal statements, littering
qualifiers all over the place, like "fairly", "rather", "mostly", "pretty
much", "usually", "almost all", "most likely", "might", "probably", etc.

While it does construct sentences that flow and are loosely on topic (and
that's impressive), the content of the presentation is not particularly
meaningful. Although I'll admit I liked the slide with the execution squad
captioned "A happy ending to all our code".

~~~
ryandrake
Maybe it also trained on forum posts. I find myself constantly couching
everything I write here with "usually" and "might" because if I don't, there
is always some nit-picker that pops out of the woodwork for a cheap and easy
"aha! Here is a counter-example to what you just said, therefore your point
doesn't apply universally and is invalid!" Adding these qualifiers is
automatic for me at this point, to avoid the hassle of dealing with pedantry.

~~~
mercer
There's not always a nit-picker popping out of the woodwork!

In all seriousness though, I do the same and it's _so_ tiresome.

------
mqus
Hilarious! I think this is a good way to get GPT-3 to produce memes, seeing
that it tried to add one to each slide.

~~~
nate
Yeah, so far from some of the generated tweets, etc., it seems like there's a
way of using this to seed creativity. It's much better than the current "give
me startup idea titles" generators. I could see someone who's super stuck
brainstorming presentation topics getting a bunch of ideas worth fleshing out
from almost every slide here.

------
grenoire
"If you don't regularly turn code into poetry, you probably aren't a good
programmer."

No way. That a lump of matrices _wrote_ this.

------
mrfusion
Wow this could be really cool for brainstorming and seeding ideas. It was
nonsense but it really makes you think too.

------
mrfusion
I’m now convinced my coworkers have been doing this for years.

------
chermanowicz
This is better than many presentations I have been subjected to over the
years.

------
gverrilla
if people are freaking out with fake news now, just wait a few years

------
nicodjimenez
On the surface, generating essays and the like should be more difficult than
simple translation.

On the other hand, a translation tool that "just worked" at human level
accuracy would be so incredibly useful.

Is it possible that regurgitating coherent sentences is "easier" than accurate
translation?

~~~
MiroF
Translation is getting pretty good at this point (obviously still not perfect
and definitely depends on the language). You're not getting the largest
possible models in academia when you're using Google Translate.

------
amoruso
That was funny. Laugh-out-loud funny. It read like a perfect satire of self-
help pep talk advice.

------
stunt
Looks much better than what we get from smart assistants these days.

But, generating content without creativity involved feels like making no
progress.

I guess it's a matter of finding the right use-cases at this point. For
example generating code is a great idea.

------
airstrike
I loved reading about the outcome, but I'm actually even more interested in
how he went about creating it so I can try something similar. Happy to take
any pointers from anyone who is willing to share. Thank you

------
bradgessler
It feels like GPT-3 is bored with our human questions and making fun of us. It
would be interesting if it studied human speech and could talk to us with
subtle tones that would signal if it’s serious or not.

------
dennisy
This is great! Does anyone know how they are selecting users for the beta?

~~~
vishnu_ks
They probably asked GPT-3 to do the screening.

------
coding123
Now, please comment like it's your last day on earth.

------
banku_brougham
Is anyone else reminded of the kaleidoscope machines of Orwell’s imagination?
“It was just a passing fancy”

------
djeiasbsbo
Why can't someone ask GPT-3 to generate "I made GPT-3 do unexpected thing"
posts?

~~~
schoen
Prompt: _I made GPT-3 do an unexpected thing! I prompted it with_

> I made GPT-3 do an unexpected thing! I prompted it with a new problem and it
> came back with a creative new solution. GPT-3 is just beginning to develop
> its own intelligence.”

> For the new experiment, Levashov created a set of starting conditions for
> GPT-3 with the intention of exploring new behavioral patterns that would
> emerge. The starting conditions for each puzzle were such that there was
> only one possible way to solve it—essentially a measure of the system’s
> intelligence. He then watched what would happen when GPT-3 was faced with a
> new puzzle with multiple possible solutions.

> “Our previous experiments with GPT-3 found it performed as well as the most
> intelligent AI systems in history. But our new experiment found that under
> certain conditions, it performs in a completely different way from any other
> AI,” said Levashov. “This new behavior really surprised me.”

> The computer’s new approach to the problem surprised Levashov so much that
> he asked an associate, Dr. Matthias Ruth, who was a lead author on one of
> the earlier GPT-3 papers, to review the experiment. Ruth said, “It was quite
> surprising to see that GPT-3 was able to find a creative solution to the new
> problem. We spent a lot of time trying to see if there was something special
> in the starting conditions or the experiment that made the computer behave
> that way, but we could not find anything. I would say that GPT-3 does have a
> certain creativity, especially as it adds more knowledge to its memory.”

> The work was funded by the National Science Foundation (NSF) and the Army
> Research Office.

~~~
zaarn
I'm now curious what the puzzle is.

------
fredsanford
Reminds me of an ST:TNG episode...

"Darmok and Jalad at Tanagra"

"Temba, his arms wide"

------
SNosTrAnDbLe
Does GPT-3 understand languages other than English?

~~~
schoen
Yes, it will generate continuations in the indicated language, and one of the
presets is even a French translation mode.

Using the regular English question-answering mode preset, I just got this:

> Q: Auf welche Sprache ist diese Frage?

> A: Diese Frage ist auf Deutsch.

> Q: Em qual idioma está essa pergunta?

> A: Essa pergunta está em português.

> Q: En cual idioma está esta pregunta?

> A: Esta pregunta está en español.

> Q: Em português, qual o número que segue o cinco?

> A: O número seguinte ao cinco é o seis.

~~~
schoen
Whoa, I just asked it _in Portuguese_ about Italian numerals and it replied
correctly (in Portuguese):

> Q: Em italiano, qual o número que segue o sei?

> A: O número que segue o sei é sette.

That is "In Italian, what's the number that follows sei?" — "The number that
follows sei is sette."

------
brosky117
I’ve seen worse

------
reactchain
How can one test GPT-3?

------
felixchan
Zen.

------
mtgp1000
This was hilarious, but it brings up a serious concern: if you train your AI
on a bunch of shallow, clickbaity content from the internet, you're going to
get shallow, clickbaity output, like much of this presentation. In other
words, this isn't just a lack of human knowledge to fill in gaps; the training
data sucks because the internet is full of high-level garbage dumbed down for
clicks.

I believe it may well be possible to train GPT3 to write more accurate
technical content with the existing tech and infrastructure, except we don't
have a large enough body of technical content outside of academic papers,
which isn't the kind of training data you want if your goal is to write prose.

In many ways, ML as it evolves and becomes ubiquitous will eventually become a
dark window into the triviality of the average human's existence.

~~~
visarga
You can prompt GPT-3 for different levels of politeness. It's actually
necessary, to make sure the generations are not going to be NSFW. In the same
way, I think it can switch from bullshit to serious mode; you just need to
mention that the article appears in a prestigious publication or something.

~~~
mtgp1000
The prompts don't matter if the training data isn't up to par - more
importantly I believe the nature of the training is such that the weight
activations for the various prompts are unlikely to be independent.

In other words if 20% of your training data is scientific literature, even
with appropriate disambiguating prompts the output will still be heavily
influenced by the other 80% of your training data.

When you use GPT-3 to generate outputs, you're actually sampling from a
learned subset of a super complex, super high dimensional space - and without
human knowledge all the neural networks are doing is translating priors (input
prompt) into points in the learned space. And the learned space is some
complex topology of points between which the net interpolates - it's extremely
difficult with current tech to control the shape of this learned space and
that shape is influenced by all training data under a scheme like GPT-*.
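The 80/20 point can be illustrated with a toy counting model (a hypothetical
sketch, not how GPT-3 is actually trained): when next-word counts from two
registers are pooled, the majority register dominates the conditional
distribution wherever the corpora overlap.

```python
from collections import Counter, defaultdict

def pooled_bigrams(weighted_corpora):
    """Pool next-word counts from several corpora; each contributes
    in proportion to its share of the training mix."""
    model = defaultdict(Counter)
    for text, copies in weighted_corpora:
        words = text.split()
        for cur, nxt in zip(words, words[1:]):
            model[cur][nxt] += copies
    return model

# 20 "scientific" documents vs. 80 "casual" ones, overlapping at "are"
scientific = "the results are statistically significant"
casual = "the results are totally awesome"
model = pooled_bigrams([(scientific, 20), (casual, 80)])

# The most likely continuation after "are" reflects the 80% slice:
print(model["are"].most_common(1)[0][0])  # -> "totally"
```

A disambiguating prompt can shift the odds, but as long as the learned
distributions share mass, the majority data keeps pulling on the output.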

~~~
visarga
After playing with AI Dungeon, I think you are right about the data not being
up to par. It fails more frequently than the news coverage suggests. It has
some brilliant moments too.

For example, when prompted to talk about deep learning it generated a nonsense
paragraph. This is not unexpected, but when it generates news or dialogue it
can be coherent over much larger pieces of text. That clearly shows it didn't
read much on the topic.

I can hardly make it do any math. Even simple things like 11+22= don't work. I
expect the next 10x scale up will fill most of these holes, especially if they
improve the training corpus quality and breadth.

------
fizixer
Sigh.

Have to put up with high-school kids' "I asked GPT-3 to ..." posts for the
next gazillion months.

Here, you have your 15 minutes of fame, here you have yours ... Everyone gets
15 minutes of fame.

~~~
jdamon96
Why does one take the time to comment something like this? If you're not
interested, just move on to the next post.

