Hacker News

When I read text generated by GPT-3, I get a very strange feeling.

I understand that the text as a whole has no clear meaning. Nevertheless, my mind unconsciously _tries_ to extract meaning by evaluating sentences not as direct statements but as metaphors with some deeper sense.

That triggers a train of thought that eventually leads to some new concept or idea that could be described by such a set of sentences.

It's like reading a book you don't quite understand, rereading sentences over and over to get a better sense of what the author is trying to describe to you.

With GPT-3 it is like reading a reminiscence of your own dream, trying to grasp its fleeting meaning, to understand what it is about.

I feel that GPT-3 may be very helpful in getting the human mind unstuck from whatever problem is at hand. To get new thoughts, new ways. New discoveries.




I think GPT-3 is especially suited to triggering your pareidolia in this particular format, because we know that presentation slides often need an accompanying speech to give them context/meaning; and without it, they have a tendency to be just this disjointed.

So, when you read "presentation slides" like this, the same mental algorithm that tries to piece together what "the speech" was for a normal slide deck kicks in, and gives you some valid-seeming ideas.


You're spot on with it feeling dreamlike. Any short chunk feels like it makes sense, but on the whole it flows together in bizarre ways, so it never quite feels like there's an underlying structure. I don't think I've ever experienced media as reminiscent of the experience of dreaming as GPT-3 generated passages are.

It reminds me of how Deep Dream was the first thing that _really_ reminded me of what psychedelic visuals are like, compared to a "trippy" piece of art. GPT-3 _really_ reminds me of dreaming compared to human attempts at evoking that feeling.


I originally held that David Lynch's film "Mulholland Drive" used all the conventional motifs of a horror film to construct a whole that was less than, and yet too much more than a cohesive horror story. It wasn't complete, but had the sense that it was. Stitched together in a nonsensical manner, but which felt familiar.

I'm not saying that's true, but it was my conversational theory for a while.

Maybe he made the GPT-3 of films.


Improv (https://en.wikipedia.org/wiki/Improvisational_theatre ) is, among other things, an art of building worlds and stories by justifying things you don't understand. So naturally there is an "Improvised Theatre Experiment" called "Improbotics" (https://improbotics.org/) from Belgium, where generated text spoken by a robot is used in improv scenes in a multitude of different ways.

According to an interview Ben Verhoeven gave to Moscow Improv Club on 2020-06-11 (https://www.instagram.com/tv/CBTQsCanQ4g/ ), they used GPT-2 fine-tuned on movie subtitles.


Ironically, I remember people saying something similar about far simpler programs like Eliza (very far) back in the day.

I wonder what multiple-orders-of-magnitude upgrade from GPT-3 we'll be saying this about next.

I do agree with you, though. This is getting close to free-writing in terms of being able to unearth stuff semi-randomly. Imagine a GPT-3 that has seen all your past journals and online conversations, and bouncing ideas off that.


> I feel that GPT-3 may be very helpful in getting the human mind unstuck from whatever problem is at hand. To get new thoughts, new ways.

Sort of a newfangled Oblique Strategies? https://en.m.wikipedia.org/wiki/Oblique_Strategies


It’s the uncanny valley but for writing.


Agreed. The text _seems_ like it could mean something -- so you squint harder and try to find the meaning in it. Sometimes you do!

To me, the most sensible slide in this deck was "Why you should always code like it's your last day on Earth." / "It'll push you just enough to get you to finish whatever you need to finish". Surprisingly true!


The text includes an accurate quote from Richard Feynman:

Richard Feynman was reported to have said: "What I cannot create, I do not understand."

How does that happen? Does the model actually encode a bunch of complete fragments of text?


Yes, it can memorize short phrases similar to how it "remembers" words. It's trained on a web corpus that includes and emphasizes Wikipedia. The model is big enough to memorize some things, though not in such a way that they can reliably be retrieved, and it will make stuff up when it doesn't remember. So it's not Google but sometimes it's reminiscent.
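
A toy sketch of the idea (nothing like GPT-3's actual architecture, and the filler corpus here is made up): even a tiny n-gram model "memorizes" phrases that occur verbatim in its training text, so a familiar prompt can deterministically complete into the whole quote via greedy next-word prediction.

```python
# Toy trigram model: counts which word follows each pair of words,
# then greedily continues a prompt. A phrase seen verbatim in training
# gets reproduced verbatim -- memorization, not understanding.
from collections import Counter, defaultdict

corpus = (
    "what i cannot create i do not understand . "
    "i like tea . you like coffee ."          # filler text (made up)
).split()

# nxt[(a, b)] counts each word c observed right after the pair (a, b).
nxt = defaultdict(Counter)
for a, b, c in zip(corpus, corpus[1:], corpus[2:]):
    nxt[(a, b)][c] += 1

def greedy_continue(w1, w2, n=6):
    """Extend the prompt (w1, w2) by up to n most-likely next words."""
    out = [w1, w2]
    for _ in range(n):
        candidates = nxt[(out[-2], out[-1])]
        if not candidates:
            break
        out.append(candidates.most_common(1)[0][0])
    return " ".join(out)

print(greedy_continue("what", "i"))
# -> "what i cannot create i do not understand"
```

Prompt it with a pair of words from outside the memorized quote and it wanders off or stops, which is roughly the "makes stuff up / can't reliably retrieve" behavior scaled way down.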

Here is a Q&A conversation where I found some things it "learned".

https://tildes.net/~games/qmc/ai_dungeon_dragon_model_upgrad...


Is GPT-3 the ultimate Oblique Strategy?


I apologize. I did not see your comment before posting my reference to Oblique Strategies.



