
I have a suspicion that, since GPT-3 gorged its enormously swollen fuzzy digital belly on a lot of real prose, it might, when prompted, occasionally throw in a piece of the prose in question, or a whole original passage verbatim, because, as far as the objective it was fitted to is concerned, that would be a perfectly suitable thing to do.
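
To make that intuition concrete, here is a minimal sketch (a toy corpus and trigram counting of my own choosing, nothing like GPT-3's actual transformer architecture) of how even the simplest next-token model, once fitted, replays its training prose verbatim whenever the context is distinctive enough:

    from collections import Counter, defaultdict

    # Toy illustration, not GPT-3: a trigram model "trained" by counting.
    # Once the two-word context is distinctive, the greedy continuation
    # is exactly the passage the model memorised.
    corpus = ("it was the best of times it was the worst of times "
              "it was the age of wisdom it was the age of foolishness").split()

    counts = defaultdict(Counter)
    for a, b, c in zip(corpus, corpus[1:], corpus[2:]):
        counts[(a, b)][c] += 1

    def generate(w1, w2, n=8):
        out = [w1, w2]
        for _ in range(n):
            nxt = counts[(out[-2], out[-1])].most_common(1)
            if not nxt:
                break
            out.append(nxt[0][0])
        return " ".join(out)

    # Prints "the worst of times it was the age of wisdom", a verbatim
    # replay of the training text (ties broken by first occurrence).
    print(generate("the", "worst"))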

But then again, aren’t we all doing the same, regurgitating bits and pieces that we heard or read before and liked?

Don’t we all sometimes, when busy with our own thoughts and worries, spout out oddly appropriate responses to what our spouses are saying, without actually listening to and processing the conversation, maybe because we have heard it all so many times that we can readily predict what they are going to say? And then we agree that we need to go pick up the alligator from the repair shop.

However, if anything, GPT-3 is mostly showing how little meaning there is, but in how many words.

Never mind me, I’ll pour myself another one.




>> But then again, aren’t we all doing the same, regurgitating bits and pieces that we heard or read before and liked?

We can do that. We can also not do that. Language models can only do that.

My comment is stripped of nuance (language models don't do exactly that), but it should point to the difference: humans can be as dumb as bricks, or as smart as humans. Language models don't even come close to our brighter moments. They don't even come close to the brighter moments of our pets, and our pets can't generate human language.

You have kids? Think of whom you'd like to spend the next decade or two around: a five-year-old human or GPT-3? Which one would you find more interesting, original, curious, do you think?

I think the people most excited about GPT-3 are the ones used to even dumber, more boring interactions with computers than the repetitive and unoriginal text generation of language models. Or perhaps they're humans who are used to boring interactions with other humans. Technology has done some weird things to us of late, I fear.


> You have kids? Think of whom you'd like to spend the next decade or two around: a five-year-old human or GPT-3? Which one would you find more interesting, original, curious, do you think?

Kids and language models are very different kinds of "interesting". I like to be around my child because I love her and it's good for the soul. If I only have one time slot and a choice of whom to spend it with, GPT-n can go pound sand; I'll be with my kid.

A language model can certainly be fascinating in other ways. It puts me in a philosophical mood and spurs questions about the humanity and quality of human knowledge, and the efficiency of passing ideas around; but those questions don't imply answers I'd like to have, and they tend to eat away at my soul.


Founder of Sudowrite [1] here.

> However, if anything, GPT-3 is mostly showing how little meaning there is, but in how many words.

We imbue words and symbols with meanings, and we strive to give our stories meanings that transcend the prose on paper. Yes, it's true that GPT-3 can spout non sequiturs. But overall, it does stay on topic quite well, as compared to previous iterations like GPT-2.

But sometimes you WANT that: you want to be provoked, to get unstuck when writing that tough scene or character. I've found that using a system like Sudowrite is not only about generating new text, but about reading the generations and discovering how the neural net is interpreting what you've written (see the sketch below). I believe this particular use case hasn't been fully explored, outside of summarization algorithms.

[1] https://www.sudowrite.com/
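
To make that workflow concrete, here is a minimal sketch of sampling several continuations of the same scene with the legacy OpenAI completions API (the scene, engine name, and sampling parameters are illustrative assumptions, not Sudowrite's actual implementation):

    import openai  # legacy openai-python (pre-1.0) interface

    openai.api_key = "sk-..."  # your API key here

    scene = ("Mara set the lantern down at the cave mouth and listened. "
             "Something inside was breathing.")

    # Sample several continuations at a fairly high temperature, then
    # read them side by side to see how the model interprets the scene.
    response = openai.Completion.create(
        engine="davinci",  # illustrative choice of completion engine
        prompt=scene,
        max_tokens=120,
        n=5,               # five independent continuations
        temperature=0.9,   # more variety, more provocation
    )

    for i, choice in enumerate(response.choices, 1):
        print(f"--- continuation {i} ---")
        print(choice.text.strip())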


> Yes, it's true that GPT-3 can spout non sequiturs. But overall, it does stay on topic quite well, as compared to previous iterations like GPT-2.

I've heard GPT-3 can produce text as good as a BuzzFeed writer's: you could read it without even spotting the difference.

It doesn't so much imply that GPT-3 is good (it is good, no denying that) as it implies how amazingly content-free BuzzFeed is. And mind you, there are people working at BuzzFeed, writing articles and whatnot, and making a pretty penny off it. Even more people read the stuff.

Maybe we humans actually need grains of useful information diluted into gobs of fluff in order to process them well, just like vitamins have different bioavailability depending on the food they come in. I'm fine with that conclusion too. It's just amusing to be aware of it.

The next thing we're going to learn is that some remote-working middle manager at a large company, rather good at his job and even awarded annual bonuses, has been a GPT-3 (or maybe GPT-4) simulacrum all along, with his subordinates and bosses alike none the wiser. Which will say tons about the standards of management at that company.


>Don’t we all sometimes, when busy with our own thoughts and worries, spout out oddly appropriate responses to what our spouses are saying, without actually listening to and processing the conversation, maybe because we have heard it all so many times that we can readily predict what they are going to say?

No. Sometimes I can predict what a friend will say, but I won't actually say it. The more I think about it, the more I realise that AI, at the moment, is the unlistening (but hearing) spouse. And just as such a spouse can occasionally seem engaged with us, so too can the AI trick us by 'simulating' that behaviour.


> Don’t we all sometimes, when busy with our own thoughts and worries, spout out oddly appropriate responses to what our spouses are saying, without actually listening to and processing the conversation, maybe because we have heard it all so many times that we can readily predict what they are going to say?

Uh... no, not me. That sounds very strange to me, and sad. I always listen to everything she says, and expect the same back. Otherwise it wouldn't be a good relationship, I think. (Maybe I'm super-naïve! ...Or something.)



