If GPT-3 has a consistent position on anything, it's only because the corpus it was trained on was consistent about it. So, for example, it will reliably autocomplete Jabberwocky because there are a lot of copies of this poem in the corpus and they are all the same.

If there were two versions of the poem that started the same way, it would pick between the variations more or less at random, in proportion to how often each appears in the corpus. In other cases it might choose based on the style of the surrounding prose or other contextual cues.
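
To make that concrete, here's a toy Python sketch. This is not how GPT-3 actually works internally (it predicts one token at a time with a neural net, and the counts below are invented), but the effect on variant continuations is roughly this:

    import random

    # Invented counts for illustration: how often each continuation of the
    # opening line appeared in the training corpus.
    corpus_counts = {
        "Did gyre and gimble in the wabe;": 98,  # the canonical Jabberwocky line
        "Did gyre and gamble in the wabe;": 2,   # a hypothetical corrupted variant
    }

    def autocomplete(prompt="'Twas brillig, and the slithy toves"):
        # Sample a continuation in proportion to how often it was seen.
        continuations = list(corpus_counts)
        weights = list(corpus_counts.values())
        return random.choices(continuations, weights=weights, k=1)[0]

    print(autocomplete())  # almost always the canonical line, occasionally the variant

If the corpus is unanimous, the output is effectively deterministic; if it isn't, you get whichever variant the dice land on.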

GPT-3 can get some trivia right, but only because the editors of Wikipedia already came to a consensus about it and Wikipedia was weighted more heavily in the training data. It doesn't have a way of coming to a consistent conclusion on its own.

Without consistency, how can it be said to know or believe anything? You might as well ask what a library believes. Sure, the authors may have believed things, but it depends which book you happen to pick up.




I agree with you insofar as I would make a strong distinction between what a model like GPT-3 does and whatever it is that humans do.

But I do think you're missing the point just a bit. When we speak and think, we use all kinds of metaphors that express judgements about the world, usually without realizing it. In other words, the way we use language encodes concepts in a deep way.

To borrow an example from George Lakoff: in English, we use war metaphors to talk about arguments. Of arguments and of wars alike you can say things like "he's marshalling his forces," "they're ceding their territory," or "she's girding her defenses". In fact, almost anything you can say about a war you can also say about an argument. In American politics, with regard to partisan squabbling and the filibuster, we talk about "the nuclear option". The fact that these metaphors make sense to us indicates a judgement, something like "arguments are like wars". That judgement shows up in billions of lines of English scraped from the internet and can be fed into a model, allowing GPT-3 to "make that connection" via purely statistical methods.
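
As a rough sketch of what "purely statistical methods" means here (the counts below are invented, and real models use learned embeddings over huge vocabularies rather than raw co-occurrence vectors): if "war" and "argument" keep showing up with the same verbs, a model that only counts contexts will place them close together, with no understanding of either word.

    import numpy as np

    # Invented co-occurrence counts between three nouns and six verbs.
    verbs = ["marshal", "cede", "gird", "defend", "bake", "plant"]
    contexts = {
        "war":      np.array([9.0, 8, 7, 9, 0, 1]),
        "argument": np.array([7.0, 6, 5, 8, 0, 0]),
        "garden":   np.array([0.0, 0, 0, 1, 2, 9]),
    }

    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    print(cosine(contexts["war"], contexts["argument"]))  # high: shared contexts
    print(cosine(contexts["war"], contexts["garden"]))    # low: different contexts

The "arguments are like wars" judgement falls out of the geometry, not out of any reasoning about arguments or wars.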

Yes, this is a bit like asking "what a library believes". But a lot of these metaphors show up in our languages and, in a way, they express judgements, which is something akin to a belief. Does that mean a library has beliefs? Is this all knowledge is? I wouldn't go that far. But the argument is an interesting one and worth raising.


Well, it's certainly interesting that it can learn metaphors, and this can be useful for creative purposes, so it's fun to play with.

But a sophisticated understanding of metaphors could be used to tell the truth or to lie. In the case of GPT-3, it doesn't know the difference. Telling the truth and lying come out of the same autocompletion process.

If you consider the use of a metaphor to be showing judgement, then the judgement amounts to no more than a sense that a particular metaphor is appropriate in a particular context.



