Oh yeah. I read “few-shot” as meaning trying a few times to get an appropriate output; that’s how the author uses the word “shot” at the beginning of the article. “Priming” is a specific term for giving examples in the context window, but yeah, that seems to be what the author is describing. Still, you can go a long way with priming. I wouldn’t even think of fine-tuning before trying priming for a good while; it might still be quicker and a lot cheaper.
Ha, good point, I did say "let's have another shot" when I just meant another try at generating! FWIW "few-shot prompting" is how most people refer to this technique, I think (e.g. see https://www.promptingguide.ai/techniques/fewshot); I haven't heard "priming" before, though it does convey the right thing.
And the reason we don't really do it is context length. Our contexts are long and complex and there are so many subtleties that I'm worried about either saturating the context window or just not covering enough ground to matter.
Interesting, I hadn’t heard of few-shot prompting. There’s a ton written specifically on “priming” as well. People use different terms, I suppose.
The point about context window length makes sense; it can be limiting. But for small inputs and outputs it’s great, and remarkably effective, though with diminishing returns as you add examples. That’s why I gave 5 shots as a concrete number: you probably need more than 1 or 2, but for a lot of applications fewer than 20, at least for basic tasks like extracting words from a document or producing summaries.
It depends on the complexity of the task and how much you’re worried about overfitting to your data set. But if you’re not that worried, the task isn’t complex, and the inputs and outputs are small, then it works very well with shots alone.
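Concretely, a 5-shot setup can be as small as this (a sketch only; the example pairs, the extraction task, and the model name are made up, and I’m assuming the openai Python SDK, but the same pattern works with any chat-style API):

```python
# 5-shot keyword extraction: the labeled examples ride along in every request
# as interleaved user/assistant turns before the real input.
from openai import OpenAI

# Hypothetical labeled examples: (input document, expected keywords).
EXAMPLES = [
    ("The quarterly report shows revenue grew 12% year over year.",
     "quarterly report, revenue, growth"),
    ("Our new caching layer cut median API latency from 80ms to 15ms.",
     "caching layer, API latency"),
    ("The outage was traced to an expired TLS certificate on the load balancer.",
     "outage, TLS certificate, load balancer"),
    ("Hiring slowed in Q3 while the team focused on onboarding.",
     "hiring, Q3, onboarding"),
    ("The migration to Postgres 16 completed with no data loss.",
     "migration, Postgres 16"),
]

def build_messages(document: str) -> list[dict]:
    """Build a chat request with the 5 shots in front of the real input."""
    messages = [{
        "role": "system",
        "content": "Extract the key terms from the document as a comma-separated list.",
    }]
    for doc, keywords in EXAMPLES:
        messages.append({"role": "user", "content": doc})
        messages.append({"role": "assistant", "content": keywords})
    messages.append({"role": "user", "content": document})
    return messages

client = OpenAI()  # reads OPENAI_API_KEY from the environment
response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative; any chat model works
    messages=build_messages(
        "Attackers exploited a SQL injection bug in the login form."
    ),
)
print(response.choices[0].message.content)
```

Interleaving the shots as user/assistant turns, rather than cramming them all into one blob of text, tends to be the cleanest way to do this with chat models.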
And compared to fine-tuning, it’s basically free.
It might be worth expanding on this in this article or a separate one. It’s a good way to push unreliable LLMs to a workable level of reliability, although a lot has already been written on few-shot prompting/priming.
Yes, "X-shot prompting" or "X-shot learning" is how the pioneering LLM researchers referred to putting examples in the prompt (the GPT-3 paper was literally titled "Language Models are Few-Shot Learners"), and the terminology stuck around.