For each task, the authors feed the model a context window of text containing anywhere from zero to a few sample queries with their responses, followed by a final query without a response. The model then generates the response for that last query. Incidentally, this is analogous to what you would do with a person: show them zero to a few sample questions and answers, then ask a new question.
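To make the mechanics concrete, here is a minimal sketch of how such a context window might be assembled as a single prompt string. The task, the example pairs, and the `Q:`/`A:` formatting are illustrative assumptions, not the paper's actual prompts.

```python
def build_prompt(examples, query):
    """Concatenate solved query/response pairs, then append the unanswered query."""
    parts = []
    for q, a in examples:                 # zero or more in-context examples
        parts.append(f"Q: {q}\nA: {a}")
    parts.append(f"Q: {query}\nA:")       # final query left for the model to complete
    return "\n\n".join(parts)

# Few-shot: two worked examples precede the real question (hypothetical task).
few_shot = build_prompt(
    examples=[("Translate 'chat' to English.", "cat"),
              ("Translate 'chien' to English.", "dog")],
    query="Translate 'oiseau' to English.",
)

# Zero-shot: no examples at all, just the query itself.
zero_shot = build_prompt(examples=[], query="Translate 'oiseau' to English.")

print(few_shot)
```

The only difference between the zero-shot, one-shot, and few-shot settings is how many solved pairs are placed in the context before the final query; the model's weights are never updated.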