
  This demonstrates that in the absence of useful context GPT-3 will answer the question entirely by itself—which may or may not be what you want from this system.
You can instruct it not to do that. This is explained in OpenAI's post about the same technique[0]:

  Answer the question as truthfully as possible, and if you're unsure of the answer, say "Sorry, I don't know"
[0] https://github.com/openai/openai-cookbook/blob/main/examples... (which is now linked in OP)
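For concreteness, here's a minimal sketch of what that cookbook-style prompt might look like, assuming the pre-1.0 openai-python Completion API; the build_prompt helper, the model choice, and the example strings are illustrative, not taken from the linked notebook:

  import openai  # pre-1.0 openai-python; assumes openai.api_key is set elsewhere

  def build_prompt(context: str, question: str) -> str:
      # Illustrative template in the spirit of the cookbook example: the
      # instruction tells the model to refuse rather than guess.
      return (
          "Answer the question as truthfully as possible using the provided "
          "context, and if you're unsure of the answer, say \"Sorry, I don't "
          "know\".\n\n"
          f"Context:\n{context}\n\n"
          f"Q: {question}\n"
          "A:"
      )

  context = "Paragraphs retrieved by your search step go here."
  question = "Who won the 2020 Summer Olympics men's high jump?"

  response = openai.Completion.create(
      model="text-davinci-003",  # GPT-3-era completion model; swap in your own
      prompt=build_prompt(context, question),
      temperature=0,             # deterministic output suits factual QA
      max_tokens=256,
  )
  print(response["choices"][0]["text"].strip())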


I'm not sure if that would work in this case, because it IS sure of the answer; it's just that the answer isn't included in the context.

I could try "Answer the question only if you can do so using the provided context" though; that could be interesting.
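If anyone wants to experiment, a sketch of that variant, reusing the same illustrative template as above with only the instruction swapped; the exact wording is the suggestion from this comment plus a hypothetical fallback clause:

  def build_restricted_prompt(context: str, question: str) -> str:
      # Variant instruction: tie the answer to the supplied context rather
      # than to the model's own confidence that it knows the answer.
      return (
          "Answer the question only if you can do so using the provided "
          "context; otherwise say \"Sorry, I don't know\".\n\n"
          f"Context:\n{context}\n\n"
          f"Q: {question}\n"
          "A:"
      )

  # Usage is identical to the sketch above, e.g.:
  # openai.Completion.create(model="text-davinci-003",
  #                          prompt=build_restricted_prompt(ctx, q),
  #                          temperature=0, max_tokens=256)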



