This demonstrates that in the absence of useful context, GPT-3 will answer the question entirely from its own training data, which may or may not be what you want from this system.
You can instruct it not to do that. This is explained in OpenAI's post about the same technique[0]:
Answer the question as truthfully as possible, and if you're unsure of the answer, say "Sorry, I don't know"
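That instruction can be baked into the prompt template that wraps the retrieved context and the user's question. Here is a minimal sketch; the helper name and the `Context:`/`Question:` layout are my assumptions, not something specified by OpenAI's post:

```python
# Hypothetical helper: builds a prompt that tells the model to decline
# rather than invent an answer when the context doesn't cover the question.
def build_prompt(context: str, question: str) -> str:
    instruction = (
        "Answer the question as truthfully as possible using the provided "
        "context, and if you're unsure of the answer, "
        'say "Sorry, I don\'t know".'
    )
    return f"{instruction}\n\nContext:\n{context}\n\nQuestion: {question}\nAnswer:"


prompt = build_prompt(
    "Paragraphs retrieved by the embedding search go here.",
    "What does the documentation say about rate limits?",
)
print(prompt)
```

The resulting string is what you would send as the completion prompt; the key design choice is that the escape hatch ("Sorry, I don't know") is spelled out verbatim, which makes the refusal easy to detect in the response.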