In practice, I've found that the risk of LLMs hallucinating against well-chosen context is low enough that I rarely worry about it.