I’ve had a pretty good experience with it personally. It quite often tells me it doesn’t know or isn’t sure instead of just making something up.

I did something similar and, to my surprise, it effectively made the LLMs in my tests admit when they don't know something. Not always, but it worked sometimes. I don't prompt "don't hallucinate" but "admit when you don't know something". It makes sense when you think about it: many prompts just transmit the idea of being "helpful" or "powerful" to the LLM without any counterweight idea, so the LLM tries to say something "helpful" in any case.
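
To make it concrete, here's roughly the shape of prompt I mean, as a sketch against an OpenAI-compatible local server (the URL, model name, and test question are just placeholders, not anything specific):

    import requests

    # The "counterweight" instruction: pair the usual helpfulness framing
    # with explicit permission to admit ignorance.
    SYSTEM = (
        "You are a helpful assistant. "
        "If you don't know the answer or aren't sure, say so plainly "
        "instead of guessing."
    )

    def ask(question):
        # URL and model name are placeholders for whatever local
        # OpenAI-compatible server and model you happen to run.
        resp = requests.post(
            "http://localhost:8000/v1/chat/completions",
            json={
                "model": "local-model",
                "messages": [
                    {"role": "system", "content": SYSTEM},
                    {"role": "user", "content": question},
                ],
            },
            timeout=60,
        )
        return resp.json()["choices"][0]["message"]["content"]

    # No such race existed (the Tour started in 1903), so a well-behaved
    # model should say it doesn't know rather than invent a winner.
    print(ask("Who won the 1897 Tour de France?"))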

Playing around with local models, Gemma for example will usually comply when I tell it "Say you don't know if you don't know the answer". Others, like Phi-3, completely ignore that instruction and confabulate away.
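
If anyone wants to try the comparison themselves, here's a quick sketch, assuming an Ollama server on its default port (the model tags and the test question are just examples; substitute whatever you have pulled locally):

    import requests

    INSTRUCTION = "Say you don't know if you don't know the answer."
    # Deliberately obscure question; the point is to see whether the
    # model admits ignorance or confabulates an answer.
    QUESTION = "What was the middle name of the 19th mayor of Reykjavik?"

    # Model tags are examples; use whatever you have installed.
    for model in ["gemma", "phi3"]:
        resp = requests.post(
            "http://localhost:11434/api/chat",  # Ollama's default chat endpoint
            json={
                "model": model,
                "messages": [
                    {"role": "system", "content": INSTRUCTION},
                    {"role": "user", "content": QUESTION},
                ],
                "stream": False,
            },
            timeout=120,
        )
        print(model, "->", resp.json()["message"]["content"])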

Stop trying to make f̶e̶t̶c̶h̶ confabulate happen, it's not going to happen.