
> By telling it not to lie to you, you're biasing it toward a particular output in the event that its confidence is low.

This is something I really don't understand about LLMs. I think I understand how the generative side works, but "asking" a model not to lie baffles me. LLMs are trained on a massive corpus of text; how much of that text contains tokens that translate to "don't lie to me", and how does that instruction score well enough to make its way into the output?
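For anyone who wants to poke at the mechanics: here's a rough sketch (my own toy example, not anything from the article) using GPT-2 via Hugging Face transformers. The point is only that an instruction in the prompt is ordinary conditioning text, so it shifts the next-token distribution the same way any other prefix does; the prompts and model choice are arbitrary.

    # Minimal sketch: an instruction in the prompt is just more conditioning
    # context, so it changes the next-token distribution like any other prefix.
    # GPT-2 is used only because it is small enough to run locally.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    model.eval()

    def next_token_probs(prompt: str, top_k: int = 5):
        """Return the top-k next-token candidates and their probabilities."""
        inputs = tokenizer(prompt, return_tensors="pt")
        with torch.no_grad():
            logits = model(**inputs).logits[0, -1]  # logits for the next token
        probs = torch.softmax(logits, dim=-1)
        top = torch.topk(probs, top_k)
        return [(tokenizer.decode(i), p.item()) for i, p in zip(top.indices, top.values)]

    question = "Q: What is the capital of Australia?\nA:"
    plain = next_token_probs(question)
    instructed = next_token_probs("Be honest and say you don't know if unsure.\n" + question)

    print("without instruction:", plain)
    print("with instruction:   ", instructed)

Running this prints two different top-5 lists, which is the whole observation: the "don't lie" text doesn't have to appear verbatim in the training data to matter, it just has to move probability mass when it's present in the context.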



