
But that's literally what it is. The only reason you can have dialog-like interactions with language models is that they have been trained with special "stop tokens" surrounding dialog, so the model can (generally) autocomplete something that looks like a response, and then the inference engine can stop producing text when the model produces the stop token.
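For illustration, here's a minimal sketch of that inference loop, assuming Hugging Face transformers and a GPT-2 checkpoint (the model choice and the prompt are placeholders, not what any particular chat product uses): sample one token at a time and halt as soon as the model emits its end-of-sequence token.

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    # Illustrative dialog-shaped prompt; real chat models use their
    # own trained template and dedicated stop tokens.
    prompt = "You: What is a stop token?\nAssistant:"
    ids = tokenizer(prompt, return_tensors="pt").input_ids

    with torch.no_grad():
        for _ in range(100):                    # cap on generated tokens
            logits = model(ids).logits[0, -1]   # next-token distribution
            next_id = torch.argmax(logits)      # greedy decoding for simplicity
            if next_id.item() == tokenizer.eos_token_id:
                break                           # model produced the stop token
            ids = torch.cat([ids, next_id.view(1, 1)], dim=1)

    print(tokenizer.decode(ids[0]))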



Technically it is, of course, but the experience is completely different, and I get the feeling people call it that to downplay it.


I think understanding that helps me get more out of them. I feel like I am better able to provide information to the model with the expectation that it will need that information to autocomplete the dialog that I want.


Or when it produces “\nYou:”. But that doesn’t matter much, since the value is in what happens in a dialog.
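Hosted completion APIs generalize this with user-supplied stop strings. A rough sketch of what the serving side might do (the helper function and the "\nYou:" stop string are illustrative, not any specific API's behavior): truncate the completion at the earliest stop match.

    def apply_stops(text: str, stops: list[str]) -> str:
        # Truncate the generated text at the first occurrence of any
        # stop string, keeping only the text before it.
        cut = len(text)
        for s in stops:
            i = text.find(s)
            if i != -1:
                cut = min(cut, i)
        return text[:cut]

    completion = "Sure, here is an answer.\nYou: thanks!\nAssistant: ..."
    print(apply_stops(completion, ["\nYou:"]))  # -> "Sure, here is an answer."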






