
You're right, but it should be noted that this is a design choice, not an inherent property of the technology.

Currently, the design goal isn't to make LLMs feel "lifelike." It is likely that lifelike LLMs will be released in the future, which could mean sarcastic or dismissive replies to poorly phrased questions or requests that are missing information.




Backwards. LLMs naturally adopt the lifelike patterns of their training data, and are at the very least easy to prompt into behaving that way.

We tune them to specifically not do that.


I expect public school systems to place limits on LLMs. "Don't ask about that." "You can't talk about that topic at school." And so on.



