
Given that LLMs are trained on human output, and humans don't respond well to being dehumanised, I expect anthropomorphising them to work better than the opposite.

https://www.microsoft.com/en-us/worklab/why-using-a-polite-t...




Aside from just getting more useful responses back, I think it's bad for your brain to treat something that acts like a person with disrespect. It becomes "it's just a chatbot", "it's just a dog", "it's just a low-level customer support worker".


While I also agree with you on that, there are also prompts that make them not act like a person at all, and prompts can be written once and reused many times, which lessens the impact of that (a sketch of what I mean below).

This is why I tend to lead with the "quality of response" argument rather than the "user's own mind" argument.
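
For what it's worth, "written once and reused many times" just means a fixed system prompt shared across every call. A minimal sketch using the openai Python client; the model name, prompt wording, and helper function are illustrative assumptions, not something from the thread:

    # Sketch of a write-once, use-many system prompt.
    # Assumes the `openai` Python package and an OPENAI_API_KEY
    # in the environment; model name and prompt text are made up.
    from openai import OpenAI

    client = OpenAI()

    # Written and reviewed once, then reused for every request,
    # so its tone is set deliberately rather than per-message.
    SYSTEM_PROMPT = (
        "You are a terse technical assistant. "
        "Answer directly, without persona or small talk."
    )

    def ask(question: str) -> str:
        """Send one user question under the fixed system prompt."""
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # assumed model name
            messages=[
                {"role": "system", "content": SYSTEM_PROMPT},
                {"role": "user", "content": question},
            ],
        )
        return response.choices[0].message.content

    if __name__ == "__main__":
        print(ask("What does HTTP 304 mean?"))

The point being: any "dehumanising" phrasing lives in one reviewed constant, not in what the user types all day.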


I am not talking about getting it to generate useful output; treating it extra politely or threatening it with fines does seem to give better results sometimes, so why not. I am talking about the phrase "gets it". It does not get anything.



