I think interpreting a chatbot’s output as a company’s “direct advice” is an unsubstantiated leap. By analogy: if an unhandled error path in a program surfaced strings like “DIE DIE DIE” or “Abort/kill child” to the user (trying to keep these programming-related), no reasonable person would hold the company responsible for the user taking them as a command. Imagine a world where arbitrary text emitted by a system could result in prosecution. I just don’t believe a chat UI is a big enough bridge between those two cases to justify treating them the same way.
