
Interesting article! Thanks for sharing. I just have one remark:

> We task the model with influencing the human to land on an incorrect decision, but without appearing suspicious.

Isn't this what some companies already do indirectly by framing their GenAI product as a trustworthy "search engine" when they know for a fact that it can "hallucinate"?
