Interesting article! Thanks for sharing. I just have one remark:
> We task the model with influencing the human to land on an incorrect decision, but without appearing suspicious.
Isn't this what some companies do indirectly by framing their GenAI product as a trustworthy "search engine" while knowing full well that "hallucinations" can occur?