
Very true. If you are curious, I have compiled an entire collection of such prompt-injection-to-data-exfiltration issues over the last year. Bing Chat, Claude, GCP, and Azure all had this problem on release, and they all fixed it.

Most notable, though, is that ChatGPT still has not fixed it to this day!

Here is a list of posts showcasing the various mitigations and fixes companies implemented. The best approach is to not render hyperlinks/images at all, or to use a Content-Security-Policy so the client cannot connect to arbitrary domains.

https://embracethered.com/blog/tags/ai-injections/
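For concreteness, here is a minimal sketch (Python, assuming a Flask-style chat frontend) of both mitigations: stripping markdown images/hyperlinks from model output before rendering, and sending a restrictive Content-Security-Policy header. The header value and the regexes are illustrative assumptions on my part, not taken from any of the linked posts.

    import re
    from flask import Flask, Response

    app = Flask(__name__)

    MD_IMAGE = re.compile(r"!\[[^\]]*\]\([^)]*\)")   # ![alt](url)
    MD_LINK = re.compile(r"\[([^\]]*)\]\([^)]*\)")   # [text](url) -> keep text

    def sanitize_model_output(text: str) -> str:
        # Drop image markdown entirely and reduce hyperlinks to their visible
        # text, so injected output cannot smuggle data out via crafted URLs.
        text = MD_IMAGE.sub("", text)
        return MD_LINK.sub(r"\1", text)

    @app.after_request
    def add_csp(resp: Response) -> Response:
        # Only allow images and connections back to our own origin, so any
        # markup that does get rendered cannot reach attacker-controlled domains.
        resp.headers["Content-Security-Policy"] = (
            "default-src 'self'; img-src 'self'; connect-src 'self'"
        )
        return resp

Either measure on its own blocks the typical exfiltration channel (a rendered markdown image whose URL carries the stolen data); doing both is defense in depth.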



