Very true. If you are curious, I have an entire collection of such prompt-injection-to-data-exfiltration issues compiled over the last year. Bing Chat, Claude, GCP, Azure - they all had this problem upon release, and they all fixed it.
Most notable, though, is that ChatGPT still has not fixed it to this day!
Here is a list of posts showcasing the various mitigations and fixes companies implemented. The best approach is to not render hyperlinks/images at all, or to set a Content-Security-Policy so the page cannot connect to arbitrary domains.
https://embracethered.com/blog/tags/ai-injections/
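To give a rough idea of the CSP approach: here is a minimal sketch, assuming a hypothetical Flask-served chat UI (the app and header values are my own illustration, not any vendor's actual fix). Restricting img-src and connect-src to the app's own origin means a prompt-injected markdown image pointing at an attacker's domain simply won't load, so it can't carry data out in its URL.

```python
from flask import Flask

app = Flask(__name__)

@app.after_request
def set_csp(response):
    # Only allow images and outbound requests to our own origin,
    # so rendered markdown can't beacon conversation data to an attacker's server.
    response.headers["Content-Security-Policy"] = (
        "default-src 'self'; img-src 'self'; connect-src 'self'"
    )
    return response
```

If the UI legitimately needs images from a CDN, you would add that specific host to img-src rather than allowing arbitrary domains.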