Hacker Newsnew | past | comments | ask | show | jobs | submitlogin

During an Indirect Prompt Injection Attack, an adversary can also force the creation of issues in private repos and things along those lines.

I wrote about some of these problem in the past e.g. see: https://embracethered.com/blog/posts/2023/chatgpt-plugin-vul... on how an attacker might steal your code.

Some other related posts about ChatGPT plugin vulnerabilities and exploits:

https://embracethered.com/blog/posts/2023/chatgpt-cross-plug...

https://embracethered.com/blog/posts/2023/chatgpt-webpilot-d...

Its not very transparent when and why a certain plugin gets invoked and what data is sent to it. One can only inspect afterwards basically.



Anything using AI should be considered a massive security risk.


Anything using a blackbox (AI, humans, unknown codebase) is a security risk.


Humans are not an avoidable security risk


For now.


With their latest functions update to the API it seems the logic is simply passing methods to call the API via JSON to a trained model and it predicts which function is best to call for the info, then OpenAI does the call for ChatGPT, then feeds the resulting Json back to ChatGPT which uses it to give an answer.




Consider applying for YC's Fall 2025 batch! Applications are open till Aug 4

Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: