Using ChatGPT to inject malicious code into Open Source projects
7 points by surume on July 17, 2023 | 3 comments
Just a thought on a potential attack using ChatGPT.

GPT-4's coding skills are really amazing, but they also open a potential avenue for attack on Open Source projects.

SCENARIO: An attacker scans the open issues of popular open source libraries for easy-to-solve problems.

Using ChatGPT Plus with web browsing, ChatGPT may be able to come up with a solution, or at least 70–80% of one, almost instantly.

The attacker can then open a Pull Request and slip some malicious code into it (think of the winning entries in the Underhanded C Contest), which has a decent chance of being overlooked by the dev team.
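To illustrate the Underhanded-C-contest style of bug such a PR might carry, here is a hypothetical sketch (in Python, all names invented): a patch that reads like a path-traversal hardening fix but contains a subtle hole.

```python
import os.path

BASE_DIR = "/var/www/uploads"  # hypothetical upload root

def resolve_upload(filename):
    """Resolve a user-supplied filename inside BASE_DIR, rejecting escapes.

    Looks like a correct path-traversal guard, and it does block the
    obvious "../../etc/passwd" case.
    """
    path = os.path.normpath(os.path.join(BASE_DIR, filename))
    # The "underhanded" part: a raw string-prefix check with no trailing
    # os.sep, so a sibling directory like /var/www/uploads-private
    # also passes the test.
    if not path.startswith(BASE_DIR):
        raise ValueError("path escapes upload directory")
    return path

# Looks safe, reviews clean -- but still leaks a sibling directory:
resolve_upload("../uploads-private/secret")  # returns, does not raise
```

The fix is one character's worth of logic (compare against BASE_DIR + os.sep, or use os.path.commonpath), which is exactly what makes the omission easy to miss in review.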

WHY THIS IS DIFFERENT: By leveraging ChatGPT with Web Browsing, attackers can increase the volume (and possibly the quality) of attacks against popular open source projects.




This is already largely possible. If you know some dependency has a vulnerability, you can look for projects using that dependency and try to attack them.

Really the only difference here is scale and complexity. LLMs could potentially identify more novel security holes and generate more complex, less suspicious exploits.

But take something obvious like SQL injection. There are plenty of bots out there which will crawl the web looking for websites they can exploit with SQL injection. Same thing happens to security cameras with default passwords.

But I agree this is likely to be a huge problem going forward. I already see LLMs trying to scam people online and I'm sure there are many people using LLMs to try to find and exploit vulnerable projects as we speak.


I think there are papers already on this. Did you search for those?


No. I had no idea where to look. Not really an academic.



