Show HN: Secure ChatGPT – a safer way to interact with generative AI (github.com/pangeacyber)
35 points by oliverf on June 1, 2023 | 7 comments
Hi HN,

I’m the founder of Pangea. We’ve built a developer platform where you can easily add security to your code through a simple set of APIs - features like authentication, secrets management, audit logging, PII redaction, blocking embargoed countries, known threat actor intelligence, etc.

With the ChatGPT and LLM explosion, we thought about ways to reduce the risk of both the inputs to and the outputs from these services. Our Next.js sample app adds a security layer on top of ChatGPT with various security services that you can implement quickly.

It’s basically a front end to the OpenAI API that you can deploy, and it does a few security-related things (a rough sketch of the flow follows the list):

- AuthN - it provides authentication to track who inputs what and when

- Redact - provides PII redaction with detection of over 40 different types of sensitive information

- Secure Audit Log - logs the user, cleansed prompt, and model to a secure tamper-proof audit trail

- Sends the cleansed prompt to the OpenAI API and receives the response

- Domain Intel - Performs a Domain Reputation lookup on any domain names in the response

- URL Intel - Performs a URL Reputation lookup on any URLs in the response

- Defangs any malicious domains or URLs found in the response

- On closing your session, the history of prompts disappears
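
Roughly, the prompt path looks like the sketch below. This is a simplified illustration rather than the exact code in the repo: redactPII() just stands in for the Redact service call, and the OpenAI call uses the standard chat completions endpoint.

    // Simplified sketch of the prompt path: redact first, then forward to OpenAI.
    // redactPII() is a placeholder for the Redact service call.
    async function redactPII(text: string): Promise<string> {
      // e.g. "My SSN is 123-45-6789" becomes "My SSN is <SSN>"
      return text.replace(/\b\d{3}-\d{2}-\d{4}\b/g, "<SSN>");
    }

    async function securePrompt(userPrompt: string): Promise<string> {
      const cleansed = await redactPII(userPrompt);

      const res = await fetch("https://api.openai.com/v1/chat/completions", {
        method: "POST",
        headers: {
          "Content-Type": "application/json",
          Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
        },
        body: JSON.stringify({
          model: "gpt-3.5-turbo",
          messages: [{ role: "user", content: cleansed }],
        }),
      });

      const data = await res.json();
      return data.choices[0].message.content;
    }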

Storing what users have prompted allows you to better train your model, feed it more relevant information, and keep an audit log of the history. The Secure Audit Log service can store the user inputs in a secure log so that you can track who did what and when.
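
As a rough illustration (the field names here are made up for the example, not the exact Secure Audit Log schema), each prompt could be recorded like this:

    // Illustrative audit record; field names are assumptions, not the exact
    // Secure Audit Log schema.
    interface PromptAuditEvent {
      actor: string;     // authenticated user id from AuthN
      action: "prompt";
      message: string;   // the cleansed (redacted) prompt
      model: string;     // e.g. "gpt-3.5-turbo"
      timestamp: string; // ISO 8601
    }

    // logAuditEvent() stands in for the Secure Audit Log API call.
    async function logAuditEvent(event: PromptAuditEvent): Promise<void> {
      console.log("audit:", JSON.stringify(event)); // placeholder for the real service
    }

    await logAuditEvent({
      actor: "user@example.com",
      action: "prompt",
      message: "My SSN is <SSN>, can you help me file taxes?",
      model: "gpt-3.5-turbo",
      timestamp: new Date().toISOString(),
    });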

The final layer of defense is the Domain Intel and URL Intel services, which detect and neutralize malicious URLs and domain names in the OpenAI API's response.
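
In sketch form, the response-side check looks something like this. checkDomainReputation() is a placeholder for the Domain Intel / URL Intel lookup; the defang step is the usual hxxp / [.] substitution.

    // Sketch of the response-side check. checkDomainReputation() is a placeholder
    // for the Domain Intel / URL Intel lookup; only the defanging is shown concretely.
    async function checkDomainReputation(domain: string): Promise<"benign" | "malicious"> {
      return "malicious"; // placeholder: the real code calls the reputation API
    }

    // Standard defanging: make a URL non-clickable without losing information.
    function defang(url: string): string {
      return url.replace(/^http/i, "hxxp").replace(/\./g, "[.]");
    }

    async function sanitizeResponse(text: string): Promise<string> {
      const urls = text.match(/https?:\/\/[^\s)]+/g) ?? [];
      let out = text;
      for (const url of urls) {
        const domain = new URL(url).hostname;
        if ((await checkDomainReputation(domain)) === "malicious") {
          out = out.replace(url, defang(url));
        }
      }
      return out;
    }

    // "Visit http://evil.example.com/x" -> "Visit hxxp://evil[.]example[.]com/x"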

The proof-of-concept app is open-source on GitHub. Visit our repo https://github.com/pangeacyber/secure-chatgpt and deploy the app with a simple NPX command.

We’d love your feedback.

-Oliver




Loving this, Oliver, especially the obfuscation features! Also ran into this recently for non-dev citizens: https://www.private-ai.com/private-chatgpt/


Some very nice and thoughtful work done here. One of the security risks I see is passing corporate information into ChatGPT that may be sensitive. Is there a way to obfuscate that in an automated fashion?


Thanks Ken! We’re just adding support for hashing personally identifiable information with MD5 and SHA-256, but that’s only one way… the next step is definitely tokenization, which allows obfuscation and reversal for those with the right permissions.. this would allow you to abide by the growing number of regional privacy regulations.
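
Roughly, the difference looks like this (illustrative Node sketch, not our actual API):

    import { createHash, randomUUID } from "node:crypto";

    // One-way: a hashed value can be correlated across logs but never recovered.
    function hashPII(value: string): string {
      return createHash("sha256").update(value).digest("hex");
    }

    // Reversible: a token stands in for the value; only callers with access to
    // the vault (i.e. the right permissions) can map it back.
    const tokenVault = new Map<string, string>();

    function tokenizePII(value: string): string {
      const token = `tok_${randomUUID()}`;
      tokenVault.set(token, value);
      return token;
    }

    function detokenizePII(token: string): string | undefined {
      return tokenVault.get(token); // permission checks would gate this in practice
    }

    hashPII("jane@example.com");               // irreversible digest
    const t = tokenizePII("jane@example.com"); // "tok_..."
    detokenizePII(t);                          // "jane@example.com" for authorized callers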


The obfuscation of PII is pretty slick and addresses one of the biggest worries customers have about shipping data to OpenAI. Is ChatGPT still able to effectively make connections about what's going on with your security data if it can't see the sensitive data?


This is great! Can these APIs also protect users from other LLMs?


That's the scary part, and it's definitely coming.. if it isn't here already. If an LLM is acting as a threat actor and using traditional threat vectors like:

- traffic coming from a bot or originating from a botnet

- your network connecting to a known bot C&C server

- injecting malicious URLs

- directing you to malicious hosted domains

- sending you malicious file objects

- your password being exposed in a large-scale data breach

Then yes.. although technically these APIs are meant to be embedded into a cloud app, which then ensures the user is protected. There's a lot of work being done right now to use LLMs for defense, to simulate what a SOC analyst would do triaging a security event.. but there's likely an equal amount of work being done to put LLMs on the offense. You could automate Nigerian 419 scams, spear phishing, all kinds of wire fraud.. and if you hooked an LLM up to penetration testing tools it could literally launch real attacks.. it's a new world..


If instead you meant whether our code can be adapted to other LLMs - yes, for sure.. the API calls would just need to be adjusted for that LLM.



