Some very nice and thoughtful work done here. One of the security risks I see is passing potentially sensitive corporate information into ChatGPT. Is there a way to obfuscate that sensitive data in an automated fashion?
Thanks Ken! We’re just adding support for MD5 and SHA-256 hashing of personally identifiable information, but that’s only one approach. The next step is definitely tokenization, which obfuscates the data while allowing reversal by those with the right permissions. That would let you abide by the growing number of regional privacy restrictions.
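To make the distinction concrete, here's a minimal sketch of the two approaches described above: one-way hashing (irreversible) versus a token vault (reversible for permitted callers). All names here (`hash_pii`, `TokenVault`) are illustrative, not the actual product API:

```python
import hashlib
import secrets

def hash_pii(value: str, salt: str = "") -> str:
    """One-way obfuscation: SHA-256 digest of the PII value (not reversible)."""
    return hashlib.sha256((salt + value).encode()).hexdigest()

class TokenVault:
    """Reversible tokenization: PII is swapped for a random token, and the
    mapping is kept in a vault so authorized callers can reverse it later."""

    def __init__(self):
        self._token_to_value = {}
        self._value_to_token = {}

    def tokenize(self, value: str) -> str:
        # Reuse the existing token so the same PII always maps consistently,
        # preserving correlations (e.g. "same user") in the obfuscated data.
        if value in self._value_to_token:
            return self._value_to_token[value]
        token = "tok_" + secrets.token_hex(8)
        self._token_to_value[token] = value
        self._value_to_token[value] = token
        return token

    def detokenize(self, token: str) -> str:
        # Gate this behind a permission check in a real deployment.
        return self._token_to_value[token]
```

Because the same input always yields the same token (or hash), downstream analysis can still join and correlate records without ever seeing the raw values.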
The obfuscation of PII is pretty slick and addresses one of the biggest worries customers have about shipping data to OpenAI. Can ChatGPT still make meaningful connections about what's going on in your security data if it can't see the sensitive fields?