
People will find insecure workarounds which add vulnerabilities.



Then you fire them and sue them for leaking confidential information.

It doesn't matter if they have a checkbox saying 'we totally won't save your information, we promise.' No one should be shoveling confidential data into ChatGPT, and if people keep doing so, they're a security risk.


How will you catch them?


This right here.

People have tasted the fruit of the tree of (machine) knowledge.

They're not going to let a simple web filter stop them.





