ChatGPT bug exposes AI chat histories to other users (theverge.com)
4 points by vincent_s on March 22, 2023 | 1 comment

I was wondering about this kind of thing the other day when I saw the news about Microsoft integrating GPT into MS365. How do they handle permission granularity and make sure they aren't surfacing private information or conversations where they shouldn't?

For example, if I have a Teams meeting with 15 people, how does that conversation become part of the model? Does everyone end up with a slightly customized model that's trained on the data they have access to?
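My understanding (from what Microsoft has said publicly about Copilot) is that it's neither: tenant data isn't trained into the model at all. Instead, a shared model is grounded at query time with content fetched on the caller's behalf, trimmed to whatever they already have permission to read. A rough sketch of that pattern in Python, where every name is hypothetical rather than any real API:

    import dataclasses

    @dataclasses.dataclass
    class Document:
        doc_id: str
        text: str
        allowed_users: set[str]  # ACL: user ids permitted to read this doc

    def build_context(user_id: str, query: str, docs: list[Document]) -> str:
        """Gather grounding text for the model from docs the caller can read."""
        visible = [d for d in docs if user_id in d.allowed_users]
        # Stand-in for a real search/embedding relevance step.
        relevant = [d for d in visible if query.lower() in d.text.lower()]
        return "\n---\n".join(d.text for d in relevant)

If that's roughly how it works, your 15-person Teams meeting never changes anyone's model; the transcript is just another document whose ACL lists those 15 people.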

Regardless of how it works, I think some information is going to be unintentionally surfaced. In my experience, people don't understand these permission systems: they leave data far too accessible, and the only reason it rarely becomes a big deal is that most people aren't looking for data they aren't supposed to access.
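To make that concrete with the sketch above: the check only asks "can this user read it," never "should they," so one over-broad ACL is all it takes:

    # Hypothetical mishap: the ACL was set to a company-wide group by mistake.
    salary_doc = Document(
        doc_id="hr-42",
        text="2023 salary bands: ...",
        allowed_users={"alice", "bob", "mallory"},  # mallory shouldn't be here
    )
    print(build_context("mallory", "salary bands", [salary_doc]))  # the doc leaks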

What happens when GPT mixes with confusing permission systems and exposes data that people think is private?
