
Yes, those concerns exist, but rules addressing them are practically impossible to enforce.

At my enterprise we have a three-step solution, two steps of which don't work.

1. Written policy concerning LLM output and its risks, disallowing its use in any kind of official documentation or decision making. (This doesn't work because no one wants to use their own brain for tedious paperwork.)

2. Block access to public LLM tools via technical means on company-owned end-user devices. (This doesn't work because people will just open ChatGPT on their home PC or phone.) There's a sketch of the kind of blocklist check involved below, after #3.

3. Write and provide our own GPT-3.5 frontend, so that when people ignore rules #1 and #2 we at least have logs, and we know we're not feeding our proprietary info straight to OpenAI. (Roughly sketched right below.)
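
For #3, the build is basically a thin logging proxy in front of the model endpoint. A minimal sketch in Python, assuming Flask and an OpenAI-compatible internal endpoint; the URL, log path, and user header are placeholders, not our real config:

    import json
    import time

    import requests
    from flask import Flask, request, jsonify

    app = Flask(__name__)

    # Placeholders: point these at your contracted or self-hosted endpoint.
    UPSTREAM = "https://llm.internal.example/v1/chat/completions"
    LOG_PATH = "/var/log/llm-frontend/requests.jsonl"

    @app.post("/chat")
    def chat():
        payload = request.get_json(force=True)
        # Audit log: who asked what, and when.
        with open(LOG_PATH, "a") as log:
            log.write(json.dumps({
                "ts": time.time(),
                "user": request.headers.get("X-Remote-User", "unknown"),
                "messages": payload.get("messages", []),
            }) + "\n")
        # Forward the request unchanged and relay the response.
        upstream = requests.post(UPSTREAM, json=payload, timeout=60)
        return jsonify(upstream.json()), upstream.status_code

The point isn't the frontend itself, it's that every prompt lands in a log we control before it ever reaches a model.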


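As for the "technical means" in #2, it's essentially a host blocklist on the web proxy / DNS filter. A sketch of the matching logic; the domain list here is illustrative, not a complete inventory:

    # Match the host itself or any subdomain of a blocked domain.
    BLOCKED_DOMAINS = {
        "chat.openai.com",
        "chatgpt.com",
        "gemini.google.com",
        "claude.ai",
    }

    def is_blocked(host: str) -> bool:
        host = host.lower().rstrip(".")
        return any(host == d or host.endswith("." + d)
                   for d in BLOCKED_DOMAINS)

    assert is_blocked("chat.openai.com")
    assert is_blocked("api.chatgpt.com")
    assert not is_blocked("openai.com.evil.test")

Which, again, does nothing about personal devices on home networks.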

