Yeah, best to lean on legal, who will ask "who owns intellectual property written by OpenAI?", look at the USPTO decision and the Supreme Court's refusal to grant cert[1], and decide "not us."
Can you elaborate on why that would be the case? I ask because we've had some discussion about drafting an 'AI' policy with our CTO, and though we haven't said anything officially, we are concerned about exfiltration of secrets and the accuracy of information.
I do wonder if there will one day be communications formatters like what Prettier does for code, where no matter the style of writing going in, it will come out consistent. But until there are 'communications rewriters' that match a predefined company tone and style guide, it seems like anyone's use of a tool like this would be ad hoc.
What do you think the most sensible policy is to have right now, and why?
Not the OP, but the way I look at it, exfiltration of secrets can happen any number of other ways anyway, and you would need to lock down most of your SaaS software to prevent it. I treat GPT the same way I would treat any other Google search - don't share your secrets, and be careful when copying stack traces for weird error debugging, etc.
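To make "careful with copying stack traces" concrete, here's a rough sketch of a pre-paste scrubber. The patterns and names are illustrative, not a vetted deny-list - a real policy would need patterns tuned to your own secrets:

```python
import re

# Illustrative patterns for secret-looking values that often leak
# via stack traces and error logs. Not exhaustive.
SECRET_PATTERNS = [
    # key=value or key: value pairs with secret-ish names
    (re.compile(r"(?i)(api[_-]?key|token|password|secret)\s*[=:]\s*\S+"),
     r"\1=<REDACTED>"),
    # AWS access key IDs
    (re.compile(r"AKIA[0-9A-Z]{16}"), "<REDACTED_AWS_KEY>"),
    # JWTs (three base64url segments joined by dots)
    (re.compile(r"eyJ[A-Za-z0-9_-]+\.[A-Za-z0-9_-]+\.[A-Za-z0-9_-]+"),
     "<REDACTED_JWT>"),
    # usernames embedded in home-directory paths
    (re.compile(r"/home/[^/\s]+"), "/home/<USER>"),
]

def scrub(text: str) -> str:
    """Replace secret-looking substrings before pasting text externally."""
    for pattern, replacement in SECRET_PATTERNS:
        text = pattern.sub(replacement, text)
    return text

trace = "ConnectionError in /home/alice/app.py: auth failed, api_key=sk-12345"
print(scrub(trace))
# → ConnectionError in /home/<USER>/app.py: auth failed, api_key=<REDACTED>
```

Regex scrubbing like this only catches known shapes of secrets, which is exactly why the "treat it like a Google search" habit still matters for anything the patterns miss.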
Then you fire them and sue them for leaking confidential information.
It doesn't matter if they have a checkbox saying 'we totally won't save your information, we promise' - no one should be shoveling confidential data into ChatGPT, and if people keep doing so then they're a security risk.