We sign BAAs with all our healthcare customers + all our vendors. Currently using Claude computer use. Zero-data retention agreements are signed with both Anthropic and OpenAI, so none of the information sent to their LLMs is ever retained.




>none of the information sent to their LLMs is ever retained

Is it possible to verify that?


Yup! We have signed certificates that explicitly state this with all the LLM providers we use.

That's not "verification" by any definition of the word.

Good point. We can verify to a customer that we have that policy in place by showing them the certificate, but you are correct that we haven't gone as far as asking Anthropic or OpenAI for proof that they aren't retaining any of our data. What we did do is obtain their SOC 2 Type II reports, which showed no significant security vulnerabilities that would impact our usage of their services. So we have been operating under the assumption that they are honoring our signed agreement, within the context of those SOC 2 Type II reports, and our customers have been okay with that. But we are definitely open to pursuing that kind of proof at some point.

All of which has nothing to do with OpenAI or Anthropic deciding to use your data??? SOC 2 Type II is completely irrelevant.

You've got two companies that basically built their entire business upon stealing people's content, and they've given you a piece of paper saying "trust me bro".


I appreciate your skepticism. At the end of the day we're focused on delivering real value while taking every security precaution we reasonably can and building new technology at the same time. Eventually, as we grow, we'll be able to offer full self-hosting for our customers and perhaps even spin up our own LLMs on our own servers. But until then, we can only do so much.

Welcome to the invalidated EU-US Safe Harbour, the invalidated EU-US Privacy Shield, and the soon-to-be invalidated EU-US Data Privacy Framework (DPF) and Transatlantic Data Privacy Framework (TADPF).

Digital sovereignty and respect for privacy and local laws are the exception in this domain, not the expectation.

As Max Schrems puts it: "Instead of stable legal limitations, the EU agreed to executive promises that can be overturned in seconds. Now that the first Trump waves hit this deal, it quickly throws many EU businesses into a legal limbo."

After recently terrifying the EU with the truth in an ill-advised blogpost, Microsoft are now pitching a 'Sovereign Public Cloud' with a supposedly transparent and indelible access-log service called Data Guardian.

https://blogs.microsoft.com/on-the-issues/2025/04/30/europea...

https://www.lightreading.com/cloud/microsoft-shows-who-reall...

If nation states can't manage to keep their grubby hands off your data, private US companies obliged to cooperate with the intelligence apparatus certainly won't either.


You make valid points. At the end of the day we're focused on delivering real value while taking every security precaution we reasonably can and building new technology at the same time. Eventually, as we grow, we'll be able to offer full self-hosting for our customers and perhaps even spin up our own LLMs on our own servers. But until then, we can only do so much.

Honestly, I'm surprised your lawyers let you post that here.

+1 for honesty and transparency


Typically the way this really works is that you, the startup, use a service provider (like OpenAI) that publishes its own external audit reports (like a SOC 2 Type II). The SOC 2 auditors will see that the service provider has a policy governing how it handles customer data for customers covered by Agreement XYZ, and will require evidence proving that the provider is following its policies about not using that data for undeclared purposes or whatever else.

Audit rights are all about who has the most power in a given situation. Just like very few customers are big enough to go to AWS and say "let us audit you", you're not going to get that right with a vendor like Anthropic or OpenAI unless you're certifiably huge, and even then it will come with lots of caveats. Instead, you trust the audit results they publish, and implicitly you are trusting the auditors they hire.

Whether that is a sufficient level of trust is really up to the customer buying the service. There's a reason many companies sell on-prem hosted solutions or even support airgapped deployments: for some buyers, no level of external trust is quite enough. But for many other companies and industries, some level of trust in a reputable auditor is acceptable.


Thanks for the breakdown, Seth! We did indeed get their SOC 2 Type II reports and made sure they showed no significant security vulnerabilities that would impact our usage of their services.

Is it a 3rd party that is verifying?

We haven't looked into this kind of approach yet, but definitely worthwhile to do at some point!

So you’re taking the largest copyright infringers at their word for it?

Right now we are taking the agreements we signed with our LLM vendors as verification of a zero-data-retention policy. We also got their SOC 2 Type II reports, which showed no significant security vulnerabilities that would impact our usage of their services. We're doing our best to deliver value while taking as many security precautions as possible: our own data retention policy, encrypting data at rest and in transit, row-level security, SOC 2 Type I and HIPAA compliance (in the observation period for Type II), and secret managers. We have other measures we plan to take, like de-identifying screenshots before sending them up (sketched below). Would love to get your thoughts on any other security measures you would recommend!
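To make the de-identification idea concrete, here's a minimal sketch of what that step could look like. It assumes OCR via pytesseract (with the Tesseract binary installed) plus a few illustrative regexes; the function name and patterns are hypothetical examples, not a description of our actual pipeline.

    # Hypothetical sketch: redact likely-PII text from a screenshot before
    # it goes to an LLM API. Uses pytesseract for OCR (requires the
    # Tesseract binary) and a few illustrative regexes; a real pipeline
    # would use a proper PII detector and err on the side of over-redacting.
    import re
    import pytesseract
    from PIL import Image, ImageDraw

    # Example token-level patterns only: SSN-like, phone-like, email-like.
    PII_PATTERNS = [
        re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
        re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
        re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    ]

    def deidentify_screenshot(in_path, out_path):
        img = Image.open(in_path)
        draw = ImageDraw.Draw(img)
        # OCR the image, keeping a bounding box for every recognized word.
        data = pytesseract.image_to_data(img, output_type=pytesseract.Output.DICT)
        for i, word in enumerate(data["text"]):
            if any(p.search(word) for p in PII_PATTERNS):
                # Black out the matching word's bounding box.
                x, y = data["left"][i], data["top"][i]
                w, h = data["width"][i], data["height"][i]
                draw.rectangle([x, y, x + w, y + h], fill="black")
        img.save(out_path)

    deidentify_screenshot("screen.png", "screen_redacted.png")

In practice you'd want a dedicated PII/PHI detector (names, MRNs, dates of birth) rather than regexes, but the shape is the same: detect, black out, then send only the redacted image upstream.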

How exactly would you do this? Be realistic.

I’m guessing OP is asking if it’s possible to verify they’re honoring the contract and deleting the data?

Nope.


