arian_'s comments

Replacing a system that works with no internet, no power grid, and no account with "just use your phone" is not an upgrade.

The blue bubbles really sell it. Reading "I just want to dominate" in a casual iMessage thread format makes it 10x more unhinged than reading it in a court document.

If the difference in unhinged-ness is that large, aren't you worried that makes it an inaccurate representation of the source material, which was not written in a casual iMessage thread format?

Obviously no choice of how to represent them will perfectly reproduce what Zuckerberg or his staff would have seen, but I kinda think rendering things as DMs when they were not originally DMs is more misleading than most options.


It also feels very shoehorned. You have "conversations" where there's only one party, testimony, memos, and interviews presented as casual conversation, entire messages that are actually "[editorial commentary]", no profile pictures, a useless reply button, everyone's status is "online", etc.

I guess someone saw the recent project for viewing the Epstein emails and needed a way to differentiate...


And the execution is pretty terrible. It barely works on mobile.

"Workers can see everything" means this isn't an AI privacy problem. It's a surveillance-as-a-service problem with extra steps.

The gap matrixgard identifies — not knowing what data went to which model when — is exactly what Article 12 of the AI Act tries to close. It requires automatic logging over the AI system's lifetime, designed for traceability by default.

In practice, I've seen three levels of "handling it":

1. Nothing. Most teams. "We use GPT through the API" with zero audit trail of what was sent or returned. If a customer asks under GDPR Article 15 what personal data was processed by an AI system, they can't answer.

2. Application-level logging. Better. But logs are operator-controlled — you can edit or delete entries. An auditor has no way to verify completeness. This is where most teams who "take compliance seriously" actually land.

3. Tamper-evident logging with hash-chaining. Each log entry includes a hash of the previous entry, so deleting or reordering anything breaks the chain. This is what the regulation seems to actually require when it says records should enable "automatic recording" and "traceability." Almost nobody does this yet.
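The hash-chaining in level 3 fits in a few lines. A minimal sketch (the entry fields, the SHA-256 choice, and the serialization are my assumptions for illustration; nothing in the AI Act prescribes a specific scheme):

```python
import hashlib
import json

GENESIS = "0" * 64  # seed hash for the first entry


def append_entry(chain, record):
    """Append a log record whose hash covers the previous entry's hash."""
    prev_hash = chain[-1]["hash"] if chain else GENESIS
    body = json.dumps(record, sort_keys=True)
    entry_hash = hashlib.sha256((prev_hash + body).encode()).hexdigest()
    chain.append({"prev": prev_hash, "record": record, "hash": entry_hash})


def verify(chain):
    """Recompute every hash; any edit, deletion, or reorder breaks the chain."""
    prev_hash = GENESIS
    for entry in chain:
        body = json.dumps(entry["record"], sort_keys=True)
        expected = hashlib.sha256((prev_hash + body).encode()).hexdigest()
        if entry["prev"] != prev_hash or entry["hash"] != expected:
            return False
        prev_hash = entry["hash"]
    return True


log = []
append_entry(log, {"model": "gpt-4o", "prompt_id": "abc", "ts": "2025-01-01T00:00:00Z"})
append_entry(log, {"model": "gpt-4o", "prompt_id": "def", "ts": "2025-01-01T00:01:00Z"})
assert verify(log)

log[0]["record"]["prompt_id"] = "tampered"  # any after-the-fact edit...
assert not verify(log)                      # ...is detected downstream
```

The point is that the operator can still delete the whole log, but can't silently rewrite history inside it; pairing this with periodic anchoring of the latest hash somewhere out of the operator's control (e.g. with the auditor) is what makes it verifiable.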

The SOC 2 angle is simpler, since it already has defined controls for access logging. The AI Act angle is harder because the technical standards (harmonised standards under Article 40) aren't published yet. So for now you're building against the text of the regulation itself, which is 144 pages of cross-references.

Most honest answer I've seen: teams that will deploy AI in customer-facing workflows and can't reconstruct what happened are carrying regulatory risk they haven't quantified yet.

