I’ve been experimenting with something I’m calling Differential Sync Protocol (DSP).
It’s an HTTP/2 layer (written in Rust) that reduces payload sizes by sending binary diffs instead of full resource bodies. The server tracks resource versions per client session, computes minimal deltas, and only transmits what’s changed. If that’s not possible, it falls back to a normal full response.
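The delta-or-fallback decision can be sketched roughly like this. Everything below is an illustrative toy under assumptions of my own (same-length bodies only, made-up names like `Payload`, `compute_delta`, `respond`), not DSP's actual wire format or diff algorithm:

```rust
/// A hypothetical response payload: either a binary delta against the
/// version the client already holds, or the full body as a fallback.
enum Payload {
    Delta(Vec<u8>),
    Full(Vec<u8>),
}

/// Toy byte-level delta for same-length bodies: each changed byte is
/// encoded as a 4-byte little-endian offset followed by the new value.
/// A real diff also handles insertions and deletions.
fn compute_delta(old: &[u8], new: &[u8]) -> Option<Vec<u8>> {
    if old.len() != new.len() {
        return None; // can't diff here; caller falls back to a full response
    }
    let mut delta = Vec::new();
    for (i, (&a, &b)) in old.iter().zip(new.iter()).enumerate() {
        if a != b {
            delta.extend_from_slice(&(i as u32).to_le_bytes());
            delta.push(b);
        }
    }
    Some(delta)
}

/// Send a delta only when one exists *and* it is actually smaller than
/// the full body; otherwise fall back to the normal full response.
fn respond(prev: Option<&[u8]>, current: &[u8]) -> Payload {
    match prev.and_then(|old| compute_delta(old, current)) {
        Some(d) if d.len() < current.len() => Payload::Delta(d),
        _ => Payload::Full(current.to_vec()),
    }
}

fn main() {
    let old: &[u8] = b"hello world, version 1";
    let new: &[u8] = b"hello world, version 2";
    match respond(Some(old), new) {
        Payload::Delta(d) => println!("delta: {} bytes (full body is {})", d.len(), new.len()),
        Payload::Full(_) => println!("sent full body"),
    }
}
```

The "only when smaller" guard is what makes the full-response fallback automatic for tiny or highly volatile resources, where the delta costs as much as the body itself.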
In practice, this cuts bandwidth dramatically (often ~90%) for high-frequency polling APIs — things like dashboards, log streams, chat threads, IoT feeds. Not as useful for tiny or highly volatile resources.
Under the hood it’s built in Rust using the `similar` crate for diffing, with a compact binary wire format and session management (TTL, memory limits, cleanup), and it ships with a demo client and server.
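Session bookkeeping of that shape might look like the sketch below. `SessionStore`, its limits, and the eviction policy are all invented for illustration; the real implementation's internals may differ:

```rust
use std::collections::HashMap;
use std::time::{Duration, Instant};

/// Per-session cache of the last body sent for each resource.
struct SessionStore {
    ttl: Duration,
    max_bytes: usize,
    sessions: HashMap<String, Session>,
}

struct Session {
    last_seen: Instant,
    bodies: HashMap<String, Vec<u8>>, // resource path -> last sent body
}

impl SessionStore {
    fn new(ttl: Duration, max_bytes: usize) -> Self {
        Self { ttl, max_bytes, sessions: HashMap::new() }
    }

    /// Record the body just sent, touching the session's TTL clock.
    fn record(&mut self, session_id: &str, resource: &str, body: Vec<u8>) {
        let s = self.sessions.entry(session_id.to_string()).or_insert(Session {
            last_seen: Instant::now(),
            bodies: HashMap::new(),
        });
        s.last_seen = Instant::now();
        s.bodies.insert(resource.to_string(), body);
    }

    /// Drop expired sessions; if still over the memory cap, evict the
    /// least recently seen sessions until back under it.
    fn cleanup(&mut self) {
        let now = Instant::now();
        let ttl = self.ttl;
        self.sessions.retain(|_, s| now.duration_since(s.last_seen) < ttl);
        while self.total_bytes() > self.max_bytes {
            if let Some(oldest) = self
                .sessions
                .iter()
                .min_by_key(|(_, s)| s.last_seen)
                .map(|(id, _)| id.clone())
            {
                self.sessions.remove(&oldest);
            } else {
                break;
            }
        }
    }

    fn total_bytes(&self) -> usize {
        self.sessions
            .values()
            .flat_map(|s| s.bodies.values())
            .map(|b| b.len())
            .sum()
    }
}

fn main() {
    // Tiny cap so the eviction path is visible in a demo.
    let mut store = SessionStore::new(Duration::from_secs(60), 10);
    store.record("client-a", "/feed", vec![0u8; 8]);
    store.record("client-b", "/feed", vec![0u8; 8]);
    store.cleanup(); // 16 cached bytes > 10-byte cap: one session evicted
    println!("cached bytes after cleanup: {}", store.total_bytes());
}
```

The point of the cap is that a diffing server is stateful by design: without TTLs and a hard memory limit, every slow or abandoned client keeps its last-seen bodies pinned in RAM forever.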
The crucial point is that Email Sleuth never sends an actual email message. It performs the SMTP handshake (EHLO), specifies the sender (MAIL FROM), and attempts to specify the recipient (RCPT TO). It stops before the DATA command, which is where the email body and headers (subject, content, etc.) would be sent.
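That stop-short sequence can be driven over any byte stream, which also makes it testable against a canned transcript instead of a live mail server. The function names and the `verifier.example.com` EHLO hostname below are illustrative assumptions, not Email Sleuth's real code:

```rust
use std::io::{BufRead, Write};

/// Read one SMTP reply line and return its 3-digit code. Real servers
/// can send multi-line replies ("250-..."); a full client loops until
/// the "250 " form. Kept single-line here for brevity.
fn read_reply<R: BufRead>(r: &mut R) -> std::io::Result<u16> {
    let mut line = String::new();
    r.read_line(&mut line)?;
    Ok(line.get(..3).and_then(|c| c.parse::<u16>().ok()).unwrap_or(0))
}

/// Drive the verification handshake over any reader/writer pair
/// (a TcpStream in practice, an in-memory buffer in tests). Returns
/// Ok(true) if the server accepted the recipient.
fn verify_recipient<R: BufRead, W: Write>(
    r: &mut R,
    w: &mut W,
    sender: &str,
    recipient: &str,
) -> std::io::Result<bool> {
    read_reply(r)?; // 220 greeting
    writeln!(w, "EHLO verifier.example.com\r")?;
    read_reply(r)?;
    writeln!(w, "MAIL FROM:<{}>\r", sender)?;
    read_reply(r)?;
    writeln!(w, "RCPT TO:<{}>\r", recipient)?;
    let code = read_reply(r)?;
    // Stop here: DATA is never sent, so no message can be delivered.
    writeln!(w, "QUIT\r")?;
    Ok((200..300).contains(&code))
}

fn main() -> std::io::Result<()> {
    // Replay a canned server transcript instead of opening a connection.
    let mut server = std::io::Cursor::new(
        b"220 mx ready\r\n250 ok\r\n250 ok\r\n250 ok\r\n".to_vec(),
    );
    let mut sent = Vec::new();
    let accepted =
        verify_recipient(&mut server, &mut sent, "probe@example.com", "jane@example.org")?;
    println!("recipient accepted: {}", accepted);
    Ok(())
}
```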
Catch-all domains are a major source of inaccuracy. The verify_smtp_email function includes a basic catch-all detection heuristic: if the initial RCPT TO for the target email succeeds (2xx), it then tries RCPT TO with a randomly generated, almost certainly non-existent address at the same domain (e.g., no-reply-does-not-exist-123456@domain.com). If that also succeeds, it flags the original result as inconclusive_retry with a "Possible Catch-All" message. This isn't foolproof (some servers run smarter catch-all filters), but it's a common technique.
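The decision logic behind that heuristic is small enough to sketch directly. `Verdict`, `classify`, and `random_probe` are invented names for illustration, not the actual Email Sleuth internals:

```rust
/// Simplified outcome of checking one address.
#[derive(Debug)]
enum Verdict {
    Valid,
    Invalid,
    InconclusiveCatchAll, // surfaced as "Possible Catch-All"
}

/// Catch-all heuristic: if the target address is accepted, probe an
/// address that almost certainly doesn't exist at the same domain.
/// If that one is accepted too, the server likely accepts everything,
/// so the first acceptance proves nothing.
fn classify(target_accepted: bool, probe_accepted: bool) -> Verdict {
    match (target_accepted, probe_accepted) {
        (false, _) => Verdict::Invalid,
        (true, true) => Verdict::InconclusiveCatchAll,
        (true, false) => Verdict::Valid,
    }
}

/// Build the throwaway probe address. A nanosecond timestamp stands in
/// for real randomness; it just needs to be unlikely to collide with a
/// real mailbox.
fn random_probe(domain: &str) -> String {
    let nanos = std::time::SystemTime::now()
        .duration_since(std::time::UNIX_EPOCH)
        .unwrap()
        .subsec_nanos();
    format!("no-reply-does-not-exist-{}@{}", nanos, domain)
}

fn main() {
    println!("probe address: {}", random_probe("domain.com"));
    println!("{:?}", classify(true, true)); // target ok, probe ok too
}
```

Note the asymmetry: a rejected probe upgrades confidence in the target, but an accepted probe only downgrades it to inconclusive rather than proving the target invalid.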
Great point, and you're right: I definitely leaned into the technical side in the post. I didn’t want it to come off as pitchy or overly “marketed,” because I'm not trying to sell anything here at the moment. I’m really trying to understand the value from people who’ve been closer to this problem.
I originally started building this while working in the defense industry. I saw firsthand how vulnerable those environments can be, especially with early GenAI adoption, and how much risk there was around leaking sensitive or classified info through prompts/responses. That really stuck with me, and led to the idea of a real-time policy layer for LLMs.
That said, defense is a tough market to break into, especially without deep networks—so we’ve been exploring other verticals where compliance, privacy, or brand safety is a concern. But we’re still figuring out who the buyer is, how they evaluate this kind of tooling, and how to talk about it in their language.
Thank you, this really helps. Totally agree—hallucinations and leakage are scary, especially when prompts can be engineered to expose things you didn’t think were vulnerable.
We’ve been leaning toward open-sourcing the data plane for exactly the reasons you mentioned: trust, adoption, and building a community around the core tech. But I’ll be honest—there’s still that fear in the back of my mind: what if someone forks it, strips out the branding, and rehosts it? Or if buyers say “well, it’s open source, why should we pay anything?”
Did you or your team ever wrestle with that? Or have you seen OSS models work well in this space where the control plane still delivers enough value to justify a paid tier?