I built an app that preserves, encrypts, searches, reuses, and hands off the full work traces people create with Claude, Codex, Cursor, OpenClaw, and other AI agents.
It turns Claude, Codex, Cursor, OpenClaw, and other agent sessions into private data assets for your future AI employees.
Some technical details:
- AES-256-GCM encrypted local vault for transcripts, attachments, and state
- No DataMoat cloud vault or server-side transcript storage
- Vault keys and transcript data stay on the user’s machine
- Supported sources today include Claude CLI, Codex CLI/app local sessions, Claude Desktop local-agent sessions on macOS, OpenClaw, and Cursor agent transcripts
- Captures locally written thinking/reasoning blocks when the source tool stores them on disk
- Stores both raw source records and normalized searchable records
- Supports encrypted attachment blobs for supported images, PDFs, documents, and other files
- Password-based unlock with an scrypt verifier
- Optional TOTP authenticator support
- 24-word BIP39 recovery phrase and one-time recovery codes
- Secure Enclave-backed unlock path on supported Macs, with Touch ID in the packaged macOS app
- Packaged macOS app is signed and notarized; Linux source install is available; Windows ZIP builds are available but still unsigned
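For the curious, the password-unlock piece can be sketched with only the Python standard library. This is a rough illustration, not DataMoat's actual code: the scrypt parameters, field names, and sample password below are all made up.

```python
import hashlib
import hmac
import os

def make_verifier(password: str) -> dict:
    # Derive an scrypt verifier from the unlock password.
    # n, r, p, dklen are illustrative values, not DataMoat's real settings.
    salt = os.urandom(16)
    digest = hashlib.scrypt(password.encode(), salt=salt,
                            n=2**14, r=8, p=1, dklen=32)
    return {"salt": salt, "digest": digest}

def check_password(password: str, verifier: dict) -> bool:
    candidate = hashlib.scrypt(password.encode(), salt=verifier["salt"],
                               n=2**14, r=8, p=1, dklen=32)
    # Constant-time comparison avoids leaking timing information.
    return hmac.compare_digest(candidate, verifier["digest"])

v = make_verifier("correct horse battery staple")
```

Only the verifier (salt plus digest) would need to be persisted; the password itself never touches disk.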
We believe every person and company should have the fundamental right to own their AI data and build their own data moat.
Do you think KYC is required?
It's a B2B service, and customers need to provide a callback URL that contains their domain, so we learn their domain from that.
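Concretely, pulling the domain out of the callback URL can be as simple as parsing the host. This is just a sketch; the URL and function name below are made up for illustration.

```python
from urllib.parse import urlparse

def customer_domain(callback_url: str) -> str:
    # Extract the host portion of the customer's callback URL.
    # urlparse normalizes the hostname to lowercase.
    return urlparse(callback_url).hostname or ""

# Hypothetical customer callback URL:
domain = customer_domain("https://api.acme-corp.com/webhooks/incoming")
```

Note this only tells you what domain the customer *claims*; verifying control of the domain (e.g. via a challenge sent to that URL) is a separate step.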
We conducted similar research earlier and successfully improved performance to a level comparable to models with 3x larger layer sizes. https://arxiv.org/html/2409.14199v3 We utilize more computational time in the latent space to achieve better performance. However, this approach introduces greater resistance compared to Chain of Thought (CoT) reasoning in the token space, especially if the number of CoT rounds in the latent space exceeds 20.
I would use the term "better approximation of the data distribution" instead of "reasoning" to describe this kind of process.
I think so. I believe this type of reasoning method, which achieves better results through longer computation time, is very useful on edge devices like mobile phones. Consider a scenario where we only need the model to output a function/action call on the phone; we don't require it to provide an immediate response.
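As a toy illustration of "spend more compute in latent space before emitting an action," here is a minimal sketch. Everything in it is invented for illustration: the recurrent tanh step, the action embeddings, and the round count are placeholders, not the method from the paper linked above.

```python
import math
import random

random.seed(0)
D = 8  # hypothetical latent width

# Hypothetical recurrent weights for extra latent-space computation.
W = [[random.gauss(0, 0.3) for _ in range(D)] for _ in range(D)]

def latent_step(h):
    # One extra "round" of computation in latent space (a toy tanh layer).
    return [math.tanh(sum(W[i][j] * h[j] for j in range(D))) for i in range(D)]

def score(h, emb):
    # Dot product between the latent state and an action embedding.
    return sum(a * b for a, b in zip(h, emb))

# Hypothetical embeddings for an on-device function-call decoder.
actions = {
    "call_weather_api": [random.gauss(0, 1) for _ in range(D)],
    "call_calendar_api": [random.gauss(0, 1) for _ in range(D)],
}

h = [random.gauss(0, 1) for _ in range(D)]
for _ in range(20):  # ~20 latent rounds, as discussed above
    h = latent_step(h)
best_action = max(actions, key=lambda a: score(h, actions[a]))
```

The point of the sketch is the shape of the trade-off: the loop adds wall-clock time but no new parameters, which is exactly the regime where a latency-tolerant phone use case can afford to wait for a single function call.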
The only way to solve it is to cut out the middlemen, though that is unlikely to happen. I tried it in a couple of places I contracted at, but was quickly shot down. From an employer's perspective, it is very convenient to outsource everything except the actual interviewing, even if it costs them more. They just don't want the hassle of looking through resumes, arranging interviews, etc. It is understandable.
Well, if I make the simplex iteration 20% faster, what does that do to the performance of a mixed-integer linear programming solver one might implement in the framework? Due to parallelism and shared state, the answer is non-trivial. Say you want people to cooperate in writing such a solver, with payouts proportional to the speedup the solver gains over a certain family of problems; then accurate estimates of how much a new component contributes to performance would be really helpful, and would be an alternative to having big companies bankroll closed-source solver development.
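One standard way to make such contribution estimates concrete is a Shapley-style attribution over subsets of components: each component is credited with its marginal speedup averaged over all orders in which components could be added. The benchmark numbers below are invented purely to show the shape of the computation.

```python
from itertools import permutations

# Hypothetical measured speedups (x over baseline) for each subset of
# solver components; in practice these would come from benchmark runs.
speedup = {
    frozenset(): 1.0,
    frozenset({"fast_simplex"}): 1.2,
    frozenset({"cuts"}): 1.5,
    frozenset({"fast_simplex", "cuts"}): 2.0,
}

def shapley(component, components):
    # Average marginal contribution of `component` over all join orders.
    total, count = 0.0, 0
    for order in permutations(components):
        idx = order.index(component)
        before = frozenset(order[:idx])
        total += speedup[before | {component}] - speedup[before]
        count += 1
    return total / count

components = ("fast_simplex", "cuts")
```

A nice property for the payout use case: the Shapley values sum exactly to the total speedup over baseline, so the whole pot gets distributed and interaction effects (the faster simplex helping the cut separator, or vice versa) are split rather than double-counted.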
Deep learning is utterly irrelevant for the framework itself, but it is a heuristic that could be employed ...
Source: https://github.com/max-ng/datamoat
If you want to support the project, please consider starring the repo. Thank you!