Appreciate the thoughtful feedback. You're right — today it's
IDE-only. The hook layer gives us the cleanest interception point
for local dev, which is where most teams are first adopting AI
agents.
The CI/remote case is on the roadmap. The architecture is already
adapter-based (Claude Code and Cursor are separate adapters that
normalize to a standard event schema), so adding a CI adapter
that wraps agent execution in GitHub Actions or a container is
the same pattern — different hook surface, same policy engine
and telemetry output.
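To make that concrete, here's a rough sketch of the pattern (all names are illustrative, not our actual schema, which carries more fields):

```python
from dataclasses import dataclass

@dataclass
class AgentEvent:
    # Normalized event every adapter emits (hypothetical schema)
    tool: str         # e.g. "claude-code", "cursor", "ci"
    command: str      # the shell command the agent wants to run
    environment: str  # "dev" or "ci"

class ClaudeCodeAdapter:
    """Translates a Claude Code hook payload into the shared schema."""
    def normalize(self, payload: dict) -> AgentEvent:
        return AgentEvent(
            tool="claude-code",
            command=payload["tool_input"]["command"],
            environment="dev",
        )

class CIAdapter:
    """Same pattern, different hook surface: wraps agent execution in CI."""
    def normalize(self, payload: dict) -> AgentEvent:
        return AgentEvent(
            tool="ci",
            command=payload["cmd"],
            environment="ci",
        )

def evaluate(event: AgentEvent) -> str:
    """One policy engine, regardless of which adapter produced the event."""
    if event.environment == "ci" and event.command.startswith("curl"):
        return "deny"
    return "allow"
```

The CI adapter is just another `normalize` implementation feeding the same `evaluate`; nothing downstream has to know where the event came from.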
Your point about context-aware policy is spot on, and it's the
harder problem. An `oculi.yaml` that says "deny curl in CI but
allow in dev" is straightforward. The real challenge is detecting
when a benign-looking command has a different risk profile based
on environment — secrets available, network access, filesystem
scope. That's where we're headed.
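For the simple half of that, a hypothetical `oculi.yaml` (the exact schema is still in flux, so treat the field names as illustrative) might look like:

```yaml
# Illustrative sketch only (real schema still in flux)
rules:
  - match:
      command: "curl *"
    environment: ci
    action: deny
  - match:
      command: "curl *"
    environment: dev
    action: allow
```

The static rules are the easy part; the context-aware piece is what sits behind them.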
Right now we're onboarding design partners to nail the local dev
experience. CI enforcement is next. If you're running AI agents
in CI and want to be involved, I'd love to talk —
founders@oculisecurity.com