Hacker News | Gnobu's comments

Really thorough coverage of the attack surfaces—especially including identity as a core layer. Curious how you handle cross-agent permissions in dynamic workflows: do you rely solely on deterministic checks at each action, or is there a runtime trust evaluation that can adapt as agents interact?

Makes sense. Reducing friction and repetitive info-sharing can really free up teams. Curious whether anyone has seen AI tools improve team clarity and trust without just automating tasks.

Interesting concept! I can see how new developers often get stuck figuring out an organization's internal frameworks or dependencies. I'm curious whether the AI would rely purely on code analysis or also integrate internal docs and examples to provide more complete answers?

Really impressive work! The deterministic "freeze then capture" approach highlights how much complexity arises when system state isn't guaranteed.

In identity systems like Gnobu, we face a similar challenge: ensuring that authentication flows remain consistent across multiple services and sessions, especially in environments with multiple asynchronous actions.

Curious if you’ve considered adding deterministic checkpoints or logging hooks that could integrate with external identity systems for agent-level session management?


Third comment shilling your product in 30 minutes, all LLM generated. Begone.

That's fair; I should've asked the technical question more directly. I was interested in the checkpointing/logging side for session continuity across async actions.

Interesting framing around separating AI reasoning from deterministic execution. The “intent → runtime validation → execution” pattern makes a lot of sense once systems become mutable through LLMs.

One thing I’ve been thinking about while experimenting with Gnobu is how identity might fit into that runtime layer — not just for authentication but as a trust boundary for system actions. If AI systems are proposing structural changes or triggering workflows, identity and permission models might need to be deeply embedded in the execution runtime rather than scattered across services.

Curious whether you see identity and access control as primitives inside the semantic model itself, or as something the runtime enforces externally.


Good point. In the model I'm experimenting with, identity and permissions end up being part of the runtime primitives rather than external services.

In traditional SaaS architectures identity is usually handled by separate layers (auth providers, middleware, RBAC checks in APIs). The problem is that when the system structure itself becomes mutable — entities, workflows, dashboards — those checks are no longer enough.

If an AI proposes structural changes, the runtime needs to reason about who is allowed to modify which parts of the semantic model.

So identity becomes part of the execution semantics of the system rather than just authentication.

Conceptually it looks closer to something like:

identity → permissions → allowed state transitions

instead of just:

identity → API access

This is also why the runtime primitives include things like entities, relations, transitions and permissions.

The runtime can then enforce invariants such as:

- which roles can evolve parts of the model
- which workflows can trigger actions
- which datasets a component can bind to

So the trust boundary ends up inside the runtime rather than in the surrounding services.
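A minimal sketch of what "permissions as runtime primitives" could look like. All names here (Transition, Runtime, the role/grant shapes) are illustrative assumptions, not an actual Gnobu API:

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Transition:
    entity: str   # the part of the semantic model being changed
    action: str   # e.g. "evolve_schema", "trigger_workflow", "bind_dataset"

@dataclass
class Runtime:
    # role -> set of transitions that role is allowed to perform
    grants: dict = field(default_factory=dict)

    def allow(self, role: str, t: Transition) -> None:
        self.grants.setdefault(role, set()).add(t)

    def apply(self, role: str, t: Transition) -> str:
        # The check is an execution-time invariant, not middleware:
        # an ungranted transition simply cannot run.
        if t not in self.grants.get(role, set()):
            raise PermissionError(f"{role} may not {t.action} on {t.entity}")
        return f"applied {t.action} on {t.entity}"

rt = Runtime()
rt.allow("admin", Transition("invoice", "evolve_schema"))

rt.apply("admin", Transition("invoice", "evolve_schema"))            # ok
# rt.apply("analyst", Transition("invoice", "evolve_schema"))        # PermissionError
```

The point of the sketch is the identity → permissions → allowed state transitions shape: the permission check lives next to the state-transition machinery, not in a separate auth layer.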

Still exploring the design space here, but it seems necessary once AI can propose structural mutations to the system.


That makes sense. Once AI can mutate structure, permissions probably need to govern model evolution itself, not just API access or workflow execution. It seems like auditability and approval for schema-level changes become runtime concerns too. Have you explored that layer yet?

Yes — that layer is part of the runtime design. The AI never mutates structure directly. It only proposes a DSL change, which goes through a deterministic compile pipeline before it becomes canonical.

Schema evolution is treated as a runtime operation with versioning, migration logs, and a deterministic state hash (dslHash). Every compile produces a new schema version and writes a structured change plan to dsl_change_log, so structural mutations are fully auditable.

There’s also a cryptographic validation attestation step: the compiled DSL is hashed and attested so the runtime can verify that the schema being executed is exactly the one that passed the pipeline. That prevents unauthorized structural drift outside the compiler path.

Breaking changes, data compatibility, and migrations are evaluated before commit, so structural mutations are gated much like data mutations.
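A rough sketch of the compile → hash → attest → verify gate described above. The hashing and attestation shapes here (canonical JSON, HMAC over the hash) are assumptions for illustration, not the actual pipeline:

```python
import hashlib
import hmac
import json

SIGNING_KEY = b"compiler-pipeline-secret"  # stand-in for a real attestation key

def compile_and_attest(dsl: dict) -> dict:
    canonical = json.dumps(dsl, sort_keys=True).encode()   # canonical form
    dsl_hash = hashlib.sha256(canonical).hexdigest()       # deterministic state hash
    attestation = hmac.new(SIGNING_KEY, dsl_hash.encode(), "sha256").hexdigest()
    return {"dsl": dsl, "dslHash": dsl_hash, "attestation": attestation}

def verify_before_execution(artifact: dict) -> bool:
    # Recompute the hash from the schema actually being executed and check it
    # against what the pipeline attested - this blocks structural drift
    # introduced outside the compiler path.
    canonical = json.dumps(artifact["dsl"], sort_keys=True).encode()
    recomputed = hashlib.sha256(canonical).hexdigest()
    expected = hmac.new(SIGNING_KEY, recomputed.encode(), "sha256").hexdigest()
    return recomputed == artifact["dslHash"] and hmac.compare_digest(
        expected, artifact["attestation"])

artifact = compile_and_attest({"entities": {"invoice": {"fields": ["amount"]}}})
assert verify_before_execution(artifact)

# A mutation that bypasses the compiler fails verification:
artifact["dsl"]["entities"]["invoice"]["fields"].append("backdoor")
assert not verify_before_execution(artifact)
```

The deterministic hash also gives you a natural key for the change log: each compile produces a new version keyed by its hash, so every structural mutation is traceable to the pipeline run that produced it.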

The stack to support this is admittedly quite complex, but most of that complexity lives in the runtime so the AI-facing interface can remain simple and safe. The big shift was treating AI as proposing structure but never owning execution.

One thing I'm still unsure about is where the long-term governance layer should live once models can mutate system structure — inside the runtime itself, or higher up at the application/policy level. Curious how others are thinking about that boundary.


Interesting perspective on choosing the “right-sized” identity provider. The tradeoff between something powerful like Keycloak and something minimal like Pocket ID is something I’ve been thinking about as well.

While experimenting with Gnobu, I’ve been exploring whether identity itself could act as a more universal access layer across systems instead of just another authentication service sitting on top of apps.

Curious if others here think the future of identity infrastructure will move more toward passkeys and identity-based interactions rather than traditional password/OAuth flows.

