The layered architecture argument is right. Discovery, identity, and authorization should be separate concerns. AgentDNS tries to be all three plus billing and proxying, and that's how you get a single point of failure that controls everything.
We've been building the "narrowly scoped discovery" piece this review recommends. Agent Identity & Discovery (AID) [1] is a DNS TXT record at `_agent.<domain>` — endpoint, protocol, auth hint, optional Ed25519 public key. No registry, no proxy, no billing. Add one DNS record and any AID client finds your agent.
Works with MCP, A2A, OpenAPI, gRPC, GraphQL, WebSocket, local agents (Docker, npx, pip). SDKs in TS, Go, Python, Rust, .NET, Java. The discovery lookup is a single DNS query.
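To make the record shape concrete, here is a minimal sketch of parsing an AID-style TXT record once it has been fetched from `_agent.<domain>`. The key names (`v`, `uri`, `proto`, `auth`) and the example values are assumptions for illustration only; the authoritative field list is in the AID spec [4].

```python
# Sketch: parse an AID-style TXT record into its fields.
# Key names (v, uri, proto, auth) are illustrative assumptions;
# see the AID spec for the real field list.

def parse_aid_record(txt: str) -> dict[str, str]:
    """Split a semicolon-delimited key=value TXT record into a dict."""
    fields: dict[str, str] = {}
    for part in txt.split(";"):
        part = part.strip()
        if not part:
            continue
        # Split only on the first '=' so URI values survive intact.
        key, _, value = part.partition("=")
        fields[key.strip()] = value.strip()
    return fields

# Hypothetical record content:
record = "v=aid1;uri=https://api.example.com/mcp;proto=mcp;auth=pat"
fields = parse_aid_record(record)
print(fields["proto"])  # mcp
```

The actual DNS query itself is whatever your resolver library provides (e.g. a TXT lookup on `_agent.example.com`); the parsing above is the only AID-specific step.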
The optional PKA (Public Key Attestation) uses RFC 9421 HTTP Message Signatures with Ed25519 for endpoint proof. It sits underneath SPIFFE and OAuth rather than replacing them. We wrote up the cross-boundary auth problem and the remaining gaps [2].
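For readers unfamiliar with RFC 9421, the signed artifact is a "signature base": one line per covered component plus a final `@signature-params` line, and that string is what the Ed25519 key signs. A sketch of the string construction (the signing step itself, and the key id and timestamp, are omitted or made up here; this is not the PKA implementation, just the RFC 9421 shape):

```python
# Sketch: build an RFC 9421 signature base string.
# Component names (@method, @authority, @path) come from RFC 9421;
# keyid and created are made-up example values. Actual Ed25519 signing
# (e.g. via a crypto library) is intentionally left out.

def signature_base(components: dict[str, str], params: str) -> str:
    """Serialize covered components plus the @signature-params line."""
    lines = [f'"{name}": {value}' for name, value in components.items()]
    lines.append(f'"@signature-params": {params}')
    return "\n".join(lines)

components = {
    "@method": "GET",
    "@authority": "agent.example.com",
    "@path": "/.well-known/agent",
}
params = '("@method" "@authority" "@path");created=1700000000;keyid="test-key";alg="ed25519"'
base = signature_base(components, params)
print(base.splitlines()[0])  # "@method": GET
```

Because the signature covers the request method, authority, and path, a valid signature proves control of the key for that specific endpoint, which is the property PKA relies on.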
One thing the review misses: the .agent TLD. The .agent Community [3] (3,000+ members, 700+ companies) is going through ICANN's Community Priority Evaluation in the 2026 round to get `.agent` under community governance instead of corporate control. AID works on any domain today, but a community-governed TLD for agents is worth knowing about if you care about who controls the naming layer. Spec is at v1.2 [4].
Your conclusion — "define minimum interoperable layers and connect them with verifiable trust" — that's what we're after. DNS is the minimum viable trust anchor that already runs everywhere.
If any of this is interesting, you can join the community at agentcommunity.org.
Web Bot Auth solves a real problem with a real standard. Per-request signatures make automated traffic accountable, and that is the right long-term primitive. In Cloudflare's hands, though, the current implementation is built first for bots, not agents. "Signed agents" reads like a label added to a bot-centric system, not a first-class agent identity fabric. The design also centers Cloudflare as the arbiter and on-ramp, which is great for reliability inside their network and great for their business moat, but not great for an open, decentralized agentic web.
While it builds on standards, as the top poster notes, Cloudflare's version is a moat-driven central registry service, nothing like what a decentralized internet would or should look like.
wham.
Thanks for sharing anecdotal episodes from OAI's inner workings from an eng perspective. I wonder: if OAI weren't married to Azure, would the infra be more resilient and require less eng effort to invent things just to run at scale?
What I haven't seen much of is the split between eng and research, and how people within the company are thinking about AGI, the future, the workforce, etc.
Is it the usual SF wonderland, or is there an OAI-specific value alignment once someone is working there?
Securing and operating the .agent TLD, creating a dedicated domain for AI and autonomous agents. We're building an ecosystem to foster interoperability, bringing together AI developers, researchers, and companies to establish conventions for agent discovery, verification, and communication.
According to D&D 5e, you can only cast Raise Dead if the subject has been dead for less than 10 days. Even if that condition is satisfied, the creature's soul needs to be both willing and at liberty to rejoin the body. So, no.
As a long-time multi-platform user, I have to say I'm sometimes thankful for Apple's centralized provisioning: never having to deal with buggy or suspicious apps is worth the effort.
On the other hand, the lack of competition led to a point where the App Store became an authoritarian system in which Apple is free to play by the rules, or change them, as it sees fit for its bottom line.
Only judging, though; I've never helped.
What would be the consequences of an app store where -
# alternative stores would be allowed like android
# Apple would lower prices
# regional pricing options
# free updates for paid content
# subscriptions for updates, with revenue distributed among the apps a user already paid for
[1] https://aid.agentcommunity.org
[2] https://blog.agentcommunity.org/external_identity_anchor
[3] https://agentcommunity.org/about
[4] https://aid.agentcommunity.org/docs/specification
[5] https://github.com/andre-git/agent-internet-rfcs/issues/1