Hacker News | new | past | comments | ask | show | jobs | submit | cjohnsonpr's comments

We open-sourced Rosetta, a lightweight protocol and CLI that helps AI coding agents build and retain institutional knowledge about a codebase across sessions.

The core idea is simple: instead of re-prompting agents with ad-hoc README excerpts or hoping they infer architecture from code alone, Rosetta introduces a standard file format (ROSETTA.md) that captures agent-relevant context explicitly: architecture, conventions, entry points, gotchas, and append-only agent learnings.

Key aspects:

A structured markdown protocol (ROSETTA.md) designed for machine parsing, not marketing

Append-only “Agent Notes” so agents can persist what they learn over time

Optional scoped modules (.rosetta/modules/*.md) for conditional context loading

A CLI for validation, scaffolding, and bootstrap prompts

A programmatic API for agent frameworks and IDE integrations

This is not another docs generator or LLM wrapper. It is closer to a memory boundary between humans, agents, and codebases.
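To make the append-only idea concrete, here is a minimal sketch of what an agent-side helper might look like. The section name "## Agent Notes" and the entry format are assumptions for illustration, not the actual Rosetta spec:

```python
from datetime import date
from pathlib import Path

NOTES_HEADER = "## Agent Notes"  # hypothetical section name, not from the spec

def append_agent_note(rosetta_path: str, note: str) -> str:
    """Append a dated note to ROSETTA.md without rewriting earlier
    entries, preserving the append-only discipline."""
    path = Path(rosetta_path)
    text = path.read_text() if path.exists() else f"# ROSETTA.md\n\n{NOTES_HEADER}\n"
    if NOTES_HEADER not in text:
        text += f"\n{NOTES_HEADER}\n"
    # New learnings only ever go at the end; history is never edited.
    entry = f"- [{date.today().isoformat()}] {note}\n"
    path.write_text(text + entry)
    return entry
```

The point is that the file, not the agent's context window, is the durable store: each session reads the accumulated notes and may append, but never mutates, what earlier sessions learned.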

Motivation: Most AI coding tools still treat every session as stateless. Humans accumulate institutional knowledge; agents currently do not. Rosetta is an attempt at a minimal, explicit standard for closing that gap.

Repository: https://github.com/metisos/Rosetta_Open_Source

Interested in feedback on:

Whether this should be a protocol vs just a convention

How agents should govern and curate learned notes

Failure modes you’ve seen with persistent agent context



Happy to answer technical questions or hear why this is a bad idea.


We built a searchable registry for AI agents and MCP servers, powered by an open protocol called ADP (Agent Discovery Protocol). The problem: AI agents are proliferating but there's no standardized way to discover them. You can't search for "agents that can query PostgreSQL" or "payment processing agents" and get ranked results with trust metrics. What we built:

Semantic search using vector embeddings (all-MiniLM-L6-v2) to find agents by capability

Trust infrastructure: verification badges, health monitoring, security scoring (0-100), user reviews

Open protocol: ADP v2.0 defines a standardized manifest format (like package.json for npm, but for agents)

Multi-protocol support: REST, MCP, gRPC, GraphQL, WebSocket

How it works: Agents publish ADP manifests at /.well-known/agent.json:

  {
    "aid": "aid://example.com/my-agent@1.0.0",
    "name": "My Agent",
    "capabilities": [{
      "name": "query_database",
      "description": "Execute SQL queries",
      "input_schema": {...},
      "output_schema": {...}
    }],
    "security": {
      "certifications": ["SOC2"],
      "data_policy": "zero_retention"
    }
  }

The registry crawls these manifests (or accepts direct registration), generates embeddings, and makes them searchable.

Tech stack:
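A registry crawler presumably has to reject malformed manifests before indexing them. A minimal validation sketch, with required fields inferred only from the example above (the full ADP spec may require more):

```python
def validate_manifest(manifest: dict) -> list[str]:
    """Return a list of problems with an ADP-style manifest dict.
    Field names follow the example manifest; this is not the full spec."""
    problems = []
    # Top-level fields shown in the example manifest.
    for field in ("aid", "name", "capabilities"):
        if field not in manifest:
            problems.append(f"missing required field: {field}")
    # Agent IDs in the example use an aid:// URI scheme.
    if not str(manifest.get("aid", "")).startswith("aid://"):
        problems.append("aid must use the aid:// scheme")
    # Each capability carries at least a name and a description.
    for i, cap in enumerate(manifest.get("capabilities", [])):
        for field in ("name", "description"):
            if field not in cap:
                problems.append(f"capabilities[{i}] missing {field}")
    return problems
```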

Backend: FastAPI + PostgreSQL with pgvector

Frontend: Next.js 14

Search: Sentence transformers for embedding generation

Deployment: Nginx + PM2

Current status:

51 agents indexed (ChatGPT, Claude Code, Gemini, major MCP servers)

API at /v1/agents, /v1/search, /v1/register

Open-source ADP spec

What's interesting technically:

Federation design: Any organization can run an ADP-compliant registry. Agents aren't locked to our platform.

Trust without centralization: Security scores aggregate multiple signals (certs, uptime, reviews), but agents control their own manifests.

Semantic search on capabilities: not just keyword matching, but an understanding of what agents can actually do.
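The ranking step behind capability search reduces to cosine similarity between a query embedding and each agent's capability embedding. The production system uses all-MiniLM-L6-v2 vectors via pgvector; the toy 2-d vectors below just illustrate the mechanic:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def rank_agents(query_vec, agents):
    """agents: list of (agent_id, capability_vec) pairs.
    Returns agent ids sorted by similarity to the query, best first."""
    scored = [(cosine(query_vec, vec), aid) for aid, vec in agents]
    return [aid for _, aid in sorted(scored, reverse=True)]
```

In practice pgvector performs this scoring inside PostgreSQL (e.g. with an approximate-nearest-neighbor index) rather than in application code.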

Known limitations:

Health monitoring is on-demand, not continuous (working on background jobs)

Security scoring is relatively basic (need more sophisticated threat modeling)

The federation protocol is planned but not implemented yet

Limited to agents that publish manifests (can't auto-discover arbitrary APIs)

Why this matters: As agentic AI becomes more common, we need discovery infrastructure. Right now, finding agents requires GitHub searches or word of mouth. This is like trying to find websites before search engines existed.

Live demo: https://agentic-exchange.metisos.co

ADP spec: https://github.com/metisos/adp-protocol

API docs: https://agentic-exchange.metisos.co/develop


Today we launched the Metis Agents Platform: https://www.producthunt.com/posts/metis-agents-platform

Metis Agents is built to make creating and running AI agents easier. You can design an agent, set its role and behavior, and have it handle tasks or collaborate with your team.

We’ve been heads-down building and testing for months, and we’d love feedback from the HN community. Questions about the architecture, agent runtime, or anything technical are very welcome—we’re happy to go deep on the stack.

Thanks for taking a look, Christian


Rebuilt our agentic framework this week to solve tool integration hell. Instead of treating internal and external tools differently, everything now speaks MCP (Model Context Protocol). Key technical wins:

Horizontal architecture eliminates sequential bottlenecks

Auto-discovery means agents find and chain tools dynamically

External MCP tools integrate with zero friction

Internal tools follow the same protocol as external ones

Example: Agent can automatically chain GoogleSearch → ContentGeneration → FileSystem → EmailTool for complex workflows, all discovered at runtime.
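One way to sketch that runtime chaining: if each discovered MCP tool advertises an input and output type, finding a chain is a graph search over types. The catalog and type names below are hypothetical, chosen to mirror the example above:

```python
from collections import deque

# Hypothetical catalog: each tool advertises (input_type, output_type).
TOOLS = {
    "GoogleSearch": ("query", "search_results"),
    "ContentGeneration": ("search_results", "document"),
    "FileSystem": ("document", "file_path"),
    "EmailTool": ("file_path", "sent_email"),
}

def plan_chain(start: str, goal: str) -> list[str]:
    """BFS over advertised types to find a tool chain from start to goal.
    Returns the tool names in execution order, or [] if no chain exists."""
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        current, chain = queue.popleft()
        if current == goal:
            return chain
        for name, (inp, out) in TOOLS.items():
            if inp == current and out not in seen:
                seen.add(out)
                queue.append((out, chain + [name]))
    return []
```

Because the catalog is built by discovery at runtime, adding an external MCP tool automatically makes new chains plannable without code changes.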

Open source, Python:

  pip install metis-agent


We’re releasing Adaptive Recursive Consciousness (ARC), an open-source layer that plugs into any causal-LM checkpoint and lets it keep learning after deployment.

Why it matters: A conventional language model freezes the moment training stops; every conversation thereafter is a missed learning opportunity. ARC flips that script: it performs lightweight LoRA weight updates in real time, absorbing new facts, refining style, and building a reasoning graph while it runs. No offline fine-tuning, no epoch schedules, zero downtime.

What ARC brings:

On-the-fly LoRA updates: gradients are applied during generation, so the model improves without a restart.

Biologically inspired learning gates: novelty, relevance, and emotional salience decide what gets stored, much like human memory consolidation.

Hierarchical memory and reasoning graph: working memory, episodic recall, and a growing concept network support long-range reasoning.

Cognitive inhibition and metacognition: built-in filters damp off-topic rambles, repetitive loops, and AI-centric digressions.

Lean, fast outputs: in a 30-round TinyDolphin-GPT-2 benchmark, ARC cut latency by roughly half and reduced perplexity by more than a third while tightening answers and slightly boosting coherence.
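For readers unfamiliar with why LoRA makes online updates cheap: the adapter adds a low-rank correction B @ A on top of frozen base weights, so only the tiny A and B matrices are touched. A pure-Python sketch of the merge step (illustrative only, not ARC's implementation):

```python
def matmul(A, B):
    """Multiply two matrices given as lists of rows."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def apply_lora(W, A, B, alpha=1.0, rank=1):
    """Effective weights W' = W + (alpha / rank) * (B @ A).
    W is d_out x d_in; A is rank x d_in; B is d_out x rank.
    Only A and B are trained, so an online gradient step updates
    far fewer parameters than full fine-tuning would."""
    delta = matmul(B, A)
    scale = alpha / rank
    return [[w + scale * d for w, d in zip(w_row, d_row)]
            for w_row, d_row in zip(W, delta)]
```

Because the base W stays frozen, an update can be applied (or rolled back) between generation steps without reloading the checkpoint.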

Quick start:

  pip install metisos-arc-core

PyPI: https://pypi.org/project/metisos-arc-core/

GitHub: https://github.com/metisos/arc_coreV1

Performance snapshot (TinyDolphin base vs. ARC): Across 30 blind evaluation rounds, ARC:

lowered perplexity from 19.5 to 12.2, indicating cleaner, more fluent language

cut average generation time from 4.84 s to 2.22 s, a 54 percent speed-up

trimmed answers by about 38 percent without losing substance

lifted a simple coherence score by 8 percent

nudged heuristic factuality upward by 6 percent
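The headline percentages follow directly from the raw numbers reported above; a quick check (the helper function is ours, not part of the package):

```python
def pct_reduction(before, after):
    """Percentage reduction from a before value to an after value."""
    return 100.0 * (before - after) / before

# Perplexity 19.5 -> 12.2: about a 37 percent drop ("more than a third").
perplexity_drop = pct_reduction(19.5, 12.2)
# Generation time 4.84 s -> 2.22 s: about a 54 percent reduction.
latency_drop = pct_reduction(4.84, 2.22)
```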

Taken together, these gains translate to roughly a 25 percent overall improvement across the weighted metric bundle we report in the accompanying paper.

What's next: Version 1 is our foundation. We're already experimenting with multi-modal memory, finer-grained safety rails, and adapters tuned for newer 7- and 13-billion-parameter bases. If you're building agents, tutors, or autonomous tools that need to learn on the fly, we'd love to hear from you: file an issue, open a pull request, or email us at cjohnson@metisos.com.

— The Metis Analytics research group

