
OpenSwarm isolates context at the agent level — each worker is spawned via Claude Code’s -p flag, so there’s no shared conversation history between agents. The only shared state is written artifacts and a global work memory layer (CLAUDE.md + structured output). Each instance treats that as its single source of truth, rather than reading other agents’ raw context. One thing I’m actively formalizing: a CONFIDENCE-HALT mechanism. Currently it lives as a defined concept in CLAUDE.md, but the next revision will have OpenSwarm inject it explicitly into each worker context — so low-confidence streaks trigger a halt before they compound. Your {action, result, confidence: 0-3} logging pattern is basically the same instinct. Still early, but converging fast. Curious how you handle the structured log schema — do you version it across runs?
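
To make that concrete, here's a minimal sketch of the halt check, assuming the 0-3 confidence scale; the names (ConfidenceHalt, CONFIDENCE_FLOOR, STREAK_LIMIT) are hypothetical, not OpenSwarm's actual API:

    from collections import deque

    CONFIDENCE_FLOOR = 2   # hypothetical: scores below this count as "low"
    STREAK_LIMIT = 3       # hypothetical: consecutive low scores before halting

    class ConfidenceHalt:
        """Track a worker's recent confidence; halt before low-confidence work compounds."""

        def __init__(self) -> None:
            self.recent = deque(maxlen=STREAK_LIMIT)

        def record(self, confidence: int) -> bool:
            """Log one {action, result, confidence: 0-3} score; return True if the worker should halt."""
            self.recent.append(confidence)
            return (len(self.recent) == STREAK_LIMIT
                    and all(c < CONFIDENCE_FLOOR for c in self.recent))

The point of the streak (rather than halting on any single low score) is to tolerate one shaky step while still cutting off a compounding run.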

On collaboration between two different models — I’d love to explore that. Expanding model compatibility to broader providers (Codex, Aider, and other API models) is already on my roadmap. I’m planning to add a reviewer feature that supports multiple models, configurable simply by adding an API key to the .env file. Thanks for the suggestion!
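
If that lands as planned, reviewer selection could be as simple as probing the environment for keys. A toy sketch, with hypothetical env var names rather than shipped OpenSwarm config:

    import os

    # Hypothetical provider -> env key mapping; a key's presence enables that reviewer.
    REVIEWER_PROVIDERS = {
        "anthropic": "ANTHROPIC_API_KEY",
        "openai": "OPENAI_API_KEY",
        "openrouter": "OPENROUTER_API_KEY",
    }

    def available_reviewers() -> list[str]:
        """Return providers whose API key is set (assumes .env is already loaded)."""
        return [name for name, var in REVIEWER_PROVIDERS.items() if os.getenv(var)]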

I’ve been running OpenClaw Docker agents in Slack in a similar setup, using Gemini 2.5 Flash Lite through OpenRouter for most tasks, then Opus 4.6 and Codex 5.3 for heavier lifts. They share context via embeddings right now, but I’m going to try parameterizing them like you suggested, because they can drift pretty hard once a hallucinated idea takes off. I’m trying to get to a point where I don’t have to babysit them. I’ve also been thinking about giving them some “democracy” under the hood with a consensus policy engine. I’ve started tinkering with an open-source version of that called consensus-tools that I can swap between agentic frameworks. I’m checking whether it can work with OpenSwarm too.
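
The simplest policy for that kind of democracy is a majority vote over agent proposals; a toy sketch (consensus-tools' real interface may well differ):

    from collections import Counter

    def majority_decision(proposals: dict[str, str], quorum: float = 0.5) -> str | None:
        """Return the proposal backed by more than `quorum` of agents, else None (escalate to a human)."""
        if not proposals:
            return None
        winner, votes = Counter(proposals.values()).most_common(1)[0]
        return winner if votes / len(proposals) > quorum else None

    # e.g. majority_decision({"a": "merge", "b": "merge", "c": "revert"}) -> "merge"

Returning None on a split vote is the useful part: it turns disagreement between agents into an explicit escalation point instead of letting one drifting agent win by default.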

For the current build, OpenSwarm uses a max retry count with an escalation scheme: the first worker starts with Haiku, and if the tester/reviewer blocks enough times, it escalates to Sonnet. Each pipeline step updates Linear's updates tab with iteration count and total cost, so there's a full audit trail per issue. Failed jobs stay 'in progress' or 'in review' in Linear rather than being auto-closed. I'm currently working on an 'Auditor' layer that analyzes why jobs failed — and longer term, the goal is for OpenSwarm to maintain itself using its own agents. That said, not every failure should be resolved automatically. Some errors genuinely need human judgment, and the dashboard chat interface and Discord are there for exactly that. I think knowing when to hand off to a human is part of what makes an autonomous system actually trustworthy.
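
As a sketch, that escalation ladder reduces to a small lookup; the constants and function here are illustrative, not OpenSwarm internals:

    ESCALATION_LADDER = ["claude-haiku", "claude-sonnet"]  # cheap tier first
    MAX_BLOCKS_PER_TIER = 3  # hypothetical: reviewer blocks allowed before moving up

    def pick_model(block_count: int) -> str | None:
        """Map cumulative reviewer blocks to a model tier; None means stop retrying
        and leave the issue open in Linear for a human."""
        tier = block_count // MAX_BLOCKS_PER_TIER
        return ESCALATION_LADDER[tier] if tier < len(ESCALATION_LADDER) else None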

Kind of. My point is that agent orchestrators become genuinely useful when the framework is specific about what's safe to delegate to machines — things that reduce friction in CI/CD operations, not agents that fire off iMessages, click around in browsers, or delete files without approval.

The worker-reviewer pipeline typically runs 1–2 self-revision iterations. In my experience, agents handle most tasks fine, but they tend to miss quality gates — docstrings, minor business logic edge cases, that kind of thing. The reviewer catches what slips through on the code quality side. This is all based on observed behavior from daily Claude Code CLI usage, where I've added hooks specifically to catch systematic failure patterns. OpenSwarm is essentially a productized version of that scaffolding from my actual workflow, packaged into a more reusable architecture.

On context drift — good call, and yeah, that's exactly why the shared memory layer matters. LanceDB keeps the grounding consistent across the chain, so each agent isn't just working off its own drifting interpretation.

As for disagreements: right now the reviewer blocks and the worker retries with feedback, with a hard cutoff to prevent infinite loops. It's simple, but it works — the revision depth rarely needs to go beyond 2 rounds. And when it does fail, that's actually the useful signal: especially when you're triaging larger projects, the points where agents break down are exactly where a human engineer needs to step in. At this point, what OpenSwarm really needs is broader testing from other users to validate these patterns outside my own workflow.
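
Concretely, that retry loop with a hard cutoff is roughly this shape (run_worker and run_reviewer stand in for the actual agent calls):

    MAX_ROUNDS = 2  # revision depth rarely needs to go beyond 2 rounds

    def worker_reviewer_loop(task: str, run_worker, run_reviewer) -> tuple[str, bool]:
        """Return (artifact, approved); an unapproved result is the handoff signal."""
        feedback = None
        artifact = None
        for _ in range(MAX_ROUNDS + 1):  # initial attempt plus MAX_ROUNDS revisions
            artifact = run_worker(task, feedback)
            approved, feedback = run_reviewer(artifact)
            if approved:
                return artifact, True
        return artifact, False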
