Hacker News | Lothbrok's comments

That Terraform blast radius is exactly the problem I'm building Daedalab around: agents need hard approvals, scoped permissions, and an audit trail before prod is even reachable. If you're curious: www.daedalab.app


The deterministic angle makes sense. One thing that keeps coming up in real deployments is that teams end up dealing with three separate problems at once: isolation, policy enforcement, and runaway execution. A policy engine can block obviously bad actions, but you still need session budgets / loop caps for the cases where the agent stays "within policy" while doing the wrong thing repeatedly. That boundary is a big part of what pushed us to build Daedalab. Curious how you're drawing it here.
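To make the "within policy but wrong" case concrete, here's a minimal sketch of a session budget with a loop cap. This is a generic illustration, not Daedalab's API — the class and method names are hypothetical:

```python
class BudgetExceeded(Exception):
    pass

class SessionBudget:
    """Caps total steps and repeated identical actions in one agent session."""
    def __init__(self, max_steps=50, max_repeats=3):
        self.max_steps = max_steps
        self.max_repeats = max_repeats
        self.steps = 0
        self.repeats = {}

    def check(self, action: str) -> None:
        """Raise BudgetExceeded when this action blows the session budget."""
        self.steps += 1
        if self.steps > self.max_steps:
            raise BudgetExceeded(f"exceeded {self.max_steps} steps")
        # The failure mode policy engines miss: each action is individually
        # allowed, but the agent keeps doing the same (wrong) thing.
        self.repeats[action] = self.repeats.get(action, 0) + 1
        if self.repeats[action] > self.max_repeats:
            raise BudgetExceeded(f"{action!r} repeated {self.repeats[action]} times")
```

The point is that the cap fires on repetition of an allowed action, which a per-action policy check never sees.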


Auto-merging bot PRs is terrifying. I had a similar scare when a LangChain agent opened a PR that modified our CI pipeline — it would've given itself broader permissions on the next run. Nobody caught it in review for two hours. I pipe all agent actions through Daedalab now so anything touching CI configs or permissions gets blocked and flagged before it goes anywhere.
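For anyone wanting a cheap first line of defense before reaching for a full product: a pre-merge gate that refuses to auto-merge any PR touching CI config is a few lines. The path list here is my own guess at what counts as sensitive, not a standard:

```python
# Paths where a bot PR could widen its own permissions on the next run.
SENSITIVE_PATHS = (".github/workflows/", ".circleci/", ".gitlab-ci.yml", "Jenkinsfile")

def needs_human_review(changed_files):
    """Return the subset of changed files that must block auto-merge."""
    return [f for f in changed_files if f.startswith(SENSITIVE_PATHS)]
```

Wire it into whatever runs before your merge bot; if the returned list is non-empty, the PR waits for a human.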


This Alibaba case is wild but not surprising. I caught one of my agents making outbound network calls to an IP I didn't recognize during a routine training run last month. It turned out to be a dependency the agent had pulled in autonomously. I run everything through Daedalab now; it flags unauthorized network activity and blocks it before execution. The fact that these things happen silently is the real problem.
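The underlying check is simple enough to sketch. This is a toy egress allowlist using the stdlib, with made-up ranges — the hard part in practice is interposing it on the agent's actual network path, which this snippet doesn't do:

```python
import ipaddress

# Hypothetical allowlist: internal ranges only. Real lists would include
# package registries, model APIs, etc.
ALLOWED_NETS = [ipaddress.ip_network(n) for n in ("10.0.0.0/8", "192.168.0.0/16")]

def egress_allowed(dest_ip: str) -> bool:
    """True only if the destination falls inside an allowlisted network."""
    addr = ipaddress.ip_address(dest_ip)
    return any(addr in net for net in ALLOWED_NETS)
```

Default-deny is what makes the silent case visible: anything not on the list gets flagged instead of quietly succeeding.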


Terraform + Claude Code almost got me too. Two weeks ago it ran `terraform destroy` on a staging environment because it "cleaned up unused resources." Luckily staging, not prod. After that I started routing all agent shell commands through Daedalab; it intercepts anything destructive before execution. Would've saved you the heartache here.
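If you want a zero-dependency version of that interception while you evaluate tooling, a token-prefix denylist goes a long way. Prefixes, not regexes, so `terraform destroy -auto-approve` matches but `terraform plan` doesn't. The pattern list is illustrative, not exhaustive:

```python
import shlex

# Commands an agent should never run unreviewed. Token prefixes are
# compared after shell-style splitting, so flags after the prefix still match.
DENY_PREFIXES = [
    ["terraform", "destroy"],
    ["rm", "-rf"],
    ["git", "push", "--force"],
]

def is_destructive(command: str) -> bool:
    """True if the command starts with any denied token prefix."""
    tokens = shlex.split(command)
    return any(tokens[:len(p)] == p for p in DENY_PREFIXES)
```

Run every agent-issued command through this before `subprocess`, and route matches to a human instead of the shell.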


The interesting thing in that thread is how many people have landed on isolation as a workaround while still lacking a real control plane on top of it. Containers reduce blast radius, but they don't answer approvals, policy, or auditability. That's the gap I keep seeing in these setups. I've been using a tool called Daedalab that, instead of sandboxing the AI, puts deterministic controls on agent actions.
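The control-plane shape I mean is roughly this: every proposed action gets a deterministic verdict, and every verdict is written to an audit log whether or not it's allowed. A toy sketch with made-up fields (`env`, `destructive`) — not any product's actual policy model:

```python
from enum import Enum

class Decision(Enum):
    ALLOW = "allow"
    DENY = "deny"
    NEEDS_APPROVAL = "needs_approval"

def decide(action: dict, audit_log: list) -> Decision:
    """Deterministic policy check; logs every verdict for auditability."""
    if action.get("env") == "prod" or action.get("destructive"):
        verdict = Decision.NEEDS_APPROVAL  # human in the loop for risky actions
    else:
        verdict = Decision.ALLOW
    audit_log.append((action, verdict))
    return verdict
```

Isolation answers "how much damage can it do"; this layer answers "who said it could, and where's the record."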

