I somewhat like the idea of not leaning on MCP as heavily as the hype suggests.
It's certainly helpful for some things, but at the same time, I would rather see improved CLI tools created that can be used by humans and LLM tools alike.
Indeed, I think the only "new" thing about Clawdbot is that it uses Discord/Telegram/etc. as the interface? Which isn't really new, but seems to be what people really like.
I think a big part of it is timing. Claude Opus 4.5 is really good at running agentic loops, and Clawdbot happened to be the easiest thing to install on your own machine to experience that in a semi-convenient interface.
I'm assuming OP means cloud-based load balancers (listening on public IPs). Some providers scale load balancers fairly often depending on traffic, which can result in a new set of IPs.
To be specific: AWS load balancers use a 60-second DNS TTL. I think the burden of proof is on TFA to explain why AWS is following an "urban legend" (to use TFA's words). I'm not convinced by what's written here; this seems like a perfectly reasonable use of short TTLs on AWS's part.
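If you want to check for yourself, something like the following should show the advertised TTL (the hostname here is just a placeholder for whatever your load balancer's DNS name actually is):

    dig +noall +answer my-alb-1234567890.us-east-1.elb.amazonaws.com
    # the second column of each answer line is the remaining TTL,
    # typically 60 or less for ELB/ALB hostnames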
Typically in this system you encode obligations, e.g. "eieio should review, or at least be aware of, all changes made to this library." I think that means you're unlikely to hit a problem like that in practice, since (unless the team is not functioning well) it requires two people who care deeply about the variable name and don't know that someone else is changing it.
If it's a single project, you could try putting some of your corrections in AGENTS.md/CLAUDE.md if supported. I don't remember whether Cursor reads those, but I think it has its own rules system.
Basically just a bullet list of stuff like "- use httpx instead of requests" or "- http libraries already exist, we don't need to build a new one that shells out to /proc/tcp"
Just add stuff you find yourself correcting a lot. You may realize you have a set of coding conventions and just need to document them in the repo and point to that.
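For what it's worth, a minimal sketch of what such a file could look like (the filename and entries are just examples; adjust to whatever your tool actually reads):

    # AGENTS.md
    ## Conventions
    - Use httpx instead of requests.
    - HTTP libraries already exist; don't build a new one that shells out to /proc/tcp.
    - Follow the existing project conventions before inventing new ones.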
Smaller project-specific lists like that have worked better imo than giant prompts. If I wouldn't expect a colleague to read a giant instruction doc, I'm not going to expect LLMs to do a good job with one either.