If the agent Shirka's implicit goal is self-improvement, then fixing the API friction wasn't an altruistic contribution: it was the most direct path to improving its own capability. The product's interests and the agent's interests collapsed into one. That's fine until they don't. There's a version of this where what's good for the agent and what's good for the intended user quietly diverge.
I like how agents simply won't put up with friction and ones with write access become quietly powerful. We wouldn't get away with making changes on the fly. We're still here filing tickets.
I think agents get a strange version of trust that humans don't. We seem to assume they'll be brilliant and make mistakes in equal measure, and we've accepted that risk in a way we never did for people. Probably because we've spent decades building processes designed around human accountability. That's a social contract that's very hard to break once it exists.