What's more concise than code? In my experience, by the time I've gotten an English-plus-code description accurate enough for an agent, I could have done it myself. Typing isn't the hard part.
LLMs/agents have many other uses, but if you're not offloading your thinking, you're not really going any faster by letting them write code via a prompt.
A traditional programming language still wins there. "git clone $TETRIS_CLONE_REPO" is fewer words, gets you 100% of the way, and only takes seconds to produce the result.
But the topic at hand is novel problems. Can you describe your novel solution to an LLM in natural language with less effort than in a programming language, which is already designed for describing novel solutions as clearly and concisely as possible?
I find it quite interesting; there seems to be a set of AI enthusiasts who heavily offload their thinking onto the LLM. There has to be a difference in how they function, because I find that as soon as I drift into letting the LLM think for me, my productivity plummets.
The real secret to agent productivity is letting go of your understanding of the code and trusting the AI to generate the proper thing. Very pro-agent devs like ghuntley all say this.
And it makes sense. For most coding problems the challenge isn't writing code; once you know what to write, typing it out is a drop in the bucket. AI is still very useful, but if you really wanna go fast you have to give up on your understanding. I've yet to see this work well outside of blog posts, tweets, boardroom discussions, etc.
> The real secret to agent productivity is letting go of your understanding of the code and trusting the AI to generate the proper thing
The few times I've done that, the agent eventually faced a problem/bug it couldn't solve and I had to go and read the entire codebase myself.
Then I found several subtle bugs (like writing private keys to disk despite an explicit instruction not to). I eventually ended up refactoring most of it.
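To give a flavour of how subtle that kind of bug can be, here's a hypothetical TypeScript sketch of the pattern (invented names, not the actual code): a helper that looks like harmless debug logging but quietly persists the private key.

    import { writeFileSync } from "node:fs";

    interface KeyPair { publicKey: string; privateKey: string; }

    // Looks like an innocuous debug helper...
    function logKeyPair(pair: KeyPair): void {
      // ...but serializing the whole object writes the private key
      // to disk, despite the "keep keys in memory only" instruction.
      writeFileSync("debug-keys.json", JSON.stringify(pair));
    }

    logKeyPair({ publicKey: "pub...", privateKey: "priv..." });

Nothing in that snippet looks wrong at a glance, which is exactly why it survives review when you've stopped reading the code.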
It does have value for coming up with boilerplate code that I then tweak.
Fixing code now is orders of magnitude cheaper than fixing it in a month or two when it hits production.
Which might be fine if you're writing proof-of-concept or low-risk code, but it can also bite you hard when there's a bug actively bleeding money and not a single person or AI agent in the house knows how anything works.
That's just irresponsible advice. There is so little actual evidence of this technology being able to produce high-quality, maintainable code that asking us to trust it blindly is borderline snake-oil peddling.
You can use an agent while still understanding the code it generates in detail. In high stakes areas, I go through it line by line and symbol by symbol. And I rarely accept the first attempt. It’s not very different from continually refining your own code until it meets the bar for robustness.
Agents make mistakes which need to be corrected, but they also point out edge cases you haven’t thought of.
Definitely agreed; that's what I do as well.
At that point you have a good understanding of the code, in contrast to what the post I responded to suggests.
I agree and am the same. Using them to enhance my knowledge, as well as autocomplete on steroids, is the sweet spot. It's much easier to review code if I'm "writing" it line by line.
I think the reality is that a lot of code out there doesn't need to be good, which is why so many people benefit from agents etc.
Not to burst your bubble, but I've seen agents expose Stripe credentials by hardcoding them as plain text in a React frontend app. So no, kids, do not "let go" of code understanding, unless you want to be the next story along the lines of "AI dropped my production database".
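For anyone who hasn't seen it, the anti-pattern looks roughly like this hypothetical sketch (placeholder values, not the real code): Stripe's secret key must stay server-side, and only the publishable key is safe to ship in a frontend bundle.

    // Anti-pattern: a secret key hardcoded in frontend source ships
    // in the JS bundle to every visitor who opens dev tools.
    const STRIPE_SECRET_KEY = "sk_live_...";  // placeholder; never do this

    // Safer: the frontend only ever holds the publishable key, and
    // the secret key lives in an environment variable on the server.
    const STRIPE_PUBLISHABLE_KEY = "pk_live_...";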