The things that ChatGPT or Claude spit out are impressive one-shots but hard to iterate on or integrate with other code.
And you can’t just throw Aider/Cursor/Copilot/etc. at the original output without quickly making a mess. At least not unless you're nudging it in the right direction at every step, occasionally jumping in and writing code yourself, fixing/refactoring the LLM's code to fit your style and needs, and so on.
This is how I use Cursor Composer Agents: give it a detailed outline up front and see what it comes up with, then use it to iterate on that idea. Sometimes it breaks things, so I'll reject/revert the change and ask again, telling it not to touch XYZ. If it starts going down the wrong path, I'll stop it and code it myself. But I've run into cases where the next question I ask seems to be answered against the state of the code from its last change, not the current state after my own edits. That can be frustrating.
I've really only done greenfield hobby projects with it so far. I'm hesitant to throw it at larger codebases that have been growing for 8 or 9 years. But there's always undo or `git reset`. :P
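To spell out that safety net: it's nothing tool-specific, just a checkpoint commit before the agent touches anything, so any mess is one `git reset` away. Roughly:

```sh
# Checkpoint the working tree before letting the agent edit anything
git add -A && git commit -m "checkpoint: before agent changes"

# ...let the agent run, then review what it actually did...
git diff

# Happy? Keep the agent's work as its own commit.
git add -A && git commit -m "agent: <describe the change>"

# Mess? Discard everything since the checkpoint.
git reset --hard HEAD     # drops the agent's uncommitted edits
# or, if its work was already committed:
git reset --hard HEAD~1
```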
Working != maintainable