> Like mine will keep forgetting about nullish coalescing (??) in JS, and even after I fix it up it will revert my change in its future changes. So of course I put that rule in and it won't happen again.
I'm surprised that this sort of pattern - you fix a bug and the AI undoes your fix - is common enough for the author to call it out. I would have assumed the model wouldn't be aggressively editing existing working code like that.
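For anyone who hasn't hit this, here's a minimal sketch of the kind of reversion being described (my own illustration, not the commenter's actual code):

    const config = { retries: 0 };

    // ?? only falls back when the left side is null or undefined,
    // so an explicit 0 survives.
    const retries = config.retries ?? 3;       // 0

    // || falls back on any falsy value, the older pattern the model
    // keeps regenerating, which silently discards the 0.
    const retriesLoose = config.retries || 3;  // 3

The regression still looks plausible in review, which is what makes it so easy for the model to sneak back in.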
Yeah I have seen this a bunch of times as well. Especially with deprecated function calls. It generates a bunch of code. I get deprecation warnings. I fix them. Copilot fixes them back. I have to explicitly point out that I made the change for it to not reintroduce the deprecations.
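A concrete Node.js example of that loop, assuming the deprecated Buffer constructor case (my example, not necessarily the commenter's):

    // Deprecated in Node (DEP0005); emits a deprecation warning.
    // This is the form the model keeps generating.
    const buf = new Buffer('hello');

    // The fix you apply by hand, and then have to defend on every
    // subsequent AI edit:
    const fixed = Buffer.from('hello');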
I guess code that compiles is easy to train for, but code that avoids deprecation warnings less so?
There are other changes like this that I've had to tell the AI I made so it wouldn't revert them, but I can't recall any specific examples.
It’s due to a problem with Cursor not updating its state for files that have been manually edited since the last time they were used in the chat, so it’ll think the fix is not there and blindly output code that doesn’t have it. The ‘apply’ model is dumb, so it just overwrites the corrected version with the wrong one.
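In pseudocode, the failure mode looks something like this (all names here are hypothetical; this is a sketch of the described behaviour, not Cursor's actual internals):

    // chatContext, model and applyEdit are made-up names.
    const snapshot = chatContext.getFile('utils.js'); // cached from earlier in the chat
    // ...meanwhile the user fixes utils.js on disk...
    const edit = model.suggestEdit(snapshot); // generated against the stale copy
    applyEdit('utils.js', edit);              // blindly clobbers the manual fix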
I think the changelog said they fixed it in 0.46, but that’s clearly not the case.
Yep, I asked about this exact problem the other day: https://news.ycombinator.com/item?id=43308153 Having something like “always read the current version of the file before suggesting edits” in Cursor rules doesn’t help; the agent only sometimes reads the current file. Guess no one has a reliable solution yet.
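For reference, this is the sort of line I mean, placed in a .cursorrules file at the repo root (freeform text that Cursor feeds to the model; wording taken from above):

    Always read the current version of the file before suggesting edits.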
Cursor in agent mode + Sonnet 3.7 love nothing better than rewriting half your codebase to fix one small bug in a component.
I've stopped using agent mode unless it's for a POC where I just want to test an assumption. Applying each step takes a bit more time, but it means less rogue behaviour and better long-term results IME.
Reminds me of my old co-worker who rewrote our code to be 10x faster but 100x more unreadable. AI agent code is often the worst of both worlds. I'm going to give this guy's strategy [0] a shot.
If you stopped using agent mode, why use Cursor at all and not a simple plugin for VSCode? Or is there something else that Cursor can do, but a VSCode plugin can't?