Same result - I tried it for a while out of curiosity but the improvements were a false economy: time saved in one PR is time lost to unplanned work afterwards. And it is hard to spot the mistakes because they can be quite subtle, especially if you've got it generating boilerplate or mocks in your tests.
Makes you look more efficient but it doesn't make you more effective. At best you're just taking extra time to verify the LLM didn't make shit up, often by... well, looking at the docs or the source... which is what you'd do writing hand-crafted code lol.
I'm switching back to emacs and looking at other ways I can integrate AI capabilities without losing my mental acuity.
> And it is hard to spot the mistakes because they can be quite subtle
aw yeah; recently I spent half a day pulling my hair out debugging some cursor-generated frontend code, only to find out the issue was buried in some... obscure experimental CSS properties which broke a default button behavior across all major browsers (not even making this up).
Velocity goes up because you produce _so much code so quickly_, most of which seems to be working; managers are happy, developers are happy, people picking up the slack - not so much.
I obviously use LLMs to some extent during daily work, but going full-on blind mode on autopilot is gonna crash the ship at some point.
Just your run-of-the-mill hallucinations, e.g. mocking something in pytest and only realising afterwards that the mock itself was hallucinated, the test was built on the mock, and so the real behaviour was never covered.
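A minimal sketch of that failure mode (all names here are invented for illustration): a plain `MagicMock` will happily grow whatever method the test calls, so a test built on a hallucinated method passes while the real class would blow up. `unittest.mock.create_autospec` is one way to catch it, since an autospec'd mock only exposes the real interface:

```python
from unittest import mock

class PaymentClient:
    """Hypothetical real client: the only method is charge()."""
    def charge(self, amount):
        return {"status": "ok", "amount": amount}

def process(client, amount):
    # Suppose generated code calls a method the real class doesn't have
    return client.charge_card(amount)

# A plain MagicMock invents charge_card on the fly, so a test against it
# passes even though the real PaymentClient would raise AttributeError:
loose = mock.MagicMock()
process(loose, 10)  # "works" - the hallucination goes unnoticed

# An autospec'd mock is constrained to the real interface and fails loudly:
strict = mock.create_autospec(PaymentClient)
try:
    process(strict, 10)
except AttributeError as e:
    print(f"caught: {e}")
```

Same idea applies when patching: `mock.patch(..., autospec=True)` refuses to fabricate attributes that don't exist on the real object.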
I mean, I generally avoid using mocks in tests for that exact reason, but if you expect your AI completions to always be wrong you wouldn't use them in the first place.
Beyond that, the tab completion is sometimes too eager and gets in the way of actually editing, and is particularly painful when writing up a README where it will keep suggesting completely irrelevant things. It's not for me.
> the tab completion is sometimes too eager and gets in the way of actually editing
Yeah, this is super annoying. The tab key was already overloaded between built-in intellisense stuff and actually wanting to insert tabs/spaces; now there are three things competing for it.
I'll often just want to insert a tab, and end up with some random hallucination getting inserted somewhere else in the file.
Seriously, give us our tab key back! I changed accept-suggestion to Shift+Tab.
But still there is too much noise now. I don't look at the screen while I'm typing so that I'm not bombarded by this eager AI trying to distract me with guesses. It's like a little kid interrupting all the time.
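For VS Code-style editors, that remap can be sketched in `keybindings.json`. The command id below is the stock VS Code inline-suggestion one; Cursor and other forks may use their own ids, so treat this as an illustration:

```json
// keybindings.json - move inline-suggestion accept off plain Tab
[
  {
    "key": "shift+tab",
    "command": "editor.action.inlineSuggest.commit",
    "when": "inlineSuggestionVisible && textInputFocus"
  },
  // the leading "-" removes the default Tab binding for the same command
  { "key": "tab", "command": "-editor.action.inlineSuggest.commit" }
]
```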
You can tell it to check its work before and after; when it doesn't do something, call it out and it can improve.
Also, telling it not to code, or not to jump to solutions, is important. If there's a file outlining how you like to approach different kinds of problems, it can take that into consideration more intuitively. It takes some practice to pay attention to your internal dialogue and write it down.
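A sketch of what such a file might contain (the filename and format depend on your tool, e.g. a project-level rules/instructions file; every line below is just an illustration of the kind of guidance meant, not a recommended template):

```
# How I want you to work in this repo
- Don't write code until asked; propose an approach first and wait.
- Before claiming an API or method exists, check the docs or the source.
- Prefer real test doubles over mocks; never invent a method just to mock it.
- After each change, re-run the affected tests and report failures verbatim.
```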