Precisely. I think it helps with smaller/mundane tasks (that it has seen in its training), but the tasks that actually require higher-level reasoning and an understanding of the bigger picture are not something we can expect the current LLMs to do.
I agree with this mostly - but recently a bug was introduced into our app because of a copilot suggestion that wasn't checked thoroughly enough by the engineer (it suggested a property that was similarly named to another property but not the same).
Like you say, it makes the most sense for repetitive or easy tasks.
My usage of Copilot is dramatically higher in strictly typed languages because of things like this. It's almost counter-productive if I have to very carefully analyze every variable name to make sure it's not subtly wrong.
Having a compiler do that validation of AI output helps dramatically so I only have to validate logic and not every freaking character in the function.
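A minimal Go sketch of why the compiler helps here (the struct and field names are hypothetical, invented for illustration): a suggestion that references a similarly named but nonexistent field simply fails to compile, so the subtle typo never ships.

```go
package main

import "fmt"

// Hypothetical config struct illustrating the "similarly named
// properties" problem from the thread.
type ServerConfig struct {
	ReadTimeoutSec  int
	WriteTimeoutSec int
}

func readTimeout(c ServerConfig) int {
	// A completion that suggested `c.ReadTimeout` (a field that doesn't
	// exist) would be rejected at compile time:
	//     return c.ReadTimeout // compile error: undefined field
	return c.ReadTimeoutSec
}

func main() {
	cfg := ServerConfig{ReadTimeoutSec: 5, WriteTimeoutSec: 10}
	fmt.Println(readTimeout(cfg))
}
```

In a dynamic language the wrong name would only blow up at runtime, if it blows up at all, which is exactly the "check every freaking character" problem.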
This is why I have Copilot write unit tests too :D
Actually it does the boring bit of generating the test data and the basic cases; I'll do a once-over and add more if it's something that warrants it.
> I can just tell Copilot to do it and check its work
Checking other entities' code is not trivial and very error-prone.
I get what you're saying, but I have my doubts about whether doing the whole thing manually would really be slower than asking an assistant and then doing an extensive code review.
This is highly repetitive code where the options are pretty much either me copy-pasting a piece of code and changing a bit here and there or having an AI do it.
The latter won't make stupid small mistakes; I will (and have).
And I'm checking like 10 lines at a time, related to code in the context I've got in my head.
I need to review 100x bigger PRs done by humans of varying skill regularly, related to other parts of the project I'm not intimately familiar with.
You tell me. You are the one who said that code was repetitive to generate. :)
So it turns out, not so repetitive after all then?
I remember devising my own mini DSL when I had to produce 250+ such endpoints and validators. Three days spent building it, then I ran the command and had working code 30 seconds later. Felt like a god.
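A toy version of that approach, assuming Go's `text/template` as the generator (the `Endpoint` spec and the handler shape are invented for illustration, not the actual DSL described above): you describe each endpoint once and stamp out the repetitive handler boilerplate.

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// Endpoint is one spec entry; the real case had 250+ of these.
type Endpoint struct {
	Name   string // generated handler name (illustrative)
	Method string
	Path   string
}

// handlerTmpl plays the role of the mini DSL: the repetitive shape is
// written once, and every spec entry is stamped through it.
var handlerTmpl = template.Must(template.New("handler").Parse(
	`func {{.Name}}(w http.ResponseWriter, r *http.Request) {
	if r.Method != "{{.Method}}" {
		http.Error(w, "method not allowed", http.StatusMethodNotAllowed)
		return
	}
	// ... validate input and handle {{.Path}} ...
}
`))

// Generate renders Go source for every endpooint spec in order.
func Generate(specs []Endpoint) (string, error) {
	var buf bytes.Buffer
	for _, e := range specs {
		if err := handlerTmpl.Execute(&buf, e); err != nil {
			return "", err
		}
	}
	return buf.String(), nil
}

func main() {
	src, err := Generate([]Endpoint{
		{Name: "GetUser", Method: "GET", Path: "/users/{id}"},
		{Name: "CreateUser", Method: "POST", Path: "/users"},
	})
	if err != nil {
		panic(err)
	}
	fmt.Print(src)
}
```

Unlike an LLM, a generator like this is deterministic: if the template is right once, all 250 outputs are right.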
Lately I have been using it to write print and logger statements. I type what I want as a sentence in a comment and then it handles all the special syntax characters. Given the error rate I’m not certain it saves time, but it is fun to play with.
I've got "just" Amazon Q at home (not paying Copilot prices for my personal projects) and just typing "log.Printf(" and waiting a second it usually gets what I'm trying to log either very close or exactly right.
It's not like we're breaking new ground in the field of computer science here. The LLMs have been taught with terabytes of code and me writing a Go API glue program is perfectly in their wheelhouse.
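A small Go sketch of that workflow (the function, variable names, and log format are made up): after typing just `log.Printf(`, the assistant typically completes the rest of the call from the in-scope variables.

```go
package main

import (
	"fmt"
	"log"
)

// formatUploadLine is the sort of line a completion usually fills in
// after `log.Printf(`: the nearby variables with sensible format verbs.
func formatUploadLine(userID int, size int64) string {
	return fmt.Sprintf("upload: user=%d size=%d bytes", userID, size)
}

func main() {
	userID, size := 42, int64(1024)
	// In practice the assistant writes this whole call from context:
	log.Printf("%s", formatUploadLine(userID, size))
}
```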
For seniors I think it depends on how much breadth you need. I find them very useful to explore and poke around new areas where I don't have domain knowledge. I agree that in areas/problems I've worked on in the past it just slows you down, but as you move into more unknown territory they are kind of nice to have as a sparring partner.
Similar to my usage as well, it's a good start for unfamiliar territory to quickly get up to speed but you can hit its limits quite fast.
I've been toying around with embedded development for some art projects, and it was invaluable to have LLMs as a kickstart: a glimpse of the knowledge I'd need to explore, plus some quick useful results. But when I got into more complex tasks it just broke down: non-compiling code, missing steps, hallucinations (even references to variables that were never declared), reformatting non-functioning code instead of rewriting it.
As complexity grows the tool simply cannot handle it. As you said, it's a good sparring partner for new territory, but after that you'll rely on your own skills to move into intermediate/advanced stuff.
I find the ML completion used in Google codebase very useful. It knows the APIs that I'm going to use better than I do, and it also can infer how exactly I'm going to use them. So in my experience, it does make me more productive.
I've used Cody and Copilot, and they just got in the way because I knew exactly what I needed to write; neither really helped me.