
The way I see AI programming assistants is that they help juniors be a bit more productive, but senior developers can do without them.

I've used Cody and Copilot and they just get in the way, because I know exactly what I need to write and neither really helped me.




Precisely. I think it helps with smaller/mundane tasks (that it has seen in its training), but tasks that actually require higher-level reasoning and an understanding of the bigger picture are not something we can expect current LLMs to do.

However, as I was researching, I found a few interesting ideas in this space that might help these LLMs solve more complex problems in the future. Post here if interested: https://kshitij-banerjee.github.io/2024/04/30/can-llms-produ...


These are the cases where even senior developers should use AIs.

When I'm creating a CRUD API I know exactly what I want; I know exactly what it should look like.

Do I want to spend 15-30 minutes typing furiously adding the endpoints? No.

I can just tell Copilot to do it and check its work. I'll be done in 5 minutes doing something more engaging like adding the actual business logic.
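
For illustration only (none of this is from the thread; the types and names are invented), this is roughly the repetitive shape of such an endpoint in Go: decode the request, call the business-logic function, encode the response, over and over with only the names changing.

    package main

    import (
        "encoding/json"
        "net/http"
    )

    // Made-up request/response types purely for illustration.
    type CreateOrderRequest struct {
        CustomerID string `json:"customer_id"`
        Amount     int    `json:"amount"`
    }

    type Order struct {
        ID string `json:"id"`
    }

    // The boilerplate part: decode input, call the logic, encode output.
    func handleCreateOrder(w http.ResponseWriter, r *http.Request) {
        var req CreateOrderRequest
        if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
            http.Error(w, err.Error(), http.StatusBadRequest)
            return
        }
        order := createOrder(req) // the actual business logic lives elsewhere
        w.Header().Set("Content-Type", "application/json")
        json.NewEncoder(w).Encode(order)
    }

    // Stubbed business logic so the sketch compiles.
    func createOrder(req CreateOrderRequest) Order { return Order{ID: "stub"} }

    func main() {
        http.HandleFunc("/orders", handleCreateOrder)
        http.ListenAndServe(":8080", nil)
    }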


I agree with this mostly - but recently a bug was introduced into our app because of a copilot suggestion that wasn't checked thoroughly enough by the engineer (it suggested a property that was similarly named to another property but not the same).

Like you say, it makes the most sense for repetitive or easy tasks.


My usage of Copilot is dramatically higher in strictly typed languages because of things like this. It's almost counter-productive if I have to very carefully analyze every variable name to make sure it's not subtly wrong.

Having a compiler do that validation of AI output helps dramatically so I only have to validate logic and not every freaking character in the function.
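
A minimal sketch of what that buys you (hypothetical struct, not from the thread): if the assistant suggests a near-miss field name, a strictly typed language rejects it at compile time instead of letting it slip into production.

    package model

    import "time"

    type User struct {
        CreatedAt time.Time
        UpdatedAt time.Time
    }

    // If a completion suggested u.UpdateAt here instead of u.UpdatedAt, the
    // compiler would reject it ("u.UpdateAt undefined"), so the typo never ships.
    func lastModified(u User) time.Time {
        return u.UpdatedAt
    }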


This is why I have Copilot write unit tests too :D

Actually it does the boring bit of generating the test data and the basic cases; I'll do a once-over and add more if it's something that warrants it.
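
A sketch of the kind of test scaffolding that's easy to hand off (the function under test, Slugify, is assumed to exist in the package; the cases are invented): the structure and the obvious inputs are boring to type but trivial to review.

    package slug

    import "testing"

    // Assumes a Slugify(string) string function in this package.
    func TestSlugify(t *testing.T) {
        cases := []struct {
            name, in, want string
        }{
            {"lowercases", "Hello", "hello"},
            {"replaces spaces", "hello world", "hello-world"},
            {"trims whitespace", "  hi  ", "hi"},
        }
        for _, c := range cases {
            if got := Slugify(c.in); got != c.want {
                t.Errorf("%s: Slugify(%q) = %q, want %q", c.name, c.in, got, c.want)
            }
        }
    }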


> I can just tell Copilot to do it and check its work

Checking other entities' code is not trivial and very error-prone.

I get what you're saying, but I have my doubts whether doing the whole thing manually would really be slower than asking an assistant and then doing an extensive code review.


This is highly repetitive code where the options are pretty much either me copy-pasting a piece of code and changing a bit here and there or having an AI do it.

The latter won't make stupid small mistakes; I will (and have).

And I'm checking like 10 lines at a time, related to code in the context I've got in my head.

I regularly need to review 100x bigger PRs written by humans of varying skill, related to parts of the project I'm not intimately familiar with.


Okay, but isn't your own code generator a better option in this case? You know, a for loop with some parameters that spits out code?


How can a "for loop" generate me 10 API endpoints in C# that call business logic functions with the parameters received?


You tell me. You are the one who said that code was repetitive to generate. :)

So it turns out, not so repetitive after all then?

I remember devising my own mini DSL when I had to produce 250+ such endpoints and validators. I spent three days on that, then ran the command and had working code 30 seconds later. Felt like a god.
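
A rough sketch of that "for loop that spits out code" approach (everything here is invented, not the actual DSL from the comment): a small spec of endpoints fed through text/template, with the output redirected to a generated source file.

    package main

    import (
        "os"
        "text/template"
    )

    // Invented spec format: one entry per endpoint to generate.
    type Endpoint struct {
        Name, Method, Path, LogicFunc string
    }

    var tmpl = template.Must(template.New("ep").Parse(
        "// {{.Method}} {{.Path}}\n" +
            "func handle{{.Name}}(w http.ResponseWriter, r *http.Request) {\n" +
            "\t// decode the request, call {{.LogicFunc}}, encode the response\n" +
            "}\n\n"))

    func main() {
        endpoints := []Endpoint{
            {"CreateOrder", "POST", "/orders", "orders.Create"},
            {"GetOrder", "GET", "/orders/{id}", "orders.Get"},
        }
        for _, ep := range endpoints {
            tmpl.Execute(os.Stdout, ep) // redirect to a _gen.go file in practice
        }
    }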


Lately I have been using it to write print and logger statements. I type what I want as a sentence in a comment and then it handles all the special syntax characters. Given the error rate I’m not certain it saves time, but it is fun to play with.


I've got "just" Amazon Q at home (not paying Copilot prices for my personal projects) and just typing "log.Printf(" and waiting a second it usually gets what I'm trying to log either very close or exactly right.

It's not like we're breaking new ground in the field of computer science here. The LLMs have been taught with terabytes of code and me writing a Go API glue program is perfectly in their wheelhouse.


For seniors I think it depends on how much breadth you need. I find them very useful for exploring/poking around new areas where I don't have domain knowledge. I agree that in areas/problems I've worked in before they just slow you down, but as you move into more unknown territory they are kind of nice to have as a sparring partner.


Similar to my usage as well, it's a good start for unfamiliar territory to quickly get up to speed but you can hit its limits quite fast.

I've been toying around with embedded development for some art projects, and it was invaluable to have LLMs as a kickstart: a glimpse of the knowledge I'd need to explore and some useful quick results. But when I got into more complex tasks it just broke down: non-compiling code, missing steps, hallucinations (even references to variables that were never declared), reformatting non-functioning code instead of rewriting it.

As complexity grows the tool simply cannot handle it. As you said, it's a good sparring partner for new territory, but after that you have to rely on your own skills to move into intermediate/advanced stuff.


I find the ML completion used in Google codebase very useful. It knows the APIs that I'm going to use better than I do, and it also can infer how exactly I'm going to use them. So in my experience, it does make me more productive.



