Hacker News

I honestly feel that LLMs have been a net negative for me rather than a positive, *especially* with software development. Most of the time I end up coaxing it through a series of prompts, only for there to be some bug in the code it generated for me that takes me longer to debug than if I had just read the relevant docs and examples myself.

I’m sure there’s utility now, and I’m sure it’ll (ever more slowly, asymptotically) improve, but at the moment it (along with a Google Search that performs worse than it did 5 years ago) has been costing me more time than the alternative: web search, study, then write it myself. And don’t get me started on non-coding technical subjects.






Use it for non-coding busywork. At my job we rely fully on AI-generated commit messages and PR descriptions. We can also automatically generate GitHub issues from the context of a helpdesk ticket. Less clerical busywork is a nice win.
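For what it's worth, a minimal sketch of that ticket-to-issue pipeline could look like the snippet below. The LLM call itself is left abstract (the `llm_output` string stands in for whatever model you use), and the helper names and ticket fields are made up for illustration; only the GitHub REST endpoint for creating issues is real.

```python
import json
import urllib.request


def build_issue_prompt(ticket: dict) -> str:
    """Assemble the helpdesk-ticket context for the LLM to summarize."""
    return (
        "Write a concise GitHub issue (title on the first line, body after) "
        "for this helpdesk ticket:\n"
        f"Subject: {ticket['subject']}\n"
        f"Description: {ticket['description']}"
    )


def parse_issue(llm_output: str) -> dict:
    """Split the model's reply into the title/body fields GitHub expects."""
    title, _, body = llm_output.partition("\n")
    return {"title": title.strip(), "body": body.strip()}


def create_github_issue(repo: str, token: str, issue: dict) -> None:
    """POST the drafted issue to GitHub's 'create an issue' endpoint."""
    req = urllib.request.Request(
        f"https://api.github.com/repos/{repo}/issues",
        data=json.dumps(issue).encode(),
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": "application/vnd.github+json",
        },
        method="POST",
    )
    urllib.request.urlopen(req)  # network call; needs a real repo and token
```

The checking concern raised elsewhere in this thread still applies: you'd want a human to review the drafted title/body before the final POST.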

Those aren't clerical busywork; they're important technical writing.

Yeah - that's the thing. I've seen enough hallucinations or other errors to need to carefully check the output from every prompt. If I need to check it thoroughly, then I suppose I could treat it as a fancy autocorrect? But then the productivity gain is much more marginal.

If the accuracy didn't matter, then what's the point of doing the work in the first place?



