I think there's a vast spectrum of software engineers out there, and of the kinds of code they write on a daily basis, which is why you get such differing views on AI's effectiveness.
For me, AI has only ever been useful for very basic tasks and scripts:
If I need a quick helper script for something in Python, a language that isn't my daily driver, AI usually gets me there in a couple of prompts. Or maybe I'm writing some PowerShell/Bash and have forgotten the syntax for something, and AI is quicker (or has more context) than a web search.
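To give a sense of scale, the kind of throwaway helper I mean is something like this (a made-up example; the filename and column name are hypothetical):

    # One-off helper: sum a numeric column across a CSV file.
    # "data.csv" and "amount" are placeholder names for illustration.
    import csv

    total = 0.0
    with open("data.csv", newline="") as f:
        for row in csv.DictReader(f):
            total += float(row["amount"])

    print(f"total amount: {total}")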
However, my main job is trying to come up with "elegant" architectures for complex business logic that interacts with an existing large code base. AI is just completely out of its depth in such cases due to lack of context but also lack of source material to draw from.
Even unit tests only work for the most basic cases; most of the time the setup is so complex that it just produces garbage.
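For contrast, the only kind of test I can get reliably is the isolated, setup-free variety, something like this (all names hypothetical):

    # Hypothetical pure-function test: the kind of isolated case an LLM
    # handles fine, because there are no fixtures, mocks, or DI wiring.
    import unittest

    def apply_discount(price: float, percent: float) -> float:
        return round(price * (1 - percent / 100), 2)

    class TestApplyDiscount(unittest.TestCase):
        def test_ten_percent_off(self):
            self.assertEqual(apply_discount(100.0, 10), 90.0)

    if __name__ == "__main__":
        unittest.main()

Once the unit under test needs a realistic slice of the codebase wired up around it, that's where things fall apart.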
I've also had very little luck getting it to write performant code. I practically have to feed it the techniques or algorithms before it even attempts such code, and even then it's usually wrong or not as efficient as it could be.
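A toy example of what I mean by "feeding it the technique" (made up, but representative): ask for duplicate detection and you tend to get the quadratic version until you explicitly name the linear, set-based approach.

    # Made-up illustration of "feeding it the algorithm".
    # The naive O(n^2) pairwise scan you tend to get first:
    def has_duplicates_naive(items):
        for i in range(len(items)):
            for j in range(i + 1, len(items)):
                if items[i] == items[j]:
                    return True
        return False

    # The O(n) version you get after naming the set-based technique:
    def has_duplicates(items):
        seen = set()
        for item in items:
            if item in seen:
                return True
            seen.add(item)
        return False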
Indeed, if you need a lot of boilerplate that's pretty similar to existing commonly available code, you're set. However...
That code is probably buggy, slow, poorly architected, very verbose, and has logical issues wherever the examples and your needs don't exactly match.
Generally, the longer the snippet you want your LLM to generate, the more likely it's going to go off the rails.
I think for some positions this can get 90% of the code done. For me this usually means I can get started very fast on a new problem, but the remaining "10%" actually takes significantly longer and more effort to integrate, because I don't understand the other 90% off the top of my head :)
> my main job is trying to come up with "elegant" architectures for complex business logic that interacts with an existing large code base
Right, that's surely the main job of nearly every experienced developer. It's really cool that LLMs can generate code for isolated tasks, but they can barely even begin to do the hard work, and that seems very unlikely to change in the foreseeable future.