The truth is that the tools are actually quite good already. If you know what you're doing, they will 10-20x your productivity.
Ultimately, not adopting them will relegate you to the same fate as assembly programmers. Sure, there are places for it, but you won't be able to get nearly as much functionality done in the same amount of time, and there won't be as much demand for it.
Do you agree that the brain-memory activity of writing code is totally different from that of reading someone else's code?
The sand castle analogy still holds here, because once you have 10x productivity, or worse 20x, there is no way you can understand things as deeply as if you had written them from scratch. Without spending a considerable amount of time, and dragging productivity back down, the understanding is not the same.
If no one is responsible because it's crap software and you won't be around long enough to bear responsibility… it's OK, I guess?
if you are seeing 900% productivity gains, why did these controlled experiments only find 28%, mostly among programmers who didn't know what they were doing? and only 8–10% among programmers who did? do you have any hypotheses?
i suspect you are seeing 900% productivity gains on certain narrow tasks (like greenfield prototype-quality code using apis you aren't familiar with) and incorrectly extrapolating to programming as a whole
I think you’re probably right, though I fear what it will mean for software quality. The transition from assembly to high-level languages was about making code easier both to write and to understand. AI really just accelerates writing, with no advancement in legibility.