I'm not just a programmer; I'm a designer. I design software to work with people, and I design implementations to carry minimal technical debt.
Until I can see an "AI" go through a design process without a human guiding it, iterating until it decides to stop, I'll keep this confidence.
Right now, LLMs are Clever Hans: they stop their process when a human tells them to. So they're not only borrowing the intelligence of writers the world over, they're also borrowing the intelligence of the person at the prompt.
Take that prompt away, and they'll fall flat.
For example, can they even think of a problem to solve on their own? No, they need a human to ask them to find a problem and solve it. Otherwise, they sit. Dumb, unmoving, devoid of agency, and incapable of even the smallest task without input.
> Until I can see an "AI" go through a design process without a human guiding it, iterating until it decides to stop, I'll keep this confidence.
What use will your change of opinion be then? The whole point here is to predict whether something is possible; just saying "no" and waiting for the actual disproof to change your mind seems worth very little.
In other words, the deeper you need to think to solve problems, the safer you are.