Per Kernighan, debugging is twice as hard as writing the code in the first place. If the AI jockeys don't understand what they're being given, man, it's going to be humorous watching them put out fires. And how can statistically fitted models exceed their training set w/o going random? How is AI going to string together equations to do physics or engineering? They're bloody squiggly symbols and letters.
And the marketplace still isn't interested in fixing bugs over "oooh, shiny", so my concerns might never be addressed.
Imagine if the AI can provide both a program and a machine-verifiable proof of correctness though. Then all you have to manually verify is that the proof proves the right thing.
That sounds hard to do in a natural language. Maybe they could deliver the unambiguous specification in a formal language designed specifically for the purpose of specifying what programs are supposed to do.
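One way to picture the proof-carrying idea above, as a rough sketch (all names here are made up for illustration, not any real verifier): the AI's output includes a small certificate, and the only trusted code is a tiny checker that validates the certificate mechanically. For sorting, the certificate can be the permutation that produced the result.

```python
# Sketch: an untrusted program returns its result plus a machine-checkable
# certificate; a small trusted checker validates the certificate instead of
# re-deriving (or trusting) the computation.

def sort_with_certificate(xs):
    # Untrusted "generated" part: returns sorted result + the permutation used.
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    return [xs[i] for i in order], order

def check(xs, result, order):
    # Trusted checker: `order` must be a permutation of the indices,
    # applying it to xs must yield `result`, and `result` must be sorted.
    return (sorted(order) == list(range(len(xs)))
            and result == [xs[i] for i in order]
            and all(result[i] <= result[i + 1] for i in range(len(result) - 1)))

xs = [3, 1, 2]
result, cert = sort_with_certificate(xs)
assert check(xs, result, cert)
```

The point is the asymmetry: the checker is a few lines you can audit by hand, while the thing that produced the result can be arbitrarily opaque. Real proof-carrying approaches do this with formal proof objects rather than ad-hoc certificates.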
I was half joking, because software development is largely the process of developing an unambiguous specification - just in the form of executable code.
You can't, in general, determine whether an arbitrary program will halt. It is, however, possible to construct a program that provably halts. I have no idea if you can write most useful programs that way, though.
Almost all programs that people write are written so that they provably halt (or provably don't halt - as continuing forever is sometimes the desired behavior).
How do we do that? By using known-halting parts and composing them in known-halting ways. How did we find those parts in the first place? Well, computer science. But boiling that down to a codified process is where it gets murkier. I think we don't really know exactly how human ingenuity discovers new useful halting parts in the face of the halting problem.
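A tiny illustration of "known-halting parts composed in known-halting ways" (a sketch with made-up names, not a formal system): a `for` loop over a finite input is such a part, since it's bounded by the input's length, and ordinary function composition preserves the halting property.

```python
# Sketch: each building block halts because its loop is bounded by the
# (finite) input; composing such blocks yields a program that also
# provably halts -- there is no unbounded `while` anywhere.

def total_map(f, xs):
    out = []
    for x in xs:          # at most len(xs) iterations: halts
        out.append(f(x))
    return out

def total_fold(f, acc, xs):
    for x in xs:          # at most len(xs) iterations: halts
        acc = f(acc, x)
    return acc

# Composition of halting parts: still halting.
def sum_of_squares(xs):
    return total_fold(lambda a, b: a + b, 0, total_map(lambda x: x * x, xs))
```

This is roughly the idea behind primitive recursion and the `decreases`/termination measures in proof-oriented languages: restrict yourself to constructions that are halting by shape, and the property survives composition.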