The issue I have with program synthesis is the same one I have with no-code tools: writing code is the easy part of software development. The hard part is understanding precisely what the stakeholders actually want. It sounds trivial, yet it's clearly not, since nobody seems to know what the heck they want these days.
I've worked on so many software projects where the company director wanted things implemented a certain way, but once they were implemented that way, they suddenly realized it (necessarily) affected some other functionality in a way they hadn't anticipated. Then they decided they wanted it done another way. This kind of back-and-forth also happens while coding, just at a finer granularity (and usually you try to foresee the trade-offs before you start implementing, since the trade-offs should affect your choice of solution). Every decision counts and traces back to the requirements; an AI could not generate good code unless it had a human-level understanding of the problem and the requirements.
Coding is about precise decision-making. It would be easier for AI to replace managers and executives, since those roles involve far less precise decision-making and don't require as thorough an understanding of the domain.
I think this is overly pessimistic. A compiler isn't that different from a synthesis engine fed a formal specification, yet virtually nobody says "writing code is the easy part, so why do we have tools writing our asm programs for us?" There is clearly a spectrum where these techniques can be valuable or harmful depending on where they are applied.
"Hey, I synthesized a test case that covers an uncovered program path," "hey, I synthesized a parser from a yacc specification," and "hey, I synthesized a network topology that will break the moment requirements change" are all synthesis, but they carry very different degrees of unintended impact.
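The first of those, synthesizing a test input for an uncovered path, can be sketched in a few lines. This is a toy illustration, not a real synthesis engine: `classify` and its "missed" branch are invented here, and the search is plain enumeration against a hand-written coverage oracle.

```python
def classify(n: int) -> str:
    """A function under test with a branch our existing tests never hit."""
    if n % 15 == 0:      # the uncovered branch we want a test for
        return "fizzbuzz"
    if n % 3 == 0:
        return "fizz"
    if n % 5 == 0:
        return "buzz"
    return str(n)

def synthesize_covering_input(limit: int = 1000):
    """Enumerate candidate inputs until one takes the uncovered path.

    Real synthesizers use symbolic execution or coverage-guided fuzzing
    instead of brute force, but the goal is the same: produce a concrete
    input that exercises a specific program path.
    """
    for candidate in range(1, limit):
        if candidate % 15 == 0:   # coverage oracle for the target branch
            return candidate
    return None

found = synthesize_covering_input()
print(found, classify(found))  # 15 fizzbuzz
```

The point of the example is the cheap feedback loop: the oracle tells the tool when it has succeeded, so this kind of synthesis is low-risk. The network-topology case has no such oracle, which is where the unintended impact comes from.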