I wonder how soon (or whether it's already happening) AI coding tools will start behaving like early-career developers who claim all the existing code written by others is crap and go on to convince management that a ground-up rewrite is required.
(And now I'm wondering how soon the standard AI-first response to bug reports will be a complete rewrite by AI using the previous prompts plus the new bug report? Are people already working on CI/CD systems that replace the CI part with whole-project AI rewrites?)
As the cost of AI-generated code approaches zero (both in time and money), I see nothing wrong with letting the AI agent spin up a dev environment and take its best shot. If it can prove with rigorous testing that the new code works, is at least as reliable as the old code, and is written better, then it's a win/win. If not, delete that agent and move on.
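Concretely, I'm picturing something like this - a purely hypothetical Python sketch, assuming a pytest suite and made-up directory names, where the rewrite only gets kept if it clears the same bar as the legacy tree:

    # Hypothetical gate for an AI-generated rewrite: keep it only if it passes
    # the same test suite at least as reliably as the code it replaces.
    # The paths, commands, and thresholds are all made up for illustration.
    import subprocess
    import sys

    def run_suite(tree: str, runs: int = 5) -> float:
        """Run the test suite `runs` times against `tree` and return the pass rate."""
        passes = 0
        for _ in range(runs):
            result = subprocess.run(["pytest", "-q", tree], capture_output=True)
            passes += (result.returncode == 0)
        return passes / runs

    def gate(old_tree: str, new_tree: str) -> bool:
        """Accept the rewrite only if it's at least as reliable as the old code."""
        old_rate = run_suite(old_tree)
        new_rate = run_suite(new_tree)
        print(f"legacy: {old_rate:.0%}  rewrite: {new_rate:.0%}")
        return new_rate == 1.0 and new_rate >= old_rate

    if __name__ == "__main__":
        ok = gate("legacy/", "ai_rewrite/")
        # If the agent's attempt doesn't clear the bar, throw it away and move on.
        sys.exit(0 if ok else 1)

Flaky tests and non-determinism are what make the "at least as reliable" comparison the hard part, which is why I'd run the suite more than once against both trees.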
On the other hand, if the agent is just as capable of fixing bugs in legacy code as rewriting it, and humans are no longer in the loop, who cares if it's legacy code?
But I can see it "working". At least for the values of "working" that would be "good enough" for a large portion of the production code I've written or overseen in my 30+ year career.
Some code pretty much outlasts all expectations because it just works. I had a Perl script I wrote around 1995-1998 that ran from cron and sent email to my personal account. I quit that job, but the server running it got migrated to virtual machines and it didn't stop sending me email until about 2017 - at least three sales or corporate takeovers later. (It was _probably_ running on CentOS 4 when I last touched it around 2005; I'd love to know whether it was just turned into a VM and kept running as part of critical infrastructure on CentOS 4 twelve years later.)
But most code only lasts as long as the idea, the money, or the people behind the idea - all the websites and differently skinned CRUD apps I built or managed rarely lasted 5 years without being either shut down or rewritten from the ground up by new developers or leadership in whatever the Resume Driven Development language or framework of the moment was: toss out the Perl and rewrite it in Python, toss out the Python and rewrite it in Ruby on Rails, then decide we need Enterprise Java to post about on LinkedIn, then rewrite that in Node.js, now toss out the Node and use Go or Rust. I'm reasonably sure this year's or perhaps next year's LLM coding tools can do a better job of those rewrites than the people who actually did them...
Will the cost of AI-generated code approach zero? I thought the hardware and electricity needed to train the models and run inference were huge and only growing. Today the free and Plus plans might be only $20/month, but once the moats are built I assume prices will skyrocket by an order of magnitude or a few.
> Will the cost of AI-generated code approach zero?
Absolutely not.
In the short term it will, while OpenAI/Anthropic/Anysphere destroy software development as a career. But they're just running the Uber playbook - right now they're giving away VC money by funding the datacenters that train and run the LLMs. As soon as they've put enough developers out of jobs and ensured there's no new pipeline of developers capable of writing code and building platforms without AI assistance, they'll stop burning VC cash and start charging rates that not only break even but also deliver the 100x returns their investors demand.