Don't you think the next step is a programming language that isn't even meant to be human readable? What's the point of using an LLM to generate python or Swift or whatever? The output of the LLM should be something that runs and does whatever it's been asked to do... why should the implementation be some programming language that was designed for humans to grok? Once that's true the idea of it being maintainable becomes moot, because no one will know what it is in the first place. I don't think we're there yet, but that seems like the eventual destination.
All good software is in a constant state of maintenance - users figure out new things they want to do so requirements are constantly changing.
A running joke we had at my startup years ago was "... and after we ship this feature we'll be finished and we will never have to write another line of code again!"
Good software is built with future changes in mind - that's why I care so much about things like automated test suites and documentation and code maintainability.
Letting LLMs generate black box garbage code sounds like a terrible idea to me. The idea that LLMs will get so good at coding that we won't mind is pure science fiction - writing code is only one part of the craft of delivering useful software.
That raises the question of what abstraction layer is necessary beyond an assembler, if any. If human handcrafted ASM outcompetes compiled C, then why not give LLMs the wheel on ASM? And a follow-up question: is there enough good ASM publicly available to serve as training examples?
It might go even further. I imagine a day where AI would generate an executable neural network that models (and is optimized for) a specific problem; i.e. a kind of model that runs on a neural network runtime or VM. Who cares what the NN is doing as long as it's doing its job correctly? The big catch, though, is the keyword "correctly", and I would add "deterministically" to it, in order for users to trust it.
Yeah, it does seem like a game of telephone to train LLMs on code optimized for human cognition, then have them create behavior by parroting that code back into a compiler. Could they just create behavior directly?
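In a sense the lower-level target already exists: for interpreted languages the human-readable source is just a front end for bytecode. Here's a minimal sketch using Python's standard-library dis module (the add_tax function is only a made-up example) to show the gap between the layer we read and the layer that actually runs:

    import dis

    def add_tax(price, rate=0.08):
        # The "human cognition" layer: names, structure, intent.
        return price * (1 + rate)

    # What CPython actually executes is the bytecode printed below;
    # the exact instructions vary between interpreter versions.
    dis.dis(add_tax)

An LLM that emitted the bytecode (or IR, or machine code) directly would skip the readable layer entirely, which is exactly the maintainability trade-off being argued upthread.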
We're seeking angel investors for our startup that does this: we train models on "assembly" that does specific things and also on complete "programs"; the end goal is that you prompt it and it outputs executables. It's farther along than quantum computing at solving real problems; for instance, it can factor "15".
This is like the third time I've mentioned this on HN (over a year). Apparently everyone else is too busy complaining about or defending Claude Code to notice.