Even where a human in the loop is a legal obligation, that human can be QA or a PM, roles as different from "developer" as "developer" is from "circuit designer".
A PM or QA can sign off only on process or outcome quality. They cannot replace the person who actually understands the architecture and the implications of technical decisions. Responsibility is about being able to judge whether the system is correct, safe, maintainable, and aligned with real-world constraints.
If AI becomes powerful enough to generate entire systems, the person supervising and validating those systems is, functionally, a developer — because they must understand the technical details well enough to take responsibility for them.
Titles can shift, but the role doesn't disappear. Someone with deep technical judgment will still be required to translate intent into implementation and to sign off on the risks. You can call that person "developer", "AI engineer", or something else, but the core responsibility remains technical. PMs and QA do not fill that gap.
> They cannot replace the person who actually understands the architecture and the implications of technical decisions.
LLMs can already do that.
What they can't do is be legally responsible, which is a different thing.
> Responsibility is about being able to judge whether the system is correct, safe, maintainable, and aligned with real-world constraints.
Legal responsibility and technical responsibility are not always the same thing. Technical responsibility is absolutely in the domain of PM and QA; legal responsibility ultimately stops with a certified engineer (which software engineering famously isn't), the C-suite, the public liability insurance company, or the shareholders, depending on specifics.
Ownership requires legal personhood, which isn't the same thing as philosophical personhood, which is why corporations themselves can be legal owners.
If LLMs truly "understood architecture" in the engineering sense, they would not hallucinate, contradict themselves, or miss edge cases that even a mid-level engineer catches instinctively.
They are powerful tools, but they are not engineers.
And it's not about legal responsibility at all. Developers don't go to jail for mistakes, but they are responsible within the engineering hierarchy. A pilot is not legally liable for Boeing's corporate decisions, and the plane can mostly fly on autopilot, but you still need a pilot in the cockpit.
AI cannot replace the human whose technical judgment is required to supervise, validate, and interpret AI-generated systems.