I see this argument a lot. People are really bad at extrapolating out into the future, but, and maybe this is controversial, I feel like most technologies don’t get _that_ much better as time goes on. People assume LLMs are like the first CPUs, which doubled in speed every year, but maybe they’re more like the internet: a little more high fidelity over time, but mostly just used to sell you shit and show you ads.
Like codebases or trees, organizations grow unevenly and end up overstaffed in some places and understaffed in others. AI is political cover to trim and prune a company so it operates leaner without freaking everyone out.
No one is gonna fire ALL the devs. It’ll be a race: each company will have their devs managing more and more agents.
Every coding agent incurs management overhead. Yes, that overhead will go down, but it’ll go down at the same rate for everyone.
More humans still means more capability.
To say nothing of the hard-won knowledge in engineers’ heads: if we couldn’t get that out in the era of cheap and easy documentation (Notion etc.), why would we be able to in the era of AI?
It's almost halfway through 2025 and AI can generate perfectly fine code from conversational language interactions.
The interactions are where it's at: prompting, obviously, but also feedback and clarification after something breaks.
When this first got going, I went through the process of trying to figure out what a coding agent would look like for someone who didn't code, and created this: https://github.com/kordless/evolvemcp
It's less than perfect, but it works well. I've used it to "auto code" several MCP servers/tools (honestly, the MCP protocol is super confusing), and I can see where an MCP proxy with a search engine attached to it would be super handy later (and if you understand that and build such a thing, you're welcome for the idea if you hadn't had it already).
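If "MCP server" sounds abstract, here's roughly the smallest useful one, a minimal sketch assuming the official Python MCP SDK's FastMCP helper (the server name and tool are placeholders I made up, not anything from evolvemcp):

```python
# Minimal MCP tool server sketch (assumes: pip install "mcp[cli]")
from mcp.server.fastmcp import FastMCP

# The server name is arbitrary; it's what a client sees when it connects.
mcp = FastMCP("demo-tools")

@mcp.tool()
def word_count(text: str) -> int:
    """Count the words in a piece of text."""
    return len(text.split())

if __name__ == "__main__":
    # Speaks MCP over stdio, so a client (e.g. Claude Desktop) can launch
    # this script as a subprocess and attach to it.
    mcp.run()
```

A client pointed at that process can discover and call word_count with no extra glue: the decorator turns the type hints and docstring into the tool schema. The proxy idea above is basically one of these sitting in front of many other servers, with search over their tool lists.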