I think for a lot of companies, AI is a destabilizing force that their managerial structure is unable to compensate for.
When you change the economics to such a degree, you're basically removing a dam - resulting in far more stress on the rest of the system. If the leaders of the org don't see the potential downsides and risks of that, they're in for a world of hurt.
I think we're going to see a real surge of companies just like this crashing and burning, even though this tech was sold as a universal improvement. The ones that survive will spread their knowledge about how to tame this wild horse, and ideally we'll learn a thing or two for the future.
But the wave of naivety has surprised me, and I think there's an endless onrush of people who are overly excited about their new ability to vibe-code things into existence. I think we've got our own Eternal September going on for the foreseeable future.
I increasingly see “AI” as a sort of virus tuned to target management, specifically. Its output is catnip to them, and it’s going to be unavoidable for those who want to look good to superiors and peers (i.e. the #1 priority for managers) even as it adds no actual value whatsoever to what they do. People under them, too, will have to start burning tokens on bullshit to satisfactorily perform competence and “doing work”. Meanwhile, none of this is actually productive. It’s goddamn peacock feathers.
It’s like some kind of management parasite. I’m not even sure at this point that it’s going to lead to an overall productivity increase whatsoever for most sectors, because of this added drag on everything.
AI has made my work about 5-8x quicker, just because I'm able to have it cover a lot of the grunt work (update 42 if statements in 32 different files) that took time, but no particular skill.
I think the use cases where AI makes an economic improvement to the status quo for a business are rare, but they do exist, and they can be a significant improvement.
It's like the early days of the dotcom boom and bust - people thought the internet was good for every use case under the sun, including shipping people a single candy bar at a loss. After the dotcom bust, a lot of that went by the wayside, but there was a tremendous economic advantage to the businesses that were more useful when available on the internet.
> update 42 if statements in 32 different files

is a silly thing for a programmer or an AI to have to do more than twice. We have tools that very effectively remove the need for things like that: programming languages that allow modular and reusable code, good design, etc.
Ideally. But that requires the correct abstraction, and requires keeping it up to date... that's basically an unachievable ideal. You either have overabstraction/overengineering (most codebases) or you have repetition. Repetition is actually preferable in the LLM world, because you have to keep less stuff in your head. And the LLM has to keep less in its head too.
Even if something does look copypasted, it might actually be semantically distinct enough that if you couple them, you'll create a brittle mess.
Additionally, there are always going to be global changes (update the code style, document things, refactor into a new pattern, add new functionality to callers, etc.). The question isn't whether you use your language's tools or do it by hand; the question is whether you use an LLM or do it by hand :P
Totally fair, but 42 if-statements across 32 files isn't something you need to fix with like ... a grand refactor or hexagonal architecture or event sourcing or whatever the overengineering pattern du jour is. You can fix that with a utility function or three, and a file/class/module/whatever that owns the code relating to some of those conditions.
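To make that concrete, here's a minimal sketch of the kind of thing I mean (Python, with invented names and example data; widget_allowed stands in for whatever those 42 conditions actually test):

    from dataclasses import dataclass

    EU_REGIONS = {"eu-west-1", "eu-central-1"}   # made-up example data

    @dataclass
    class User:
        plan: str
        suspended: bool

    # One utility function owns the condition that used to be copy-pasted
    # (in slightly different forms) across 32 files:
    def widget_allowed(user: User, region: str) -> bool:
        return user.plan == "enterprise" and not user.suspended and region in EU_REGIONS

    # Call sites shrink to one line, and the rule changes in one place:
    if widget_allowed(User("enterprise", False), "eu-west-1"):
        print("enable widget")

No grand rearchitecture required; the next "update 42 if statements" task becomes editing one function.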
I'm not some DRY zealot, but I've been in the "this system needs really similar changes to a ton of geographically distant code for simple changes" salt mines a lot. The people who say that kind of spaghetti is unavoidable are just as wrong as the ones who say it can only be fixed with a grand rearchitecture by a rockstar.
Sure but even wiring that utility function in is work :D If you have even just a 2-3-million LoC codebase, not even something truly enormous - making global changes does require typing, and a whole lot of it...
If you have a codebase that big, can you even fit enough of it into a context window for the LLM to make correct and meaningful changes across all of it? Admittedly I've only used LLM-based coding for smaller projects.
All of it? Hell no :D But just as with anything else, you break things down into subtasks. Then you break those down even more. You as a human don't hold all that stuff in your head either, so why would an LLM?
My current codebase is ~3 million LoC all in all (not greenfield, really old code), working on it by myself, the complexity is definitely manageable between Claude and me :)
Such repetitions can regularly be deterministically automated with medium-level tools like find -exec sed.
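For instance, a hypothetical flag rename (paths and names invented for illustration):

    # rename a flag across every Python file under src/
    # (GNU sed shown; BSD/macOS sed wants -i '' instead of -i)
    find src -name '*.py' -exec sed -i 's/old_flag_name/new_flag_name/g' {} +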
If you spend a lot of time performing monotonous tasks, then your organisation needs to delete and refactor for a while until changes in 'hot' areas of the code base are easy to make. Reaching for some code-synthesis SaaS to paper it over will worsen the problem and should result in excommunication from the guild.
Does your work primarily consist of updating 42 if statements in 32 different files? We all do that occasionally, but if you're doing it constantly, is it possible that a different system design would make your work much easier?
Could you please show us an example of the change made to one of these if statements? I'm curious, because it seems absolutely wild to me to end up in such a situation (where that many changes are required and the usual refactoring tools of modern IDEs are insufficient) in the first place.
If you're 8x quicker because the AI does these things for you, are you a junior intern or something? It must mean most of your time is spent on exactly this kind of work.
I agree with everything you've said, but don't you think quite a lot of things have also been like this before, just to a lesser degree?
I've often had the sense that most of what is done inside companies is a kind of performance of work rather than work itself. Mostly it's all a big status game between factions, with all the actual value provided by just a few engineers here and there who are able to shut out the noise and build things.
> I agree with everything you've said, but don't you think quite a lot of things have also been like this before, just to a lesser degree?
That’s exactly the reason LLMs and friends are so dangerous to companies, and it’s so hard for them to resist using them in useless/counter-productive ways. They’re excellent at faking signs of effort and work that companies can hardly help but reward, absent any actual way to measure manager effectiveness (and approximately nobody knows how to measure that, in the wild). This takes the form of gilding and padding on a lot of communication, none of which adds actual value but all of which costs money directly and indirectly (time wasted sorting out which parts of a document are intentional and meaningful, and which are plausible but irrelevant LLM inventions, for instance).
Counter-question: if quite a lot of things have also been like this before to a lesser degree, should we not oppose efforts to make everything like this to a greater degree?
I often think that executive level work is about changing the executive team and writing memos about changing the executive team. Then there’s a different team with different members and they begin the cycle again. Repeat over and over again.
The number of times I’ve seen an HTML memo sent from the assistant of the executive that says “from the desk of…” with babble about new leadership.
Things have probably always been like that, agree. I often try to see AI as a catalyst, that accelerates what already is.
In a good culture, with high competence and trust, this can yield increased output (to some degree at least); in a bad culture, it will accelerate and amplify the dominant traits instead.
It does have real benefits, but also, of course, all of the downsides you mentioned.
The best analogy is the outsourcing / offshoring fad of the last decade.
Managers hated that senior developers were getting highly compensated (often higher than the management class!) and pounced on every opportunity to replace expensive people with (much!) cheaper options, quality be damned.
For the few companies that paid attention to quality, this worked out swimmingly. Apple is probably the best example: they've outsourced almost all of their manufacturing to China and other similar countries.
So yes, my mental picture is that every manager is drooling right now because they think they can replace someone getting paid six figures with an AI that costs six dollars a day, if that. A virtual employee that doesn't talk back, doesn't argue, doesn't question, doesn't go off on "unproductive tangents" like refactoring (whatever that's even supposed to mean), and just pumps out code 24/7 like a good little slav... employee.
The very rare smart managers out there are looking at this more like the transition that happened to architecture firms when CAD became available. They used to have a dozen draftsmen for every architect. Now there are virtually none; I haven't even heard that job title used in decades! We still have architects, and if anything, they're paid even more.
I'm wondering what this could mean for the future of software work and AI use - care to weigh in? I don't have a good mental model for this period of time (I do agree with your sense of things).
A lot of people have already noticed that it's becoming cheaper to create bespoke software, as an alternative to paying a SaaS or purchasing off-the-shelf.
An example: instead of buying a cookie-cutter "McMansion" like in the last century, even individuals can now afford a unique house designed by a professional architect. It may not be an award-winning artistic design, but it won't be the same copy-paste design as every neighbour's up and down the street.
I'm seeing more comments online that developers are now expected to do more, in the sense that what used to be a CLI script may now be a semi-vibe-coded application with a Web UI, a dashboard, and OpenTelemetry integration because... why not?
As an example, I got a bunch of boxes of random Lego for my kid and I wanted to figure out what sets the pieces came from. I got Codex to vibe-code a full SPA web UI and a matching API app that pulls Rebrickable database CSVs, parses them, puts them into SQLite, and then runs a fairly complex integer optimisation on top of that data to figure out the best match. I did that in an hour while sitting in on an online meeting!
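I can't paste the whole vibe-coded thing here, but the core matching step is roughly this shape. A toy sketch in Python (the schema and quantities are invented stand-ins, and a greedy loop stands in for the real integer optimisation):

    import sqlite3

    # Toy stand-in: real Rebrickable dumps (sets.csv, inventories.csv,
    # inventory_parts.csv) have more columns than this.
    db = sqlite3.connect(":memory:")
    db.executescript("""
        CREATE TABLE owned(part TEXT, qty INTEGER);
        CREATE TABLE set_parts(set_num TEXT, part TEXT, qty INTEGER);
    """)
    db.executemany("INSERT INTO owned VALUES (?, ?)",
                   [("3001", 8), ("3003", 4), ("3020", 2)])
    db.executemany("INSERT INTO set_parts VALUES (?, ?, ?)",
                   [("setA", "3001", 6), ("setA", "3003", 2),
                    ("setB", "3001", 2), ("setB", "3020", 2)])

    owned = dict(db.execute("SELECT part, qty FROM owned"))
    sets = {}
    for set_num, part, qty in db.execute("SELECT * FROM set_parts"):
        sets.setdefault(set_num, {})[part] = qty

    # Greedily take the set whose part list is best covered by what's left.
    while sets:
        best, coverage = max(
            ((s, sum(min(owned.get(p, 0), q) for p, q in parts.items())
                  / sum(parts.values()))
             for s, parts in sets.items()),
            key=lambda t: t[1])
        if coverage == 0:
            break
        print(f"{best}: {coverage:.0%} of its parts present")
        for p, q in sets.pop(best).items():
            owned[p] = max(0, owned.get(p, 0) - q)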
There is no way I'd have the mental energy to do a project like that otherwise. I'm too busy with housework, actual work, etc... Maybe when I was younger I could blow a few weeks of effort on something like this, but now? No way.
That cost-benefit arithmetic has dramatically shifted thanks to AI developer agents. Suddenly, many fiddly tasks are no longer fiddly, or are outright trivial, so there's no excuse not to do them anymore.
Going back to the architect or mechanical engineering example: Significant corrections to designs used to be expensive because all the blueprints (on paper!) had to be redrawn and distributed. Now, a change to CAD design in 3D can be converted to arbitrary 2D views, cross-sections, or whatever in seconds. The software just projects whatever view you want out of the master design file. Creating the paper blueprints similarly takes a minute or two at most on an industrial large-format printer. It just spits it out.
I’m an LLM enjoyer who also thinks that ‘er ‘jerbs are safe and, taken to their logical conclusion, most LLM-stroking online around coding reduces to an argument that we should be speaking Haskell to LLMs and also in specs and documentation (just kidding, OCaml is prettier). But also, I do a little business.
You’ve hit the real issue: IT management is D-tier and lacks self-awareness. “Agile” is effed up as a rule, while also being the simplest business process ever.
That juniors and fakers are whole hog on LLMs is understandable to me. Hype, fashion, and BS are always potent. The part I still cannot understand, as an Executive in spirit: when there is a production issue, and one of these vibes monkeys you are paying has to fix it, how could you watch them copy and paste logs into a service you’re paying top dollar for, over and over, with no idea of what they’re doing, and not be on your way to jail for highly defensible manslaughter?
We don’t pay mechanics to Google “how to fix car”.
This is definitely ¾ of what you pay a mechanic to do: one publisher writes a maintenance manual for a car, and mechanics all around the globe can use it to work on that specific car.
It's the mechanics who don't reference Google or the Haynes manual that are more likely to get it wrong.
As a kicker, mechanics also have a pricing book for the task, they know how many hours a task will take on a certain car (rounded up for the most part).
You are not responding faithfully to the comment. A mechanic looking up the schematics in a manual understands them. Just because they haven't memorized the material does not make it the same. This is more analogous to looking up a function in the documentation that you forgot about.
This is clearly not what the post was referring to. It's more like googling how to fix a pipe in your home when you've never done any plumbing before in your life. Can it work out? Sure, depending on the issue. Can you also cause your pipes to freeze, your house to flood, or sediment buildup to completely block a pipe? Yes.
> mechanics also have a pricing book for the task, they know how many hours a task will take on a certain car
I do want to point out that this is used to suppress mechanics' salaries. Certain jobs are absolutely fucked by how the time is calibrated. It doesn't matter to the business owner; they can charge $$$ however they want.
When I get my car fixed, I could not care less if they googled, used a service manual, or did it by "these old 2023's always had this problem right here...". I care if it is fixed.
And as I'm currently trying to fix something on my own, for financial reasons, I assure you a mechanic with training AND google can do a better job in 1/4th the time. Because I don't have the training.
Speaking not as a professional mechanic, but as someone who maintains a car, two trucks, a tractor, a couple boats, and has googled quite a lot of torque specs in my time... If you're googling torque specs in 2026 you're gonna have a bad time. They're frequently just flat out wrong, especially the AI summaries ;). Use the authoritative source of truth--the shop manual published by the equipment manufacturer. Accept no substitutes.
Absolutely - factory repair guides/apps are the only source of truth for official specs, although 3rd-party manuals are very good as well. That being said, I've often turned 3-hour estimated repairs into 15-minute jobs through clever shortcuts. For example, rotating an alternator to replace the overrun clutch through the gap in the intake manifold as opposed to removing the complete intake manifold. I think that's where using experienced (and resourceful) developers pays off.
Also, for sale: BMW E60/61 Bentley 2-volume set. Barely used.
Yeah Bentley (and in some cases Haynes) make good aftermarket manuals too. And you can find good information on some forums. But you can also find a lot of bad information. Reliably sifting the good from bad only comes with experience--much like in software.
Honestly, the most impactful thing I've seen AI do for any workplace is serve as the ultimate excuse for whatever pet thing someone's wanted to do that can't stand on its own merits and really just needed a good excuse.
Rewrite that old crunchy system that has had 0 incidents in the last year and is also largely "done" (not a lot of new requirements coming in, pretty settled code/architecture)? It's actually one of our most stable systems. But someone who doesn't even write code here thinks the code is yucky! But that doesn't convince the engineers who are on-call for it to replace it for almost no reason. Well guess what. We can do it now, _because AI!!!_ (cue exactly what you think happens next happening next)
Need to lay off 10% of staff because you think the workers are getting too good of a deal? AI.
Need to convince your workers to go faster, but EMs tell you you can't just crack the whip? AI mandates / token spend mandates!
Didn't like code reviews and people nitpicking your designs? Sorry, code reviews are canceled, because of AI.
Don't like meetings or working in a team? Well now everyone is a team of 1, because of AI. Better set up some "teams" full of teams of 1, call them "AI-first" teams, and wait what do you mean they're on vacation and the service is down?
Etc. And they don't even care that these things result in the exact negative outcomes that are why you didn't do them before you had the excuse. You're happy that YOUR thing finally got done despite all the whiners and detractors. And of course, it turns out that businesses can withstand an absurd amount of dysfunction without really feeling it. So it just happens. Maybe some people leave. You hire people who just left their last place for doing the thing you just did and now maybe they spend a bit of time here. And the game of musical chairs, petty monarchies, and degenerate capitalism continues a bit longer.
Big props to the people who managed to invent and sell an excuse machine though. Turns out that's what everyone actually wanted.
> I think for a lot of companies, AI is a destabilizing force that their managerial structure is unable to compensate for.
From the article:
> because the competence the work reflects is not the novice’s competence at all
The core of the problem is that AI allows engineers who were previously inexperienced or downright mediocre to pretend that they are talented, and a lot of management isn’t equipped to evaluate that. It’s like tourists looking at a grocery store in North Korea from their tour bus. It looks like a fully functioning grocery store from the outside, but it is mostly cutouts and plastic fruit.
> you're basically removing a dam - resulting in far more stress on the rest of the system.
Adding to the grab-bag of useful flow-dysfunction concepts and metaphors: Braess's paradox. [0]
Sometimes adding a new route makes congestion strictly worse! Not (just) because of practical issues like intersections, but because it changes the core game-theory between competing drivers choosing routes.
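The textbook instance is small enough to check by hand: 4000 drivers, two routes, each made of one congestion-sensitive leg (n/100 minutes when n drivers use it) and one fixed 45-minute leg. A quick sketch of the arithmetic in Python:

    # Classic Braess network: 4000 drivers from A to B.
    # Route 1: A->X takes n/100 min, X->B takes 45 min.
    # Route 2: A->Y takes 45 min,   Y->B takes n/100 min.
    drivers = 4000

    # Without a shortcut, equilibrium splits drivers evenly:
    print(drivers / 2 / 100 + 45)          # 65.0 minutes each

    # Add a free X->Y shortcut. Taking both n/100 legs is now
    # individually better for every driver, so all 4000 pile on:
    print(drivers / 100 + drivers / 100)   # 80.0 minutes each

Everyone is worse off, even though a road was added and no individual driver behaved irrationally.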
I find it astounding how otherwise intelligent people fall for such obvious theatre. One really does need a particular mindset to filter this out, and that is almost entirely absent from typical management.
As usual, if you don't have an actual reliable signal, or acquiring that signal takes too long - you'll fall back to relying on cheap proxy signals. Confidence over competence, etc. And those that are best at self-promotion and politics win.
I've got recent experience with exactly this - someone who is completely out of their depth, misrepresenting their actual capabilities. Their reliance on AI is so strong because of this lack of depth - to such a degree that they never learn anything. Lately they've been creating drama and endless discussions about dumb things to a) try to appear like they have strong opinions, and b) filibuster away the time so they don't have to talk about important things related to their work output.
What I see in this article is a kind of structural isomorphism: it sincerely criticizes AI slop while reproducing the same failure mode it is criticizing.
Intentional rhetorical repetition is not necessarily bad. I repeat myself too when I want to make a point stronger. The problem is the context. This is an article that sincerely criticizes the inflation of workplace artifacts. In that context, repetition and expansion become part of the issue.
As far as I can tell, the article provides only one real data point: a colleague spent two months building a flawed data system, people objected all the way up to the V.P. level, and the project still continued. That incident clearly affected the author strongly. But then almost every general claim in the article seems to radiate outward from that one event. The cited papers mostly work to convert that single workplace experience into a general thesis.
If you remove the citations and reduce the article to its core, what remains is basically: “I observed one colleague I disliked producing bad AI-assisted work.”
That may still be a valid experience. But inflating a thin signal with length and authority is close to the essence of the AI slop the author criticizes. The article’s own writing style participates in that pattern.
Again, I do not think repetition itself is bad. Repetition can be useful when the context justifies it. But context has to stay beside the claim. Without enough context, repetition starts to look less like argument and more like volume.
P.S. I’m a little hesitant to use the word “structural” in English, since it has become one of those overused, AI-sounding words. But here, I think it actually fits.
I don't really agree. The author cites studies. And some of the problems they talk about don't need proof because they're obvious, like people writing huge documents where previously they'd have written a paragraph.
I mean, not every communication can be a PhD dissertation that provides dozens of examples as evidence and cites 100 sources. Sometimes, it's enough to have a single good, representative example and build a narrative around that through rhetorical devices like repetition. We are not holding the author to the standard of proof that academic papers are held to. I agree, though, that repetition, if that's all the author is leaning on, can get annoying.
The same incentives that discourage good code in pre-AI times are still dominating now. You will be pushed to ship sub-par products in the future, just like you were in the past.
AI certainly has the potential to make the underlying code/design a lot cleaner. We will also be working with dramatically more code, at a much higher rate of change. That alone will be a big challenge to keep sustainable.
The ones making the decision to under-invest in design are either unaware of the real costs, or are aware and deliberately choosing that path - that's not new, and I don't expect it to change.
The only thing that has changed is that there used to be a loose correlation between capability to effect change and inherent desire for quality. This correlation barely exists anymore, so the counter-cultural acts that happened to manifest quality inside our perverse systems will occur much more rarely now.
Like with a lot of things in this space, it depends where you invest your effort. If you care about quality design and good code, you can definitely get there - but that doesn't happen by default.
With the right investment, we could certainly have tooling that creates and maintains very good designs out of the box. My bet is that we'll continue chasing quick and hacky code, mostly because that's the majority of the code that it was trained on, and because the majority of people seem to be interested in a quick result vs a long-term maintainable one.
"In an agentic world, the OS needs to be completely rethought" - if AI is progressing as fast as we think it is, I don't think we'll be interested in waiting for the world to rebuild all the legacy tooling from the OS up. For new stuff, that'd be great.
I imagine the AIs will get a lot better at intercepting things at an intermediate level - API calls under the hood, etc. Probably much better (and cheaper) vision abilities, and perhaps even deeper integration into the machine code itself. It's really hard to anticipate what an advanced model will be capable of 5 years from now.
What gets me is that some people seem to ignore the very real cliff of complexity that ramps up the moment you move to eventual consistency. If you need it you need it, but you have to bake in those assumptions everywhere - and they commonly break the default assumptions of those who don't have a bunch of experience with it or haven't architected their approach to work around those.
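A toy illustration of the kind of baked-in assumption I mean (Python, everything invented for illustration): the default read-your-own-writes expectation silently breaks the moment reads can hit a lagging replica.

    import time

    # Toy in-memory stand-in for a replicated store: writes become
    # visible to readers only after a replication lag. Not a real client.
    class LaggyKV:
        def __init__(self, lag=0.1):
            self.replica = {}
            self.pending = []          # (visible_at, key, value)
            self.lag = lag

        def put(self, key, value):
            self.pending.append((time.time() + self.lag, key, value))

        def get(self, key):            # reads always hit the replica
            now, still_pending = time.time(), []
            for visible_at, k, v in self.pending:
                if visible_at <= now:
                    self.replica[k] = v          # write finally replicated
                else:
                    still_pending.append((visible_at, k, v))
            self.pending = still_pending
            return self.replica.get(key)

    kv = LaggyKV()
    kv.put("user:42:name", "Alice")
    print(kv.get("user:42:name"))      # None: read-your-writes just broke

    # The assumption you have to bake in everywhere: don't trust the
    # first read; poll (or pin reads to the primary) until convergence.
    deadline = time.time() + 1.0
    while kv.get("user:42:name") != "Alice" and time.time() < deadline:
        time.sleep(0.02)
    print(kv.get("user:42:name"))      # "Alice" once the replica catches up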
And in many cases it's those architectures that force more complexity and make it appear like they have much bigger challenges than they do. Great for resume-driven development, but often you can get away with far less.
This is commonplace. So commonplace that most have worked “checking the LLM” into their workflow so deeply that essentially all that’s done is prompt followed by a mini code review.
To suggest a senior engineer blindly accepts modifications without code review kinda hints that you haven’t used LLMs enough to realize how quickly they’ll make a mess of things if you don’t hold their hand.
Lol why is it arrogant? My workplace is evidence that having a senior engineer title or even a computer science degree doesn't mean you are a good engineer. I honestly think some people have fake credentials and got their jobs via nepotism.
Jumping to an assumption like this - that they didn't review their work - that's somewhat of an insult to someone who has done this for a long time.
Now it's totally possible that they're an awful developer (who knows!), but it's arrogant to assume that with no evidence.
And I agree, some of the worst devs I've worked with have been PhDs or had otherwise (ostensibly) impressive credentials. And I absolutely think at least one of them was just lying about their background.
Which is crazy cause GenX is management as everything falls apart. GenX is 50-65 year olds running everything and everything sucks now.
GenX is the big tech leaders, the insurance CEO that got got, the EpiPen CEO jacking up prices, senior teachers and admins as student grades slide into the toilet, the uncreative repetitive Hollywood decision makers. Hollywood actors slapping each other live on camera and hacking their faces up to pretend they are still 25. They manage the construction companies that refuse to build more homes.
As an older Millennial it's not a shock they ended up such poor leaders. Working with GenX has always sucked.
I think you need to be a little careful of taking the whole generational group thing too far. It's a very lazy way to think, and it can cause real hatred on overly simplified, group identified lines.
Like: if you were a few years older, now you're the focus of your own hatred? Doesn't make much sense, does it?
As a millennial, I apologize for the blame and hate the boomer generation gets. But I think it's important to understand why the hate exists.
Many boomers grew up in an era where even if you dropped out of high school and waited tables full time for a few years, you'd be able to afford to buy a house and start a family by age 25. Sure, interest rates were 20%, but the price of a house was often just 2-3x someone's annual salary (single earner). Now the price of a house is often 4-5x a household's annual salary.
Boomers also had access to stuff like pensions.
I think boomers wouldn't get hate if it weren't a trope for them to say that the millennial generation is lazy, entitled, etc., when millennials have to be extraordinary in order to live what used to be an ordinary life (a 3-bedroom house, 2 kids).
"for them to say that the millennial generation is lazy, entitled, etc" - they said it about Gen X too, but there's too few of us, so they focused on the Millenials instead.
I too dislike the Millennial whoop, but I like smashed avocado toast, so it's a wash for me.