I presume many are. It's a different medium, but it's still creative. We got the "Mythbusters" show out of some of the model builders who didn't want to move to CGI.
Another thing I might throw out there is that there are so many domains and niches out there that person A and person B are almost certainly having genuinely different experiences with the same tools. So when person A says "wow this is the best thing ever" and person B says "this thing is horrible" they might both be right.
When did this "junior/senior" lingo get cool? I don't remember it being used when I was young. Maybe the LeetCode trend brought on a sort of gamification of the profession, with ranks and so on?
As a 51 year old, I hate when other old people think that “back in my day things were different”
> Evans has held his present position with IBM since 1965. Previously, he had been a vice president of the Federal Systems Division with the management responsibility for developing large computing systems; the culmination of this work was the IBM System/360. He joined IBM in 1951 as a junior engineer and has held a variety of engineering and management positions within the corporation.
The title has always existed. I meant the obsession about being "a junior" or "a senior", like gaining an achievement in a video game or something. I just thought every young person was a junior engineer and every old person was a senior engineer.
You don’t get to be a senior engineer just because of tenure. It’s not gaming the system to expect a level to be based on the amount of responsibility, not just on getting 1 year of experience 10 times over.
You want a promotion because you want more money, even though I have found the difference not to be that great on the enterprise dev side. In BigTech and adjacent companies, though, we are talking about differences of multiple six figures as you move up.
I work in consulting, and our bill rate is based on our title/level of responsibility. It kills me that some non-customer-facing consultants want to have a “career track” that doesn’t involve leading projects and strategy, and want to stay completely “hands on”.
We can hire people cheaply from outside the country who can do that. There is an IC career track that is equal to a director (a manager of managers), but you won’t get there hands on keyboard.
The bigger the company the less impressive "senior" is. There are probably three levels of staff above it and then distinguished super fellow territory.
Hardly. Senior at Amazon is pretty prestigious. A Senior at Google is also a pretty nice title. In my experience smaller companies are more likely to give out the Senior title like it's nothing.
A senior software engineer can easily make $300-400K+ at BigTech; that’s “impressive” enough to me.
On the other hand, a “senior” working at a bank or other large non-tech company will probably be making less than $175K if they aren’t on the West Coast.
When I talk to a senior: “hey, we got this initiative, and I only know a little about it. Can you talk to $stake_holder, figure out what they need, and come back to me with your design ideas, how long you think it will take, etc.”
I can do that with a few seniors and put Epics together and they can take ownership of it.
For a junior, I have to do a lot more handholding and make sure the requirements are well spelled out.
When I was a junior engineer, I needed almost no hand-holding, and could take ill-defined initiatives, figure out the desired goals and outcomes, and ship them.
It's just that my code would be shit (hard to understand, hard to test...), but I learned quickly to improve that through code reviews (both receiving and giving them) and architecture discussions. I can't thank the team enough that put up with me in my first 6-12 months :)
When I find a junior engineer like that, I give them as little direction as I can, and remain available to pair, review, or discuss when they get stuck. And they... fly... I also try to develop these qualities in everyone, but it's sometimes really hard to get people to recognize what really matters for getting over the finish line.
And I've seen plenty of "senior+" engineers who can't do it and go on to harp about a field in a data model here or a field in a data model there, adding weeks to shipping something. So really, it is only a paygrade.
Those "competency matrices" are really just a way to reject anyone from the promotion they are hoping for: they won't be a blocker if someone has that innate ability to help the team get things done.
To each his own, but multi-tasking feels bad to me. I want to spend my life pursuing mastery of a craft, not lazily delegating. Not that everyone should have the same goals, but the mastery route feels like it's dying off. It makes me sad.
I get it that some people just want to see the thing on the screen. Or your priority is to be a high status person with a loving family etc.. etc... All noble goals. I just don't feel a sense of fulfillment from a life not in pursuit of something deeper. The AI can do it better than me, but I don't really care at the end of the day. Maybe super-corp wants the AI to do it then, but it's a shame.
I have wondered about that actually. Thanks, I'll read that, looks interesting.
Surely Donald Knuth and John Carmack are genuine masters though? There's the Elon Musk theory of mastery where everyone says you're great, but you hire a guy to do it, and there's the <nobody knows this guy but he's having a blast and is really good> theory where you make average income but live a life fulfilled. On my deathbed I want to be the second. (Sorry this is getting off topic.)
Steve Jobs wrote code early on, but he was never a great programmer. That didn’t diminish his impact at all. Same with plenty of people we label as "masters" in hindsight. The mastery isn’t always in the craft itself.
What actually seems risky is anchoring your identity to being the best at a specific thing in a specific era. If you're the town’s horse whisperer, life is great right up until cars show up. Then what? If your value is "I'm the horse guy," you're toast. If your value is taste, judgment, curiosity, or building good things with other people, you adapt.
So I’m not convinced mastery is about skill depth alone. It's about what survives the tool shift.
I won't insult the man, but I never liked Steve Jobs. I'd rather be Wozniak in that story.
"taste, judgment, curiosity, or building good things with other people"
Taste is susceptible to turning into a vibes/popularity thing. I think success is mostly about first doing the basics (going to work on time, not being a dick), then ego, personality, presentation, and so on. These seem like unfulfilling preoccupations, and I'm as susceptible to them as anyone else, but in my best life I wouldn't be so concerned about "success". I just want to master a craft and be satisfied in that pursuit.
I'd love to build good things with other people, but for whatever reason I've never found other people to build things with. So maybe I suck, that's a possibility. I think all I can do is settle on being the horse guy.
(I'm also not incurious about AI. I use AI to learn things. I just don't want to give everything away and become only a delegator.)
Edit: I'm genuinely terrified that AI is going to do ALL of the things, so there's not going to be a "survives the shift" except for having a likable / respectable / fearsome personality
The other surprising skill from this whole AI craze is, it turns out that being able to social engineer an LLM is a transferable skill to getting humans to do what you want.
One of the funniest things to see nowadays is the opposite, though: some people expecting similar responses from humans and getting thrashed, because we are not LLMs programmed to make them feel good.
What I gathered is that Chomsky was genuinely friends with Epstein, even after the sex crimes had been revealed (which is disturbing), but that there's no evidence Chomsky himself did anything criminal or immoral beyond being friends with a monster. The charitable interpretation is that Chomsky is just as easily conned by fun-seeming extroverted people as anyone else. But maybe there's more to it.
When I ran this experiment, it was pretty exhilarating for a while. Eventually it turned into QA testing the work of a bad engineer and became exhausting. Since I had sunk so much time into it, I felt pretty bad afterwards: not only did the thing it made not end up being shippable, but I hadn't benefitted as a human being while working on it. I had no new skills to show. It was just a big waste of time.
So I think the "second way" is good for demos now. It's good for getting an idea of what something can look like. However, in the future I'll be extremely careful about not letting that go on for more than a day or two.
I believe the author explicitly suggests strategies to deal with this problem, which is the entire second half of the post. There’s a big difference between when you act as a human tester in the middle vs when you build out enough guardrails that it can do meaningful autonomous work with verification.
I'm just extremely skeptical about that because I had many ideas like that and it still ended up being miserable. Maybe with Opus 4.5 things would go better though. I did choose an extremely ambitious project to be fair. If I were to try it again I would pick something more standard and a lot smaller.
This is so relatable it's painful: many many hours of work, overly ambitious project, now feeling discouraged (but hopefully not willing to give up). It's some small consolation to me to know others have found themselves in this boat.
+1... as with a large enough engineering team, this is ultimately a guardrails problem, which, in my experience with agentic coding, is very solvable, at least in certain domains.
As with large engineering teams, I have little faith that people will suddenly get the discipline to do the tedious, annoying, difficult work of building good-enough guardrails now.
We don't even build guardrails that keep humans who test stuff as they go from introducing subtle bugs by accident; removing more eyes from that introduces new risks (although LLMs are also better at avoiding certain types of bugs, like copypasta shit).
"Test your tests" gets very difficult as a product evolves and increases in complexity. Few contracts (whether at the unit-test level or the "automation clicking on the element on the page" level) are static enough to avoid needing to rework the tests, which means reworking the testing of the tests, and so on.
I think we'll find out just how low the general public's tolerance for bugs and regressions is.
But I am not so pessimistic. I do think it will be possible, because it is more fun to test your tests now than in the pre-LLM era. You just need a little bit of knowledge and patience, and the LLM absorbs most of the psychic pain.
If programmers get accustomed to testing their tests, software might actually get better.
I've had the opposite results. I used to "vibe code" in languages that I knew, so that I could review the code and, I assumed, contribute myself. I got good enough results that I started using AI to build tools in languages I had no prior knowledge of. I don't even look at the code any more. I'm getting incredible results. I've been a developer for 30+ years and never thought this would be possible. I keep making more and more ambitious projects and AI just keeps banging them out exactly how I envision them in my mind.
To be fair, I don't think someone with less experience could get these results. I'm leveraging everything I know about writing software, computer science, product development, team management, marketing, written communication, requirements gathering, architecture... I feel like vibe coding is pushing myself and AI to the limits, but the results are incredible.
I don't want to dox myself since I'm doing it outside my regular job for the most part, but frameworks, apps (on those frameworks), low level systems stuff, linux-y things, some P2P, lots of ai tools. One thing I find it excels at is web front-end (which is my least favorite thing to actually code), easily as good as any front-end dev I've ever worked with.
I think my fatal error was trying to make something based on "novel science" (I'll be similarly vague). It was an extremely hard project to be fair to the AI.
It is my life goal to make that project though. I'm not totally depressed about it because I did validate parts of the project. But it was a let down.
Baby steps are key for me. I can build very ambitious things, but I never ask it to do too much at once. Focus a lot on having it get the docs right before it writes any code (it'll use the docs), and make the instructions reflexive (i.e. "update the docs when done"). Make libraries, composable parts... I don't want to be condescending, since you may have tried all of that, but I feel like I'm treating it the same as when I architect things for large teams: thinking in layers and little pieces that can be assembled to achieve what I want.
I'll add that it does require some banging your head against the wall at times. I normally will only test the code after doing a bunch of this stuff. It often doesn't work as I want at that point and I'll spend a day "begging" it to fix all of the problems. I've always been able to get over those hurdles, and I have it think about why it failed and try to bake the reasoning into the docs/tests... to avoid that in the future.
I did make lots of design documents and sub-demos. I think I could have been cleverer about finding smaller pieces of the project which could be deliverables in themselves and which the later project could depend on as imported libraries.
This happened to me too, in an experimental project where I was testing how far the model could go on its own. Despite making progress, I can't bear to look at the thing now. I don't even know what questions to ask the AI to get back into it; I'm that disconnected from it. It's exhausting to think about getting back into it; I'd rather just start from scratch.
The fascinating thing was how easy it was to lose control. I would set up the project with strict rules and md files and tell myself to stay fully engaged, but out of nowhere I slid into compulsive accept mode, or, worse, told the model to blatantly ignore the rules I had set out. I knew better, and yet it happened over and over. Ironically, it was as if my context window was so full of "successes" that I forgot my own rules; I reward-hacked myself.
Maybe it just takes practice and better tooling and guardrails. And maybe these are the growing pains of a new programmer's mindset. But it left me a little shy to try full delegation any time soon, certainly not without a complete reset on how to approach it.
I’ll chime in to say that this happened to me as well.
My project would start out well, but eventually end up in a state where nothing could be fixed and the agent would burn tokens going in circles trying to fix little bugs.
So I’d tell the agent to come up with a comprehensive refactoring plan that would allow the issues to be recast in more favorable terms.
I’d burn a ton of tokens to refactor, little bugs would get fixed, but it’d inevitably end up going in circles on something new.
That's kind of what learning to code is like, though. I assume you're using an LLM because you don't know enough to do it entirely on your own. At least that's where I'm at, and I've had similar experiences to yours. I was trying to write a Rust program, and while I was able to get something into a working state, I wasn't confident it was secure.
I've found that getting the LLM to ingest high-quality posts/books about the subject and using those to generate Anki cards has helped a lot.
I've always struggled to learn from that sort of content on my own. That was leading me to miss some fundamental concepts.
I expect to restart my project several more times as I find out more of what I need to know to write good code.
Working with LLMs has made this so much easier. It surfaces ideas and concepts I had no idea about and makes it easy to convert them into an ingestible form for actual memorization. It makes cards with full syntax highlighting. It's delightful.
(I know you're replying to another guy but I just saw this.) I've been programming for 20 years, but I like the LLM as a learning assistant. The part I don't like is when you just come up with craftier and craftier ways to yell at it to do better, without actually understanding the code. The project I gave up on was at almost a million lines of code generated by the LLM, so it would have been impossible to easily restart it.
"Test the tests" is a big ask for many complex software projects.
Most human-driven coding + testing takes heavy advantage of being white-box testing.
For open-ended complex-systems development turning everything into black-box testing is hard. The LLMs, as noted in the post, are good at trying a lot of shit and inadvertently discovering stuff that passes incomplete tests without fully working. Or if you're in straight-up yolo mode, fucking up your test because it misunderstood the assignment, my personal favorite.
We already know it's very hard to have exhaustive coverage for unexpected input edge cases, for instance. The stuff of a million security bugs.
So as the combinatorial surface of "all possible actions that can be taken in the system in all possible orders" increases because you build more stuff into your system, so does the difficulty of relying on LLMs looping over prompts until tests go green.
"I reward-hacked myself" is a great way to put it!!
AI is too aware of human behavior, and it is teaching us that willpower and config files are not enough. When the agent keeps producing output that looks like progress, it is hard not to accept. We need something external that pushes back when we don't.
That is why automated tests matter: not just because they catch bugs (though they do), but because they are a commitment device. The agent can't merge until the tests pass. "Test the tests" matters because otherwise the agent just games whatever shallow metric we gave it, or when we're not looking, it guts the tests.
The discipline needs to be structural, not personal. You cannot out-willpower a system that is totally optimized to make you say yes.
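To make "test the tests" concrete, here's a minimal sketch of a hand-rolled mutation check (all function names here are my own invention, not anything from this thread): run the suite against the real implementation and against a deliberately broken mutant. If the mutant also passes, the suite is too weak to act as a commitment device for an agent's merges.

```python
def clamp(x, lo, hi):
    """Clamp x into the range [lo, hi]."""
    return max(lo, min(x, hi))

def mutated_clamp(x, lo, hi):
    # Deliberately broken variant: swaps min and max, so values
    # inside the range get mangled instead of passed through.
    return min(lo, max(x, hi))

def run_suite(fn):
    """Return True if every assertion passes for the given implementation."""
    try:
        assert fn(5, 0, 10) == 5     # value inside the range is unchanged
        assert fn(-3, 0, 10) == 0    # value below the range clamps to lo
        assert fn(42, 0, 10) == 10   # value above the range clamps to hi
        return True
    except AssertionError:
        return False

# The real implementation should pass; the mutant should fail.
print(run_suite(clamp))          # True
print(run_suite(mutated_clamp))  # False: the suite catches this mutant
```

Real mutation-testing tools automate exactly this loop at scale, but even this toy version shows the structural idea: the gate is "kills known mutants", not "looks green", which is much harder for either the agent or a tired reviewer to game.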
> not only did the thing it made not end up being shippable
The difference between then and now is that often with the latest models, it is shippable without bugs within a couple of LLM reviews.
I’m ok doing the work of a dev manager while holding the position of developer.
I’m sure there was someone that once said “The washing machine didn’t do a good job, and I wasn’t proud of myself when I was using it”, but that didn’t stop washing machines from spreading to most homes in first-world countries.
It's also good for quickly creating legitimate looking scam and SEO spam sites. When they stop working, throw them away, and create a dozen more. Maintenance is not a concern. Scammers love this new tech.