This is the typical arrogance of developers who see no value in anything but the coding. I've been hands-on for 45 years, but have also spent 25 of those dealing with architecture and larger systems design. The actual programming is by far the simplest part of designing a large system. Outsourcing it is only dumbing you down if you don't spend the time it frees up to move up the value chain.
Talk about arrogance, Mr 45 Years of Experience. Ever thought that there might be people under the skyscraper that is your ego? I'm pretty sure the majority of tech workers aren't even 45 years old. Where are they supposed to learn good design when slop takes over? You spent at least 20 years JUST programming, assuming you never touched large-scale design before the last 25 years. Simplest part my ass.
> Ever thought that there might be people under the skyscraper that is your ego?
I do, which is exactly why I found the presumption that not spending your time doing the coding is equivalent to a disability both gross and arrogant.
> Where are they supposed to learn good design when slop takes over?
You're not learning good architecture and systems design from code. You learn good architecture and systems design from doing architecture and systems design. It's a very different discipline.
While knowing how to code can be helpful, and can even be important in narrow niches, it is a very minor part of understanding good architecture.
And, yes, I stand by the claim that coding is by far the simplest part, on the basis of having done both for longer than most developers have been doing either.
> And, yes, I stand by the claim that coding is by far the simplest part, on the basis of having done both for longer than most developers have been doing either.
>My "actual job" isn't to write code, but to solve problems
Solve enough problems relying on AI writing the code as a black box, and over time your grasp of coding will worsen, and you won't understand what the AI should be doing or what it is doing wrong - not even at the architectural level, except in broad strokes.
One ends up like the clueless manager type who hasn't touched a computer in 30 years. At which point there will be little reason for their employers to retain their services.
Computer programming as a whole ends up relying on the canned experience in the AI training set, producing an ever larger ratio of AI churn in the available training code over time, and plateauing both itself and the AI - with the dubious prospect of reaching the Singularity its only way out.
Yet most organizations in existence pay the people “who hasn’t touched a computer in 30 years” quite a large amount of money to continue to solve problems, for some inscrutable reason… =)
> Solve enough problems relying on AI writing the code as a black box, and over time your grasp of coding will worsen, and you won't understand what the AI should be doing or what it is doing wrong - not even at the architectural level, except in broad strokes.
Using AI myself _and_ managing teams almost exclusively using AI has made this point clear: you shouldn't rely on it as a black box. You can rely on it to write the code, but (for now at least) you should still be deeply involved in the "problem solving" (that is, deciding _how_ to fix the problem).
A good rule of thumb that has worked well for me is to spend at least 20 min refining agent plans for every ~5 min of actual agent dev time. YMMV based on plan scope (obviously this doesn't apply to small fixes, and applies even more so to larger scopes).
What I find most "enlightening", and also frightening, is that people I've worked with for quite some time, and whom I respected for their knowledge and abilities, have started spewing AI nonsense and switching off their brains.
It's one thing to use AI like you might use a junior dev who does your bidding, or as a rubber duck. It's a whole other ballgame if you just copy and paste whatever it says as truth.
And regarding it obviously not applying to small fixes: oh yes it does! The AI has tried to "cheat" its way out of a situation so many times it's not even funny any longer (compare yesterday's post about Anthropic's original take-home test, in which they themselves warn you not to just use AI to solve it, as it likes to try to cheat - e.g. by just enabling more than one core). It's done this enough times that sometimes, when I don't yet understand an answer well enough myself, I don't trust Claude and dismiss a correct assessment it made as "yet another piece of AI BS".
Let us use an analogy. Many (most?) people can tell a well-written book or story from a mediocre or terrible one, even though the vast majority of readers haven't written one in their lives.
To distinguish good from bad doesn't necessarily require the ability to create.
This analogy serves my argument: in it, just as "most people" are mere readers (not only are they not writers, they're also nowhere near the level of a competent book editor or critic), the programmer becomes a mere user of the end program.
Not only would working at the level of understanding of "most people" be a bad way of running a publishing business's writing and editing, but even in the best case, where it's workable, the publisher (or software company) can just fire the specialist and get some average readers/users to give a thumbs up or down to whatever it churns out.
I'm not actually sure that's true. There's plenty of controversy over popular and beloved books actually not being very well written. I mean, I've been hearing this complaint since Twilight was popular.
I haven't read Twilight, but I've read a few beloved and popular books that are atrociously written from a literary standpoint. That does not mean they are not popular for a reason.
One I did read, out of morbid curiosity, is 50 Shades. It's utter dreck in terms of writing quality. It's trite, it's full of clichés, and formulaic in the extreme (it's also, incidentally, a repurposed Twilight fanfic; if you wonder about the weird references to hunger, that's the reason). But if you look at why it became popular, you might notice that it is extremely well crafted for its niche.
If you don't want a "billionaire romance" (yes, this is a well-defined niche; there's a reason Grey is described as one) melded with the "danger" of a vampire transformed into a traumatised man with a dark side, it's easy to tear it apart (I couldn't get all the way through it - it was awful along the axes I care about). But as a study in flawlessly merging two niches popular with one of the biggest book-buying demographics - one with extremely predictable and rigid expectations - it's really well executed.
I'd struggle to accept it as art, but as a particular kind of craft, it is a masterpiece even if I dislike the craft in question.
You will undoubtedly find poorly executed dreck that became popular just because it happened to strike a chord out of sheer luck, but most of the time, when I look at something I dislike and ask what made it resonate with its audience, it turns out it resonated because it was crafted to hit all the notes that specific audience likes.
At the same time, it's never been the case that great pieces of literature were assured of doing well on release. Moby Dick, for example, only sold 3,000 copies during Melville's lifetime (which makes me feel a lot better about the sales of my own novels, though I don't hold out any hope of posthumous popularity) and was one of his least successful novels when it was first published. A lot of the most popular media of the time is long since forgotten, for good reason. And so we end up with a survivorship bias towards the past, where we see centuries of great classics that have stood the test of time and none of the dreck, and measure them up against dreck and art alike in contemporary media.
I have very little knowledge of how transistors shuffle ones and zeros out of registers. That doesn't prevent me from using them to solve a problem.
Computing is always abstractions. We moved from plugboards to assembly, then to C, then to languages that manage memory for you - how on earth can you understand what the compiler should be doing, or what it is doing, if you don't deal with explicit pointers on a day-to-day basis?
We bring in libraries when we need code. We don't build our own database; we use something else, and we just do "apt-get install mysql" - then we moved on to "docker run", or perhaps we invoke it with the AWS CLI. Who knows what Terraform actually does when we declare we want a resource?
I was thinking the other day about how abstractions like AWS or Docker are similar to LLMs. With AWS you just click a couple of buttons and you have a data store; you don't know how to build a database from scratch, and you don't need to. Of course, "to build a database from scratch you must first create the universe".
Some people still hand-craft assembly code to great benefit, but the vast majority don't need to in order to solve problems - and most couldn't.
This musing was in the context of what we do if/when AWS data centres are not available. Our staff are generally incapable of working in a non-AWS environment - something we have deliberately cultivated for years. AWS Outposts are one option, or perhaps we should run a non-AWS stack that we fully own and control.
Is relying on LLMs fundamentally any different from relying on AWS, or apt, or Java? Is it different from outsourcing? You concentrate on your core competency, which is understanding the problem and delivering a solution, not managing memory or running databases. This comes with risk - all outsourcing does - and if outsourcing to a single supplier you don't and can't understand is an acceptable risk, then is relying on LLMs not?
There's never been a case in my long programming career so far where knowing the low-level details has not benefited me. The level of value varies, but it is always positive.
When you use LLMs to write all your code you will lose (or never learn) the details. Your decision making will not be as good.
I think there is a big difference. You can and should have both kinds of knowledge. This applies whether you're a lowly programmer or a CEO. Knowing the details will always help you make better decisions.
I think it's a lot like outsourcing. And, expected quality of outsourcing aside, more importantly, I don't see outsourcing as the next step up on the ladder of programming abstraction. It's having someone else do the programming for you (at the same abstraction level).
Wait, did you see the part where the person you are replying to said that writing the code themselves was essential to correctly solving the problem?
Because they didn't understand the architecture or the domain models otherwise.
Perhaps in your case you do have strong hands-on experience with the domain models, which may indeed have shifted your job requirements to supervising those implementing the actual models.
I do wonder, however, how much of your actual job also entails ensuring that whoever is doing the implementation is also growing in their understanding of the domain models. Are you developing the people under you? Is that part of your job?
If it is an AI that is reporting to you, how are you doing this? Are you writing "skills" files? How are you verifying that it is following them? How are you verifying that it understands them the same way that you intended it to?
Funny story - I asked an LLM to review a call transcript to see if the caller was an existing customer. The LLM said True. It was only when I looked closer that I saw that the LLM meant "True - the caller is an existing customer of one of our competitors". Not at all what I meant.
I saw that part and I disagreed with the very notion, hence why I wrote what I did.
> Because they didn't understand the architecture or the domain models otherwise.
My point is that requiring or expecting an in-depth understanding of all the algorithms you rely on is not a productive use of developer time, because outside narrow niches it is not what we're being paid for.
It is also not something the vast majority of us do now, or have done for several decades. I started with assembler, but most developers have never worked less than a couple of abstractions up, often more, and have leaned heavily on heaps of code they do not understand, because it is not necessary.
Sometimes it is. But for the vast majority of us pretending it is necessary all the time or even much of the time is a folly.
> I do wonder, however, how much of your actual job also entails ensuring that whoever is doing the implementation is also growing in their understanding of the domain models. Are you developing the people under you? Is that part of your job?
Growing the people under me involves teaching them to solve problems, and long before AI that already typically involved teaching developers to stop obsessing over details with low ROI for the work they were actually doing, in favour of understanding and solving the problems of the business. Often that meant making them draw a line between what actually served the needs they were paid to address and what was personally fun to them. (I've been guilty of diving into complex low-level problems I find fun rather than the highest-ROI problems too - ask me about my compilers, my editor, my terminal. I'm excellent at yak shaving, but I work hard to keep it away from my work.)
> If it is an AI that is reporting to you, how are you doing this? Are you writing "skills" files? How are you verifying that it is following them? How are you verifying that it understands them the same way that you intended it to?
For AI use: Tests. Tests. More tests. And, yes, skills and agents. Not primarily even to verify that it understands the specs, but to create harnesses to run them in agent loops without having to babysit them every step of the way. If you use AI and spend your time babysitting them, you've become a glorified assistant to the machine.
And nobody is talking about verifying whether the AI bubble sort is correct or not - but recognizing that if the AI is implementing its own bubble sort, you're waaaay out in left field.
Especially if it’s doing it inline somewhere.
The underlying issue with AI slop is that it's harder to recognize unless you look closely - and then you realize the whole thing is bullshit.
Only if you don't constrain the tests. If you use agents adversarially in generating test cases, tests, and reviews of the results, you can get robust and tight test cases.
Unless you're in research, most of what we do in our day jobs is boilerplate. Using these tools is not yet foolproof, but with some experience and experimentation you can get excellent results.
I meant this more in the sense that there is nothing new under the sun, and that LLMs have been trained on essentially everything that's available online "under the sun". Sure, there are new SaaS ideas every so often, but the software to produce the idea is rarely that novel (in that you can squint and figure out roughly how it works without thinking too hard), and is in that sense boilerplate.
hahaha, oh boy. That is roughly as useful or accurate as saying that all machines are just combinations of other machines, and hence there is nothing unique about any machine.
Vertical CNC mills and CNC lathes are, obviously, different machines with different use cases. But if you compare within the categories, the designs are almost all conceptually the same.
So, what about outside of some set of categories? Well, generally, no such thing exists: new ideas are extremely rare.
Anyone who truly enjoys entering code character for character, refusing to use refactoring tools (e.g. rename symbol), and/or not using AI assistance should feel free to do so.
I, on the other hand, want to concern myself with the end product, which is a matter of knowing what to build and how to build it. There’s nothing about AI assistance that entails that one isn’t in the driver’s seat wrt algorithm design/choices, database schema design, using SIMD where possible, understanding and implementing protocols (whether HTTP or CMSIS-DAP for debugging microcontrollers over USB JTAG probe), etc, etc.
AI helps me write exactly what I would write without it, but in a fraction of the time. Of course, when the rare novel thing comes up, I either need to coach the LLM, or step in and write that part myself.
But, as a Staff Engineer, this is no different than what I already do with my human peers: I describe what needs doing and how it should be done, delegate that work to N other less senior people, provide coaching when something doesn’t meet my expectations, and I personally solve the problems that no one else has a chance of beginning to solve if they spent the next year or two solely focused on it.
Could I solve any one of those individual, delegated tasks faster if I did it myself? Absolutely. But could I achieve the same progress, in aggregate, as a legion of less experienced developers working in parallel? No.
LLM usage is like having an army of Juniors. If the result is crap, that’s on the user for their poor management and/or lack of good judgement in assessing the results, much like how it is my failing if a project I lead as a Staff Engineer is a flop.
> And nobody is talking about verifying whether the AI bubble sort is correct or not - but recognizing that if the AI is implementing its own bubble sort, you're waaaay out in left field.
Verifying time and space complexity is part of what your tests should cover.
But this is also a funny example - I'm willing to bet the average AI model today can write a far better sort than the vast majority of software developers, and is far more capable of analyzing time and space complexity than the average developer.
In fact, I just did a quick test with Claude and asked for a simple sort that took into account time and space complexity. "Of course" it knows that it's well established that pure quicksort is suboptimal for a general-purpose sort, and it gave me a simple hybrid: insertion sort for small arrays, a heapsort fallback to stop pathological recursion, and a decently optimized quicksort. This won't beat e.g. timsort on typical data, but it's a good tradeoff between "simple" (quicksort can be written in 2-20 lines of code or so, depending on language and how much performance you're willing to sacrifice for simplicity) and addressing the time/space complexity constraints.

It's also close to a variant that was covered in an article in DDJ ca. 30 years ago, precisely because most developers didn't know how to write good sorts and were still writing stupidly bad ones manually instead of relying on an optimized library. Fewer developers know how to write good sorts today. And that's not bad - it's a result of not needing to think at that level of abstraction most of the time any more.
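For the curious, the shape it described is essentially the classic introsort pattern. A minimal sketch of that pattern in TypeScript (my own illustration, not Claude's actual output):

    // Introsort-style hybrid: quicksort with a depth limit, a heapsort
    // fallback to stop pathological O(n^2) recursion, and insertion sort
    // for small partitions. O(n log n) worst case, O(log n) extra space.
    const swap = (a: number[], i: number, j: number): void => { [a[i], a[j]] = [a[j], a[i]]; };

    function insertionSort(a: number[], lo: number, hi: number): void {
      for (let i = lo + 1; i <= hi; i++) {
        const v = a[i];
        let j = i - 1;
        while (j >= lo && a[j] > v) { a[j + 1] = a[j]; j--; }
        a[j + 1] = v;
      }
    }

    function heapsort(a: number[], lo: number, hi: number): void {
      const n = hi - lo + 1;
      const sift = (root: number, end: number): void => {
        for (;;) {
          let child = 2 * root + 1;
          if (child > end) return;
          if (child < end && a[lo + child] < a[lo + child + 1]) child++;
          if (a[lo + root] >= a[lo + child]) return;
          swap(a, lo + root, lo + child);
          root = child;
        }
      };
      for (let i = (n >> 1) - 1; i >= 0; i--) sift(i, n - 1);
      for (let end = n - 1; end > 0; end--) { swap(a, lo, lo + end); sift(0, end - 1); }
    }

    function partition(a: number[], lo: number, hi: number): number {
      // Median-of-three pivot parked at hi, then a simple Lomuto sweep.
      const mid = lo + ((hi - lo) >> 1);
      if (a[mid] < a[lo]) swap(a, lo, mid);
      if (a[hi] < a[lo]) swap(a, lo, hi);
      if (a[hi] < a[mid]) swap(a, mid, hi);
      swap(a, mid, hi);
      const pivot = a[hi];
      let i = lo;
      for (let j = lo; j < hi; j++) if (a[j] < pivot) swap(a, i++, j);
      swap(a, i, hi);
      return i;
    }

    function hybridSort(a: number[], lo = 0, hi = a.length - 1,
                        depth = 2 * Math.floor(Math.log2(a.length || 1))): void {
      while (hi - lo > 16) {
        if (depth-- === 0) { heapsort(a, lo, hi); return; } // too deep: bail out
        const p = partition(a, lo, hi);
        // Recurse into the smaller side only, to bound stack depth.
        if (p - lo < hi - p) { hybridSort(a, lo, p - 1, depth); lo = p + 1; }
        else { hybridSort(a, p + 1, hi, depth); hi = p - 1; }
      }
      insertionSort(a, lo, hi); // cheap for small, nearly-sorted ranges
    }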
And this is also a great illustration of the problem: even great developers often have big blind spots, where AI will draw on results they aren't even aware of. Truly great developers will be aware of their blind spots and know when to research, but most developers are not great.
But a human developer, even a not-so-great one, might know something about the characteristics of the actual data a particular program is expected to encounter that allows something more efficient than this AI-coded hybrid sort for that particular application. This is assuming the AI can't deduce the characteristics of the expected data from the specs, even if a particular time and space complexity is mandated.
I encountered something like this recently. I had to replace an exact data comparison operation (using a simple memcmp) with a function that would compare data and allow differences within a specified tolerance. The AI generated beautiful code using chunking and all kinds of bit twiddling that I don't understand.
But what it couldn't know was that most of the time the two data ranges would match exactly, thus taking the slowest path through the comparison by comparing every chunk in the two ranges. I had to stick a memcmp early in the function to exit early for the most common case, because it only occurred to me during profiling that most of the time the data doesn't change. There was no way I could have figured this out early enough to put it in a spec for an AI.
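The moral of that story as a sketch, with Node's Buffer equality standing in for memcmp (illustrative TypeScript, not the actual function from that project):

    import { Buffer } from "node:buffer";

    // Tolerance compare with an exact-match fast path: profiling showed the
    // two inputs are usually identical, so check that cheaply before falling
    // into the per-element tolerance loop.
    function almostEqual(a: Float64Array, b: Float64Array, tol: number): boolean {
      if (a.length !== b.length) return false;
      // Fast path: byte-for-byte equality - the common case in this workload.
      const ab = Buffer.from(a.buffer, a.byteOffset, a.byteLength);
      const bb = Buffer.from(b.buffer, b.byteOffset, b.byteLength);
      if (ab.equals(bb)) return true;
      // Slow path: element-wise comparison within the given tolerance.
      for (let i = 0; i < a.length; i++) {
        if (Math.abs(a[i] - b[i]) > tol) return false;
      }
      return true;
    }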
> But a human developer, even a not-so-great one, might know something about the characteristics of the actual data a particular program is expected to encounter that allows something more efficient than this AI-coded hybrid sort for that particular application.
Sure. But then that belongs in a test case that 1) documents the assumptions, 2) demonstrates whether a specialised solution actually improves on the naive implementation, and 3) will catch regressions if/when those assumptions no longer hold.
My experience in that specific field is that odds are the human is making incorrect assumptions, very occasionally not, and that a proper test harness to benchmark this is essential to validate the assumptions whether a human or an AI does the implementation (and not least in case the characteristics of the data end up changing over time).
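A sketch of what such a harness might look like, assuming `almostEqual` is a fast-path version like the sketch upthread and `almostEqualNaive` a hypothetical plain tolerance loop (both names are stand-ins):

    import { performance } from "node:perf_hooks";

    // Assumed to exist: the specialised and naive implementations under test.
    declare function almostEqual(a: Float64Array, b: Float64Array, tol: number): boolean;
    declare function almostEqualNaive(a: Float64Array, b: Float64Array, tol: number): boolean;

    function bench(fn: () => void, iters = 1000): number {
      const t0 = performance.now();
      for (let i = 0; i < iters; i++) fn();
      return performance.now() - t0;
    }

    // Documented assumption: the two inputs are identical in the vast
    // majority of calls. Generate exactly that distribution.
    const a = new Float64Array(1 << 16).map(() => Math.random());
    const b = a.slice();

    const naive = bench(() => almostEqualNaive(a, b, 1e-9));
    const fast = bench(() => almostEqual(a, b, 1e-9));
    // Catches the regression if/when the assumption stops paying off.
    if (fast >= naive) throw new Error("fast path no longer wins on the assumed workload");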
>There was no way I could have figured this out early enough to put it in a spec for an AI.
This is an odd statement to me. You act like the AI can only write the application once and can never look at any other data to improve the application again.
>only occurred to me during profiling
At least to me this seems like something that is at far more risk of being automated than general application design in the first place.
Have the AI design the app. Pass it off to CI/CD testing and compile it. Send to a profiling step. AI profile analysis. Hot point identification. Return to AI to reiterate. Repeat.
> At least to me this seems like something that is at far more risk of being automated than general application design in the first place.
This function is a small part of a larger application with research components that are not AI-solvable at the moment. Of course a standalone function could have been optimised with AI profiling, but that's not the context here.
If your product has code in it that can only be understood and worked on by the person who wrote it, then your code is too complex and underdocumented, and/or doesn't have enough test coverage.
In a permanent code base, your time would be better spent trying to get that LLM to understand something than trying to understand the thing yourself. It might be the case that you need to understand the thing more thoroughly yourself so you can explain it to the LLM, and it might be the case that you need to write some code so that you can understand it and explain it, but eventually the LLM needs to get it based on the code comments and examples and tests.
> My "actual job" isn't to write code, but to solve problems.
Yes, and there's often a benefit to having a human have an understanding of the concrete details of the system when you're trying to solve problems.
> That has increasingly shifted to "just" reviewing code
It takes longer to read code than to write code if you're trying to get the same level of understanding. You're gaining time by building up an understanding deficit. That works for a while, but at some point you have to go burn the time to understand it.
> often a benefit to having a human have an understanding of the concrete details of the system
Further elaborating from my experience.
1. I think we're in the early stages, where agents are useful because we still know enough to coach well - knowledge inertia.
2. I routinely make the mistake of allowing too much autonomy, and will have to spend time cleaning up poor design choices that were either inserted by the agent, or were forced upon it because I had lost lock on the implementation details (usually both in a causal loop!)
I just have a policy of moving slowly and carefully now through the critical code, vs letting the agent steer. They have overindexed on passing tests and "clean code", producing things that cause subtle errors time and time again in a large codebase.
> burn the time to understand it.
It seems to me to be self-evident that writing produces better understanding than reading. In fact, when I would try to understand a difficult codebase, it often meant that probing+rewriting produced a better understanding than reading, even if those changes were never kept.
It's like any other muscle, if you don't exercise it, you will lose it.
It's important that when you solve problems by writing code, you go through all the use cases of your solution. In my experience, just reading code given by someone else (either a human or a machine) is not enough: you end up evaluating perhaps the main use cases and the style. Most of the time you will find gaps while writing the code yourself.
> It takes longer to read code than to write code if you're trying to get the same level of understanding. You're gaining time by building up an understanding deficit. That works for a while, but at some point you have to go burn the time to understand it.
This is true whether an AI wrote the code or a co-worker, except the AI is always on hand to answer detailed questions about the code, do detailed analysis, and run extensive tests to validate assumptions.
It is very rarely productive any more to dig into low level code manually.
This feels like it conflates problem solving with the production of artifacts. It seems highly possible to me that the explosion of ai generated code is ultimately creating more problems than it is solving and that the friction of manual coding may ultimately prove to be a great virtue.
This statement feels like a farmer making a case for using their hands to tend the land instead of a tractor because it produces too many crops. Modern farming requires you to have an ecosystem of supporting tools to handle the scale and you need to learn new skills like being a diesel mechanic.
How we work changes and the extra complexity buys us productivity. The vast majority of software will be AI generated, tools will exist to continuously test/refine it, and hand written code will be for artists, hobbyists, and an ever shrinking set of hard problems where a human still wins.
> This statement feels like a farmer making a case for using their hands to tend the land instead of a tractor because it produces too many crops. Modern farming requires you to have an ecosystem of supporting tools to handle the scale and you need to learn new skills like being a diesel mechanic.
This to me looks like an analogy that supports what GP is saying. With modern farming practices you get problems like increased topsoil loss and decreased nutritional value of produce. It also leads to a loss of knowledge for those who practice these least-resistance techniques in the short term.
This is not me saying big farming bad or something like that, just that your analogy, to me, seems perfectly in sync with what the GP is saying.
And those trade-offs can only pay off if the extra food produced can be utilized. If the farm is producing more food than can be preserved and/or distributed, then the surplus is deadweight.
This is a false equivalence. If the farmer had some processing step which had to be done by hand, having mountains of unprocessed crops instead of a small pile doesn’t improve their throughput.
This is the classic mistake all AI hypemen make by assuming code is an asset, like crops. Code is a liability and you must produce as little of it as possible to solve your problem.
As an "AI hypeman" I 100% agree that code is a liability, which is exactly why I relish being able to increasingly treat code as disposable or even unnecessary for projects that'd before require a multiple developers a huge amount of time to produce a mountain of code.
I’ll be honest with you pal - this statement sounds like you’ve bought the hype. The truth is likely between the poles - at least that’s where it’s been for the last 35 years that I’ve been obsessed with this field.
I feel like we are at the crescendo point with "AI". Happens with every tech pushed here. 3DTV? You have those people who will shout you down and say every movie from now on will be 3D. Oh yeah? Hmmm... Or the people who see Apple's goggles and yell that everyone will be wearing them and that's just going to be the new norm now. Oh yeah? Hmmm...
Truth is, for "AI" to get markedly better than it is now (0) will take vastly more money than anyone is willing to put into it.
(0) Markedly, meaning it will truly take over the majority of dev (and other "thought worker") roles.
"Airplanes are only 5 years away, just like 10 years ago" --Some guy in 1891.
Never use that phrase to claim something is impossible. I mean, there are driverless Waymos on the street in my area, so your statement is already partially incorrect.
Nobody is saying it isn't possible. Just saying nobody wants to pay as much money as it's going to take to get there. At some point investors will say, meh, good 'nuff.
Just about a week ago I launched a 100% AI-generated project that short-circuits a bunch of manual tasks. What before took 3+ weeks of manual work to produce now takes us 1-2 days to verify instead. It generates revenue. It solved the problem of taking a workflow that was barely profitable and cutting costs by more than 90%. Half the remaining time is ongoing process optimization - we hope to fully automate away the remaining 1-2 days.
This was a problem that wasn't even tractable without AI, and there's no "explosion of AI generated code".
I fully agree that some places will drown in a deluge of AI generated code of poor quality, but that is an operator fault. In fact, one of my current clients retained me specifically to clean up after someone who dove head first into "AI first" without an understanding of proper guardrails.
All employees solve problems. Developers have benefited from the special techniques they have learned to solve problems. If these techniques are obsolete, or are largely replaced by minding a massive machine, the character of the work, the pay for performing it, and social position of those who perform it will change.
> My "actual job" isn't to write code, but to solve problems.
You're like the 836453rd person to say this. It's not untrue, but many of us will take writing over reviewing any day. Reviewing is like the worst part of the job.
I use AI heavily to review the code too, and it makes it far simpler.
E.g. "show me why <this assumption that is necessary for the code I'm currently staring at> holds" makes it far more pleasant to do reviews. AI code review tooling works well to reduce that burden. Even more so when you have that AI cod review tooling running as part of your agent loop first before you even look at a delivery.
"prove X" is another one - if it can't find a test case that already proves X and resorts to writing code to prove X, you probably need more tests, and now you have one,.
Then your job has turned into designing solutions, and asking a (sometimes unreliable) LLM to make them for you. If you keep at it, soon you'll accumulate enough cognitive debt to become a fossil, knowing what has to be done, but not quite how it is done.
And really, where is your moat? Why pay for a senior when a junior can prompt an LLM all the same? People are acting like it's juniors who are going to be out of work, as if companies are going to just keep paying seniors for their now-obsolete skills.
My "actual job" is a designer, not a career engineer, so for me code has always been how I ship. AI makes that separation clearer now. I just recently wrote about this.[0]
But I think the cognitive debt framing is useful: reading and approving code is not the same as building the mental model you get from writing, probing, and breaking things yourself. So the win (more time on problem solving) only holds if you're still intentionally doing enough of the concrete work to stay anchored in the system.
That said, if you're someone like me, I don't always need to fully master everything, but I do need to stay close enough to reality that I'm not shipping guesses.
Some of the biggest improvements I've made in the clarity and type safety of the code I write came from seeing the weak points while slogging through writing code, and choosing or writing better libraries to solve certain problems. If everyone stops writing code, I can only imagine quality will stagnate.
For example, I got fed up with the old form library we were using because it wasn't capable of checking field names/paths and field value types at compile time, and I kept hitting unexpected runtime errors. I wrote a replacement form library that can deeply typecheck all of that.
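To illustrate the kind of compile-time checking I mean - an illustrative TypeScript sketch, not my actual library:

    // Derive every valid dot-path of a form's shape as a string-literal
    // union, so a typo'd field name or wrongly-typed value fails to compile.
    type Path<T> = T extends object
      ? { [K in keyof T & string]:
            T[K] extends object ? K | `${K}.${Path<T[K]>}` : K
        }[keyof T & string]
      : never;

    // Resolve the value type sitting at a given dot-path.
    type PathValue<T, P extends string> =
      P extends `${infer K}.${infer Rest}`
        ? K extends keyof T ? PathValue<T[K], Rest> : never
        : P extends keyof T ? T[P] : never;

    interface Signup { user: { name: string; age: number }; newsletter: boolean }

    declare function setValue<T, P extends Path<T> & string>(form: T, field: P, value: PathValue<T, P>): void;
    declare const form: Signup;

    setValue(form, "user.name", "Ada"); // ok
    setValue(form, "user.nmae", "Ada"); // compile error: invalid path
    setValue(form, "user.age", "42");   // compile error: string is not number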
If I had turned an AI loose against the original codebase, I think it would have just churned away copying the existing patterns and debugging any runtime errors that result. I don't think an AI would have ever voluntarily told me "this form library is costing time and effort, we should replace it with such and such instead"
You're right that a dev's job is to solve problems. However, one loses a lot of that ability if one doesn't think in computerese - and only reading code isn't enough. One has to write code to understand code. So to do one's _actual_ job, one cannot depend solely on "AI" to write all the code.
We used to say that about people who wrote in C instead of assembler. Then we used to say it (and many still do) about people who opted for "scripting languages" over "systems languages".
It's "true" in a sense. It helps. But it is also largely irrelevant for most of us, in that most of us are writing code you can learn to read and write in a tiny proportion of the time we spend in working life. The notion that you need to keep spending more than a tiny fraction of your time writing code in order to understand enough to be able to solve business problem will seem increasingly quaint.
> The notion that you need to keep spending more than a tiny fraction of your time writing code in order to understand enough to solve business problems will seem increasingly quaint.
Completely disagree. Reading books doesn't make you an author. Reading books AND writing books makes you an author.
The entire point is we increasingly don't need to be authors.
Most of us aren't paid to be authors in your analogy.
(Which is good, because outside of your analogy, most authors are paid peanuts, and most of those of us who do write do so because we enjoy it, not as a job)
But even if our jobs were to be authors, while I learned some things about writing books from writing the novels I have written and published, I learned far more from being a voracious reader for decades.
I probably needed both, and I'm sure I'd improve as a writer past what I could from just reading by writing more. But I think your analogy is, if anything, a perfect fit for my point that we don't need to spend more than a tiny proportion of our time writing to be competent at it (I won't claim great).
Many of us will probably keep doing it for fun, but it will be increasingly hard to justify "manual coding" at work.
Exactly this. The shift from "writing code" to "reviewing code and focusing on architecture" is the natural evolution. Every abstraction layer in computing history freed us to think at higher levels - assembler to C, C to Python, and now Python to "describe what you want."
The people framing this as "cognitive debt" are measuring the wrong thing. You're not losing the ability to think - you're shifting what you think about. That's not a bug, it's the whole point.
The problem is: how do you review code if you don't know what it is supposed to look like? Creativity lives not only in the problem-solving step but also in the implementation, and letting an LLM do most of it is incredibly dangerous for the future - more so for the juniors gaining experience this way. Software quality will be much worse, and the churn even higher, and I will be on a farm with my chickens.
If you spend all your time on that, you might actually lose the ability to do it. I find a lot of "non-core" tasks are pretty important for skill building and maintenance.
I sympathise, in as much as I love writing code too, but I increasingly restrict that to my personal projects. It is simply not cost effective any more to write code manually vs. proper use of agents, and developers who resist that will find it increasingly hard to stay employed.
> It is simply not cost effective any more to write code manually vs. proper use of agents, and developers who resist that will find it increasingly hard to stay employed.
In practice, this isn't bearing out at all, either among my own peers or among peers at other tech companies. Just making a blanket statement like this adds nothing to the conversation.
if you're a consultant/contractor who has bid a fixed amount for a job, you're incentivised to slop out as much as possible to complete the contract as quickly as possible
and then, if you do a particularly bad job, you'll probably be kept on to fix up the problems
vs. a permanent employee who is incentivised to do the job well, sign it off, and move on to the next task
You're making flawed assumptions you have no basis for.
Most of my work is on projects I have a long term vested interest in.
I care far more about maximally leveraging LLMs for the projects I have a vested interest in - if my clients don't want to, that's their business.
Most of my LLM usage directly affects my personal finances in terms of the ROI my non-consulting projects generate - I have far more incentives to do the job well than a permanent employee whose work does not have an immediate effect on their income.
> My "actual job" isn't to write code, but to solve problems.
Air quotes and more and more general words. The perfect mercenary's tools.
The buck stops somewhere for most of us. We have jobs, and we are compelled to do them. But we care about how the work is done. We care whether doing it a certain way will give us short-term advantages but hinder us in the long term. We care whether the process feels good or bad. We care whether it feels like we are in control of the process or just swimming in a turbulent sea. We care about how predictable our tools are - whether we can guess that something takes a month and not be off by weeks.
We might say that we are the perfect pragmatists (mercenaries); that we only care about the most general description of what-is-to-be-done that is acceptable to the audience - solving business problems, or solving technical problems, or in the end, as the pragmatist sheds all meaning from his burdensome vessel, just solving problems. But most of us got into some trade, or hobby, or profession because we did concrete things that we concretely liked. Switching from keyboards to voice dictation might not change that. But seemingly upending the whole process might.
It might. Or it may not. Certainly could go in more than one direction. But to people who are not perfect mercenaries or business hedonists[1] these are actual problems or concerns. Not nonsense to be dismissed with some “actual job” quip, which itself is devoid of meaning.
I'm in the same boat. There's a lot of things I don't know and using these models help give direction and narrow focus towards solutions I didn't know about previously. I augment my knowledge, not replace.
Some people learn from rote memorization, some people learn through hands on experience. Some people have "ADHD brains". Some people are on the spectrum. If you visit Wikipedia and check out Learning Styles, there's like eight different suggested models, and even those are criticized extensively.
It seems a sort of parochial universalism has coalesced, but people should keep in mind we don't all learn the same.
ETA: I'd also like to say that learning from LLMs is vastly similar to, and in some ways more useful than, finding blogs on a subject. A lot of the time, say for Linux, you'll find instructions that, even if you perform them to a tee, go pear-shaped because of tiny environment variables or a single package update changing things. Even Photoshop tutorials are not free of this madness. I'm used to mostly-correct but just-this-side-of-incorrect instructions. LLMs are no different in a lot of ways. At least with them I can tailor the experience to just what I'm trying to do and spend time correcting that, versus loading up a YT video trying to understand why X doesn't work. But I can understand if people don't get the same value as I do.
I'm currently testing Claude Code for a project where it isn't coding. But the workflows built with it are now making me money after ~2 weeks, and I've previously done the same work manually, so I know the turnaround time: The turnaround for each deliverable is ~2 days with Claude and the fastest I've ever done it manually was 21 days. (Yes, I'm being intentionally vague - there isn't much of a moat for that project given how close Claude gets with very little prompting)
There are absolutely maintainability challenges. You can't just tell these tools to build X and expect to get away with not reviewing the output and/or telling it to revise it.
But if you loosen the reins and review finished output rather than sit there and metaphorically look over its shoulder for every edit, the time it takes me to get it to revise its work until the quality is what I'd expect of myself is still a tiny fraction of what it'd take me to do things manually.
The time estimate above includes my manual time spent on reviews and fixes. I expect that time savings to increase, as about half of the time I spend on this project now is time spent improving guardrails and adding agents etc. to refine the work automatically before I even glance at the output.
The biggest lesson for me is that when people are not getting good results, most of the time it is because they keep watching every step their agent takes, instead of putting in place a decent agent loop (create a plan for X; for each item on the plan: run tests until it works, review your code and fix any identified issues, repeat until the tests and review pass without any issues) and letting the agent work until it stops, before spending any time reviewing the result.
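Sketched as pseudo-code, with a hypothetical `runAgent` standing in for whatever your agent tooling actually provides (illustrative only, not a real API):

    // Plan once, then iterate each step until its tests and a self-review
    // both pass - no human in the inner loop.
    declare function runAgent(prompt: string): Promise<string>; // hypothetical stand-in

    async function agentLoop(task: string): Promise<void> {
      const plan = await runAgent(`Create a step-by-step plan for: ${task}`);
      for (const step of plan.split("\n").filter(Boolean)) {
        let done = false;
        while (!done) {
          await runAgent(`Implement "${step}". Run the tests until they pass.`);
          const review = await runAgent(
            `Review your changes for "${step}" and fix any issues. ` +
            `Reply DONE only when tests and review pass with no issues.`);
          done = review.trim() === "DONE";
        }
      }
      // Only now is human review time spent, on the finished result.
    }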
Only when the agent repeatedly fails to do an assigned task adequately do I "slow it down" and have it do things step by step to figure out where it gets stuck / goes wrong. At which point I tell it to revise the agents accordingly, and then have it try again.
It's not cost effective to have expensive humans babysit cheap LLMs, yet a lot of people seem to want to babysit the LLMs.
I've meditated, and experimented with self hypnosis (still on the fence on whether hypnosis works), and I simply always interpreted such "visualize" instructions as metaphorical, as I had no idea until a few years back that people meant them literally, so pretending was the only option I thought possible.
I consider myself to have aphantasia, but I have one singular experience of having seen things in my minds eye in a waking state, during meditation. I wish there was research into what might trigger this, and how, as I've not been able to repeat it.
(I'm sure I wasn't asleep as I both never remember my dreams, but have distinct memories of how "fuzzy" and ethereal the imagery in my dreams feel, while this imagery during meditation was crystal clear and I was fully lucid)
People without aphantasia see images in their head. If you don't see images, you have aphantasia. I don't see images. It took until I was in my 40s to realise that this wasn't most people's experience.
I agree with you regarding imagination - the problem isn't the usual definitions of imagination, but that the process of seeing images to varying degrees (from fuzzy, brief views to "full fidelity video" they can rewind at will at the other extreme) is so deeply ingrained in most people that a whole lot of our vocabulary uses visual metaphors for the entire process rather than just for the visual aspect.
I get my ideas entirely from inner monologue. But my ideas are mostly related to developing automated systems and the like; I don't really need imagery for that, although I think I need to sense some sort of graph of how things work together at a higher level.
I write a lot, including fiction, and I feel my aphantasia probably shapes what I like to read and write in ways I wasn't aware of (before realising aphantasia was a thing, and that I have it), but it doesn't stop me either.
E.g. when reading I tend to skim over writing that spends a lot of time describing the visual appearance of things unless the words themselves are beautiful to me, because no matter how well written the descriptions are, they don't achieve anything for me beyond the shape of the prose itself.
(I love the structure and flow of language, so there are absolutely moments I find myself reading visual descriptions because of the descriptions themselves)
When writing, I prefer to write relatively sparse prose that focuses on how things works and relates to each other, and dialogue, rather than trying to evoke imagery that I can't see for myself when reading the text back.
Unless everyone worked in base-12 numbers too, that'd be a mess. Part of the beauty of metric is how often calculations reduce to shifting the decimal point.