This feels like it conflates problem solving with the production of artifacts. It seems highly possible to me that the explosion of AI generated code is ultimately creating more problems than it is solving and that the friction of manual coding may ultimately prove to be a great virtue.
This statement feels like a farmer making a case for using their hands to tend the land instead of a tractor because it produces too many crops. Modern farming requires you to have an ecosystem of supporting tools to handle the scale and you need to learn new skills like being a diesel mechanic.
How we work changes and the extra complexity buys us productivity. The vast majority of software will be AI generated, tools will exist to continuously test/refine it, and hand written code will be for artists, hobbyists, and an ever shrinking set of hard problems where a human still wins.
> This statement feels like a farmer making a case for using their hands to tend the land instead of a tractor because it produces too many crops. Modern farming requires you to have an ecosystem of supporting tools to handle the scale and you need to learn new skills like being a diesel mechanic.
This to me looks like an analogy that would support what GP is saying. With modern farming practices you get problems like increased topsoil loss and decreased nutritional value of produce. It also leads to a loss of knowledge among those who practice whatever techniques offer the least resistance in the short term.
This is not me saying big farming bad or something like that, just that your analogy, to me, seems perfectly in sync with what the GP is saying.
And those trade-offs can only pay off if the extra food produced can be utilized. If the farm is producing more food than can be preserved and/or distributed, then the surplus is deadweight.
This is a false equivalence. If the farmer had some processing step which had to be done by hand, having mountains of unprocessed crops instead of a small pile doesn’t improve their throughput.
This is the classic mistake all AI hypemen make by assuming code is an asset, like crops. Code is a liability and you must produce as little of it as possible to solve your problem.
As an "AI hypeman" I 100% agree that code is a liability, which is exactly why I relish being able to increasingly treat code as disposable or even unnecessary for projects that'd before require a multiple developers a huge amount of time to produce a mountain of code.
I’ll be honest with you pal - this statement sounds like you’ve bought the hype. The truth is likely between the poles - at least that’s where it’s been for the last 35 years that I’ve been obsessed with this field.
I feel like we are at the crescendo point with "AI". Happens with every tech pushed here. 3DTV? You have those people who will shout you down and say every movie from now on will be 3D. Oh yeah? Hmmm... Or the people who see Apple's goggles and yell that everyone will be wearing them and that's just going to be the new norm now. Oh yeah? Hmmm...
Truth is, for "AI" to get markedly better than it is now (0) will take vastly more money than anyone is willing to put into it.
(0) Markedly, meaning it will truly take over the majority of dev (and other "thought worker") roles.
"Airplanes are only 5 years away, just like 10 years ago" --Some guy in 1891.
Never use that kind of phrase to claim something is impossible. I mean, there are driverless Waymos on the streets in my area, so your statement is already partially incorrect.
Nobody is saying it isn't possible. Just saying nobody wants to pay as much money as it's going to take to get there. At some point investors will say, meh, good 'nuff.
Just about a week ago I launched a 100% AI generated project that short-circuits a bunch of manual tasks. What before took 3+ weeks of manual work to produce now takes us 1-2 days to verify instead. It generates revenue. It solved the problem of taking a workflow that was barely profitable and cutting costs by more than 90%. Half the remaining time is ongoing process optimization - we hope to fully automate away the remaining 1-2 days.
This was a problem that wasn't even tractable without AI, and there's no "explosion of AI generated code".
I fully agree that some places will drown in a deluge of AI generated code of poor quality, but that is an operator fault. In fact, one of my current clients retained me specifically to clean up after someone who dove head first into "AI first" without an understanding of proper guardrails.
I do see this as a bad thing and an abdication of taking responsibility for one's own life. As was recently put to me after the sudden death of a friend's father (who lived an unusually rich life): everyone dies, but not everyone truly lives.
Ah... we found the person who thinks they can pass judgement on how people choose to live their lives. I didn't say that my friend doesn't love his job (he does) - I said that he'll probably die before retiring.
Stephen Hawking, Einstein, Marie Curie, and Linus Pauling never retired. Did they not "truly live"?
At the end of his life, Maslow became convinced that self-transcendence was the pinnacle of the hierarchy. Strong identification with work will not get one to that final step. I am not sure if AI is a path to self-transcendence or self-annihilation, but it's interesting to ponder in the case of someone like Brin.
I truly believe that the cult of C performance optimization has done more harm than good. It is truly evil to try to infer, or even worse, silently override programmer intent. Many if not most of the optimizations done by LLVM and GCC should be warnings, not optimizations (dead code elimination outside of LTO being a perfect example).
How much wasted work has been created by compiler authors deciding that they know better than the original software authors and silently breaking working code, but only in release mode? Even worse, -O0 performance is so bad that developers feel obligated to compile with -O2 or higher. I will bet dollars to donuts that the vast majority of the material wins of -O2 in most real-world use cases come from better register allocation and good selective inlining, not from all the crazy transformations and eliminations that subtly break your code by relying on UB. Yeah, I'm sure they have some microbenchmarks that justify those code-breaking "optimizations", but in practice I'll bet those optimizations rarely account for more than 5% of the total runtime. But everyone pays the cost of horrifically slow build times as well as nearly unbounded developer time lost debugging the code the compiler broke.
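To be concrete about the kind of breakage I mean, here's a minimal, well-worn illustration (toy code, not from any real project): a wraparound overflow check that the optimizer is entitled to delete because signed overflow is undefined behavior.

```c
#include <limits.h>
#include <stdio.h>

/* Toy example: the author intends a wraparound overflow check.
 * Because signed overflow is undefined behavior in C, GCC/Clang at -O2
 * may assume x + 1 > x always holds and delete the check entirely. */
static int will_overflow(int x) {
    return x + 1 < x;
}

int main(void) {
    /* Typically prints 1 at -O0 and 0 at -O2: the check the author
     * relied on is optimized away, silently, only in release mode. */
    printf("%d\n", will_overflow(INT_MAX));
    return 0;
}
```

Same source, same input, two different answers depending on optimization level, and the compiler never said a word.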
Of course, part of the problem is developers hating being told they're wrong and complaining about nanny compilers. In this sense, compiler authors have historically been somewhat similar to sycophantic LLMs. Rather than tell the programmer that their code is wrong, they will do everything they can to coddle the programmer while behind the scenes executing their own agenda, likely getting things wrong, all because they were afraid to honestly tell the programmer there was a problem with their instructions.
Honestly, I found this piece depressing. Life is too short and precious to waste on crappy software.
So often the question AI-related pieces ask is "can AI do X?" when by far the more important question is "should AI do X?" As written, the piece reads as though the author has learned helplessness around C++, and their answer is to adopt a technology that leaves them even more helpless, which they indeed lament. I'd challenge the author to actually reflect on why they are so attached to this legacy software and why they cannot abandon it if it is causing this level of angst.
Sure, but I prefer to work on projects that are fundamentally sound and high impact. Indeed, I have certainly noticed a pattern that very often ai enthusiasts exalt its capabilities to automate work that appears to be of questionable value in the first place, apart from the important second order property of keeping the developer sheltered and fed.
I agree the build might take a bit extra, but it's for sure not much for smaller CLIs. Making jbang native added 1-2 minutes, and it's all done in GitHub Actions runners, so in practice I don't see this as a problem as it does not affect the end user.
This logic is both too broad and rigid to be of much practical use[1]. It needs to be tightened to compare languages that are identical except for static type checks, otherwise the statically typed language could admit other kinds of errors (memory errors immediately come to mind) that many dynamic languages do not have and you would need some way of weighing the relative cost to reliability of the different categories of errors.
Even if the two languages are identical except for the static types, then it is clearly possible to write programs that do not have any runtime type errors in the dynamic language (I'll leave it as an exercise to the reader to prove this but it is very clearly true) so there exist programs in any dynamic language that are equally reliable to their static counterpart.
[1] I also disagree with your definition of reliability but I'm granting it for the sake of discussion.
The claim was about reliability and lack of empirical evidence. Once framed that way, definitions matter. My argument is purely ceteris paribus: take a language, hold everything constant, and add strict static type checking. Once you do that, every other comparison disappears by definition. Same runtime, same semantics, same memory model, same expressiveness. The only remaining difference is the runtime error set.
Static typing rejects at compile time a strict subset of programs that would otherwise run and fail with runtime type errors. That is not an empirical claim; it follows directly from the definition of static typing. This is not hypothetical either. TypeScript vs JavaScript, or Python vs Python with a sound type checker, are real examples of exactly this transformation. The error profile is identical except the typed variant admits fewer runtime failures.
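To make the transformation concrete, here is a toy sketch (the function and values are mine, purely illustrative): the same code treated as untyped JavaScript versus checked TypeScript.

```typescript
// Toy example: a call that plain JavaScript happily runs and gets wrong,
// but that TypeScript refuses to compile.
function add(a: number, b: number): number {
  return a + b;
}

// As untyped JavaScript, add(2, "3") runs and returns "23", a silent
// wrong answer that only surfaces later, at runtime.
// The TypeScript compiler rejects the identical call before anything ships:
//   error TS2345: Argument of type 'string' is not assignable to
//   parameter of type 'number'.
console.log(add(2, 3)); // 5
```

Nothing else about the program changed; the only difference is which calls are allowed to reach runtime.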
Pointing out that some dynamic programs have no runtime type errors does not contradict this. It only shows that individual programs can be equally reliable. The asymmetry is at the language level: it is impossible to deploy a program with runtime type errors in a sound statically typed language, while it is always possible in a dynamically typed one. That strictly reduces the space of possible runtime failures.
Redefining “reliability” does not change the result. Suppose reliability is expanded to include readability, maintainability, developer skill, team discipline, or development velocity. Those may matter in general, but they are not variables in this comparison. By construction, everything except typing is held constant. There is literally nothing else left to compare. All non-type-related factors are identical by assumption. What remains is exactly one difference: the presence or absence of runtime type errors. At that point, reliability reduces to failure count not as a philosophical choice, but because there is no other dimension remaining.
Between two otherwise identical systems, the one that can fail in fewer ways at runtime is more reliable. That conclusion is not empirical, sociological, or debatable. It follows directly from the setup.
Sometimes someone genuinely has a clear vision that is superior to the status quo and is capable of executing it, improving quality, performance and maintainability. The challenge is distinguishing these cases from the muddled abstractions that make everything worse. This argument feels a bit like "no one gets fired for buying IBM." Blanket advice like this is an invitation to shut down thinking and stymie innovation. At the same time, the author is not wrong that imposing a bad abstraction on an org is often disastrous. Use your powers of reason to distinguish the good and bad cases.
I had a roommate who failed out of college because he was addicted to Everquest (yes, Everquest, and yes, I am middle-aged). Your last paragraph is barely even hyperbolic. Do you think unemployed young men who live at home with their parents, do little to no physical activity, and spend most of their time playing video games and/or trolling on the internet are not stuck destroying their bodies (and minds) in a spiral of deadly addiction? Maybe you are a functional gamer, but there are many, many gamers who are not, and this technology is maybe a quasi-effective cope for our punishing society writ large, but from the outside, gaming addicts appear to be living a sad and limited life.
Or to put it more succinctly, would you want your obituary to lead with your Call of Duty prowess?