I'm sure you could go back 40 years earlier and find programmers complaining about using FORTRAN and COBOL compilers instead of writing the assembly by hand.
I think that the assembler->compiler transition is a better metaphor for the brain->brain+AI transition than Visual Studio's old-hat autocomplete etc.
After working with Cursor for a couple of hours, I had a bunch of code that was working according to my tests, but when I integrated it, I found that Claude had inferred a completely different protocol for interacting with the data structure than the rest of my code was using. Yeah, write better tests... but I then found that I did not really understand most of the code Claude had written, even though I'd approved every change on a granular level. Worked manually through a solid hour of wtf before I figured out the root cause, then Clauded my way through the fix.
I can picture an assembly developer having a similar experience trying to figure out why the compiler generated _this_ instead of _that_ in the decade where every byte of memory mattered.
Having lived through the dumb editor->IDE transition, though, I _never_ had anything like that experience of not understanding what I'd done in hours 1 and 2 at the very beginning of hour 3.
This feels very similar to me to the "Tutorial Hell" effect, where I can watch videos/read books and fully feel like I understand everything. However, when hand touches keyboard I realize I didn't really retain any of it. I think that's something that makes AI code gen so dangerous: even if you think you understand and can troubleshoot the output, is your perception accurate?
> Where I can watch videos/read books, and fully feel like I understand everything. However, when hand touches keyboard I realize I didn't really retain any of it.
Always type everything down from a tutorial when you follow it. Don't even copy and paste, literally type it down. And make small adjustments here and there, according to your personal taste and experience.
That doesn't even really work for me. I can type on autopilot. I've found the best way for me is to implement the tutorial thing in a different programming language. Something about translating between languages requires just enough mental work to help make it stick.
I've called this out before around LLMs: when the act of development becomes passive (versus active), there is a significant risk of not fully being cognizant of the code.
Even copy pasta from Stack Overflow would require more active effort around grabbing exactly what you need, replacing variable names, etc. to integrate the solution into your project.
I have done that too much. Now, when I'm learning and I read the solution, I always make sure that I am able to implement it myself; otherwise I don't consider it learned. I apply the same rule to LLM code.
I was programming 40 years ago and was very happy to be able to use "high-level languages" and not write everything in assembly. The high-level languages enabled expressiveness that was hard with lower levels.
In 1985, any time I needed any level of performance on my 1 MHz Apple //e, or when BASIC didn't expose the functionality I needed, I still had to use assembly. Mostly around double hi-res graphics and sound.
I think sending your LLM all relevant data structures (structs, enums, function signatures etc) is mandatory for any code-related queries. It will avoid problems like this; it seems required in many cases to get integratable results.
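For example, pasting something like the following above the question gives the model the actual shapes it has to integrate with (these are hypothetical C declarations, just to illustrate the kind of context that helps):

```
#include <stdint.h>

/* Hypothetical context pasted into the prompt, so the model matches the
   real protocol instead of inventing its own (names made up for
   illustration): */
typedef enum { EV_OPEN, EV_DATA, EV_CLOSE } event_kind_t;

typedef struct {
    event_kind_t   kind;
    uint32_t       length;   /* number of payload bytes */
    const uint8_t *payload;
} event_t;

struct event_queue;          /* opaque; defined elsewhere */

/* The exact signature the generated code must call: */
int event_queue_push(struct event_queue *q, const event_t *ev);
```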
Embedded programming is still like this. Most people just don't inspect the assembly produced by their compiler. Unless you're working on an extremely mainstream chip with a bleeding edge compiler, your assembly is going to be absolutely full of complete nonsense.
For instance, if you aren't aware, AVR and most other uCs have special registers and instructions for pointers. Say you put a pointer to an array in Z. You can load the value at Z and increment or decrement the pointer as a single instruction in a single cycle.
GCC triples the cost of this operation with some extremely naive implementations.
Instead of doing 'LD Z+' GCC gives you
```
inc Z
ld Z
dec Z
```
Among other similar annoyances. You can carefully massage the C++ code to get better assembly, but that can take many hours of crazy-making debugging. Sometimes it's best to just write the damn assembly by hand.
In this same project, I had to implement Morton ordering on a 3D bit field (don't ask). The C implementation was well over 200 instructions but by utilizing CPU features GCC doesn't know about, my optimized assembly is under 30 instructions.
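For anyone who hasn't run into it: Morton (Z-order) encoding just interleaves the bits of the coordinates. A naive C sketch of the idea, purely illustrative (the function name is made up, and it looks nothing like the hand-tuned AVR version above):

```
#include <stdint.h>

/* Naive 3D Morton (Z-order) encode: interleave the low 8 bits of x, y, z
   into a 24-bit code. A real implementation would use magic-number bit
   tricks or, as above, hand-written assembly. */
uint32_t morton3d_encode8(uint8_t x, uint8_t y, uint8_t z)
{
    uint32_t code = 0;
    for (int i = 0; i < 8; i++) {
        code |= (uint32_t)((x >> i) & 1u) << (3 * i);      /* bit i of x -> bit 3i   */
        code |= (uint32_t)((y >> i) & 1u) << (3 * i + 1);  /* bit i of y -> bit 3i+1 */
        code |= (uint32_t)((z >> i) & 1u) << (3 * i + 2);  /* bit i of z -> bit 3i+2 */
    }
    return code;
}
```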
Modern sky-high abstracted languages are the source of brain rot, not compilers or IDEs in general. Most programmers are completely and utterly detached from the system they're programming. I can't see how one could ever make any meaningful progress or optimization without any understanding of what the CPU actually does.
And this is why I like embedded. It's very firmly grounded in physical reality. My code is only slightly abstracted away from the machine itself. If I can understand the code, I understand the machine.
And this is appropriate for your domain and the jobs you work on.
If your job was to build websites, this would drive you insane.
I think I'm coming around to a similar position on AI dev tools: it just matters what you're trying to do. If it's a well known problem that's been done before, by all means. Claude Code is the new Ruby on Rails.
But if I need to do some big boy engineering to actually solve new problems, it's time to break out Emacs and get to work.
The vast majority of time spent building software has little to do with optimization. Sky-high abstracted brain rot languages are useful precisely because usually you don’t need to worry about the type of details that you would if you were optimizing performance
And then if you are optimizing for performance, you can make an incredible amount of progress just fixing the crappy Java etc code before you need to drop down a layer of abstraction
Even hedge funds, which make money executing trades fractions of milliseconds quicker than others, use higher level languages and fix performance issues within those languages if needed
My assertion is that this is a very bad thing. This is why we have Electron and pay Amazon $900/mo to host what should be a static website on a Pentium 4.
Since optimization is not a concern, waste is not a concern. Sure, send the user a 200MB JavaScript blob for the visit counter, who cares?
AT&T's billing website bounces you back and forth between four different domains where each downloads 50MB of scripts to redirect you to the next before landing on what looks like a Flash app from 2008. Takes five minutes to get my invoice because nothing matters. God alone knows how much it costs them to serve all of that garbage, probably millions.
This is a bad way to make software. It's not even a good way to make bad software.
The brain rot is the disconnect between the software and reality. The reality is that software has to run somewhere and it costs someone real money per unit resource. You can either make good software that consumes fewer resources or bad software that wastes everyone else's time and money. But hey, as long as you ship on time and get that promotion who cares, right?
As a long time embedded programmer, I don't understand this. Even 20 years ago, there was no way I really understood the machine, despite writing assembly and looking at compiler output.
10 years ago, running an ARM core at 40 MHz, I barely had the need to inspect my compiler's assembly. I could still roughly read things when I needed to (since embedded compilers tend to have bugs more regularly), but there's no way I could write assembly anymore. I had no qualms at the time using a massively inefficient library like Arduino to try things out. If it works and the timing is correct, it works.
These days, where I don't do embedded for work, I have no qualms writing my embedded projects in MicroPython. I want to build things, not micro-optimize assembly.
> As a long time embedded programmer, I don't understand this
I think you both should define what your embedded systems look like. The range is vast after all. It ranges from an 8-bit CPU [0] with a few dozen kilobytes of RAM to what is almost a full modern PC. Naturally, the incentives to program at a low level are very different across the embedded systems range.
I was trying to bit-bang five 250 kHz I2C channels on a 16 MHz ATtiny while acting as an I2C slave on a 6th channel.
This is really not something you can do with normal methods; the CPU is just too slow and the assembly is too long. No high-level language can do what I want because the compiler is too stupid. My inline assembly is simple and fast enough that I can get the bitrate I need.
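(Back-of-the-envelope, assuming the five channels run concurrently: 16 MHz / 250 kHz is only about 64 CPU cycles per I2C bit, and servicing five channels inside that window leaves roughly a dozen cycles each, before the slave channel or any loop overhead gets a look-in.)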
In my view, there are two approaches to embedded development: programming à la mode with Arduino and any unexamined libraries you find online, or the register hacker path.
There are people who throw down any code that compiles and move on to the next thing without critical thought. The industry is overflowing with them. Then there are the people who read the datasheet and instruction set. The people painstakingly writing the drivers for I2C widgets instead of shoving magic strings into Wire.write.
I enjoy micro-optimizing assembly. I find it personally satisfying and rewarding. I thoroughly examine and understand my work because no project is actually a throwaway. Every project I learn something new, and I have a massive library of tricks I can draw from in all kinds of crazy situations.
If you did sit down to thoroughly understand the assembly of your firmware projects, you'd likely be aghast at the quality of code you're blindly shipping.
All that aside for a moment, consider the real cost of letting your CPU run code 10x slower than it should. Your CPU runs 10x longer and consumes a proportional amount of energy. If you're building a battery powered widget, that can matter a lot. If your CPU is more efficient, you can afford a lighter battery or less cooling. You have to consider the system as a whole.
This attitude of "ship anything as quickly as possible" is very bad for the industry.
If you're active on social media (Twitter), you will still see people, like the FFmpeg account, bashing higher-level languages (C) and praising hand-written assembly.
Absolutely it does. In the same way Socrates warned us about books, VS externalizes our memory and ability, and makes us reliant on a tool to accomplish something we have the ability to do without. This reliance goes even further to make us dependent on it as our natural ability withers from neglect.
I cannot put it more plainly: it incentivizes us to let a part of us atrophy. It would be like us giving up the ability to run a mile because our reliance on cars weakened our legs and de-conditioned us to the point of making it physically impossible.
If your job for 30 years is to move bags of cement 50 miles each day, is it not more productive to use a truck than your legs? Even if you use the truck so much you could not move a bag of cement with your legs anymore due to atrophy?
You could make the same argument about most tools. Do calculators rot the brain?
Yes, they unequivocally make people worse at mental math. Now whether that's bad or not is debatable. Same with any tool, it's about tradeoffs and only you can determine whether the tradeoffs make sense for you.
>they unequivocally make people worse at mental math. Now whether that's bad or not is debatable.
Not debatable because as long as calculators exist and are available, nothing is lost. You can put that mental energy towards whatever is relevant. There's nothing special about math that elevates doing it in your own mind more valuable than other tasks. Any of us that do software for a living understand that mental work is real work and you are limited in your capacity per day. If your boss wants to pay you to do longhand division instead of getting projects done, I'd call that man a liar.
The people that make these arguments that we "lose" something are usually academics or others that aren't actually in the trenches getting things done. They think we're getting weaker mentally or as a society. I'm physically tired at the end of the day, and I sit at a desk. I'll do math when the calculators are gone, I have a lot of tasks and responsibilities otherwise.
> Not debatable because as long as calculators exist and are available, nothing is lost.
This is a vacuous argument. Of course nothing is lost if we are never in the situation where the downsides of calculator reliance are apparent. Nobody said otherwise. The point is that people do find themselves in those situations, and then struggle to do math because they've never developed the skill.
I don't know if I'd agree with the phrasing that it rots the brain, but broadly I would expect that if you rely on it for arithmetic your mental arithmetic will atrophy over time.
Unfortunately though it may not be such an exaggeration after all. It's probably too early to say, but there is definitely mounting evidence that some of these useful tools are literally, genuinely rotting our brains.
> Unfortunately though it may not be such an exaggeration after all. It's probably too early to say, but there is definitely mounting evidence that some of these useful tools are literally, genuinely rotting our brains.
I think you may have fallen into a bit of confirmation bias there. The article you linked to words itself like so:
"According to research published in the journal PLOS One, navigating with a map and compass might help stave off cognitive decline and symptoms associated with Alzheimer’s Disease."
This means that the study found that there may be positive benefits to cognitive health when one routinely engages in navigation related activities using a compass and a map.
That is a very very VERY different statement than: "Using Google Maps causes cognitive decline."
In order to demonstrate any kind of causality between using Maps and cognitive decline, you would have to start with an additional premise that everyone engaged in navigation using a compass and map on such a regular basis that the switch represents a switch to a less healthy lifestyle.
I think that statement is specious at best.
I'm old enough to remember living without Google Maps. And the amount of time that I reached for a paper map and a compass was so infrequent that Google Maps represented something new for me more than it represented something different. That is to say, it wasn't replacing something I did regularly so much as it gave me a reason to start using any kind of a "map" in the first place.
Most people I know would say the same thing. We had some maps skills when we needed them ... but we tried to avoid needing them more often than not.
So yeah, the study might have merit. But I don't think it suggests what you think it suggests.
> I think you may have fallen into a bit of confirmation bias there. The article you linked to words itself like so:
> This means that the study found that there may be positive benefits to cognitive health when one routinely engages in navigation related activities using a compass and a map.
> That is a very very VERY different statement than: "Using Google Maps causes cognitive decline."
It is a different statement, but it is not a giant logical leap. The two statements are connected by a pretty rational extrapolation. That said, I'm not suggesting this study is claiming anything like that, just that there is growing evidence that these things might be bad for our brain health, and studies like this are part of it.
However, it's going to be really, really hard to prove that something like Google Maps is ultimately responsible for cognitive decline. In fact, I think saying that would be a step too far. However, when you combine many of these tools, like calculators, AI autocomplete, and GPS navigation together, I definitely think there is a stronger case that it is not good for our brain health, but I doubt it'll be easy to make a case for that using actual data, and I don't expect such a study to come around soon (and even if it did, I'm not sure people would feel enticed to trust it anyways.)
> I'm old enough to remember living without Google Maps. And the amount of time that I reached for a paper map and a compass was so infrequent that Google Maps represented something new for me more than it represented something different. That is to say, it wasn't replacing something I did regularly so much as it gave me a reason to start using any kind of a "map" in the first place.
I'm surprised. I thought all of us who lived before Google Maps were using MapQuest, at least for a few years. That's what I was doing. Before MapQuest, when I was growing up, we definitely did try to avoid paper maps as much as possible, but it was frequently necessary to go somewhere we didn't already know how to get to, so it wasn't unusual to have to get directions and navigate using a map.
Anyhow, I'm sure everyone experiences life differently here, but I think not needing to navigate very often is definitely something I would've considered a luxury.
I am the MapGuy(TM). I had paper maps of every destination and TripTiks from AAA. I would fly with a pocket Rand McNally Road Atlas and be able to identify where we were from features on the ground. I still do it, but I use a GPS. I don't trust them, so I still check maps before I use a GPS in unfamiliar areas. I find badly optimized routes 5% of the time and outright mistakes 1%.
I was the one who got people unlost when they got themselves lost because they did not listen to the map geek.
I hope this gives me at least 30 more years before mental decline starts. That will get me to my late 90s.
The real danger of LLMs lies in their seductive nature: how tempting it becomes to immediately reach for the nearest LLM to provide an answer, rather than taking even the briefest of moments to quietly ponder the problem on your own.
That act of manipulating the problem in your head—critical thinking—is ultimately a CRAFT. The only way to become better at it is by practicing it in a deliberate, disciplined fashion.
However if you ultimately believe that critical reasoning isn't worth cultivating, then I suppose we're at an impasse.
How different is this argument from "does Rust rot the mind"? After all, Rust means that the part of the brain that is paranoid about careful memory safety issues no longer has to be engaged.
Like most things, the answer is that it depends. I think autocomplete is great, and I've come to appreciate LLMs doing boilerplate editing, but I'm quite worried about people using LLMs to do everything and pumping out garbage.
Great point, to me it feels more like running barefoot versus with padded shoes. Barefoot running will build your foot muscle based on your natural form over time, whereas padded shoes may interfere with your natural form and at worst encourage bad form which may lead to injuries as your mileage increases.
Over the years of building software, I tend to spend more time thinking about how all the pieces (e.g. from a package manager) fit together rather than building them from scratch, and fitting two pieces together requires a deeper understanding of both the pieces and of how the combination of the two behaves in the environment.
Completely agree. Though I think this statement could benefit from pointing out that cars also help people go much faster, and do things they otherwise couldn't.
Relatedly, people who rely too much on GPS for navigation (i.e. online automated route planning), especially real-time, turn-by-turn instruction, seem to have poor spatial awareness, at least at the local geographic level. I doubt the loss of that skill is a meaningful impediment in modern life[1], but I personally would not want to lose it. Tools like Google Maps are extremely useful, but I use them to augment my own navigation and driving skills. I'll briefly study the route before departing, choose major thoroughfares to minimize complexity, and try to memorize the last few turns (signage and a practiced understanding of how highways and roads are laid out is sufficient for getting close enough).
[1] No impediment for them. It's an impediment for me when the car in front of me is clearly being driven by somebody blithely following the computer's instructions, with little if any anticipation of successive turns, and so driving slowly and even erratically.
Yes. You can see a difference between the person who learned to do a process "by hand" and then uses technology to make it faster or easier, versus the person who never learned to do it without the tech at all.
The ICE, and more generally the automobile, has been a great technology with lots of benefits. But we did all huff alarming amounts of lead for a generation and built our cities around them to our detriment.
> But we did all huff alarming amounts of lead for a generation and built our cities around them to our detriment.
And yet, this has nothing to do with the ICE itself, and everything to do with the greed of the Ethyl Corporation and the generation of corrupted minions that knowingly enabled their disastrous scheme.
Cars allow people to travel longer distances more conveniently, access remote areas, transport goods efficiently, and have greater independence in their daily lives. They also enable emergency services to respond quickly, support economic growth by facilitating trade, and provide opportunities for leisure travel that were previously impractical.
They let us do things efficiently. Sure, you could have moved a truckload of goods up the coast using 20 wagons, 40 mules, 20 drivers, 10 security, and a week's time. Today, one dude can move a truckload of goods up the coast same day.
(Numbers have been entirely fabricated, feel free to send adjustments.)
Travel [long distance], relatively safely, in [short time].
LA to NY is a long roadtrip, but you can do it in 2-3 days with a few friends to rotate drivers. Walking that is a months long journey with a very real risk of death if you don't have a support vehicle.
I got yelled at once at the bank in my office's parking lot for going to the drive up ATM instead of going inside to a teller. So they definitely agree with that. Not allowed to walk through a drive through.
I think "carrying something heavy without slavery being involved" is a new capability compared to "enslaving people and making them carry something heavy".
But also gives us back time and mental capacity to do other things which were previously out of reach because of what we had to focus our minds and time on.
In some cases, maybe even many or the majority of cases, that trade isn't a bad one. It really depends on how you use it.
Socrates just argued about semantics. If you take anything away from those dialogues it should be that they were confused and didn't really have many good ideas.
There should be an updated article, "Does Visual Studio Code Rot the Mind?"
In the old days, when we set our development environments up by hand, you usually ended up learning quite a bit about your setup. Or you at least had to know things like what's installed, what paths are where, blah blah.[1]
With Visual Studio Code being what almost everyone uses these days, everything is a click-to-install plugin. People don't realize Python isn't part of Visual Studio Code. The same for gcc, ssh, etc. You end up with tickets like "Can't run Python", or "Can't connect to the server", but it's all Visual Studio Code issues.
[1]: And yes, I realize a lot of us didn't know what we were doing back then, and the shotgun approach of "`apt-get install foobar` fixed this for me!" was a standard 'solution'
My background is heavily biased towards C++ but I don't really feel like you can make it work in VS Code without understanding, at minimum, where your clangd is actually located. The C++ plugins don't install what you need where you need them, and the launch.json really does not just work. My ex was unable to set up a C++26 toolchain for VS Code because without my help he couldn't configure the settings to connect to the right version of clang tools, and I don't think he ever got the GDB debug adapter working at all.
> One of the definite positives of AI is that kind of stuff is now fairly easy to solve.
I agree that 'AI' feels like a fancier/faster 'Google' (I've done the thing where I find a GitHub issue from 4 years ago that explains the problem), but when will we see a local AI agent that looks at your current configuration/issue, solves the problem for you, and then gives you a summary of what it did?
I get that, and I also couldn't get a boilerplate Vue/TypeScript library project started using AI. Bear in mind any guide I could find was at least a year old, and by now things have changed enough that none of them work either, but it was overall frustrating for something that should be simple.
The number of times I've had to help a colleague troubleshoot some VSCode thing seems to bear this out. I'm regarded as a 'wizard' because I actually know how to use git/build tools without a gui. It's kind of shocking to me how many developers' knowledge ends at the run button.
It doesn't bother me too much, because I like being needed, lol. But it would probably make our team more productive if people bothered to learn how our tooling works.
Yeah, I remember early in my career, all the older people lived on the command line. I'd learned enough command line stuff at university to keep up and eventually become like that, but I now see people older than me clicking their way through everything.
my version of this was "do convenient linux distros and binary package managers rot the mind?" after setting up a freebsd box with everything built from source and everything configured by hand. then again from building an arch-based Linux system from the wiki. you learn a lot from doing things by hand... but eventually you need to get stuff done.
Seems like Visual Studio rots the mind so badly nowadays that people would rather clamor for support for useless garbage like LSP in their editors, and if it can't be supported very well, it's foremost the editor's fault, not the fault of the crap they want to tack on to an otherwise fine product. People have been doing this with Emacs, saying Emacs is slow because its LSP implementations are slow and unresponsive, not that LSP is badly designed, or that an editor shouldn't need an asynchronous network protocol to provide IDE features. It also demonstrates the mentality that people find it increasingly preferable to have their computing served to them on a silver platter by a third party than to be able to control it locally.
Throwing user stories at an LLM and hoping it builds the right thing. It's like letting a product manager try to generate code without paying attention to the details. It's as terrible as it sounds, but debatably okay for fast prototypes.
"There's a new kind of coding I call "vibe coding", where you fully give in to the vibes, embrace exponentials, and forget that the code even exists. It's possible because the LLMs (e.g. Cursor Composer w Sonnet) are getting too good. Also I just talk to Composer with SuperWhisper so I barely even touch the keyboard. I ask for the dumbest things like "decrease the padding on the sidebar by half" because I'm too lazy to find it. I "Accept All" always, I don't read the diffs anymore. When I get error messages I just copy paste them in with no comment, usually that fixes it. The code grows beyond my usual comprehension, I'd have to really read through it for a while. Sometimes the LLMs can't fix a bug so I just work around it or ask for random changes until it goes away. It's not too bad for throwaway weekend projects, but still quite amusing. I'm building a project or webapp, but it's not really coding - I just see stuff, say stuff, run stuff, and copy paste stuff, and it mostly works."
I lasted about 10 seconds of that video before I had to stop it, as I was beginning to develop a very unhealthy urge to inflict violence on the presenter and his whiny, overly performative voice.
I don't think IntelliSense and autocomplete bring brain rot. AI could, because it tries to solve the problem FOR you, while IntelliSense and autocomplete merely help you find stuff. These two are completely different.
As much as I look up to Charles Petzold, I believe one of the best qualities of an engineer is knowing how to prioritize things in life. We only have limited brain power, so it's best to use it in areas that we do care about, and hold everything else to the lowest standard the customer will accept. I'd argue that as long as you don't care about memorizing class names or variable names or API details, and as long as you don't become a slave to IntelliSense and autocomplete, you are perfectly fine.
I'd argue that this is even fine if you only use AI to bootstrap yourself or "discuss" with it as with a rubber duck.
So I actually would disagree with this. I think that since IDEs showed up, quality has gotten worse. Photoshop 3 used to be much more performant than modern-day Photoshop. Slack is one of the worst performing apps I've ever had the misfortune of using. Overall, I think software quality has definitely declined from the earlier times.
The surface area of APIs would be limited because you would be constrained by what you can keep in your head, but now you're free to just add hundreds of different methods.
One of my theories is that Java has so much sprawl because of how good the IDEs for Java were, so I think overall it definitely adds to brain rot.
> One of my theories is that Java has so much sprawl because of how good the IDEs for Java were
Not sure what came first - chicken or the egg - on that one, but I remember years ago discussing vim vs an IDE for writing code with workmates. We were writing PHP and I used a plain text editor at first then moved to an IDE, while the older devs stuck to vim. I pointed out that they probably couldn't do that if we were writing Java.
For those trying to remember: Microsoft IntelliSense (autocompletion within the IDE) was a big subject of debate going back to ~1999, over whether it would make programmers lazier or more productive.
Some of this stuff can be found in old Slashdot discussions, but it seems harder to find; I'm finding an old John Carmack interview as one mention: https://m.slashdot.org/story/7828
I see where this is going in the context of tools like Cursor. The common wisdom is that if you don't use it, you lose it. Muscles atrophy without stimulus (regular exercise).
On the other hand, this seems to echo the transition from assembly -> higher level languages. The complaint was that we lose the skill or the context to understand what the machine is really doing.
I've written a whole bunch of LLM-assisted code in the past few weeks so I sympathize. If I'm generous, I have a high level understanding of the generated code. If I have to troubleshoot it, I'm basically diving into an unknown code base.
Anyway, I'm just rambling here. More tools is better, it gives everyone an opportunity to work the way they want to.
I feel like with tools like cursor, while you may be able to cobble together a todo app or something else simple without any programming knowledge, for anything bigger you will still need to know how the underlying thing works.
With everyone now using LLMs, the question goes from "who can code the best" to "who can prompt the best", but that still comes down to who knows more about the underlying code.
Yes, a better understanding of the stack makes you a more efficient prompt engineer. And yes, I don't say this ironically; prompt engineering / vibe coding is not easy, as evidenced by the number of people who think it's only good for greenfield yeet prototypes.
A lot of my programming skills have atrophied over the last two years, and a lot of skills have become that much sharper (architecture, algorithm and design pattern knowledge, designing user facing tools and dev tools, setting up complex infrastructures, frontend design, ...) because they are what allow me to fly with LLMs.
Yes. Intellisense is great for discovery and terrible for recall. I think it comes out ahead though. Why remember things when I can CTRL+SPACE and search?
Huh, I've honestly never felt that autocomplete was an impediment to top-down coding like the author describes (and I also did my first few years of coding in feature-free text editors).
It seems like part of the problem here, is that Microsoft chose aggressive settings to force users to use their new tool as often as possible. Sounds familiar ...
This could be extrapolated to “does Cursor rot the mind?” Maybe. But programmers’ obsession with minutiae is a massive waste of time.
People try to dress it up as “attention to detail” or “caring about the craft” when it’s just unnecessary bikeshedding most of the time.
Hemingway didn’t need some bullshit Hemingway Writer to craft his style. Michelangelo wasn’t posting meta takes on Hacker News about the tools he used to carve David. Dante didn’t waste time blogging about the ink he used to write Inferno.
All this tabs vs. spaces, Vim vs. Emacs, Java vs. Go nonsense—it’s just another way people trick themselves into feeling productive. If some Gen Z dev wants to use Cursor or Vibe CoPilot to get shit done, more power to them.
> Hemingway didn’t need some bullshit Hemingway Writer to craft his style. Michelangelo wasn’t posting meta takes on Hacker News about the tools he used to carve David. Dante didn’t waste time blogging about the ink he used to write Inferno.
The greats have been as particular and passionate about the tools as the rest of us. Whether this is productive or unnecessary is a different argument. But don't revise history to fit your argument.
Every time I write something without an IDE - which is very often - I'm reminded how much more convenient they are. And limiting. And the limiting is convenient. And I feel less productive because I don't have it.
Consider a typical blank IDE project (any IDE): you select File > New Project, you enter a name, you create some source files according to the language and then you press "compile and run" or Ctrl+F5 and it runs. If I want some obscure feature, I have to go three levels deep in an options menu or maybe I just can't have the feature at all. But if what I want is very typical (as it usually is), it's extremely convenient.
Now consider a typical Makefile project (which is really a lot like compiling your project with a shell script with some optimizations). You can compile anything you like, any way you like. It's extremely flexible. You can compile a domain-specific compiler, that you wrote, and then use that to compile the rest of your code. You can walk over all your files, extract certain comments and generate a data file to be #included into another file. You can mix and match languages. You can compile the same file several times with different options. You can compile anything any way anywhen anywhere.
But you can't load it into an IDE, because the IDE has no way to represent that.
At best, you can load it as a "makefile project", where the build process is completely opaque to the IDE, and autocomplete won't work right because it doesn't know all your custom -I and -D flags. If your IDE is really really smart, it'll attempt to dry-run the makefile, determine the flags, and still get them wrong. If your makefile is really really smart, it can't be dry-run. And it still won't be able to see your generated header files until you build them. Nor can it syntax-highlight or autocomplete your custom DSL (obviously).
The design of any language or protocol involves a tradeoff: how much you can do in the language, versus how much you can do with the language. Some configuration files are simple static data. Easy to load and process, but there are no shortcuts to repeat values or generate them from a template. Other configuration files are Turing-complete. You can write the configuration concisely, but good luck manipulating it programmatically and saving it back to the file without ruining it. Project files are no exception to this rule.
IDEs, like Visual Studio, Netbeans, Eclipse (you can tell I haven't used IDEs a lot recently) tend to fall on the fixed-function, easy-to-manipulate end of the spectrum - although Netbeans writes and executes an Ant build file. Command-line build tools like Make, CMake, Ant, autoconf (you can tell I do not do web dev) tend to fall on the fully-flexible end of the spectrum. There are exceptions: command-line package managers like Cabal and NPM seem to provide the worst of both worlds, as I'm locked into a rigid project structure, but I don't get autocomplete either. And then there's Gradle which provides the worst of both worlds in the opposite way: the project structure is so flexible, but also so opaque that unless I've spent a year studying Gradle I'm still locked into the default - I'm the one who has to parse and modify the Turing-complete configuration that someone else wrote (and I still don't get autocomplete). (I'm aware that Android Studio writes and executes Gradle files just like Netbeans writes and executes Ant files; the IDE's project structure is supreme in these cases.)
---
Another thing the article talks about is that GUI designers generate ugly code. With GUI designers, the graphical design is the code - reading the textual code generated by a GUI designer is like reading an RTF file (formatted text) directly instead of reading it in Wordpad. Use the right tool for the job, and the job will be easier.
However, GUI designers suffer from the same power dilemma. If you want some type of dynamically generated GUI layout, you won't find it in a GUI designer. You may be able to copy the code generated by the GUI designer, or edit it directly, and then you'll be legitimately annoyed by the ugly code. And if you edit it directly and then reopen the file in the GUI designer tool, it will be annoyed at having to process your non-conforming code! (Looking at you, Eclipse WindowBuilder.) There's just something fundamental here, where you can't use the really nice and convenient tool if you want to do the complicated thing.
In Visual Basic (pre-.NET), there was no GUI code at all. There was just the GUI, and the code. You could refer to things in the GUI by name in the code, but they weren't declared as fields or anything. Of course, in this case you can't make a dynamic layout by using what the GUI builder generated as a template. All layouts are static - strictly how they looked on the designer's screen.
---
Although writing pure C code in the command line is educational and more flexible and so on, it's strictly less productive. It's like planting a field by hand so you can see what the machine is giving you. It's better at making you gain understanding of the process, and appreciation for the machine - both of which are very valuable - than it is at actually planting the field. (Codeless Code 122)
---
This message was delayed by the Hacker News rate limit.