After almost 20 years of writing the best code I can muster, as often as I can, I’ve come to understand that most people won’t ever really appreciate the effort you put in. Many won’t even notice; they’re too preoccupied with their own lives to see what is there. Nor do they give a damn about mastery. But that doesn’t mean it’s a waste.
I don’t think I’d do much differently in retrospect, except be far more cautious about where I work and who I spend my time around.
Even this very thread has some pretty bad vibes: “who can say what good code is?” “good code can’t exist with deadlines.”
I feel really bad for junior/intermediate devs that read this stuff and internalize it.
I’ve hunted down contact info and emailed a dev who’d left the company to thank them for the clean code they left behind.
I encountered their code while debugging weird behavior in a legacy feature. The code structure was immediately intuitive and comprehensible, with just the right amount of commenting: detailed but not superfluous. Beautiful. It made my day, so I let them know!
I never had code where I was like, oh this has too many comments.
I never get this sentiment from developers. I've had reviews where developers wanted me to slim down & rewrite comments.
It's often hard in the moment to understand what's obvious or not. Just always writing a comment even if it's obvious can save a lot of pain in the future.
The issue I’ve seen in older codebases is comments not being updated when the corresponding code updates. This usually is more of a problem for comments that are not local to the code, whether they are in docs or in a separate code file. Once enough of that creeps in you start distrusting comments and thus stop reading them.
> You have never seen a comment that helpfully explains that getUsername() returns a string with the username of the user that the method is invoked on?
Yes, but that's not all that annoying when the opposite is stumbling upon a service pattern with uncommented methods along the lines of CVKUser getCVKUser(int cvkId, ...) or VFUser getVFUser(int cktId, ...), with no idea what the person writing the code 4 years ago was thinking.
You've no idea what CVK or VF is supposed to be, so you have to spend an hour on code archaeology: looking at both of those classes, at everything that surrounds them in the codebase, at what cvkId and cktId are supposed to be, and at why cktId supposedly matches VFUser, before you can figure things out confidently.
That said, there is little use in code comments that explain WHAT the code does for trivial bits of code, instead of comments that explain WHY it works like that (a more sane version of having to dig through Jira tickets, years of commit history, or old pull requests) or what other considerations there are to take into account (since separate docs won't be read as much and Jira tickets won't always have the technical information).
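To make that concrete, here's the kind of header I wish had been there, sketched in Go. The expansion of CVK is invented here, which is exactly the point: only the original author knew it.

    package legacy

    type CVKUser struct {
        ID   int
        Name string
    }

    // GetCVKUser looks up a user in the CVK system (hypothetically:
    // "Customer Verification Kiosk", the third-party KYC service we
    // integrated in 2019). Note that cvkId is CVK's own numeric account
    // id, NOT our internal user id; see the user_links mapping table.
    func GetCVKUser(cvkId int) (*CVKUser, error) {
        // ...lookup elided...
        return &CVKUser{ID: cvkId}, nil
    }

Two sentences of context in the right place, and the hour of archaeology disappears.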
I have seen plenty of it, but it doesn't bother me at all, especially with IDE syntax highlighting. The near-zero-documentation policy seen in some codebases bothers me 10x more.
In a collaborative environment, it's rarely 30 seconds to make such a change though. Usually you have to make a commit with a message, open a PR, get someone to review it (many orgs require code review for every change), then merge it. And each of those steps usually requires you to context-switch from whatever else you were actually trying to get done.
Now, if you're already changing that part of the code/file, then sure it's pretty low overhead. But for something as simple as that function, I'd probably delete the comment entirely.
If something like that is generating a lot of bureaucracy and time-wasting, then there are several other pressing problems that are much worse than that.
My company is heavily regulated (finance) and highly bureaucratic, but I still just merge those PRs without waiting for CI or even bothering to click approve; I just look at it. When I find one myself I just merge to master or edit in GitHub directly. No auditor has ever batted an eye at that.
For something as small as a code comment change, it's super fast, and honestly not worth the local branch -> stage -> commit -> push flow. Assuming GitHub, just edit it within the browser (if you find it in your editor, press the hotkey for opening the source in GitHub; if you're already in the browser, great). Make the change, hit commit (which defaults to a feature branch; you can change this), and GitHub automatically opens up a PR template for you to then hit "create." Add reviewer(s), enable auto-merge, then forget about it.
Yeah, that's an issue, but it's much easier to update a comment than an extensive suite of unit tests. And in my experience the payoff of good comments is about the same, minus the cost.
Obvious is, and will be, what the code does; not obvious is, and will be, why the code does it. I hate when people write comments explaining what the code does: I can just read the code and I will know. Why write it twice and open the possibility for inconsistencies? Is the comment wrong, perhaps forgotten during an update, while the code does something different than the comment says? Or does the code have a bug, and it should actually do what the comment says? Also, if your code cannot be easily understood, improve the code instead of trying to explain it in comments. Good reasons for comments explaining what the code does are few and far between.

On the other hand, I want people to comment why they do what they do and why they do it the way they do it; those are things that are hard or impossible to infer from the code. Unfortunately, at least in my experience, people tend to write way too many what comments and way too few why comments.
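A small made-up Go sketch of the difference (the failover story is invented for illustration):

    package fetch

    import "time"

    func fetchWithRetry(get func() error) error {
        var err error
        // What-comment (redundant): "try up to 3 times, sleeping 1s between".
        // Why-comment (useful): the upstream API drops connections during
        // its nightly failover window; two spaced retries cover it, and
        // more would only delay the alert.
        for i := 0; i < 3; i++ {
            if err = get(); err == nil {
                return nil
            }
            time.Sleep(time.Second)
        }
        return err
    }

The what-comment rots the moment someone changes the loop; the why-comment stays true until the upstream system changes.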
I've seen comments that basically amount to "here we set the variable to 5". Comments also come with a maintenance burden, it's easy to forget to update them when the code changes.
> I never had code where I was like, oh this has too many comments.
It happens. Even worse is when the code has been updated but the comments have not. I've picked up the habit of double checking comments with git blame.
Usually the situation is someone has meticulously crafted some perfect system which can be extended and reused in many coherent ways. And then that person leaves the company and no one else truly understands how it works or was meant to be used so it gets a series of hacks and patches applied to it which violate the original design which is known by no one at the company. For most projects this is not that big of a deal, the component gradually becomes less coherent and the original requirements change so much that one day someone will decide that the whole thing will be rewritten from scratch.
> Usually the situation is someone has meticulously crafted some perfect system which can be extended and reused in many coherent ways.
Very rarely is that good code, even if you actually do understand the design. Most of the time, when you come to extend it, it turns out to be in a slightly different way than the original author had expected. Now you've got the complexity of solving the original problem (inevitable), the new problem (also inevitable), and also some other problem that might have needed solving but didn't after all (not inevitable).
Most of the time, code that is easiest to modify or reuse is the code that does its current job in the simplest way possible. That doesn't mean you can't create reusable abstractions - doing so often does make the code simpler even for just its current job. And that usually ends up aligning quite nicely with what turns out to be (almost) what is needed for other tasks anyway.
I agree wholeheartedly. Code which is "designed to be extensible towards future use cases" very often expects the wrong kinds of extensions and actually makes it more difficult to extend it because it's necessarily more complicated.
> Most of the time, code that is easiest to modify or reuse is the code that does its current job in the simplest way possible.
The exception here, I think, is the choice of data structure. If you know that code will be extended to accept additional fields, it makes sense to use an extensible data structure, and to ensure that side effects are contained correctly (a good idea anyway) and in a way that allows extensibility when they must be changed.
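A minimal Go sketch of what I mean (names invented): pass a parameter struct instead of positional arguments, so fields can be added later without breaking call sites.

    package search

    type Row map[string]string

    // QueryOptions is the extensible part: callers set only the fields
    // they care about, and adding a field later breaks no existing caller.
    type QueryOptions struct {
        Limit  int
        Offset int
        SortBy string // added in a later change; old callers unaffected
    }

    func Query(table string, opts QueryOptions) ([]Row, error) {
        // ...real implementation elided...
        return nil, nil
    }

Query("users", QueryOptions{Limit: 10}) keeps compiling no matter how many optional fields show up later.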
> If you know that code will be extended to accept additional fields
Well, you rarely know, more often "guess".
There's an important distinction between public and private interfaces. With public interfaces, I contend that it does make sense to think about future use cases and put in some flexibility even at the cost of a small overhead.
For anyone interested in a discussion about the concept of systems not being understood by newcomers, I can recommend Peter Naur's Programming as Theory Building: https://pages.cs.wisc.edu/~remzi/Naur.pdf
Right on the spot, thanks for sharing this! I've always thought that keeping a decision log justifying design/implementation choices that are somehow surprising would be a healthy thing to do.
That's often the rationalization for most "perfect" and "ultimately reusable" code I see, however what I've seen too many times is code that ended up turning into shit even when the creator remained the sole maintainer.
Blame often gets passed upwards to some product manager or sales coming up with some outlandish business requirement, but honestly the real reason is that the code wasn't that flexible to begin with.
Which is perfectly fine, not all code can be flexible and infinitely reusable. Perhaps we should stop forcing every single file to be.
This is true if you assume that the abstractions identified by the original author are correct and adequate for the life of that app. Usually, that's not the case. There are slight deviations that make previous abstractions unusable for new requirements, and the new dev has to start from the beginning.
And this is why Go exists... The trade-off is: screw brilliant engineering, just make it simple so other people can build on it easily. Not saying it's my favorite, but it has its merits long term.
Go isn't my favourite language, but I disagree with that: a language being simple does wonders to reduce the possibilities of accidental complexity added by complicated features that are unnecessary for the business.
But of course it's still possible to make a mess: I've seen someone who created an ad-hoc object model (with its own structs, plus multiple inheritance and all) on top of Golang and wrote a few apps in that style. But not having the features in the first place can help.
I find Go bizarre... 90% of the code written in it seems to consist of checking for errors and passing them around. And yet, every time I've had to interoperate with somebody's Go code, it falls apart immediately because of significant processing errors that it seems like somebody should've caught, like a null-pointer dereference, or a wrong lookup into a map, or a (de)serialization error.
_Every time_ the programmer responsible seems blissfully unaware of the possibility of said error and I need to provide a test case for them to reproduce it. Their mental model of the code they think they wrote is wrong compared to the actual code.
The culprit is usually something like an `interface{}`, granted, which people then excuse as being un-Go-like... but if all the language's supposed ease and robustness fails as soon as it encounters anything outside of its own ecosystem, then it's not worth much. All the busywork around errors starts to look like a cargo cult to make the juniors feel like they're accomplishing things when they're just glueing things together.
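A tiny sketch of the failure mode (payload invented): decoding into interface{} compiles fine, and the wrong shape only explodes at runtime.

    package main

    import (
        "encoding/json"
        "fmt"
    )

    func main() {
        var payload interface{}
        _ = json.Unmarshal([]byte(`{"id": "42"}`), &payload)

        m := payload.(map[string]interface{})
        // The author assumed id is a number; the sender used a string.
        // This assertion panics at runtime instead of failing to compile.
        id := m["id"].(float64)
        fmt.Println(id)
    }

All the err-checking ceremony in the world doesn't help when the types themselves are a runtime guess.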
No other language I've seen has it so bad. Rust is particularly good in this area, and seems to view it as its own responsibility to interoperate cleanly, providing sane and powerful ways to opt-in/out of Rust's semantics at the edges. Even TypeScript makes it easy to partially or gradually type when you need to, and this seems much safer in practice to me.
I personally prefer Rust over Go as well. That said, some Rust code is genuinely scary, in a way where you do have to wonder if anyone else could maintain it without rewriting it themselves. Past a certain level of complexity there's too much for anyone to keep a mental model of, even when it's all visible in the syntax.
Kindly written Rust is fine; the same goes for a lot of languages. The issue is, sometimes really smart people have no interest in kindness. There are a lot of social factors that go into people writing monstrosities: "gotta look smarter!", "gotta have job security!", "don't want anyone to call me out on this".
I write dumb code 99 percent of the time. I'll never get promoted for it, but people know exactly what it's doing.
Then why isn’t assembly the ideal programming language? Ad absurdum, that’s the simplest, isn’t it? Each line does something completely trivial.
Abstraction is the only tool we have to even hope to manage complexity. Unfortunately there is no way to force good abstractions; finding the perfect abstraction is the real art. Maybe I’m wrong on this, but I hold the elitist view that the industry really shouldn’t try to replace a single senior developer with 10 juniors who monkey-patch based on StackOverflow/ChatGPT. I don’t even see how that makes financial sense, if one values product quality.
Because, obviously, too little abstraction is as bad as too much abstraction. The way to combat excess is not with more excess.
There are no automated ways to enforce good abstractions, but languages and tools can serve as a way to shepherd programmers into avoiding or preferring certain patterns. This is an argument as old as the programming language field itself.
For some people Golang strikes a good balance. I'm not one of those people, but I have learned to respect it.
Assembly is a lot harder to write correctly/safely than Go is. Comparing the two seems disingenuous.
I do agree with you to some extent about not replacing senior engineers. But a "bad" senior engineer who is "too smart" for their own good can do more damage than juniors following a template.
I too have met many developers who have developed the mindset that good code is impossible. That is really sad.
Imagine a carpenter that says straight cuts aren't possible, just because they never maintained their tools and the last 10 cuts came out wrong.
Of course good code is possible. But not every person is going to write it and it is not going to happen under all possible circumstances and in all given project structures.
Our job as professional software people is to be able to judge how to create the circumstances and the time frames under which good code can be written — and to say No to projects that cannot be written by us and our team. This isn't always a simple thing, but it must be done.
The saddest aspect is that you see many experienced software developers who have never experienced things working well. As someone who sometimes writes code for fun, I cannot understand that at all. What I do get is that throughout your professional life you may never have had the chance to be in a project where you could write good code. But you can write good or even perfect code at home, where nobody pressures you. And if it is possible there, why wouldn't it be in a professional context?
But what is "good code", even? For some people that's thousands of two-line methods in hundreds of classes where each part is easy to understand. For others it is more direct code that is optimized for debugging, but not as aesthetically pleasant. Very often those two groups people dislike each other's code a lot. And there's even more other "groups" than those obvious two.
Does that question really matter for this discussion? Every developer will have their own flavour of what constitutes good or bad code, but independent of their flavour, isn't a software engineer who doesn't believe in the possibility of achieving good code a somewhat sad thing? Imagine being a cook who doesn't believe in the possibility of cooking good food. Sure, what constitutes good food is a matter of discussion, but a cook who doesn't believe in it at all is remarkable in itself.
But we are talking about self-judgment here. What does it matter which flavour of good you expect your code to be if you actually don't believe good code by your definition can be realistically achieved?
I agree with the sentiment; I never said it couldn't be achieved on a personal level, just that defining "good code" is difficult. If you narrow down "good code" to mean "good to myself", then of course it is sad that someone doesn't believe in it.
On the other hand, I find it understandable when someone says that "universally good code" is not possible.
I personally believe it is only really achievable in certain domains, for some others (especially for code handling a lot of essential complexity) it is borderline impossible to not have at least some uneven corners. And there's only so much you can sweep under the carpet of indirection.
Now imagine a carpenter who can build the most beautiful, elegant piece of furniture in 30 days, but you force them to make it in 10. Or you forbid them from using a certain tool. Or force them to use a tool you prefer. Or …
Yes, not every programmer cares about good code. But of those that do, also not all of them can agree on what good code is.
And not all good programmers can write good code, for one reason or another.
It is way less black and white than good vs bad/mediocre programmers.
> Now imagine a carpenter who can build the most beautiful, elegant piece of furniture in 30 days, but you force them to make it in 10. Or you forbid them from using a certain tool. Or force them to use a tool you prefer. Or …
Some of these are not like the others. Not allowing a carpenter to use the most suitable tools is petty. Not allowing them 30 days may make sense depending on the business context. What if it's a stool that's going to be sat on once and then thrown away? Or there's only 10 days' salary in the bank, but an investor will see the piece and may pay for 100 days of work?
Good code usually takes more time than bad code (depending on the reason!) and often it is worth that extra time (and the fight to get it). But sometimes it's better not to and that's OK.
Yeah sure, but in the context of my argument (it is sad that there are software developers who don't believe the product of their craft can ever be good) it doesn't actually matter. Sure, maybe that just means coders have impossible standards that cannot be reached, ever. Or it could mean that typical projects tend to be managed in a way that leads to code that those who wrote it don't consider good, to such a degree that they don't believe good code is actually possible.
Personally I believe good code is a reliable, maintainable, transparent and well-designed solution for a problem (or a set of problems). And that means that for a small problem some ad hoc script can be good code, while for a big complicated set of problems an elaborate, well-coded project with a build system can be good code.
But what I believe is that there are ideal solutions from an engineering standpoint, and there are ideal solutions from a business standpoint. And those sometimes don't align with each other. E.g. when you solve a core problem that should really be dealt with properly using a quick fix, that might be a cheap way to reach the short-term business goals, but it could shape a lot of engineering decisions in a bad way and maybe even become a business problem in the long term.
Well put, about the engineering solutions and business solutions. There are definitely big gaps between business people and developers. They usually both think they are right. :)
I am not denying that at all. I know my share of carpenters and if they are in a pickle, they might take on a job like that, and swear about it like there is no tomorrow all the way to the bank.
What I found remarkable wasn't that there are badly managed projects, but that there are software devs who seem to think there is no other way than doing it like that.
Yep, I have met a few developers like this. A lot of them would not even question the situation or explain that it is not a good idea to do things a certain way. They would just take the path of least resistance.
I think the abundance of jobs - maybe not anymore? - also added to the problem.
I’ve received countless CVs where the developer didn’t stay at the same job for more than a year, max two. Now, it could be that some of the jobs sucked, but it is highly unlikely that all of them sucked.
Imagine that the carpenter gets the wood pre-cut from a far, far away land, and has to make do with the way the pieces were prepared.
After nailing a couple of them together, he has to send them to another far, far away land for some glue work.
After he gets them back, he applies varnish over the wood pieces, and sends them yet again to another far, far away land for the paint job.
The people in those far, far away lands don't have any carpentry training and do whatever they feel like to meet the description of what they are supposed to deliver back.
Meanwhile the carpenter tries to rescue what he can from each delivery, so that the table and chairs only look half as bad, people can still sit on them without falling, and the tables have four legs of about the same size.
This is a very concise description of a lot of modern manufacturing.
I just took apart a bunch of stuff on a VW van: the bus itself was nominally manufactured in Poland. According to some of the labels, the guts of the servo that I took apart and repaired were made in Switzerland, but the servo itself was made in Germany. When you look at the guts, parts of it were made in China, parts in Japan, and probably some in Switzerland, but it's unclear which those would be.
The parts of that car have collectively already traveled more than the car ever will by the time it gets delivered to the customer (in the Netherlands).
Let's share our gratitude indeed; it's like a warming fire.
I have had extraordinary coworkers who wrote incredibly solid code that was a joy to work with at every turn. When some of those coworkers have moved on, I've bought them thank you presents and taken them out for a meal to say thank you for making a difference. It's a token gesture in the face of how much I've learned from them and their wonderful work.
My message is to all of us who work away in the trenches of our day-to-day, where we stare through straws and move grains of sand; please let us shout and celebrate the granules of gold that our peers produce!
One of the saddest moments was hearing a smart person with barely a few years of experience insist that code always degrades in quality and there's nothing you can really do about it...
> I don’t think I’d do much differently in retrospect, except be far more cautious about where and who I spend time around.
Another angle here is that code has become seen as something with the shelf life of a banana. People today talk of code merely a year old as legacy that needs to be rewritten ASAP. If one grew up in a world where code is scheduled to be thrown away as soon as it gets to production, I guess I can see how quality of code and documentation doesn't matter at all.
That's not a happy world though.
I grew up with the codebases of Unix (particularly SunOS, later Solaris). Code lives on for decades. One matures it to perfection and then it's perfect for a long, long time. That's a much more pleasant way to honor the craft.
> but that sentiment usually comes with burnout from too many quick deadlines and vast piles of tech debt.
That would be my question. We are not talking about people saying that within their context good code is not possible; we are talking about people who say good code is not possible, period. Maybe burnout has to do with it, maybe also depression. But even if I were stuck in a job that made me churn out half-broken code all day, I'd still remember my hobby projects, in which I experienced myself that perfectly fine code can be written by a single determined person. And quite frankly: it doesn't even have to take that much longer, it just has to be a trusted, concentrated, low-pressure environment.
Agreed. At the same time, my customers are downright joyous when the code works and solves their problems. I've had numerous occasions where people were thrilled to meet me and gave me more hugs and praise than I was comfortable with once they heard that I created certain tools they use.
So while I do try to write good code, if you are looking for appreciation, it comes from what the code does, not how it is written.
Other colleagues not caring is one thing; your company not recognizing the value and giving you adequate compensation is another. I don't mind if colleagues don't appreciate it, as long as I don't have to either lower my skills or be underpaid.
I interned somewhere where I wrote a decoder for a 56k modem codec. That particular codec relies on decompression by streaming bits out, which is awful to work with. I was able to deliver it by summer’s end.
After I graduated college in 2004, they hired me at 50k.
Which is an important point, yet very difficult information to get.
What is mind-blowing is how the ignorant but somehow subconsciously proud (in a Dunning-Kruger kind of way) will bully their way into higher salaries (I've seen it repeatedly), while the hard-working, caring craftsman will not, and stagnates at lower pay.
Worst of all is when the low-skill guy becomes your boss.
As someone who has gone back and read my old code as well as a lot of other people's old code, there is no such thing as good code. IMHO, the problem is one of cultural context, which is often not shared between generations of coders. Languages and best practices can change so violently that the best practices of one decade are often the anti-patterns of the next.
As a codebase outlives its best-practices, do you stick with them and extend with those same anti-patterns or do you implement new things with a different mindset than the rest of the code?
-- If you break with tradition then you are making it harder to understand the codebase as a whole.
eg: why do we have a mix of procedural/OO/functional/... methodologies with spaghetti code at the core?
-- If you refactor everything then you are definitely breaking something.
eg: you have a 10M+ line java codebase with transformations bringing you into the 2020s from coders who originally knew C and Java 1.4.
-- If you keep with tradition then are you writing the best code possible?
eg: would you willingly use goto of some variant in new code if you jumped into a Cobol 74 codebase?
I think what the article really wants to talk about is "clear" code or "understandable" code and not "good" code.
Reminds me of this good old rant from Peter Welch, Programming Sucks
...
Every programmer occasionally, when nobody’s home, turns off the lights, pours a glass of scotch, puts on some light German electronica, and opens up a file on their computer. It’s a different file for every programmer. Sometimes they wrote it, sometimes they found it and knew they had to save it. They read over the lines, and weep at their beauty, then the tears turn bitter as they remember the rest of the files and the inevitable collapse of all that is good and true in the world.
This file is Good Code. It has sensible and consistent names for functions and variables. It’s concise. It doesn’t do anything obviously stupid. It has never had to live in the wild, or answer to a sales team. It does exactly one, mundane, specific thing, and it does it well. It was written by a single person, and never touched by another. It reads like poetry written by someone over thirty.
...
Then there is a long-winded explanation of how they pack bits into an array of longs. Okay, why not make this a reusable module of code? Because C is a weak language, or because the Linux kernel is spaghetti with a dozen implementations of bit maps?
/*
 * Note that this can drive nr *below* what we had passed if sysctl_nr_open
 * had been set lower between the check in expand_files() and here. Deal
 * with that in caller, it's cheaper that way.
 */
Huuuurk.
Sorry… did I just see a data race just casually commented as “okay because it is faster if it’s horrifically unsafe”?
The issue so very many people fall into is assuming some foreign-looking codebase is awful.
What people typically mean to say is something along the lines of "I haven't a clue what this does, haven't enough experience with the language to understand it, or how people use it".
Go look at code written in "safe" languages like Rust and tell me you understand what it's doing any better than well written C code.
You have to know the language to understand what is sane or not, what the conventions are, what the common issues are, etc.
Same with all the Functional stuff out there. To a OO person, it looks bat poo insane. But if you learn FP, it's natural.
Some of your complaints are valid from the perspective of someone who doesn't do kernel work. For the people who do work on the kernel, the macros, abbreviations, etc are common place and well understood, and a complete non-issue.
I wish people saw this more. It's so frustrating to see people pass judgement without even understanding the whole context. Maybe it's due to the lack of documentation sometimes. But many times it's just people not reading it.
You’ve circled back to the same point the person you’re replying to was supporting:
“the problem is one of cultural context which is often not shared between generations of coders.”
There is no such thing as absolutely good code. It’s context dependent and context tends to be ephemeral. The Linux kernel is in some ways an exception, but also appreciated by a pretty small group. The rant against the kernel code is meant to be illustrative - if you don’t know the conventions, abbreviations, if you’re not operating in the context of an early 21st century OS, where memory is handled manually, none of this looks particularly great.
Code is cultural and cultures are niche and ever changing. Particularly these days. The same technology that enabled the fast rise of, say, Ruby, or Go, or Rust makes your code and my code depreciate quite quickly.
I bet you a nickel that, if you're a working programmer, most of the functions that you write don't check __every__ possible thing that could go wrong, but instead make some assumptions about validations that the caller must run before calling them (e.g. "my callers will hand me well-formed XML"), and impose some constraints on the shape and range of the data they return to their caller (e.g. "callers must be prepared to handle a graph that contains cycles").
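When that's the deal, the kindest thing is to write the contract down. A made-up Go sketch:

    package records

    import "strings"

    type Record struct {
        ID, Name string
        Tags     []string // may be nil; callers must handle that
    }

    // ParseRecord assumes the caller has already validated that line is
    // well-formed: exactly three comma-separated fields. It does not
    // re-check, and will index out of range on malformed input by design.
    func ParseRecord(line string) Record {
        parts := strings.Split(line, ",")
        var tags []string
        if parts[2] != "" {
            tags = strings.Split(parts[2], ";")
        }
        return Record{ID: parts[0], Name: parts[1], Tags: tags}
    }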
> ...did I just see a data race...
You did see a data race.
That race is pretty clearly unavoidable: it would be triggered by a sysadmin setting the value of a particular sysctl knob "low enough" in between the check (which lives in another function in the call chain this function is part of) being run and the commented code being run.
(You can tell that this function is part of a chain of function calls because that comment refers to a check in a function that is not called by the function that contains the comment.)
> ...okay because it is faster if it’s horrifically unsafe...
I mean, here's the call site of the "objectionable" function and the site at which the issue gets detected: <https://github.com/torvalds/linux/blob/master/fs/file.c#L174...>. At both ends, there's a comment that says "Hey, it's faster to do this check here, rather than over there.".
It's clear that the folks who worked on this code were concerned about performance, and had concluded that relying on callers to discover whether the race had left them with a smaller allocation than they wanted was necessary.
Personally I don't think that's what makes code "good". All code gets out of date with time, as with everything in this world. That doesn't mean it isn't good. My old gas stove doesn't have the bells and whistles that newer ones do, but it was a really well-engineered product that has lasted for decades. It's really well documented, so it's easy to maintain, and it's well understood by people.
Many people will say that an induction stove is better in safety, convenience and many more. But a gas stove still has its own place.
Just because the kernel code doesn't use the latest "best practices" doesn't mean it's bad. It could be due to many reasons from performance, compatibility, etc. Software is so easy to update anyway compared to other things that I'd bet there's more "good" software than "good" anything else.
While I don't think this code is the best, these complaints are like disliking Shakespeare because his writing is archaic. Yeah, no shit.
Good writing is relative to the norms and expectations of the audience, which are other kernel developers in this case.
> It’s written in an unsafe language.
So is every other mainstream kernel.
> Double underscores everywhere, which are the bad alternative to namespaces seen in weak languages like C.
It's the designated way to avoid symbol conflicts for system software. Cultural norms.
> Constants defined with lowercase and not actually marked as const.
There aren't any constants in fs/file.c though? Do you mean the sysctls? Those are modifiable at runtime.
> WTF does “BITBIT_NR(nr)” do!?
This is genuinely obscure without background knowledge. fs/file.c maintains a bitmap of bitmaps for performance optimization reasons, hence "bitbit". "nr" has been a standard abbreviation in this part of the kernel for decades.
Terse identifiers are just a cultural norm in kernel code: old file descriptor table, new file descriptor table, file descriptors, filesystem, and so on.
> Then there is a long-winded explanation of how they pack bits into an array of longs. Okay, why not make this a reusable module of code? Because C is a weak language, or because the Linux kernel is spaghetti with a dozen implementations of bit maps?
Because this code is actually very tricky, performance sensitive, and basically unique in the kernel. The kernel has many things that could be de-duplicated, but I'm confident someone's tried refactoring this and failed for some reason or another, probably performance.
> Sorry… did I just see a data race just casually commented as “okay because it is faster if it’s horrifically unsafe”?
You're seeing one of the many design tradeoffs that are made to get good performance in the real world. The VFS code this file is part of is one of the most performance-critical components in the kernel and gets involved with all the other filesystem operations that happen, which on a unix system is essentially everything. The code (and cache footprint) are smaller if the safety burden is pushed off to other people here, which is more important than maintaining an ideal interface.
This is some of the most battle-tested code in the world. It's fine if you don't want to modify it, but it's solid code that people have literally bet their lives on given the mildly terrifying use of Linux in safety-critical systems.
> disliking Shakespeare because his writing is archaic
But... his writing is archaic. Not just quaintly archaic, like a novel from the 1800s, but literally requiring translation archaic.
Worse still, the vast, vast majority of people "teaching" Shakespeare pronounce it wrong, which means most of the humour is lost: https://www.youtube.com/watch?v=YiblRSqhL04
The jokes don't work any more!
The rhymes sound wrong!
The whole thing is a farce. Theatre. We all pretend it is great English, when in fact it isn't even English any more.
It's taught because our teachers were taught it. Those teachers... and on.
It's like the idiots still formatting cloud SSD virtual disks as if they were physical RAID arrays of spinning rust.
It's like the Hungarian notation identifier style people copied from the NT kernel code, even though the NT kernel people realised it was a mistake and moved on.
The Linux kernel code would be crazy bad if it wasn't for tens of thousands of people beating on it until it has become merely mediocre.
It will never be perfect, it'll never even be "good" code. It has too much inertia, too much history for that to ever happen.
There's room for conflicting viewpoints of course, but Shakespeare is enjoyed in the original by English speakers in the 21st century, and academic consensus calls his language "Modern English" - as opposed to Middle English, which is a struggle to understand, or Old English, which very few people can grok today.
So no, it requires no translation.
I disagree completely that the humour is lost. There may be jokes I don't get, but that's more true for Futurama with all its 90s American telly references.
Complaining about the craftsman's tools rather than the product.
IT is rapidly becoming an unusable mess, even though every generation of programming languages and frameworks promises revolutions. Honestly I think programming is doomed, as man cannot look past the symbolic. Hooked like a lost poet, only complaining about the words used, never seeing the beauty.
Short names actually help with readability when they are commonly used. It's a lot quicker to go over the code when you don't need to read as much, similar to how one would write something like x+y=z in math instead of putting a descriptive name for each variable in each step, or use "1€/kg" instead of "one euro per kilogram" (let alone including the kilogram's definition from the SI standard).
Despite this, mathematical proofs still use a bunch of plain words to define things. In a programming context, if one has an “user” from the database, “user” is usually a better name than “u”.
It really is a context and frequency thing. For example:
users.map(u => u.lastName)
Nobody is going to have any questions about what 'u' is here. Do that in a codebase that has a 100% consistent and very frequently used User type, and it starts to feel quite reasonable to just use 'u'. It's as familiar as Apple π.
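The same trade-off in a Go-flavoured sketch (types invented):

    package naming

    type User struct{ LastName string }

    // In a three-line scope, u is as clear as "user" and faster to scan.
    func lastNames(users []User) []string {
        out := make([]string, 0, len(users))
        for _, u := range users {
            out = append(out, u.LastName)
        }
        return out
    }

    // At package scope the same brevity hurts: spell it out.
    var defaultUserCache = map[string]User{} // not: var duc = ...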
I believe a programming language could actually codify rules around how long names are allowed to be based on scope.
In a local scope, aliasing to single-letters is OK, and many codebases are littered with i,j,k,b,v,x,y,z arguments and iterators. But putting that into a nested scope has produced many errors for me as I accidentally declare a new n or v and then try to access the outer one. So I'd enforce uniqueness when nesting.
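Here's that trap in a Go sketch, where := makes it especially easy to hit:

    package shadow

    func record(int) {} // stub so the sketch compiles

    // countAll means to sum batch sizes, but the ':=' inside the loop
    // declares a brand-new n that shadows the counter and dies at the
    // closing brace, so the function always returns 0.
    func countAll(batches [][]int) int {
        n := 0
        for _, batch := range batches {
            n := n + len(batch) // BUG: shadows the outer n
            record(n)           // uses the inner n, so this compiles
        }
        return n
    }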
After some experience with Pascal, I thought the idea of implicitly reserving "result" as the name for the returned value was a good idea, but also a bit long. Taking a page from TI-BASIC I have settled on "ans" for "answer".
Global variables generally have to be a bit longer to avoid collisions. In my own investigations, this happens at four characters: while you can devise some two- and three-character abbreviations that work at language level, at four you start getting more complete English words (and English is a big part of this whole estimation; more densely encoded languages will have their own metrics).
Types, functions and classes often cause woes, because they need to be accessed across modules, and modules will use names in linguistically dense ways, but they should not expose that as the default interface in most cases. Here I think the goal should be to have full namespace qualification as the default, and then gradual relaxation near the callsite through explicit aliasing and redeclaration.
That last part makes me think that I can go further with what I alias in some languages. Eg. in Haxe, I have the freedom to throw in "typedef Foo = com.corporation.big.fancy.Package.Classname.Method" wherever I like.
In some languages, that leads to terribly long lines that you then have to break down, add indentation, and it just becomes harder to read. There is happy middle.
I wrote some code that was years ahead of a distinguished engineer's existing code. He immediately saw where I was going and adjusted his code accordingly (by dropping my branch and authoring a new one; I assume because he didn't understand git, but I believe he was protecting his future performance review). Some people won't admit someone else can write better than them.
Suits me. Us programmers being such a negative bunch, surely we can easily build a checklist of bad code attributes and then just call any code with few ticks on the list "good code". Does it need to be much more complicated than that?
I think about this all the time when it comes to evolving systems. I work on a programming language, and these tensions are core to my job. In the context of a language, I think it can be:
* Modern, where the language's features and idioms reflect the best way we know to write good, clean code today.
* Compatible, where code written in the language years ago continues to run the same as when first written.
* Simple, where the language has relatively few concepts that compose in clean ways and where there are few ways to accomplish the same goal.
good code solves the problem at hand.
good code is as simple as possible.
good code is testable and tested.
good code is minimal.
good code is performant.
good code is maintainable.
good code is understandable.
Yes, it matters somewhat that you have some consistency in the way you code, but that's a secondary thing.
I disagree. Good code is code that is indistinguishable in style and convention from all adjacent code. I always advise juniors two things:
1) your code will be written once but read a thousand times
2) if you want to make your mark with style and convention start an OSS project of your own, otherwise make your mark through the simplicity and elegance of your logic, not the way your code contrasts from the code base.
Style changes over time, but factoring is evergreen.
Something could be written using Java 1.4 idioms with good organization and factoring, or using the latest functional hoohas with everything all mashed together. Or the inverse, of course.
If the structure of the code matches the structure of the problem, the implementation style is a much smaller deal.
We can all be passionate about what we care about and strive to be better. For me, this is what looking back at my old code means. When I see that I could have done something better, it means that I have grown. It also means that whatever I produce now could probably be improved upon. The biggest trap (for me) in this endeavor is perfectionism; one can spend a lifetime improving/optimizing/refactoring/etc. By accepting that there is no single encompassing definition of "good code," I am freed to choose the attributes that I think matter most for a particular piece of code.
Maybe there's no good code, but there are degrees of terribleness. That can make all the difference between being able to safely add to it in a timely manner or not.
In my opinion, good code is like a Brita filter. It does its job, will one day need to be replaced, and it should be easy to replace.
More importantly: metaphors are not a healthy way to understand an idea. Ideas are more nuanced than a metaphor could possibly account for. Abstaining from metaphors might not make for a catchy headline though.
There's always the possibility that a bad metaphor will be made, but this doesn't mean that metaphors themselves are bad. They require at least some consideration in order to communicate and relate the relevant details of one thing to another. It would be like criticizing multiplication because sometimes people forget to carry values.
good code is like vintage wine: you don’t recognise the packaging, no-one agrees on how exactly to describe why it’s good, and the only person who still remembers all the details of production retired ten years ago
Metaphors are a fine way to understand an idea if one accepts that their purpose is to leverage an existing understanding of something, and that they're lossy. So I guess my issue with your comment's second half is that I don't think metaphors are being used as a shortcut to a full and complete understanding of a topic. They're an aid and I think that is mostly how they're used, IME. I don't think they should be written off.
Unrelated, but lately I find that I have a much harder time learning with metaphors than when educators and explainers just speak directly about things. I had some terrible experiences learning physics because teachers would always resort to weird metaphors: about music, about electricity being like a river, and more.
Yep. Yesterday I ran into a job opening asking for "Java developer who delivers 3x faster than the other developers" in one of the bullet points. It's such a weird point to emphasize that you want speed with no concern to trade-offs, and the maintainability and scalability of the codebase will be the first victims sooner or later.
There are cases where the tradeoffs don't involve the actual codebase. There are certainly many developers who are 3x faster than the average developer simply as a result of having enough experience and/or skill to know which tools and approaches are optimal for a large subset of the problems they encounter in their typical work. A developer familiar with a particular ecosystem can accomplish a task in 10 minutes by knowing which library and which function to use, and how, whereas another might take hours (or days) to accomplish the same task, since they have to figure out what tools are even available and how to use them, or worse (sometimes), roll their own solution where a library implementing the functionality already exists. The tradeoff here is simply that such developers are harder to find, and they'll generally want higher compensation, but their work will not degrade the quality of the codebase. Quite the opposite.
My personal experience is that the majority of developers I ever encounter at work (as opposed to within my social network) are usually slow because they're bad engineers who don't understand problems quickly, don't have the knowledge or experience to see solutions quickly, and in general don't think deeply quickly. It's not hard to 3x performance without sacrifices when the baseline is mediocre at best.
"Fast must be cutting corners and not just actually better at programming" is a weird fallacy to hang your hat on.
It's a bit elitist[1]. They're saying we should expect 3x better performance from engineers. We should also expect them to write good code at this pace. The only reason we don't have these things is that most engineers are simply bad engineers. Only people the author perceives as good enough should ever be used as a metric of normal productivity. Presumably the author puts themselves in this group of good engineers.
I used to work with this kind of developer. He was leaps and bounds faster than all the other developers on the team.
The code he produced was absolutely and completely unmaintainable. The joke was that this guy could program C in any language. His code kind-of-sort-of worked, but only he understood it, and when anyone else had to take over his stuff, the first thing they had to do was rewrite the part from scratch. Of course there were no tests, so when the new person broke something in their attempt to work with his code, they were scolded. The person in question was the CTO of the company, by the way.
He was 3x alright, but at the expense of everyone else on the team. I was very glad to see him go.
Especially in CRUD settings: if people would just bother to learn at least a bit about the framework, they could outsource most of the code to it. It's not like the filtering you have to implement is some brand-new discovery that has never been done before. These apps are very often literally fancy Excel tables.
There's a huge cost to having engineers who are bad at understanding and solving problems. The cost of having people who are good at understanding and solving problems is supposed to be that you pay them more, but that's not how employment incentive structures work in practice, because of information asymmetry and the stigma around talking about salaries.
Sorry future me, I'm on a tight deadline. If I spent an extra two hours making this better now, you wouldn't have to spend two days fixing these problems. You're probably still on a tight deadline.
That’s not a great way to look at it. You can be passionate about the craft and not passionate about the product. You can usually find interesting problems relating to craftsmanship even when doing things you’re not thrilled to do.
Let’s be honest: sometimes you’re not “changing the world” like many founders like to think, just building yet another shopping cart.
If you lose love for your craft there’s no product that’ll cure it.
Well said. I am certainly not in this game because I love building forms. Mastering my tools of choice grants me satisfaction. It's hard to find a place working on a product that is truly interesting, but fortunately at least, finding a place working with tools I like is easier.
"Hacky code" is most often a result of those responsible for delivery having gaps in their understanding of the product need.
When engineers have a deep understanding of what is needed, there is rarely a need for hacks. Of course, the obvious exception to this is when libraries and/or external services mandate them.
It's true that the product should take priority over the code, but there's a limit to by how much. If the code is bad enough, its low performance, bugginess, and slow feature development will eventually be reflected in the product.
Performance and bugs are product features, not code features.
Who cares if it slows down feature development if you never have to change it? If it does impact the product, it then becomes part of the product value you're delivering. That is the time you should be investing in better code.
An article on HN a while ago posited that 'bad code' is directly correlated with forgetting what the code does over time, not with the code quality itself. When you green-field a project, everything is fresh in your memory and the code feels great. When you come back to the exact same code a year later, you've lost your landmarks and there are workarounds you don't remember the reasons for. Suddenly you hate it: ugh, this code is terrible! So you rewrite it, making yourself familiar with the code again and making it easy to remember. Suddenly it's good code!
I try to keep that in mind when writing my code. Sometimes it's comments, sometimes it's documentation, sometimes it's CleanCode™, sometimes it's big blocks of text explaining why the heck a certain class exists, but my approach is "write it for myself in a year, when I can't remember what the hell half this stuff does", using every tool at my disposal to achieve that.
After more than a decade of development, I've come up with this mindset:
I don't really care that much about the developer, and care a lot more about the users of my product, especially since that developer is usually me.
If I'm unsure of the value I'm adding, or if being able to do the thing at all matters more than its quality, I will take shortcuts if needed.
If the codepath is critical, or if doing so is dangerous, I will take my time on the quality and the foolproofness. I will be pertinacious in this mindset, and if managers try to rush me, I will remind them of the consequences of failure and log everything.
I've seen too many developers (myself included) spending weeks and months on perfecting something that's never used, or barely used. The latter issue of not spending enough time is a lot more rare, but does happen.
One can figure out 'how' and 'what' from the code, given enough time. The most valuable thing in code is the brain of the Past Person who wrote it, looming over your shoulder, telling you 'why' in very explicit terms.
That 'why' also helps to show that Past You knew wtf you were doing, and lets Present You feel confident in making changes, because you know what the intent was.
"I'll remember this!" is one of the greatest lies in CS.
Yeah when I find myself wanting to write a comment that describes the implementation, I usually end up writing a test where possible.
I'm trying to think, but I can't come up with any kind of comment that has been useful that didn't have a "because" word, explicitly or implied.
Like, even if there's a comment like:
"This function only queries X and Y fields. Don't add more. If you need more than that, use this_other_function instead."
It has an implied reason behind it, and the mistake is not adding that reason to the comment. So this comment would be missing something like
"[...] because this is being used in a critical part of the code that is very nitpicky and it's very difficult to test because requires some annoying manual steps", or something like that.
There are probably situations where a "what" or "how" comment really is better than "self-documenting code" though. I would probably appreciate those comments if I ever need to read branchless code, SIMD code, or similar performance-sensitive witchcraft.
> I would probably appreciate those comments if I ever need to read branchless code, SIMD code, or similar performance-sensitive witchcraft.
I've done this before in production code. One of my only exceptions to my policy against leaving in commented-out code has been things like leaving the unoptimized serial code commented out at the top of a section of hand-vectorized code full of SIMD intrinsics. It clearly showed the intended "what", but was also useful to keep around as a baseline for performance testing and as reference code to compare against when debugging.
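Not SIMD, but a small Go sketch of the same policy: keep the obvious version visible right next to the clever one.

    package bits

    // CountSet returns the number of set bits in x.
    //
    // Unoptimized reference, kept for debugging and benchmark baselines:
    //
    //     n := 0
    //     for x != 0 {
    //         n += int(x & 1)
    //         x >>= 1
    //     }
    //     return n
    //
    // Optimized: x &= x-1 clears the lowest set bit (Kernighan's trick),
    // so the loop runs once per set bit rather than once per bit.
    func CountSet(x uint64) int {
        n := 0
        for x != 0 {
            x &= x - 1
            n++
        }
        return n
    }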
You know how on a plane the flight attendant tells you to make sure your oxygen mask is on before helping others (in case of emergency)? I think of coding well and documentation the same way.
"At the start of my book, Learning JavaScript Design Patterns, I say
"good code is like a love letter to the next developer who will
maintain it".
It is an intimate correspondence, from one developer to another, spanning
time and space."
Seriously?!
It's simple. It's a matter of respect, and treating others as you would like to be treated. It's no fun trying to figure out the "why?" of some code you've inherited responsibility for. It can be frustrating and time consuming.
So why would you put others through it? Make the code and the reasoning behind it as clear to the next dev as possible. Explain it with comments. Refer to issue links. Make it so that there's minimal - preferably zero - unnecessary digging for the next poor soul who'll work on it.
Because I've found there's a bunch of developers who apparently think it is fun to figure out the "why?", and they code accordingly.
>So why would you put others through it?
Because they think their code is "self documenting" and perfect and that anyone that can't immediately understand their code is too dumb to be working with it.
>Explain it with comments.
We just had a discussion here in the last week or two where people were actually saying that comments are completely useless and should never be used.
>Make it so that there's minimal - preferably zero - unnecessary digging for the next poor soul who'll work on it.
Sounds great to me, but in my experience it's a minority of developers who agree with you. It makes working on others' codebases very frustrating in most cases.
Meh.
I've seen teams and projects bogged down by "good clean code" rules and nit picking code reviewers. These folks, typically "staff" engineers, over-police the repos and care more about clean code than delivery and execution.
I'm waiting for the day where AI/co-pilots can enforce team and industry best-practices, style, maintainability, testability, etc before the code is even committed.
Call it "uber-linting" and get rid of code reviews.
I'm working with someone who asks me to remove documentation that summarizes what a function does.
(The whole codebase is stripped bare of documentation.)
Copilot can write implementations based on comments. Soon I think it could flag potential errors when the code doesn't do what the comment claims it does. Then we could run that as an automated code check.
This mentality drives me crazy. I mean, I agree that most comments are bad, but a high-level description of what a function/object/library does, or how it is supposed to be used, is something different.
When it's missing, I have to read the ** code to use the ** thing. And I am not interested in that at all. Maybe they do this so someone has to read their "good" code.
> The beauty of our creations, however, is not judged solely by the elegance of our algorithms or the efficiency of our code, but by the joy and ease with which others can build upon our work.
Often this is not true. Our creations are judged by the user. They don't care how good it is under the hood. They care that it works correctly, that it's easy to use and that it's as fast as they need it to be.
Your boss should care that it's well written, because that should mean it's cheaper to maintain. But they don't look that far into the future when evaluating your performance. So, often they only care about how fast you did it and how happy the customer is with it.
Our industry's incentives don't often align with good code.
Good infrastructure is too.
I had a colleague deploy the power whip for the last available rack position on his way out the door.
Years later, when I desperately needed that rack to keep the site up, I was able to roll it in and light it up without waiting 6 weeks for an electrician change request.
He saved my bacon by thinking ahead. It was a gift he gave me, never mentioned and was not around to receive thanks for.
Here are my 2 cents, after 25 years of development in several companies.
If you want to do the next developer a favor, your codebase should be simple to understand. Period, that's it.
Simple to understand means:
1. Don't use fancy design patterns, FP constructs, performance optimizations, etc., unless the use case specifically requires them.
2. Code should do whatever it needs to do to fulfill its business requirements. Nothing less and preferably also nothing more.
3. Readable code beats documentation: only document the 'why', and only when it is non-obvious. If you find you're documenting the 'how', take a second look at whether you can write the code in such a way that it explains itself.
4. Explain how to build the project, run the tests, and whatever else is non-obvious in a README file in the project.
Documentation anywhere else, such as on Confluence, will not be maintained or read.
Where "fancy" is relative. There was a time when Java 8 streams were fancy for many devs, today it's the default way to manipulate collections.
I think the code should be "optimized" for the expected audience (of other devs). If you're a single rock star in a mediocre dev shop, you need to code differently than when you're part of a team of MIT PhDs.
It's harder to read code than to write it, so if you can barely comprehend the thing you just wrote, you probably won't understand it in the future: https://sonnet.io/posts/code-sober-debug-drunk/
Also, I think it's more like speaking with ghosts (including a spoiler for The Sixth Sense).
Isn’t this only true of bad code? Good code is almost certainly the other way round, harder to write than read… just like a really good book was much harder for the author than it was for you, the reader.
> just like a really good book was much harder for the author than it was for you, the reader.
Now, I suppose this could work for a nested narrative novel, where the reader is meant not just to read the book but to rewrite and expand it as they go, adding new ideas and new sub-stories along the way. Think One Hundred Years of Solitude blended with Italo Calvino's Invisible Cities.
Seems like the Bible is a good literary example of an async collaboration similar to code: notoriously hard to interpret, and full of dubious historical information and contradictions.
I know that Christians call it the "good book", but most of them don't use the word "good" in the same way they'd use it to describe a Nabokov or Eco novel.
The quote you call into question is semi-famous, and I believe the idea is that it's relatively difficult to jump into one code unit of a larger system and get back up to speed, because one must fill the data cache of their brain with all the considerations behind why the author may have done things a certain way. The algorithm choices, edge cases, and exceptional logic paths were clearer to the one writing the original at the time.
As much as bad code irritates me, I'm not quite so quick to blame the original maintainer - they may well have wanted, badly, to write something better, but been put under artificial (and usually meaningless) time pressure that didn't allow it.
I maintain mainframe systems, and very often I would like to restructure code to improve it, but the requirement is to make the most minimal change that achieves the fix or change required. Mostly because the code is 30 years of patches and minimal changes: poorly documented, extremely fragile, and not well understood.
> Good code also adheres to established best practices, [...]
Controversial opinion, but in my experience so far, people usually say "this is best practice" as a way to avoid thinking, or to pass off personal preferences as something more. Not always, but more often than not.
As soon as you ask "why is it a best practice?", "what are the consequences of not following it?", or "since 'best' implies there's a 'not-best', what's an example of the not-best practice this wants us to avoid?", the cracks show.
The answer is usually something vague and handwavy like "because it's more scalable", but if you challenge that by asking "what do you mean by 'scalable'? can you give an example?", you usually won't get an answer.
Doing stuff without understanding the reason, that's cargo cult.
To be clear, I have nothing against people saying "I understand the code better like this"; it's subjective but honest, and it doesn't try to pass a subjective argument as if it were an objective truth. What I dislike is people saying "this is best practice" like an absolute truth that must be blindly followed and never challenged, without even considering the circumstances of the current project.
Please spare me from turning a simple HTTP CRUD of 4 endpoints into an overkill Icosahedronal Architecture with a reason like "when we need to write a v2 this will make it easier, without needing a rewrite!"; because I know that when the time comes for a v2 you'll rewrite from scratch anyway, since by then you'll want to use the new and better(tm) Perpendicular Riemann Zeta Architecture.
I understand the spirit of your comment here, but in my area of experience, security, there are certainly objective “best practices”.
There is almost always pushback on security considerations, partly because those methods quite often change and improve with time.
The term then becomes a blanket phrase that captures the constantly evolving landscape and necessity of accepting those new changes.
You're right that it's becoming overused. Like most shorthand, once a phrase becomes popular, a new way to express the same underlying idea usually comes along.
If a nobody like me writes a random blog post saying my custom way of doing ThisSecuritySensitiveThing is the best, at the next microsecond I'll have half the world linking me to like 20 different papers and 5 real incidents proving why I'm wrong.
And I wouldn't be able to wiggle my way out of that criticism with responses like "but it works for us".
Yeah, it's humbling enough when you're reviewing some old code thinking "what idiot wrote this", only to git blame and realize it was you. What's even more jarring is when you find some code, are surprised the capability/feature even exists, then git blame and realize it was you again (this time, hopefully, with a pat on your own back because of how well it's written). Yes, this has happened to me.
Nothing beats the feeling of having a coworker suggest what might be a substantial change to a codebase you haven't touched in months, only for you to look at the code and realize that the way you wrote it makes it a one- or two-line change.
All of the code I wrote more than a year or two before I look at it again strikes me as embarrassing. I hope this is always true, because I think it means I've grown as a programmer over that time.
I recently had the opposite experience: I encountered some code from 2017(!!!) that was really solid. A few "silly" things, but the important stuff was there.
I realized that when I did that project I "had the time to care" which is not the case lately!
When I do personal projects for myself, I try to comment them really well for this exact reason.
Just today in fact I was trying to update some code I wrote two years ago because one of the underlying tools broke. I was very happy with past me for commenting the workflow of that tool so I could easily work around it.
A good alternative is code without comments that communicates just as well as the same code with comments. Not necessarily better, but less likely to slip out of sync the way comments do when the code changes.
"A good alternative to a car with seat belts is a car without seat belts that is nevertheless equally safe."
There's lots of important information that comments can convey which code itself cannot. In particular, a program's code can tell you how it works but not why it was designed to work that way.
And past a point, even trying to convey too much information about how a program works through code alone becomes cumbersome. We've all seen function names that are way too long because the author wanted to cram too much information into them. That extra information should have been put into a comment, where the author could have articulated it clearly, instead of as a single overlong compound verb in camel case.
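A toy illustration of the trade-off (all names here are hypothetical): the overlong name tries to carry every detail, while a short name plus a doc comment can also carry the "why".

    import java.util.Comparator;
    import java.util.List;

    public class NamingDemo {
        record User(String name, boolean active, boolean banned,
                    java.time.Instant lastLogin) {}

        // The overlong alternative this replaces:
        //   getActiveNonBannedUsersSortedByLastLoginDescending(...)

        /**
         * Active users, most recently logged in first. Banned accounts
         * are excluded because they must never appear in engagement
         * reports -- a "why" that no method name could carry.
         */
        static List<User> recentActiveUsers(List<User> all) {
            return all.stream()
                    .filter(User::active)
                    .filter(u -> !u.banned())
                    .sorted(Comparator.comparing(User::lastLogin).reversed())
                    .toList();
        }
    }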
I'm a big fan of self documenting code, but especially when I'm doing quick and dirty projects for myself, where I value speed and cleverness over correctness, the comments really help. Especially when it comes to explaining past me's cleverness.
Yes, I was only really referencing long-lived production code. Mostly I'm a big fan of "quick and dirty" being something that doesn't really exist: I believe in quick-and-throwaway and quick-and-single/multi-use. Comments are fine; it's just that the original suggestion I was referencing made them sound like a priority.
>A good alternative is code without comments that communicates just as well as the same code with comments.
Now this is something I could never do, or find in other people's code. I much rather appreciate a good comment explaining to me WTF is going on, annotating the larger segments, and so on. Of course, it might just be my personal limitation as a programmer who's far from the best in the craft. But to me, comments are the most time-tested tool we have.
It's not an alternative, you can't obviate the need for comments with code.
What you're supposed to do is write the code as clearly as possible and then add comments for anything you couldn't manage to express in the code. Usually that'd be all the context around why the code is the way it is and isn't the way it isn't.
Write good code. Professional code. Simpler and easier to understand code. That’s it…
Some Leetcode jock will come in one day and rewrite it anyway, or a new CTO will crash the joint holding a bathroom sink and make your code an orphan. Or the company will just disappear.
The code that i write like my own is, well, my own. Because that’s the code I come back to years later and I maintain, and always will. Everything else is ephemeral.
Good code is not like art, such as good books or paintings.
No matter the quality of your code, if the product or service is not a good market fit, it will be retired. No one will stumble upon it or pick it up after it's gone. Knowing this, the only reason for putting effort into it is to make things easier for myself.
> No matter the quality of your code, if the product or service is not a good market fit, it will be retired. No one will stumble upon it or pick it up after it's gone.
This applies to artistic endeavours like books and paintings as well (especially those two, since their markets are very oversaturated). Your technique might be masterful, but if your art doesn't align with current trends (along with a number of other factors), it'll drown in the endless barrage of other art, and nobody will stumble upon it or pick it up, even if it's still available.
Yes, I thought about that. I was thinking in the sense that someone in the future might actually stumble upon a painting or a book. They won't with proprietary, forgotten code.
The two aren’t always mutually exclusive, you might be writing code for a good market fit that stands the test of time and needs maintaining.
The times you appreciate it are actually when you open something and it’s nice and easy to change and you realise you were the one who wrote it. But real developer happiness comes when you have the same experience and someone else wrote it.
Sometimes I go to change code to do a new thing and it is easy to do. The new cases fit in, the existing system doesn't explode, all is good.
Sometimes the code is beautifully factored and tested, maybe even documented. It then proceeds to fight against the new change. Maybe the type invariants fail everywhere for reasons that prove spurious. Maybe code far away makes dubious assumptions and breaks in response. In the worst case, it's beautiful nonsense held up by undefined behaviour, and the language has come to collect the tax.
I like code that can be changed to do new stuff without everything around it exploding. That's probably what I'd call good code: code that tolerates future requirements well, once the future has arrived.
fuck the guy after me. I write good code for my own sanity and so my OCD doesn't make me want to refactor it all later. it's like saying save the trees for your grandchildren or the future generations. fuck them, I want a clean environment for myself gosh darn it!!
The only good code is no code. All lines of code will decay, teams will move on, assumptions will change, refactoring efforts will be non-exhaustive, dominant skill sets and experience in the industry will shift.
This is a fairly tired conversation, and while superficially interesting, there is simply no way to define good code except through local consensus among the people writing and reading it. But that's as far as it will go, and time will take what's owed to it.
Developers love to debate about these things, but everything has been said already. I've noticed that the better the developers, the less they engage about these topics.
Developers who enjoy discussing what makes code good are worse developers than those who don't enjoy discussing it? Is that the thrust of your comment?
It turns out this is true for well over 90% of relevant topics on the planet. Every year in the US, over 4 million people turn 18 and over 3 million people of various ages die. If we narrow this dynamic to people entering and leaving our industry, it becomes clearer why it is not only helpful to say what has been said already, it is vital.
The emphasis on tests made me laugh: I work on a project where it's mandated that all the code is covered by Unit Tests.
In my estimate, writing the unit tests takes 3x the time it took to write the code itself.
And no, nobody uses TDD on this project, because it takes 3 minutes to compile one UT (C++, sigh), so we all write 'post-coding' tests.
While the UTs sometimes do help, they're also a huge burden: want to fix some code? Well, you also have to update the UTs...
Overly granular tests are a scourge of our industry, unless you’re writing a widely used utility library. It leads to tests that become a useless burden for even mild refactoring.
Writing larger test cases that test the actual functionality is a game changer. With Testcontainers and good mocking tools you can spin up a real version of your app with external dependencies mocked and faked, and write test cases against the same input layer your user uses. If your test fails you can be pretty sure it’s because you broke something.
This is a layer between unit and integration tests that you don’t read much about. I call them “service tests” (coming from a microservices world).
Investment is still needed though to reduce the effort of implementing tests, like extracting common fixture code so that every test doesn’t need long winded setup code. Also snapshot testing libraries take the pain out of writing assertions.
I am speaking from the API-based backend world, so YMMV. Once you've worked this way it's hard to go back. Any developer can come in and confidently make changes. You spend way less time investigating regressions and broken environments, so you can release more frequently (which takes a whole other layer of pressure off the team).
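For the curious, a sketch of the shape such a "service test" can take, assuming JUnit 5 and the Testcontainers Postgres module; TestApp and its methods are hypothetical wiring for whatever framework is in play:

    import org.junit.jupiter.api.Test;
    import org.testcontainers.containers.PostgreSQLContainer;
    import org.testcontainers.junit.jupiter.Container;
    import org.testcontainers.junit.jupiter.Testcontainers;
    import org.testcontainers.utility.DockerImageName;
    import static org.junit.jupiter.api.Assertions.assertEquals;

    @Testcontainers
    class UserServiceTest {

        // A real, throwaway Postgres -- not a mock of the repository layer.
        @Container
        static PostgreSQLContainer<?> db =
                new PostgreSQLContainer<>(DockerImageName.parse("postgres:16"));

        @Test
        void createdUserCanBeFetchedBack() {
            // TestApp is hypothetical: it boots the service against the
            // container's JDBC URL and exposes the real HTTP input layer.
            TestApp app = TestApp.start(db.getJdbcUrl(), db.getUsername(), db.getPassword());

            String id = app.postJson("/users", "{\"name\":\"ada\"}");
            assertEquals(200, app.getStatus("/users/" + id));
        }
    }

The point is that the assertions go through the same door a client would use, so a red test almost always means real breakage.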
I have come to the conclusion that there are different types of people who work with code. There are writers: they cannot stop writing, and if something tries to stop them, they move on and keep writing. Most of the code around is written by a very small number of people.
Personally, I've come to terms with not being that way. Some companies that rush things to market don't understand it, but that's okay with me.
Looking back at the years and the shipped projects, I don't trust metaphors like the one in the article.
I prefer another point of view. Where I grew up, a songwriter and performer put it this way (sorry for the rough translation): "[they] write as they breathe. It is the natural order of things." So if I don't like working with someone's code, I have to move on.
While I would not categorize good code as being "a love letter", I would say that it behooves software engineers to remember that the solution they encode is read many more times than it is altered.
Code is communication about as much as a septic line is communication to the next plumber or homeowner. It's a part someone will have to maintain after you. But it's also just a septic line.
It's a very nice way to imagine good code. More likely, someone inexperienced or less caring will come along and edit it just enough to turn it into mediocre code and no one will care anymore.
There might be love letters like that, but any codebase that has been around for a couple of years is not one. I have yet to see any codebase in the industry I could describe as a love letter, and I'm starting to feel like I never will. Just make do with what you have. They're not paying you to read love letters; it's mostly for maintaining and putting band-aids on shitty code bases.
Sometimes it's not a love letter; sometimes it's a note saying "yeah, I don't like this any more than you do, let's just get this over with so we can take a coffee break and come back to something more interesting".
Code works. Not all work is fun and beautiful, some work is not even worth doing. But making something work because it needs to work is also worth doing.
I don't think being so romantic helps much. To take these articles seriously is to believe there is such a thing as good code, but I don't think there is good code that will make everyone happy. I understand the effort it takes to make code something permanent, but I think code has to be changeable, and the processes have to adapt.
A common reply, but misguided. A corporate coder who is not also the CMO and CEO does not produce products for the end user. They produce code units which, after build, deploy, and integration, become useful building blocks in an overall solution that may (the company hopes) be a product for the end user. From this standpoint, the code they write will often be "consumed" by other coders and by their future selves. The consumers are therefore coders.
dear future developer, nothing matters and code will still be hard to read even if I try my very hardest to make it as good as i can. the world is dying and we are doing our best to kill it. you having a hard time understanding my code is not really anything that can be fixed. it pales in comparison with the actual problems the world faces. tough it out or rewrite it. sorry
I largely follow the forgetful programming practice, i.e. I will get it out of my mind after finishing it. This forces the code to be clear and consistent so that I can pick up on it easily if I need to look at it again.
I've always said "write good, well-commented code as a favor to whatever poor bastard has to work on this in three years, and realize that the poor bastard in question may well be Future You."
Something I'd consider part of good code is leaving open some nice, easy tasks for future developers to pick up, so they can get a sense of how the codebase works.
Very often, writing good code is a love letter to your future self. So many times I've run git blame to check who was so stupid as to write such code, and it was me...
> (...) and the practice of Test-Driven Development (TDD) are indicators of a carefully crafted 'love letter'.
I call BS on this one. TDD is usually a sign of a struggling junior and results in the worst code quality possible.
Also, the next developer is in many cases myself, months later, trying to fix something or add a new feature. I don't cater to other developers, because everyone has their own style and niggles; it's enough to cater to my own, and I can't be bothered about others'. And I also don't make a fuss about code written by others, as long as it meets some minimal standards.
This blog entry is a much nicer and more graceful version of the old adage about writing your code like the next guy is a psychopath who knows where you live...