Writing maintainable code is a communication skill (max.engineer)
297 points by hakunin 44 days ago | hide | past | favorite | 93 comments

Sometimes it's not that programmers don't know the benefits of maintainable code or how to write it; it's that the culture rewards short-term velocity instead of long-term reliability. I've seen superstars design shitty systems and write messy code to work around processes and policies so they could launch quickly, and as a reward they were promoted like rockets. It would be really hard for others not to follow the same path as long as the company needs to make money. Long-term benefits are hard to demonstrate, while short-term feature/product launches are highly visible.

I agree with this. It's trivially easy for programmers to write code in a way that is easy for them to write and maintain but completely non-trivial for other team members to maintain and, more importantly, add to. Examples: pulling in their own go-to library for some function instead of using the existing one, ambiguous variable naming, etc.

The mechanism I've used to show non-programmers whether a feature is written in a supportable way, and to encourage team members to write supportable code, is to list out every feature row by row.

Then, in each column, a team member signs up as the lead for the feature. The rest of the team members rate (from green to red) whether they are willing to PR-review the feature's commits, be on pager/bugfix duty for that feature going forward, and be on the hook to add new subfeatures to it.

Team members who lead features that have many green checkboxes for their rows are highly visible to management.

Team members who add esoteric dependencies and add too much "job security" also quickly realize they are going to be the ones stuck debugging/QA'ing that portion of the code.

It also allows PMs to understand the level of bus factor/redundancy for each specific feature. It's quite possible some features are experimental and don't need redundant support.
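A minimal sketch of the matrix described above, with all names and ratings invented for illustration. A "green" rating means a teammate is willing to review, debug, and extend a feature they didn't lead, so the bus factor falls out directly:

```python
# Hypothetical feature/support matrix (all names and ratings invented).
features = {
    "checkout-flow": {"lead": "alice",
                      "ratings": {"bob": "green", "carol": "green"}},
    "custom-dsl-importer": {"lead": "bob",
                            "ratings": {"alice": "red", "carol": "yellow"}},
}

def bus_factor(feature):
    """People comfortable maintaining the feature, counting the lead."""
    return 1 + sum(1 for r in feature["ratings"].values() if r == "green")

for name, f in features.items():
    print(f"{name}: lead={f['lead']}, bus factor={bus_factor(f)}")
```

A feature full of esoteric dependencies shows up immediately as a bus factor of 1: only its lead is willing to touch it.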

I just write simple code and it becomes very maintainable and easy to understand for anyone.

In my opinion, it's people who feel the need to abstract away everything who complicate the code bases.

Give me simple, elegant code that doesn't complicate what the program does, and you have maintainable code that can be changed and deleted easily.

That may be the case, but your inexperience shows. Different programmers have different styles and ways of doing things, so on a big system that has had numerous programmers working on it, in the absence of management installing a design & coding guide, you will end up with a can of worms!

So how many ways could you come up with to access data in a file? You have an OOP file-manager class which can handle CRUD against ISAM and RDBMS (SQL) back ends, plus the Windows APIs, both for disk files and for ODBC if the RDBMS doesn't have its own library. If you don't check and enforce basics like this, you will get eloquent functional code that meets the remit but doesn't fit into the app, e.g. procedures which read and write a file using Win32 APIs but ignore the OOP file manager that exists, so the file-manager class doesn't know what files are open on which thread.

The problem here is that management didn't have any rule book for the basics; they also couldn't, or didn't want to, pay for someone else to look over the code to ensure standards/rules were being met. And then businesses wonder why their apps get pwned so easily? Shareholders don't always learn, because they don't fire the board who presided over such fubars either. And so the cycle repeats.

I have worked with "simple" code. It was a terrible but interesting experience.

The main developer was an old mechanical engineer who had taught himself to code. And to be sure, his code was simple. There was a GUI with a 7x8 table of numbers. How to do this? 54 text fields, each with its own name and copy-pasted initialization. Most variables were global: no class hierarchy, no fancy algorithms, just nested loops, etc...

Because it was so straightforward, it was surprisingly easy to understand despite its complete disregard of any methodology. If it has to do A, you will see A, no surprise action at a distance, and if it has to do A ten times, you will see AAAAAAAAAA. But maintaining it is as bad as it may seem, and if you have to change A to B, you will need to change all ten of them, being careful that they may be slightly different, and forget about parallelism, security, undo, etc...

So yes, simple code is simple, but without at least some level of abstraction it will soon end up being terrible. What is too much or too little abstraction? Just hire a few world class developers and give them as much time as they need, because it may be the most difficult question a developer has to answer. In fact, one could argue that the mythical "10x developer" is one who answers right.

> There was a GUI with a 7x8 table of numbers. How to do this? 54 text fields

Sorry, I have to fix this: 56

Maybe there were only 54, and that was one of the bugs to be fixed - exactly the sort of problem that lurks in that kind of repetitive code.
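A hypothetical Python sketch of the alternative, for contrast: one comprehension names the whole 7x8 grid, so the field count can't silently drift and "change A to B" happens in one place instead of 56.

```python
# Loop version of "56 copy-pasted text fields" (illustrative, not the
# original code). The grid's shape lives in exactly one place.
ROWS, COLS = 7, 8

fields = {(r, c): {"name": f"field_r{r}c{c}", "value": 0}
          for r in range(ROWS) for c in range(COLS)}

print(len(fields))             # 56 fields, declared once
print(fields[(0, 0)]["name"])  # field_r0c0
```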

One of the issues is also that code that's trivial for John Carmack to understand might not be for $BODY_SHOP_RESSOURCE_200353.

I recall a self taught dev (or maybe from a bootcamp) coming up with a cascade of nested if-else, nested 8 deep. Someone with a background in CS asked him what he was trying to do and basically concluded that what he was trying to do could be expressed as a state machine. To which the initial dev replied that it was "way too fancy" and that he didn't need the code to be fancy, just work.
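The state-machine version isn't actually fancy. A minimal table-driven sketch (hypothetical states and events, just to show the shape the CS person likely had in mind):

```python
# Each (state, event) pair maps to the next state; everything the
# 8-deep if-else cascade encoded is now one flat, auditable table.
TRANSITIONS = {
    ("new", "pay"): "paid",
    ("paid", "ship"): "shipped",
    ("paid", "refund"): "refunded",
    ("shipped", "deliver"): "delivered",
}

def step(state, event):
    next_state = TRANSITIONS.get((state, event))
    if next_state is None:
        raise ValueError(f"event {event!r} not allowed in state {state!r}")
    return next_state

state = "new"
for event in ["pay", "ship", "deliver"]:
    state = step(state, event)
print(state)  # delivered
```

Invalid sequences fail loudly instead of falling through some forgotten `else` branch.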

Not a counter to what you said, but just adding a point that might be worth thinking:

This way of telling it may give the impression that you're implying CS graduates know patterns and self-taught developers don't. You will certainly find self-taught developers who know finite-state machines very well, and CS graduates who might have heard of them but can't identify where to use one, nor how to implement it.

Everyone is always talking about simple code. But what can we agree on that makes code simple?

SOLID[0] is always somewhere to start if you really have no idea where to begin understanding what makes code more maintainable. There are plenty of books out there, but those can be a minefield in terms of knowing which ones to really commit energy to learning. The only real answer that I know of here is personal experience though.

[0]: https://en.wikipedia.org/wiki/SOLID

I completely agree with this, but this has little to do with the comment you're replying to.

Simple, dumb code, unless it absolutely needs to be "fast and smart", should be the de facto standard. Which is why Go is so good as an enterprise language: its very design is a standard for maintainability.

Used to be, if you didn’t know what else to look for when interviewing people, you’d pick the person who communicated clearly.

Because if they’re wrong at least you know quickly, instead of them secretly making a mess for a long time before you figure it out.

From the opening of Structure and Interpretation of Computer Programs [0] there is the famous aphorism: "Programs must be written for people to read, and only incidentally for machines to execute."

I've been programming for over twenty years. Try as I might to produce code that expresses the problem eloquently and succinctly to unfold the solution in the readers' understanding as they skim through the source... it has rarely ever worked. Firstly you cannot please everyone. And secondly, programs are not structured for pedagogy.

Writing maintainable code is a communication skill, but I find the best skills are writing, speaking, and illustrating concepts in prose, specifications, whiteboard sessions, chats, etc.

The technicalities of ensuring code follows some kind of style guide, design principles, etc. play a big part. But nothing will explain the "why" or the big-picture stuff quite like a specification or blog post, in my experience.

[0] https://en.wikipedia.org/wiki/Structure_and_Interpretation_o...

Update: added missing link

Author here. Looks like we both have a similar length of experience. I wouldn't give up this battle yet, because

> Firstly you cannot please everyone

> And secondly, programs are not structured for pedagogy

My theory is that you only need to please a couple of maintainers that work with you, not everyone. That's why I proposed a test with 2 colleagues at the end. It could potentially be spiced up with 2 colleagues of different levels. I believe this can act as one of those 20% effort to get 80% there, but definitely don't claim to have proof of this.

I am also on about the same timeline as you and GP, and in my humble experience, it's really just about having a lot of code/prose "under your fingers", so to speak.

I can't remember where I read it, but the wisdom I heard for writing is that until you've written a million words, your writing won't be worth reading. This feels to me like the 10,000 hour rule.

Code has the added problem that the reader must have additional fluency in it on top of human language, plus a whole bunch of idioms that may or may not be in vogue, or mayhap haven't been seen outside the organization they originated from (think silos; copyright, secrecy, and only shipping binaries really don't help here). In some respects we have code written by Chaucers, but it is only 50 years old.

I do agree with you that the Pareto principle definitely applies, but wherever possible I try to make my code as understandable as possible, with comments and ancillary documentation showing the "why". What I really wonder is: where are the world-class codebases we can point to as exemplars of what readable code looks like? Who can we cite as masters of the craft that write code others understand easily? Is Knuth one of them?

One codebase that I liked in terms of these qualities (albeit I'm sure it's not 100% perfect) is Redis[1]. There are YouTube videos[2] of antirez walking through it. Would love to find more examples too.

[1]: https://github.com/redis/redis

[2]: https://www.youtube.com/user/antirez/videos

> a test with 2 colleagues

> I believe this can act as one of those 20% effort to get 80% there, but definitely don't claim to have proof of this.


I think it is possible to produce code that conveys everything the spec would. The problem is the same as with Donald Knuth's literate programming: to use it, you now require your coders to be both great programmers AND great writers. And those two traits rarely coincide in a single person.

IMO this is why good developers are so in demand. They are those rare people who are good at both. People who are great at both are the fabled 10X or 100X programmers.

Writing code is, first and foremost, writing for communication. If you are not good at writing, you aren't good at programming. There is really no way around that.

I think invoking literate programming is spot on.

One of the keys there is that it lets you discuss additions to previous parts of the code really well. Something that no other style really does.

And is one of the things that makes "readable" code so hard. We want a view of the code that acts as a parts list and as a narrative all in one.

The other problem with literate programming was pointed out by McIlroy: https://leancrew.com/all-this/2011/12/more-shell-less-egg/

Wait, is this the same McIlroy who claimed that PL/I has features no other language has, then listed syntactic sugars and things that could be implemented easily in libraries?

Edit: After having read the link, yes, McIlroy has a point. For quick prototyping, a one off, above all else small project, the UNIX way fits, and fits well. It's a philosophy I agree with and use on a very regular basis as a sysadmin.

But as a developer on larger projects, where chunking them into pipeable tools would probably result in 1000 pipes, any one of which might break, and you want to hand it off to people who don't even know what a pipe is, well, I'm sorry, but I'm going to make something in the tools that they understand and use, and I'll make it more fault-tolerant to boot. And yes, I've run into exactly this sort of situation multiple times in my career.

I might prototype with pipes and commands, but Larry Wall invented Perl for a damn good reason.

Really, the TL;DR is that Bentley picked an overly simple problem (count words in a file), asked Knuth to write a program in the literate programming style, and McIlroy completely missed the point of the exercise. What a shame; he might have had something useful or insightful to say had he understood what Bentley and Knuth were trying to achieve.

Having rigorous code reviews that aim to raise the bar on readability is the best way imo.

If a few people agree that something is readable then the odds are much better that code is maintainable.

You have to be careful with readability in code review: you can't please everyone.

There's nothing more demotivating than spending 8 hours on a problem, feeling good about your solution, and then having some "senior" developer come along and tell you to rewrite it "for readability."

My best defence for this is to have a guiding philosophy or design guide beyond the usual linting rules and code formatting tools. It prevents unnecessary stylistic changes.

A good guide will centre itself around the intersection of the IC's values and the requirements of the system. If you're working on a game engine you might value performance and tend to prefer vectorized code over branching code in order to exploit parallelism as much as possible in the pipeline. Whatever your team values the most is what should go in there and it should be as clear as can be so that people can reference it when giving suggestions in reviews.

> There's nothing more demotivating than spending 8 hours on a problem, feeling good about your solution, and then having some "senior" developer come along and tell you to rewrite it "for readability."

While a senior engineer just asking you to rewrite it without further details is bad, it's good practice to follow the "make it work, make it right, (optional) make it fast" approach. The "make it right" part is where you make your code readable after you've ensured that it works and before you submit it for review. If you find later that you need to make it fast, it is expected that readability will have to be traded for performance.

Or, for the contrarian view, you have a few people mutually reinforcing poor coding practices under the guise of whatever they view as readable and maintainable, which in my experience is equally likely.

Even if you don't agree with the coding practices, at least it enforces consistency throughout the code base. It might not be the consistency that you desire, but it is much better than every author having their own style.

I think this view assumes that consistency like that actually works. You can be consistently driving headlong off a cliff, right?

> Firstly you cannot please everyone. And secondly, programs are not structured for pedagogy.

I would say that this is understating it, even. Not a Donald Rumsfeld fan, but his remark about the "unknown unknowns", that is, things we don't know that we don't know, is apropos.

Everyone knows about Quake 3's fast inverse square root with its floating-point bit hacking. It's a clever bit of code that, based on the comments, no one at id Software knew how it worked (at the time). And yet, it's not unnecessarily clever. It's not clever for cleverness's sake, to stroke egos. It's critical to the entire game! (Tangent: who here is brave enough to release critical but mysterious code to millions of people and not have a panic attack?)
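For readers who haven't seen it, here is a Python transliteration of the bit hack (the original is C; `struct` stands in for the pointer-cast reinterpretation, and the magic constant and Newton step are the well-known published ones):

```python
import struct

def fast_inverse_sqrt(x):
    """Quake III-style approximation of 1/sqrt(x) on 32-bit floats."""
    i = struct.unpack("<I", struct.pack("<f", x))[0]  # float bits as uint32
    i = 0x5F3759DF - (i >> 1)                         # the infamous magic line
    y = struct.unpack("<f", struct.pack("<I", i))[0]  # bits back to float
    return y * (1.5 - 0.5 * x * y * y)                # one Newton-Raphson step

print(fast_inverse_sqrt(4.0))  # ~0.499, vs the exact 0.5
```

Reading the code tells you *what* it does; nothing in it tells you *why* `0x5F3759DF` works. That gap is exactly the "unknown unknown" being described.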

To bring this back to SICP, I believe I first encountered the recursive Fibonacci function in SICP. It's a popular bit of code that demonstrates recursion in Scheme. The naive but elegant version is:

  (define (fib n)
    (cond ((= n 0) 0)
          ((= n 1) 1)
          (else (+ (fib (- n 1))
                   (fib (- n 2))))))
And of course, SICP goes into an iterative form and later goes on to describe a memoized form. Each version mucking up the elegance of the original tree recursive form a bit.

But what if you're terrible at math (cough, cough), and aren't aware you can do this:

  (define (fib n)
    (* (/ 1 (sqrt 5))
       (- (expt (/ (+ 1 (sqrt 5)) 2) n)
          (expt (/ (- 1 (sqrt 5)) 2) n))))
Readable? Understandable? Maybe to a mathematician. No recursion, no iteration, no memoization. I suspect most developers would say "what the heck is this?", much like they did with Quake 3's inverse square root function. Although this at least has a wiki page about it. Try Googling 0x5f3759df when Google doesn't even exist. Like the Quake code, this code starts losing precision with larger "n" values. There is also a matrix form which is a bit better, but requires many more multiplications. Which code is right for you? That's the question. Imagine you only ever need the first 20 values of fib sequence. You might opt for a hardcoded lookup table. More lines of code. But it's a valid trade-off.

This is a straw example, and Quake's code is a case of necessary optimization (as opposed to premature optimization). But the point is each person carries their own set of "unknown unknowns" with them. If enough people leave your project, then the unknown unknowns start piling up. You may even be tempted to rewrite your system. Now you've entered the hell known as the second system effect.

We can talk about communication all day. But I have yet to work at a place not tempted by the siren song of the rewrite, or that hasn't ended up rediscovering every bit of knowledge Fred Brooks dropped on the world more than 45 years ago. The Mythical Man-Month isn't even a long book. But I don't think anyone reads it today. I certainly don't think we would have gone through the microservices fad if people understood the implications within that book.

Note that, if you coded these two Fibonacci functions in e.g. Python 3, the "naive" solution remains correct as n goes into triple digits, while the "fancy" one runs into floating point error and then range overflow...
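That claim is easy to check in Python 3. A sketch (using an iterative integer version so the "naive" approach finishes quickly, and `round` on the closed form):

```python
import math

def fib_exact(n):
    """Integer Fibonacci: Python ints are arbitrary precision, so this
    stays correct into triple-digit n."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def fib_binet(n):
    """Closed-form (Binet) version: limited by 64-bit float precision."""
    sqrt5 = math.sqrt(5)
    phi = (1 + sqrt5) / 2
    psi = (1 - sqrt5) / 2
    return round((phi**n - psi**n) / sqrt5)

print(fib_exact(100))  # 354224848179261915075
print(fib_binet(100))  # wrong in the trailing digits: float error
```

By n = 100 the float result carries an absolute error in the tens of thousands, since a double only holds about 16 significant digits of a 21-digit number.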

>I certainly don't think we would have went through the microservices fad if people understood the implications within that book.

Please explain? What's that book imply about microservices?

I usually have the opposite problem. I end up writing clean code because it's the only way I can keep everything in my head, and hence can keep things straight. But it does make me a slower programmer than others, since maintainability takes thoughtfulness, which takes time.

That doesn't seem to get rewarded in startups, because often times, your goal is to test the startup hypothesis--and you end up throwing away code anyway. I think only two of my code bases have survived to this day in my whole career.

I am convinced that it pays to be faster, since you have more times at bat trying out ideas. I have adjusted and found some ways to eschew clean up because it doesn't matter in the long run, but I'm still struggling to find the right balance after all these years.

Anyone out there have heuristics that help them with this struggle?

It's harder to be correct than to be fast. So I'd keep doing you! However it's hard to say "how slow" you are referring to. If you literally take a day where someone else takes an hour, you might indeed want to work a little faster.

One trick I use: I don't get bogged down in details and get it working fast, even wonky, leaving a trail of "TODO" and "FIXME" comments in the code that I revisit pre-commit. I want to insist on the fact that I don't want you to commit incomplete or half-broken solutions; these have to be attended to prior to merging. Some can be so broken that I mark them "NOCOMMIT" and block the commit.

The point of this is that half the time I save a lot of effort, because the solution just wasn't the right way, or the requirements changed under my feet, and I didn't waste any time on those items, which would have slowed me down significantly. Also, they're usually tricky things to deal with, and by the time I'm feature-complete I have much better understanding and clarity of the problem, which means solving those TODOs and FIXMEs is typically much easier, or they have become irrelevant non-issues. Gaining clarity over a problem space solves half the problem and all of its descendants, and you're far more likely to have that clarity near the end of feature-complete than 10% into the journey.

So my first advice is: don't write a state-specific postcode validator until much later, when the business is asking you to build the next Facebook by tomorrow.

I was talking to a friend a few days ago; both of us are quite senior, with over 20 years in the industry. He tends to be slower than me and more correct; let's call it pedantic ;)

It was quite staggering when we talked about how things get built, in the metaphor of building a house... I go straight to the foundation and the ground floor, get the walls up as soon as possible, even get one room almost finished to get a good feel for the product. He, on the other hand, does the driveway first, polishes the driveway, and sets up the mailbox and everything before even looking at the house.

Two very different approaches in the form of a metaphor. My argument: don't waste time on the driveway until you know what the entrance will look like. His argument: it will need to be done anyway, and you can't access the house without a driveway.

I like the homebuilding metaphor very much :)

To torture it a bit further... I like to start with a 1 room cabin + dirt driveway after I've made sure the lot is big and flat enough to expand in the future. Hard to know if you like the neighborhood without living there for a bit!

That's an interesting metaphor for sure! I'm a new backend dev after having switched careers. I've noticed that I'm more of the type of person to do the scaffolding of the house first, then build the house. Then at the end, build the driveway to get to the house (the API routes, controllers etc)

Very relatable. Within the context of a single startup this definitely applies. However, in the span of your career, I like to ask — do you want to get fast at writing good code or writing bad code? You won't get perfect code of course, but at the very least you can learn to churn out code that's easy to change. It's not the same thing as expressive code, but it does the job at a startup. (I wrote another article[1] on the topic of writing code that's easy to change a disappointing number of years ago.)

[1]: https://max.engineer/cms-trap

>How often have you seen an algorithm expressed with such grace that it appears boringly obvious?

There is a perverse quality that I see in mostly junior engineers of not wanting the complicated thing to appear simple. I think it's probably a result of some ego and accomplishment and wanting others to know it was challenging. I'm not sure exactly but I've seen it a lot.

There was once a kind of obituary for a super-famous carmaker (Ferdinand Piëch, grandson of Porsche) by a renowned engine engineer (Friedrich Indra), which boiled down to this:

"… Piëch was complicated and also explained everything in a complicated way. The art of the engineer is to make things as simple as possible. But if you explain something simply to a person and he understands it immediately, you are just a normal engineer. Someone who exudes this aura of ideas you don't understand, on the other hand, must be something special. …"

It is pessimistic but it resonates. A similar quality I've seen is people who use excessive "lingo" (like obscure abbreviations), when they know that their audience is not as familiar with the subject matter. I find myself constantly stopping them and asking them "what does X mean?" I know I shouldn't feel stupid but I do.

I get the sense that it is a similar perspective as that Piech character

U know $perl ?

I found out that some fellow engineers do not want to write maintainable code, they instead want to write code that:

- follows SOLID principles

- follows Clean Code principles

- follows Hexagonal architecture principles

- ...

Sometimes such principles do lead to maintainable code, but most of the time they don't (at least in my limited working experience of around 10 years). An example: if you duplicate code because, in the end, that seems the proper way to write a piece of functionality that will be easier to maintain in the future (and, most importantly, communicates its intention clearly)... well, that's a no-go for some fellow engineers, because somehow it violates all or some of the principles they read in a blog post by some internet celebrity.

Yeah, obsession with DRY to the detriment of readability and maintainability does happen. If you need to write another script like the one you wrote yesterday (but slightly different, and you don't know exactly how different), then starting with a copy&paste is a valid strategy. As you go on you will realize the common points of both scripts, and then you can DRY them both. Or not, if the common parts are too few to bother. Creating complex class hierarchies to "promote reuse" before there even is a case for reuse could be seen as premature optimization or YAGNI.

On the flip side, in my experience, people with certain amount of years under their belts tend to treat all those principles in a way they should be, ie. as inspiration instead of as law.

My personal pet peeve is when the obsession with DRY carries over to testing. Testing is not the place for DRY. Reading a test should be like reading a recipe: do A, then do B, then do C, then expect result X. When you abstract the setup to the nth degree, it becomes much harder to understand the test, because you have to click around the code to follow the abstractions like a trail of breadcrumbs, when really what you want is for the stack trace to say "error on line 123" and the cause to be easily found near line 123. Tests shouldn't be dry; they should be wet.
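A sketch of the recipe style, using a made-up `Cart` (defined inline so the example runs; in a real suite it would be the code under test):

```python
# Minimal hypothetical system under test.
class Item:
    def __init__(self, name, price):
        self.name, self.price = name, price

class Cart:
    def __init__(self):
        self.items, self.discount_pct = [], 0

    def add(self, item):
        self.items.append(item)

    def apply_coupon(self, code):
        if code == "SAVE10":
            self.discount_pct = 10

    def total(self):
        subtotal = sum(i.price for i in self.items)
        return subtotal * (100 - self.discount_pct) // 100

# The test reads top to bottom: do A, do B, do C, expect X.
# No fixtures or helper indirection to chase.
def test_discount_applies_to_cart_total():
    cart = Cart()                      # do A
    cart.add(Item("book", price=20))   # do B
    cart.apply_coupon("SAVE10")        # do C
    assert cart.total() == 18          # expect X

test_discount_applies_to_cart_total()
```

When this fails, the failing line is the whole story; there is no shared setup whose state you have to reconstruct in your head.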

It's a trade-off, for sure. I have seen people write test code with no regard for DRY, and it results in tests with hundreds of lines of setup trying to wire up the inputs and the expectations, when a little thought toward removing all that repetitive code would go a long way in making the tests readable.

I do agree that there doesn't need to be an obsession with DRY when it comes to writing tests (especially when you are bouncing around multiple functions and files just to understand a test case), but completely abandoning it isn't great either.

Usually, hundreds of lines of setup code point to an issue in the code itself. It's a smell of a god object/function that has too many responsibilities.

This might be a reference to WET (write everything twice): https://dev.to/wuz/stop-trying-to-be-so-dry-instead-write-ev...

I've run into exactly this, where each developer has their own test setup abstraction and attempt at automation. Difficult to add your own tests and difficult to fix theirs. It's maddening.

A sensible suggestion i've found in regards to code duplication and DRY is the rule of three: https://en.wikipedia.org/wiki/Rule_of_three_(computer_progra...

If you run into the same code three times, that's when you consider refactoring. Of course, in practice it can be ignored like any rule as long as you keep the ideas behind it in mind (small amounts of duplication are usually manageable).

Sometimes you have a shared context which might indeed use the same code and you'll want to apply DRY sooner, whereas other times you'll have code that is the same only incidentally, but is a part of two or more different contexts which may evolve independently.

Edit: If only we had tooling on top of Git to give us hints about how the codebase has evolved over time, like: "Hey, this file is 84% similar to this one other file, which was added under ISSUE-2415 3 years ago by $PERSON for $PURPOSE. Click here to see the current diff between these files and how they've evolved over time."

The current workflows around git blame feel lacking, especially if you are ever dealing with files that have been deleted (e.g. the "origin" for the current file was actually a different file that was deleted way back for another reason that you won't know about now without digging through the Git history for thousands of commits).

This is something I've seen described as "optimizing for deletion" which is the only way to have true modularity.

Yes! And deleting large amounts of code from a massive tightly-coupled monolith is a great way to learn that first-hand. I like to say that you learn a lot more from dealing with a legacy monolith than you do by implementing the latest clean patterns. It eliminates a lot of cargo-cult mentality and lets you really see what patterns make the difference.

A faulty/clumsy abstraction is worse than duplication.

And writing your abstraction before you have two or three different uses for it can often make it clumsy and/or faulty.

The key is this: If you have to change one, do you have to change all copies? The problem is not duplicated code; the problem is duplicated concepts.

DRY was a response to cut-and-paste programming, not thoughtful application of the same (or similar) code to different problems.

The problem often arises when two pieces of code happen to look the same, but are conceptually doing two different things.

I have seen far too many codebases where a junior engineer took it upon himself to DRY-ify two similar looking bits of code, only to have to later go start adding knobs and if-branches to handle differing evolution between the two problem domains.

If the original piece of code had some logic bug, sure, it will likely be necessary to go fix that bug in all of the duplicated spots. But if the code evolves in one place due to the problem domain changing slightly, that rarely implies that the duplicated code needs to change.
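A hypothetical illustration of that failure mode (invented billing-style functions, not from any real codebase): merging two routines that merely *looked* alike produces a helper that sprouts knobs, while keeping the concepts apart leaves each free to evolve.

```python
# What the prematurely DRY-ified version tends to become: one helper
# accumulating a flag per domain difference.
def compute_price(base, *, is_invoice=False, is_quote=False,
                  legacy_rounding=False):
    ...  # every divergence between the domains adds another knob

# Keeping the concepts separate, even though today the bodies are
# line-for-line identical: duplicated code, not a duplicated concept.
def invoice_total(base, tax_rate):
    return round(base * (1 + tax_rate), 2)

def quote_estimate(base, tax_rate):
    # Identical to invoice_total today; tomorrow quotes may need
    # validity windows, rounding rules, etc., and can change freely.
    return round(base * (1 + tax_rate), 2)
```

The test of whether to merge is the one stated above: if a change to one must always be mirrored in the other, it's one concept; if not, the resemblance is accidental.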

I coined it initially for configuration management, but: "Keep accidental and intentional identity apart."

By that I try to convey that just because two things look the same doesn't mean that they are by necessity the same. So if it is a case of "happens to look the same", keep things separate, but if they're fundamentally linked, keep only one copy and reference it.

In terms of code, I think that means: repeat the code path if the two copies just happen to look the same today but won't necessarily in the future, and if the code paths are always going to be identical, make a function/procedure. That can probably be extended to subclassing as well.

From your example, it sounds like your gripe is more with engineers that follow a given principle dogmatically, rather than applying it judiciously based on context.

Although I think there is some value in principles like these, I am always wary of anyone that's overly absolute in their thinking.

I once worked with a principal who applied a dogmatic view of the 12 Factor App and developed a framework based on those principles. It was to be the One Framework to Rule Them All, and the framework upon which our entire portfolio's application infrastructure was to be based because it would make everybody's lives so much easier and novice programmers could be productive right away, etc.

Maintaining code in it was a hassle. Code was scattered to the four winds in a required directory structure. But because simply importing modules wasn't allowed -- modules had to be structured as functions that accepted a "dependency manifest" containing all the modules they depended on -- it was difficult to figure out where your implementations lived. So instead of being able to read the code and figure out what was going on, you had to navigate this tangled web of dependencies and guess or search for where some of them lived.
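A hedged reconstruction of what that pattern looks like in practice (entirely hypothetical names; the actual framework isn't shown in the comment). Instead of a plain `import`, every module becomes a factory receiving a "dependency manifest":

```python
# Hypothetical "dependency manifest" style: nothing here imports billing
# or audit, so reading this file can't tell you which implementation runs.
def make_orders_module(deps):
    billing = deps["billing"]  # resolved by whoever assembled `deps`
    audit = deps["audit"]      # grep for the definition leads nowhere

    def place_order(order):
        billing.charge(order)
        audit.log("order placed")

    return {"place_order": place_order}
```

Dependency injection has real uses, but making it mandatory for every module trades the one navigation tool everyone has (follow the import) for a startup-time wiring graph you must trace by hand.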

Thankfully, with the arrival of a new system architect, cooler heads started to prevail and the company abandoned plans to go "all in" on this solution.

I'm a big fan of the 12 Factor App, but I don't think prohibiting importing modules is one of the 12 Factors.

Yeah, some people think this has to be done. But duplication can be cheaper, especially if you are building the wrong abstraction.


The problem is "writing maintainable code" doesn't stroke the ego quite like "writing impressive code"

It's programming if "clever" is a compliment, but it's software engineering if "clever" is an accusation.

SWE book puts it really well. https://abseil.io/resources/swe-book

The challenge is, how do you teach it?

Like many writers, many coders think their code is readable and are resistant to feedback. They literally do not see the problem.

You can institute all the rules and guidelines you want, but they see it as friction and overhead and only do it because they are forced to.

Plain real world experience. So one way is:

1. Learn all the guidelines/rules/principles out there (or at least the ones that appear to be the most important or popular ones)

2. Apply them in real projects (and keep working on such projects for at least N years)

3. Realize how painful it is to maintain some of your solutions. Unlearn what you have learnt

Point 1 is important (otherwise you may miss other people's good solutions). Point 3 is important (self-criticism, introspection, letting go of your ego)... but for me the most important one is point 2: unfortunately most fellow engineers out there jump from job to job after 1 or 2 years and don't end up maintaining their own code (so they usually don't get to experience point 3).

Where N is at least 3, in my limited experience.

The first step is to agree on what it is.

I've never actually seen rules or guidelines that were:

* Non-trivial - e.g. not about spaces or line lengths or something.

* Objective and specific - e.g. with a vague principle like "single responsibility" it doesn't take much to trigger an argument about what constitutes a responsibility.

* Actually made it more maintainable - I remember some of the DDD guidelines were non-trivial, objective and specific but they 3-5xed the SLOC.

I tried writing some of my own, but it's a lot of work, doesn't generalize very easily and is as likely to spark an argument as it is to actually help.

But those are "writing techniques" and not the actual writing / stories. As long as you're having a "conversation" with a codebase by continually reading and editing code you know what parts of the code flow well and are expressive and what parts are not. I think a discussion about the eloquence of a codebase should be had in continuity, parallel to all other individual tasks.

Actually the article illustrates some of the criteria like if the code answers "how/what/why" questions. From my own experience the codebases where the code didn't answer those questions were the worst.

A big issue seems to be that what is "readable" to one person isn't necessarily "readable" to someone else, and you end up with endless yakyak over whether the braces should be inline with the `if` statement or not.

My take is that some compromise is key: Yes, there are some standards that objectively make code easier to understand. But you should also learn to read code that isn't written exactly as you'd like it to be. And try to stay consistent with the overall style.

The yakyak is all but a solved problem in many languages. In C++: clang-format on submission, and clang-format on updating files to your preferred local style.

No kidding. The number of projects and developers that refuse to use the tools is quite interesting. On one project I worked on, dozens of engineers said 'your code is so easy to read'. Then one dude came along and it 'gave him a headache'. He also spent way too much time refactoring and renaming things; he became obsessed with it. Every few weeks he changed the spacing, indentation, and naming conventions, then expected everyone to keep up with whatever decision he made at 10PM on some random Friday three weeks ago in a fit of open-every-file-and-change-everything. He looked like the hardest working person in the group, but he was adding zero new features. It killed the project. The worst are the groups that say 'everyone is a professional, we don't have to deal with that'. You then end up with dozens of modules in whatever rando style each author happened to like that day. Pick one. Put it in the tool. Stick to it for a decent amount of time. If it's not working, bring it up in a meeting.

The ‘black’ formatting tool for Python has worked amazingly well for my projects. It may be controversial, but I think Python’s strong opinions on code format and conventions were one of the things it got right.

It might be solved for the very basic stuff (spacing, indentation, etc), but there's still plenty of yakyak to be had for higher-order "religious" debates...

That's who you need to let go. People resistant to feedback are bad for any role in any company, and if they have their heads so far up their... that they think they know how the code looks for someone else better than that someone else, they really shouldn't have been hired in the first place.

Haven’t come across this before but I can see myself using this framework of how, what and why in the future.

Few thoughts…

How: Talking as a mid level programmer, I can’t control the language of choice, I can’t influence the framework or the design. I can control how the individual piece of algorithm is written.

What: Apart from the function names, the main important thing here is abstraction. Abstraction is a long-term game and it improves with your understanding. The abstraction you choose after a few weeks of exposure to the domain will be entirely different from the one chosen by a person who has known the domain for 10 years.

Your abstractions can also change with exposure to the actual problem at hand. The further you go down the rabbit hole, the more the abstraction will keep changing.

I like what Kent says here… https://kentcdodds.com/blog/aha-programming

As the author says, it takes years to develop the overall taste but also it takes time to find the right abstraction for solving the problem in the domain.

Why: Never thought of comments as the why. I used to think of comments as what more than why. Simple but important distinction.

Thanks for the insights.

Made a little image to make myself remember this better... https://imgur.com/a/kVXNbsE

Nice, tweeted it crediting this comment: https://twitter.com/hakunin/status/1467223381854539791.

That's nice of you. Thanks!

It's politeness first, and skill second.

Most people don't lack skill per se; they just don't give a shit about you.

I really don't believe that - it does not match my experience at all. It's just that there are a lot of conflicting forces at play. It's taken my whole career to get better at balancing those forces, and it changes for each scenario. Building software is a people problem, and people (like software) are hard.

It's often both, unfortunately.

In other words, code should be eloquent, it should be as easy as possible to understand why it exists and why the way it is written is correct (or give information as to why the author thought it was correct).

Likewise, with regard to modularity, I like the take on codebases that optimize for deletion and not abstraction (after all, abstraction is only a tool, not an objective).

These two together make up for the best code IMO.

I've found that documentation is one of the first things to go when time-constrained in Sprints. The PM/PO or Team Lead will pay lip service to the idea of allotted time to documentation writing for each Story, but don't enforce that during planning. Nor do they call it out during Retros.

Heck, there are some devs, senior and junior, who don't realize the value of standard branch naming, ticket references in commit messages, etc. Or they'll pay lip service (again) to a conventions standard, but still continue to work the old way over and over.

The common thread among devs that I've spoken to is that they feel they aren't talented enough to write good docs, that they don't have time, and that they don't know what to write about. I feel like the team lead and higher-ups need to focus on enforcing standards for a few sprints to get everyone working on the same page.

I interviewed someone from China once who wrote code almost as I write prose.


Now imagine 10,000 lines of code written like this and done so in a way that was thoroughly impressive. The creativity alone was startling. I would do it a disservice to further explain.

Yes, her object names were ridiculously long, but it was the most readable code ever. It was like she combined the code and documentation together. I'd seen this neither before nor since, and it was strange and beautiful.

The hardest problem is to adequately foresee the context and the background knowledge future maintainers will have.

* Context will change due to modifications and added features in the surrounding code.

* One tends to overestimate how much background knowledge a future maintainer will have.

* One tends to misjudge which parts of the necessary knowledge will be externally available / still "obvious" in the future.

I took issue with this quote

> “Isolating complexity in a place where it will never be seen is almost as good as eliminating the complexity entirely.” — John Ousterhout

If you can isolate the complexity in one place, it isn't complex. It means that it is definitionally simple and separable.

There are things which cannot possibly be isolated and they are complex. Examples are idempotency, things that deal with synchronization or time, resource management across multiple entities.

It seems pedantic, but this insight is remarkable in effective system design. If you figure out the inherently complex non-negotiables ahead of time, the rest of your solution becomes pipe-fitting. A large percentage of modern software these days is lost because teams start with pipe-fitting and just move the complexity of their problem around endlessly, never identifying it.
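One hedged reading of the pipe-fitting point, as a sketch with invented names: even a concern like idempotency that can't be eliminated can at least be *named* up front, so the handlers around it stay simple. This is in-memory and single-process only; a real system would need durable storage and coordination, which is exactly the non-separable part the parent comment is pointing at.

```python
import threading

class IdempotencyStore:
    """The identified non-negotiable: deciding whether an operation has
    already run. It still carries real complexity (here, a lock)."""
    def __init__(self):
        self._seen = set()
        self._lock = threading.Lock()

    def first_time(self, key: str) -> bool:
        with self._lock:
            if key in self._seen:
                return False
            self._seen.add(key)
            return True

def charge(store: IdempotencyStore, payment_id: str, amount: float) -> str:
    # Pipe-fitting: the handler stays trivial because the hard question
    # ("did this already happen?") is asked in exactly one place.
    if not store.first_time(payment_id):
        return "duplicate ignored"
    return f"charged {amount}"
```

Starting from the store rather than the handler is the "identify the complexity first" order the comment recommends.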

Yes, writing maintainable code requires the ability to explain.

There is far far more to "communication skills" than explaining yourself very well, stuff like negotiation and persuasion and conflict resolution. Most of those are less relevant to writing code as much as to getting it accepted.

My experience has been, the next guy always thinks it's the last guy's fault. But actually the fault usually winds up being that of the next guy - even when I was the next guy.

Programmers are impatient; we look at the code, and the code looks terrible. We lay into it with a hatchet, and three days later it works. But there's a very minor problem. You look at the problem, look at it again. Three more days and you've ripped out your code completely, apologized to the ancestors of the last guy, and finally restored his work - and all the problems are solved.

And the comments? The communication? No one read them; they made no sense. But once you complete the code, you realize no other comments could have sufficed.

There is a parallel to this comment that is not much thought about in mechanical CAD: some designers use tools that get the job done, but the model breaks if you look at it funny. It is all too easy to fall in this trap unless management is willing to pay for models that are maintainable and not just quickly moved out the door. Asking engineers to use the "principle of least astonishment" and develop work others can maintain is difficult at first. It's inculcating the idea of model maintainability and then supporting it against budget constraints that takes the work.

I consider maintainability and readability related, but two different things. I'd expect maintainable code to be readable. However, I've encountered code bases that were easy to read, but hard to maintain. I mean that the code was hard to change after new requirements came up.

So when writing some code, a programmer needs to have a good understanding of how the business requirements might evolve, and design the code with that in mind, so the next programmer can not only easily read the code, but also change it in the face of new requirements.

The problem is that predicting how a business might evolve is pretty much impossible. It might make more sense to architect for ease of change in general rather than ease of change into a specific direction. To do that (aside from the advice in the article), I like to imagine a scale where static/hardcoded/build-time architecture is on the left end, and dynamic runtime-manageable architecture on the right. To avoid over-engineering, the trick is to always lean as much as possible towards the left side of the scale. Only introduce runtime complexity when absolutely needed.

Communication? Maybe. But in my mind it's more of a user experience. That is, someone else picks up your "product" (i.e., code), how easy - or not - is it for them to engage with that product?

It's about good abstraction.

"If a piece of code could be abstracted, it'll eventually be extracted."

Software development is communication; writing code is a side-activity.

If you have one group of coders writing new functionality, and another doing maintenance work on the application, you have diverging interests. The "new functionality" people have an incentive to rush stuff, because bugfixes won't be their responsibility anyway. The most cynical way to do this is writing a lot of new functionality, put it on CV, and leave for another company.
