Cognitive Biases in Software Development (smyachenkov.com)
260 points by _sJiff on March 31, 2020 | 125 comments



My personal observation (and opinion, of course) is that most people simply cannot code. And I am not talking about that reverse-a-binary-tree or traverse-a-linked-list type of coding, but something more profound: they just cannot wrap their heads around a problem and create a sufficient solution with respect to the context of the code and its environment at the same time. Good software development is strongly connected with one's ability to think clearly about any given problem and to logically deconstruct it into smaller, more digestible thoughts. If there's confusion in the mind, there will probably be confusion in the code as well.

Also, it just seems as if many are unable to step back and look at the broader picture when coding, asking questions like: Are we using the right concepts here? Did we develop sufficient abstractions? Case in point: I was just refactoring a code base for a client where the code was paved with a `MemoryHolder` type, where simply `Buffer` would have been the better name for the same concept.


I don't think this is the right mental model of coding.

It seems a bit too similar to the idea that "most people simply cannot read".

We all accept that nobody can read without instruction, practice, and feedback. Why is coding somehow different? If anything, I think reading is more foreign/difficult, because coding is explicit thinking, and we all think. Whereas reading is a completely synthetic act that starts with arbitrary symbols that must be memorized by rote before you can even take the next step of using them.

None of us start out life being literate, but few people lack the ability to become literate. Why is coding different?


>None of us start out life being literate, but few people lack the ability to become literate. Why is coding different?

To elaborate on the sibling comments, coding is generative work instead of passive consumption like reading. This split becomes easier to see when looking at a bunch of domains.

Most people can learn to read a book, but very few can write a good book. Similarly, it's easier to re-tell someone else's funny joke than to be creative and write original comedy.

Many people can learn that a basic physics equation like "f=ma" means "force equals mass times acceleration", but fewer are able to be generative and discover new physics equations that are accepted by the scientific community.

Most people can listen to music and appreciate it, but a smaller percentage can play instruments. And then within the set of musicians, an even smaller percentage can compose new original music. There must be something more to it than "training and practice" to explain why a 19-year-old Chopin could compose sophisticated piano works while most 70-year-old professional concert pianists, who have performed for decades with more repertoire than Chopin ever had, have written no notable original music.

Coding may not be as hard as creating "Theory of General Relativity" but it's not the same as reading literacy.


Generative work is not all created equal. Most of the programming we do is like applying physics equations in practice. A tiny portion is truly on the cutting edge, more akin to physics research.

Likewise, far more emails, discussion comments, and school reports are written, by volume, than masterpiece books.


I'd be willing to bet that

{people who can look at a real world problem, choose an appropriate physics model, and apply the right equations, correctly}

and

{people who can look at a real world problem, choose an appropriate logical model, and apply the right language/frameworks/code, correctly}

are probably really quite similar sets.


I look at a coding problem and think: what problem did I or somebody else solve in the past that looks similar to this new problem? What's different about this new problem? What's the same? In that sense coding is more like reading, in that it is about pattern recognition and "reading the pattern", than it is about creating something entirely new.


> It seems a bit too similar to the idea that "most people simply cannot read".

Most people can read in the sense that they can translate text into sounds. However, most people cannot read if we require that they accurately comprehend what the text says; just look at the results of reading comprehension tests, where a majority score horribly.

> Why is coding different?

You can get by just fine in life without accurately understanding text, but you can't get by just fine as a programmer without accurately understanding what code does.


I think the more apt analogy would be that most people can follow a recipe, but not many people can invent new recipes from scratch (let alone achieve a specific food effect with said recipe).

Most people can modify a recipe, if the modification is natural or easily comprehended. For example, adding extra ingredients to flavour a food. But to invent one from scratch would be akin to creating a new type of food, or new way to cook, or a new way to combine cooking methods.


Elaborating a bit more on this, I have observed that, again, those _most people_ don't care much about reading (good) code from other people. There are many outstanding examples of code quality out there. For instance, I've found many good patterns and solutions by simply reading the Chromium source code, many of which I can apply directly to similar problems I'm facing myself in any given software project I'm doing.

The same holds true for literature, I think. Surely, there are many people who can read and write a piece of text. However, there surely is a qualitative difference between a gossip article in a magazine and Shakespeare's Hamlet.


> I have made the observation that again those _most people_ don't care much reading (good) code from other people

Yes. Lack of reading is observable even inside code bases themselves.

Some contributors are more prone than others to not reading the codebase they're working on: they break patterns, use different file naming and folder structures, disregard currently existing code and re-implement things from scratch, use a completely different code style, use tabs instead of spaces.

I don't think it's fair to assume that those people are doing this out of malice, or even to push their own style, because as soon as you mention that to them, they admit they were just not paying attention.


Simpler still: they have a style and don't have the ability to change it on a dime. So they continue as they're accustomed to.


Wanted to point out that the parent is referring to people in occupations where "coding" is a requirement, and probably all of the people he is referring to have had training, even extensive training, in it.

I do not think parent poster is referring to junior developers or just people on the streets.

Even though they work as developers and have experience, they still don't get it. Just like people who mechanically read symbols and turn them into words but don't get the meaning behind paragraphs or whole stories.


> It seems a bit too similar to the idea that "most people simply cannot read".

Remember that most people require extensive schooling for the better part of a decade in order to learn how to read.


> similar to the idea that "most people simply cannot read".

I think you're probably right, but in a strictly semantic way: I think it's feasible that nearly everybody (excluding, say, the severely mentally handicapped) could learn to program to a reasonable level of proficiency, with enough effort. However, most people won't, so it ends up being the same as if most people can't.


HN is a self-selecting population of higher-than-median-IQ people with specific interests. Outside of that population, skill levels look like this:

https://www.oecd.org/skills/piaac/Country%20note%20-%20Unite...


People think differently, and the higher up the abstraction level you go, the more diverse it gets. For example, some people think in shapes and will go crazy if you don't format the code in the same shapes. Others think in words and write the code as if they were talking to a human. Others have images in their heads and don't really care how the code looks. Some people annotate their code so that it looks like LLVM. Others think of real-world items like containers, ships, streams, etc.


I have a coworker that names variables after Marvel and Disney characters.


After being tasked with the same repetitive job over and over I became very bored, and to make it fun I changed a variable name in one of the repos. The funny name stayed there as some sort of artifact and probably still haunts the poor people tasked with maintaining it. And it has become a war story about that time when one of my colleagues was assigned a routine change but for the life of them could not figure out what was wrong, only to find out that this repo had a different variable name...

So your colleague probably needs more of a challenge. Put him on binary (assembly) optimization or something, where he doesn't have to come up with variable names.

That said, naming things is probably one of the most difficult tasks in programming.


Do we have an analog to Occam's razor for can't/won't? It sounds like your colleague has an obscure sense of humor or has simply checked out.


I'm not proud of this but I once temporarily named an exception class after a cartoon character and it kept popping up in server logs for years.


Depends on the definition of "can"

I think most people have the potential to be able to code. By "can" I mean: if they applied themselves to learning.

The problem is there's just no easy way to transfer the knowledge and experience of what it means to actually code. The Dunning-Kruger effect is in full force because you can't tell whether you can code until you can. Moreover, almost every technology today must cater to complete beginners, so it is easy to write a Hello World, but then nothing tells you how to put a real solution together. There are nuggets of wisdom out there, but you need some prerequisite knowledge and experience to make use of them.

It is easy to see that I can't sight-read music. I mean, I am an amateur flautist, but for some reason I just can't "get it" (though I also did not try very hard). I know sight reading is possible because there are a lot of people who can just take a printout and start playing it without having studied it beforehand.

The same is not true of programming. It is not easy to watch people program -- you only see results, which you cannot understand until you can actually code yourself. You can appreciate music without playing it, but you can't appreciate code until you can code.

It does not help that the whole industry is crazy right now. There is a very small proportion of people with actual experience because of the exponential growth in the number of developers. Salaries are inflated and newcomers are getting paid as if they were experts in other industries. If you are getting paid a lot, it means you must be valuable, no?


> I think, most people have potential to be able to code. By "can" I mean if they applied themselves to learning.

My time spent teaching makes me think otherwise. I saw so many smart, motivated young people who simply could not get their homework done. This was especially discouraging since my alma mater has a very strong focus on pedagogical programming.

There's something about programming that "clicks" for some people. They can look at the little pieces and immediately understand how to put them together. If you don't have that spark it's going to be an extremely challenging journey.


> If you are getting paid a lot it means you must be valuable, no?

And if you actually are experienced and try to offer a newcomer some guidance in writing code that they won't regret having written a year from now, they'll accuse you of being a "perfectionist" who's wasting time trying to obtain some theoretical optimization.


Unfortunately that mentality isn't limited to newcomers. There are plenty of developers with many years of experience who still practice the same short-sightedness.


Although somewhat amusing, I'm assuming there's more to 'MemoryHolder' than just an inappropriate name. What was wrong about it at the type level that illustrates your point? Or do you mean to say that it's (somehow) illustrative of a lack of conceptual understanding?


I'm curious about the same thing. With my Ada background, I see a completely sensible interpretation of MemoryHolder:

A Holder type is a managed (RAII/garbage collected) wrapper that has a reference to some unmanaged/primitive object. If that primitive object is some notion of a contiguous region of memory, then MemoryHolder is a perfectly good name for it.

A name like "Buffer" might not automatically tell the users that the type is managed, or that it wraps some underlying primitive type. That may or may not be relevant to the user, but it's not obviously always irrelevant, at least.
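To make the distinction concrete, here's a rough Python sketch of the Holder idea: a managed wrapper that owns an unmanaged resource and releases it deterministically. The names and the mmap-backed region are illustrative assumptions, not from the original codebase.

```python
import mmap

class MemoryHolder:
    """Owns a contiguous region of memory and releases it deterministically."""

    def __init__(self, size: int):
        # -1 asks the OS for an anonymous mapping of `size` bytes.
        self._region = mmap.mmap(-1, size)

    def __enter__(self):
        return self._region

    def __exit__(self, exc_type, exc, tb):
        self._region.close()  # deterministic release, RAII-style
        return False

with MemoryHolder(4096) as region:
    region[:5] = b"hello"
    print(region[:5])  # b'hello'
```

The name `MemoryHolder` then communicates "this object manages the lifetime of a memory region", which plain `Buffer` would not.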


Thank you for this insight. I wasn't aware that this was a reasonable name when coming from Ada.


The Fizzbuzz test sounds like a joke when you first hear about it, but it's incredibly effective at weeding out these people.
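For anyone who hasn't seen it, the entire test is: print the numbers 1 to 100, substituting "Fizz" for multiples of 3, "Buzz" for multiples of 5, and "FizzBuzz" for multiples of both. One straightforward Python version:

```python
def fizzbuzz(n: int) -> str:
    """Return the FizzBuzz word for n, or n itself as a string."""
    if n % 15 == 0:        # divisible by both 3 and 5
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)

for i in range(1, 101):
    print(fizzbuzz(i))
```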


I don't think the parent is complaining about that sort of thing, they just have a higher standard. You will disagree with a lot of naming decisions from people you work with, and they all pass Fizzbuzz.


I think we should strive to make more ways for people that "can't code" to be able to contribute. It's hard. I've been thinking recently that there might be an additional lesson behind Conway's Law; that the interface and character of a system mirrors its creator(s). This means if you hire super smart people to build something, it's going to require super smart people to interact with, operate, and maintain.


I have noticed that adept software developers can be more patient with tools that require fiddling, especially if the fiddling feels like challenging but rewarding puzzle solving in a sea of powerful options.

Unproductivity that feels viscerally very productive.

Alternatively, command line tools tend to be created and increase the productivity of the more technically inclined, but are opaque to the less technical.

Which effect is the GibbonsRCool's Law? Or both? GibbonsRCool's Laws of Skill Drag and Acceleration?


There's also cognitive bias bias - prematurely jumping to the conclusion that some opinion is simply based on cognitive bias, and should therefore be dismissed or contradicted. Basically, it's good to use awareness of cognitive bias to moderate your own thinking, but if you signal to others that you think they are labouring under some bias don't be surprised if they shut down the discussion ASAP.


I agree.

Criticizing someone for being biased is an ad hominem; dismissing their arguments because of it is committing the genetic fallacy.

In the extreme, it reminds me of this blog post, about how knowing about biases can let some folks universally dismiss others they disagree with (and thus manage to never learn or consider alternative ideas): https://www.lesswrong.com/posts/AdYdLP2sRqPMoe8fb/knowing-ab...



I work as a "consultant", building cloud based business software. Above all else, my job is to deliver value. Something I constantly struggle with is "clean code" vs just pumping out a mostly static transaction script and moving on.

More and more, I write code I'd almost be embarrassed for colleagues to review. But for the type of work I'm doing (poorly defined, highly volatile, potentially short lifespan), I can't justify anything more.

It doesn't feel good and it's still hard to accept that I'm doing the right thing.

If the language I'm working in supported FP, then my world would be much better.


I've exactly the same problem with perfectionism. It's not what the industry wants.

And I mean anywhere: "If the language I'm working in supported FP, then my world would be much better" :) I've seen the fuckups that are possible in FP, and indeed worked with a guy who used it to make things as complex as possible. You can produce really nasty code with it, because its much-touted compositionality, in the hands of a dickhead, can generate horrors. A map within a map composed with a reduce composed with... all in one statement.
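A contrived Python illustration of that kind of one-statement composition, next to the boring version a reviewer can actually scan (the data is made up):

```python
from functools import reduce

# Each order is a list of (item, quantity) pairs.
orders = [[("a", 2), ("b", 3)], [("a", 1)]]

# The "clever" one-statement version: a map inside a reduce inside a lambda.
total = reduce(lambda acc, xs: acc + sum(map(lambda kv: kv[1], xs)), orders, 0)

# The same computation, unrolled so the intent is obvious at a glance.
plain_total = 0
for order in orders:
    for _item, quantity in order:
        plain_total += quantity

assert total == plain_total == 6
```

Both compute the same number; only one of them can be read without mentally unwinding three layers of composition.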

Add in statelessness, which can complicate things in some cases; if said prat pushes statelessness because of the latest blog article he read, it can get even worse.

From memory: "against stupidity the gods themselves struggle in vain".


Nice quote from Friedrich Schiller. When I searched for it out of curiosity, I found another statement of his that seemed appropriate for the topic at hand:

"He who considers too much will perform little."


> "He who considers too much will perform little."

The perfectionist's curse? I have to say there's a core of truth to that! Analysis paralysis IOW.


This is why procrastination is more widespread among high-IQ individuals.


I recently built a system under constantly changing directions. I put all my calls into a single controller, so there were some tens of methods in that one controller.

It felt nasty, it felt ugly, but when the time came to actually pin down the desired functionality, take a breather, and comb through the system, it was great to have a single place to extract common functionality from. At that point, it was obvious what needed extracting.


This is exactly the attitude we need. Make it work, make it right, make it fast. In that order.

It's sad that too many people are so afraid of technical debt that they demand perfection at first try. At their first try, they don't even know what perfection is.


I've been doing Go for a while now, and more and more I find myself just writing out really dumb looking code. Add a new line, write another short-lived variable, treat the guy reading it - me - like he's a complete idiot. I don't work in a team at the moment, but what I do now is the groundwork for something that will likely be used for the next decade (UI work, back / front-end).


> really dumb looking code

You cannot imagine how I'd value having really dumb looking code in my life.

Yet here I am, swimming in a swamp of cleverness that I can't refactor and will never be able to wrap my head around.

I envy you.


Both "dumb" and abstracted code can be hard to read and change. The most important part is consistency.

It is generally easier to write dumb code that is consistent. It requires less mental energy and is often a very good starting point for refactoring into something more declarative and DRY.

The pain comes from inconsistent code, often either based on wrong previous assumptions, premature optimization or time pressure.
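A made-up Python example of the point: when "dumb" code is uniformly repetitive, the later refactor into something DRY is mechanical.

```python
# "Dumb" but consistent: every field gets the exact same treatment,
# spelled out line by line.
def parse_user(row):
    name = row["name"].strip().lower()
    email = row["email"].strip().lower()
    city = row["city"].strip().lower()
    return {"name": name, "email": email, "city": city}

# Because the repetition is uniform, folding it into a DRY version
# is a trivial, low-risk change.
def parse_user_dry(row):
    return {key: row[key].strip().lower() for key in ("name", "email", "city")}

row = {"name": " Ada ", "email": "ADA@example.com ", "city": "London"}
assert parse_user(row) == parse_user_dry(row)
```

Had the three fields each been normalized slightly differently (the inconsistency case), no such mechanical refactor would exist.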


Clear and concise isn't dumb. It's more readable and maintainable. The truly dumb code is the clever code that no one understands, which takes weeks of effort to write and to understand, yet does no more than the clear and concise code does.


I also write the most dumb and simple code, and if it works, then great; I don't touch it until it needs to be optimized or, more likely, removed.

Sometimes I don't really know what I'm doing, just getting all tests to pass. Then once it's working, when I know why it works, I will clean it up by removing unnecessary variables, double negations, and name the magic numbers, etc, and try to make it as simple as possible.
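A hypothetical before/after of that cleanup pass in Python; the function, the magic number, and the flag names are all invented for illustration:

```python
# Before: works, but the 5 is a magic number and the condition is
# negated twice before returning a literal True/False.
def can_retry_v1(attempts, disabled):
    if not (attempts >= 5) and not disabled:
        return True
    return False

# After: the magic number is named and the boolean logic is
# straightened out, with identical behaviour.
MAX_ATTEMPTS = 5

def can_retry_v2(attempts, disabled=False):
    return not disabled and attempts < MAX_ATTEMPTS

# Same behaviour across the inputs we care about:
assert all(
    can_retry_v1(a, d) == can_retry_v2(a, d)
    for a in range(10)
    for d in (True, False)
)
```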


> poorly defined, highly volatile, potentially short lifespan

This. This is the reality of software development, especially in a startup. You have to quickly adapt your code to the <not well defined> client's needs and to the business constraints, because if there is no cash, the company wouldn't exist anyway.

You can't do that with "perfect" code.


> You have to quickly adapt your code ... You can't do that with "perfect" code

Actually, I would say that the exact opposite is true: only perfect code is perfectly adaptable. The times I want to yell at my past self are the times that he wrote something quick and dirty that I now have to rewrite because the requirements have changed.


>If the language I'm working in supported FP, then my world would be much better.

Assuming FP means functional programming and not function pointers: what language these days doesn't support functional programming? Are you coding in AWK? Seriously, almost every language but C supports higher-order functions, anonymous functions, and recursion (though recursion is an anti-pattern and iteration is superior).
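For concreteness, here are those three features in Python, with recursion shown next to its iterative equivalent (the example data is arbitrary):

```python
# Anonymous function passed to a higher-order function:
doubled = list(map(lambda x: x * 2, [1, 2, 3]))

def total_recursive(xs):
    # Recursive: compact, but each element adds a stack frame,
    # so very long lists hit the recursion limit.
    return 0 if not xs else xs[0] + total_recursive(xs[1:])

def total_iterative(xs):
    # Iterative: same result, constant stack depth.
    acc = 0
    for x in xs:
        acc += x
    return acc

assert doubled == [2, 4, 6]
assert total_recursive([1, 2, 3]) == total_iterative([1, 2, 3]) == 6
```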


This is a good essay, and I think one of the main lessons is that programming is a small part of software development.

The question of whether to use an inefficient, readable implementation or an efficient, cryptic implementation can mostly be solved with the correct class topology. If classes are SOLID, then it is easier to justify a cryptic implementation of one method as that is hidden from most other developers. A class that will be inspected by others more, such as a business logic service layer, should lean towards inefficient, readable code.

Likewise, the question of whether to reuse or roll your own can also mostly be solved with a good architecture. Logically separating components makes it easier to tailor the implementation to the use case, and replace if necessary.

As always, breaking the problem into smaller problems is most of the battle.

Sadly, software development as it is taught and measured by interviews is mostly about programming and algorithms. The real art and value in software development is interface design and system architecture design.


Personally I found that my productivity as a software developer was much higher when I was young and didn’t think so much about the cleanest code or the perfect architecture for a problem and instead just tried different approaches. Now I often feel paralyzed by having to “get it right” from the beginning.

Valid (in my opinion) lessons I cherish today are keeping individual parts of a code base simple and understandable, documenting and testing a lot and preferring simpler solutions over complex ones.


This is exactly where I have ended up. And I am happier for it. My code is simpler and less "clever" per line, but far wiser in structure and in use.

I abstract far more carefully now and I don't mind a few extra lines of code that make it easier to digest at a glance.


Over the past few years I have grown disillusioned by the seemingly hyper focus on the clean code concept. It seems perfectly logical to have readable code. What bothers me is the excess we programmers take the concept to.


I think what's worse than that obsession, and perhaps is actually what you meant, is the concept of "elegant" code.

Elegant code is easily just as bad as spaghetti code. I can't even begin to quantify how many hours of my life were wasted because someone thought it was more important to make something elegant rather than understandable. I get that it's satisfying to make something that "makes sense" if you've built a mental model of the problem from the beginning, but if it's incomprehensible to others (without serious devotion to figuring out what is going on) then it might as well be crap in the first place. At least spaghetti code can be fairly easy to hack, because there's usually lots of duplication and specialized code, making it straightforward to step through with a debugger and make a change without mysteriously borking the entire app. Elegant code, ironically, can be more flimsy, because it's usually written assuming that the system stays perfect, and changes to the system reveal single points of failure.

Clean code can be written without going overboard with elegance to the point where it's no longer easy to understand. Even dirty code can be workable given documentation (which can just be comments explaining what things do) and consistency.


I always thought "elegant" code was clean, concise and understandable.

Edit: Stack Exchange seems to agree that readability is part of what makes code elegant. https://softwareengineering.stackexchange.com/questions/9791...


Clean code has always meant simple and easy to understand, not elegant.


I don't think they are mutually exclusive.

In a recent interview I was asked to find the largest three numbers in a Python list. My first attempt looped through the starting list, comparing each number to the previous minimum in the result list and replacing it if it was larger. I needed a get_minimum() function. It was the obvious solution that came to my head at the time.

An hour later I realized a far more elegant solution was to sort the list and slice the last three elements. It felt far more elegant to me and was as easy to understand as the initial obvious solution that came into my head.

(Though this is why I dislike these types of interview questions: the first solution in an interview situation is not always the best or final one.)
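Both approaches sketched in Python (the sample data is invented); the standard library's `heapq.nlargest` is a third option that avoids a full sort:

```python
import heapq

data = [4, 1, 9, 7, 3, 8]

# The sort-and-slice version described above:
largest_three = sorted(data)[-3:]        # ascending: [7, 8, 9]

# Stdlib helper: O(n log k) instead of O(n log n), descending order.
also_largest = heapq.nlargest(3, data)   # [9, 8, 7]

assert largest_three == [7, 8, 9]
assert also_largest == [9, 8, 7]
```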


Before I worked on firmware I worked on PCB layout. The thing you learn from that is you can keep fucking with the layout forever, and it is simply pointless from a business perspective. Spend an extra day 'cleaning up' the layout? Well, sweet, now your delivery has slipped a day.

There's also the experience of refactoring away a wart only to find there is now a wart somewhere else.


The difference with PCB work is that it is normally a one-off job. If you are building a one-off application or a script, clean code is not very important.

If you are building a system that will be used for years and needs to be extensible, then it makes much more sense. Just like it would make sense if you got your PCB back and needed to change features or add new ones.


Exactly. I did layout for a while. Some boards you knew weren't going to come back. The customer was just paying for us to turn a schematic and BOM into a board with the least amount of cost possible. Quoted low. You could let the autorouter have at it. But other boards (especially finicky high-clock-speed chip tester boards) would definitely be coming back. Sure, you could let the autorouter do its thing, but when that board came back for a rev, I'd have to fuck with all of that laziness to make space for new components or reduce the layer count.


Sometimes clean code also means simple: less bloated with features that promise but don't deliver, and smaller-sized projects in general.


There's something deeply unnerving about ugly code, I don't think we can just say 'well it works'.

I think we should maybe aspire to find a frame of reference for evaluating when to change code and when not to.

During development, I find iteration can be useful, i.e. when you have code that's 'hot in your mind' and can be re-worked.

Another key that the article doesn't seem to reference is encapsulation. It's one of the most golden concerns in software.

If 'ugly' is confined to a single function and doesn't leak outside - then it's almost pointless to re-write it.

On the other end of the spectrum, if an API is 'ugly' it leaks all around the code inside and out. Re-write is more expensive, but could possibly be more worth it.


Rewriting an ugly API can be problematic if it isn't rewritten while fresh. Once it starts being referenced everywhere, good luck... I bet there are no test cases either, and the task becomes daunting and expensive.


Yes, that's the thing. Exactly. APIs are by definition leaky. Look at the Python 2 vs 3 debacle. So it has to be really well planned, and even then it's possibly not worth it.


So many good lessons for newer developers and good reminders for more experienced ones. It was due to the Bike Shedding Effect that I came to not appreciate code reviews. Every review seemed to break down into discussions of variable and method names. I tried several times to create "off topic" rules like no discussion of variable names, etc. It didn't work...ever.

Now that I've been away from mandated code reviews for quite a while, I can understand why reviewers go for the trivial. Firstly, "code reviews" are really "system reviews". It's difficult reviewing a project whose requirements you don't know, with a design that came about iteratively, possibly in a language you don't use.

My current team is small and split into an integration group (mine) of two people and a UX group of two people. My group partner and I look over each other's projects and stay up on the requirements as the project progresses. But it's completely informal. Yet it seems to work better than having a group of 2, 3, X number of randomly chosen developers from my department.


That’s why pair programming works better than code reviews. Unless you have crazy amounts of time there is almost no chance a code reviewer can really understand the design of the change. So you end up with people picking on very local stuff like variable names. With pair programming you have two people who understand the history of the code and why decisions have been made.


But seriously, naming things correctly makes understanding the global system so much easier.

The whole point of naming things is that you don't have to dive into that 60-LoC function; you can use its name to know what it does and build a global mental model.

If you're not naming things correctly, you effectively make it almost impossible to review the design, and that's why people start by asking about names.

Reviewing names is the exact opposite of picking at local stuff. Names have little importance locally, but they have tremendous importance globally.

So, please, use good names ^^


Totally agree about the importance of naming. But the big problems that are hard to fix later come usually from design flaws, not from naming and often are overlooked in reviews.


I agree with you. My point, if it wasn't clear, is that I think correct naming is a prerequisite to finding design flaws in review! So we have the same ultimate goal :-)


I don't find that to be the case. I'm usually looking for bigger things like: 1. making multiple trips to a relational database when a single statement joining tables could do it in one trip; 2. not handling exceptions, which can cause data loss or data corruption.

(I'm not trying to be exhaustive. The examples above are common ones I've seen regularly and they make me cringe.)

I don't need to know the variable names used in these cases. The very act of not using a relational database properly or not handling exceptions is easy to spot.
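A hypothetical SQLite sketch of the first point: N+1 round trips versus a single joining statement (the schema and data are invented for illustration):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE users  (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, user_id INTEGER, total REAL);
    INSERT INTO users  VALUES (1, 'ada'), (2, 'bob');
    INSERT INTO orders VALUES (1, 1, 9.5), (2, 1, 3.0), (3, 2, 7.25);
""")

# N+1 trips: one query for the users, then one more query per user.
naive = {}
for user_id, name in db.execute("SELECT id, name FROM users"):
    rows = db.execute("SELECT total FROM orders WHERE user_id = ?", (user_id,))
    naive[name] = sum(total for (total,) in rows)

# One trip: join and aggregate in a single statement.
joined = dict(db.execute("""
    SELECT u.name, SUM(o.total)
    FROM users u JOIN orders o ON o.user_id = u.id
    GROUP BY u.name
"""))

assert naive == joined == {"ada": 12.5, "bob": 7.25}
```

Same answer either way; the difference is one round trip instead of one per user, which is exactly the pattern that's easy to spot in review.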


That doesn’t sound right. How big are the changes you’re reviewing?


Sometimes small and easy, sometimes big and impacting the whole system. Generally small changes are preferred but that’s not always possible.


We generally review anywhere from 50 to 300 lines per diff. An unfamiliar reader may not be able to anticipate spooky action at a distance in other parts of the codebase, but a good design doesn’t permit much of that anyway. The reader can certainly understand what’s going on in the change, and if they don’t they shouldn’t approve it.


Some of our changes are bigger if they touch several systems. You have to have very good knowledge of each system to make an assessment of the impact of the change. And nobody has time to really look into this. You don’t get much credit for code review on the scrum board.


I think code review is still valuable, but I agree that the majority of actual reviews are surface level and unhelpful. On my team right now there are like three different people who all have different opinions on how code should be written, and all bikeshed it differently. It's really interrupted my flow lately, because I'm thinking less about what to write and more how I can do it in a way that will appease the reviewers. And then I get differing feedback anyway and have to spend my time mediating between the reviewers who don't even agree themselves.


I call this, Developer Inertia: It's hard to learn others' code, but it's easy to fix bugs in others' code. It's easy to write your code, but it's hard to fix bugs in your code.

It's good to have someone else looking at your work in established projects. Pair programming provides some value to overcome this problem.

Overcome your weakness, you'll become a better developer.


You call what? Can you be more specific?


I think the GP's

> I call this, Developer Inertia:

should read

> I call this "Developer Inertia":


Naming stuff is not trivial, and it's not bikeshedding.


There are two difficult problems in software engineering: cache invalidation, naming things, and off-by-one errors.


I can vouch for problems #0 and #1 but I've never experienced #2.


There should be naming guidelines just like how a team should have style guidelines. The discussions should happen when creating (or updating) these guidelines.

Then these guidelines should be followed, no need for long discussions during code reviews.

Variable name does not follow guidelines => mark it as an issue, defer to the guidelines, and move on.


Some people will just use it as an excuse to go on and on about the naming issue. Worse than the "80 columns is the limit" people, because after all 80 columns is 80 columns, but there's no "ideal name" for some people.


No. The purpose of a name is to convey meaning to the reader. There is exactly one valid complaint about a name: “I’ve read this carefully and I still don’t know what it means.”

A standards-compliant name that interferes with understanding is not okay. A name everyone gets, which happens to not follow a rule in some rulebook, is probably fine.


No, sorry. Wasting too much time is BS.

Use a good name and be done with it. You're naming a variable, not the title of your Magnum Opus.

Edit: I think people are reading too much into the initial dismissiveness and not going past the 1st line.

More important than picking the perfect name:

- Being consistent with the naming (you called it a 'bolt'; keep calling it a 'bolt' for the same type of object and for its life through the code path. If 'bolt' is not a great name, you can change it later.)

- Being easy to remember and type. There's no point in having a TurningWheelThatGoesSqueak, a TurningWheelThatGoesSquak, and a TurningWheelThatGoesSquewk; this will only make the developers go crazy. Simplify.

And yes bikeshedding is bad. I just find it too ironic that parent's username reminds me of the place that took bikeshedding to the extremes.


> No, sorry. It's BS.

Way to open up a constructive discussion ;)

Any extreme of naming is bad. Spend zero time thinking about naming and you end up with a codebase where the same thing is called differently depending on how the person felt that day. Or core data structures are called "node", "element" or "link", words that are already overloaded and should be avoided.

On the other extreme, thinking too far about naming leads into bikeshedding and no work getting done.

So as with many other things, balance is the thing that gets you the furthest.


> You're naming a variable, not the title of your Magnum Opus.

You're naming a variable, and that name is probably the most important documentation about that specific point in the code. A lot of the time a useful name will be quite obvious, so you should use one. If you see code that has variable names like a, i, myVar, value, etc., then it's usually a sign the developer didn't think very hard about the code they were writing. Using a name that gives some context to the data the variable should hold is massively useful to the next developer to work on the code (which is usually you, so you'll be thankful you did).


'a' and 'i' are perfect for indexes or range traversals. Yes, myVar is bad.

You can always read the context and see 'for value in list_of_values' or understand what it is that you're working with right now.

Yes, be more explicit on the tricky parts, but sometimes i = i+1 is just fine.


> 'a' and 'i' are perfect for indexes

Except they're not as good as 'index', which is obvious and provides some meaning, so why accept a single-character name?

I have an eslint rule that blocks single character var names on my projects. It makes my team develop good habits, and no one has ever complained about it. Our code is very readable.


> so why accept a single-character name?

Because readability suffers with repeated long variable names

There's a reason why math uses i,j,k and x,y,z and it's not to be petty.


There's also a reason why a lot of people find equations incredibly hard to parse.


This is a nice semi-random deeply nested spot to observe how the topic of naming is so cognitively attractive.

Even in the meta-sense it is being discussed here.

There is something about naming that we like thinking about, and fine tune our thoughts on, more than a simple need for legibility.

There is a sense of ownership in picking a name.


Are you awful or just trolling? I've said that naming is very important, but I didn't suggest that people should choose a longer word when two are equivalent. A real way to give 'more meaning' to the index is to indicate which collection it belongs to.


What is a good name?

Sometimes the failure to find a good name (and taking 30 minutes to name your variable) means your abstraction is not good and/or your variable encloses multiple concepts.

Of course, a short-spanned internal-only variable used just once or twice doesn't deserve a 30-minutes debate; OTOH a public variable which could be externally exposed by a class could require a bit of thinking.


Elements of Clojure is an excellent book that goes through what a good name is and what it should represent. Don't be put off by the "Clojure" in the title, the book is really not about Clojure but about programming, using Clojure for its samples.

You can read a sample chapter online, which is the chapter about naming. Highly recommended! https://leanpub.com/elementsofclojure/read_sample


  // TODO it works, but it's ugly, rewrite
  function init() {
      // some code
  }
What is ugly, and what would a rewrite accomplish? An excellent example of bad commenting. Comments are ideally unnecessary, so bad comments just litter code and stink it up even more. No code will live forever anyway, and it says something about someone when they falsely believe in perfection.


I write similar comments sometimes, although a little more descriptive. It's to acknowledge that this snippet has some issues and could be improved, but for whatever reason not now.

I also go back sometimes and do just that. Without the comment, I forget 95% of the time.
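As a sketch, a slightly more descriptive variant of the comment from the article might record what is ugly and why it was left that way (the specifics below are invented for illustration):

```javascript
// TODO: init() mixes config parsing with DOM setup; split it into
// parseConfig() + render() once the new settings format ships.
// Left as-is for now because the old format is still in production.
function init() {
    // some code
}
```

Even a one-line reason and a concrete follow-up action make it far more likely the comment gets acted on later, rather than silently accumulating.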


Worth the click even to just skim the hilarious images that illustrate the points.


I think line anxiety might be another one. Unless it's just me. If I feel that a function or a file is irregularly long compared to the others, I find myself asking if I need to break it up to be more abstract.


Definitely not just you. To quiet such anxiety, I usually ask myself whether the length of the function is affecting the readability and speedy understanding of the logic. Longer functions are not necessarily bad, but if they pack in too many logical units and too many scoped variables, they could be improved.


> When you look at complicated systems and clever solutions - most likely it took a lot of time and resources to implement it, but all you can see is the smooth result. It is easy to fall into this fallacy because complex solutions always require a lot of work, testing, and iterations.

I used to have this problem all the time with the managing director:

MD: Company X has a really nice and simple help system. Why can't we do the same?

Me: We can, but we need a product owner, a designer, a developer and a tester full time on this for at least a couple of months.

MD: I don't understand why we can't just copy what they did.


"MD: I don't understand why we can't just copy what they did."

What was your reply?


We can. They have a product owner, a designer, a developer and a tester full time.


If the author reads this: Dark theme completely FUBARs the code snippets.

Sample: https://i.imgur.com/QUO48fn.png


I get the idea of ios/android apps supporting dark theme, but since when did this concept apply to websites?


Through the prefers-color-scheme CSS media query [1], a draft [2] that is already implemented in mainstream browsers.

Websites can write stylesheets specifically for light or dark preferences. Browsers currently derive this information from the operating system's preferences.

[1]: https://developer.mozilla.org/en-US/docs/Web/CSS/@media/pref... [2]: https://drafts.csswg.org/mediaqueries-5/#prefers-color-schem...
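For example, a minimal stylesheet honoring the preference might look like this (colors are arbitrary):

```css
/* Default (light) styles, overridden when the user's
   OS/browser preference is dark. */
body {
  background: #ffffff;
  color: #111111;
}

@media (prefers-color-scheme: dark) {
  body {
    background: #111111;
    color: #eeeeee;
  }
}
```

Broken dark-mode rendering like the screenshot above usually happens when a site overrides some colors (e.g. the background) inside the dark block but forgets others (e.g. code-snippet foregrounds).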


How did you change theme to Dark?


Happened automatically on my device, probably because I have macOS running in dark mode. It's a feature. Switch your OS to the color scheme and language you prefer and websites / apps should adjust accordingly. Should. We're not quite there yet.


You set your OS' system theme to Dark, and the browser should pass this information on to websites.


Not all code is critical and intended to be maintained forever. Not everyone is working at Google, Amazon, etc.; most code I write is not run daily by 1000s of users.


Maybe there are just lots of FAANG engineers on HN, but it's amusing how many people overestimate the size and scope of the projects most software developers work on. There's so much discouragement and pessimism if software wasn't written to handle millions of requests, or intended to live on for 15+ years.

On the contrary, there's software written all the time that has a realistic lifetime of a few years before it's either replaced with a 3rd-party alternative or rewritten by some disgruntled programmer. Deadlines and business priorities often mean that foregoing what most of us would tout as "best practices" actually makes sense. Some software is only used by 10 people at a time and can be maintained by one or two engineers. It probably makes more sense for the engineers to choose what's best for them and the company over what most people would say are the right things to do. There are people I know who don't write tests for their software (something I don't know that I would ever do), but each piece of software they write is small and has a lifespan of a few years at most, and the advantage of removing barriers to development is that they can make changes very quickly and get things done.


Yes and no. I'm guessing most of us here have at some point inherited a codebase that wasn't intended to live longer than a few years or serve 1000s of users but nonetheless has. I'm not saying everything needs to be overengineered to FAANG scale, but don't go too far the other way either; think of the person who might inherit the codebase.


But how would the developer know the lifetime of the code? How would the developer know if the code will never be extended, or will remain stale for the rest of its life?


Exactly this. I'm working on a codebase right now that looks like it was written by someone who had barely finished school, figured out how to use a function in JS to produce the HTML for an input element inside of an XHR response callback, then just kept reusing that over the next eight years leading to multiple >10K LOC files. I'm not sure the original author ever intended the application to be around that long.


But those two things are not related - my experience shows that a giant amount of code WILL run forever. That quick hack you did? In production for 6 years. That quick "demo" you threw together? People will curse you for the next 10 years.

If anything, FAANGs have the resources and inertia to actually work on technical debt. Smaller companies will just churn and churn and churn until they bog down into an unmaintainable mess of code and can't respond to market changes anymore because their codebase is impossible to change or maintain.

Then they'll usually call us to fix their issue and be angry when the answer is "there's no easy way out of the mess you caused" while their competition is moving ahead of them.


But... why not just write the best code you can anyway?


I've been putting off writing about similar subjects. This is a favorite genre for me.

I often consider path dependence in product development. The decisions we face for any given circumstance are limited by prior decisions and experiences, even though past circumstances may no longer be relevant.


Also about trade-offs of clean code https://overreacted.io/goodbye-clean-code/


Image selection for the article is on fleek!


Best laugh I've had all day.


by who can pay?


The entire human race is subject to cognitive bias: you can't have Christians and Muslims be right at the same time. One population of religious people must be wrong, and therefore a huge portion of the population must be cognitively biased to an extreme degree, believing in something completely wrong.

It is part of human nature to be hugely biased to a completely absurd degree and software engineers and scientists are not immune to this effect.

The worst place I've seen this (judging to the best of my ability, as I'm cognitively biased too) is in design patterns.

Typical scenario:

Object instantiation requires 50 parameters to be fully realized. Some coworker suggested that we instantiate an object with empty members and create 50 methods on an object that each take one parameter to load the object.

Apparently having a function that takes 50 parameters to create the object was a code smell because it didn't have a fancy name. That fancy name was "builder pattern."

If you can't see how utterly stupid builder pattern is for this case... well... cognitive bias.


> If you can't see how utterly stupid builder pattern is for this case... well... cognitive bias.

You cannot win against the prevailing wisdom. Try having a public mutable field on a class, and watch people lose their ever-loving minds. But why do we have more than half of our classes exposed with getters and setters? Mmmmfh.

I think the problem is that it is really hard to reason about software in the large. So we reason about software in smaller and smaller granularity, e.g. classes. And then we want to load down every single class with armor as if it needs to be protected against the wild hordes of unwashed masses who will rampantly mutate it. But most classes naturally fit into a larger unit, a cohort of classes, and they are coupled with each other, and they mutually share state. You can't understand the whole thing by staring at a single class, anyway. But it feels like we are doing the right thing because we are told every class needs armor. Nevermind that now every class is 3x bigger, and so the whole thing is 3x bigger, which makes it 9x harder to fit in your head! Uggh!


Well, obviously the simple solution is not to use getter/setter methods but simply overload the implicit get and set operators of the field when you do need to track or react to field value accesses and changes.

Not a language option? ... Oh, so simple ... and yet, so far away.
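For what it's worth, some languages do offer exactly this. JavaScript's accessor properties are one example; a sketch with made-up names:

```javascript
// A plain-looking field whose reads/writes can later be intercepted
// without changing any call sites. `Temperature` and its fields are invented.
class Temperature {
  #celsius = 0;           // private backing field

  get celsius() { return this.#celsius; }

  set celsius(value) {
    // React to changes here: validation, logging, notification...
    if (Number.isNaN(value)) throw new Error('not a number');
    this.#celsius = value;
  }
}

const t = new Temperature();
t.celsius = 21;           // looks like a field write, but runs the setter
console.log(t.celsius);   // 21
```

Callers keep using plain field syntax, so you can start with a public field and retrofit the accessors only when you actually need to track or react to changes.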


No dude. The solution is not to use the builder pattern period. Just have a function/constructor with 50 parameters.

Why? Because the instantiated object requires all 50 parameters to be fully realized. Having to call 50 separate setters means you can create an "invalid" object where only 23 setters were called. Why allow this to happen at all? It's like a null value: why allow that value to exist on a type?

If you need all 50 parameters to be fully realized, then the logical thing to do is to only allow the object to exist with all 50 parameters in place. A constructor with 50 parameters ensures that this fact is reality. It's that simple.

Yet even you, with the answer right in front of your nose staring you in the face, still thought that 50 setters is okay with syntactic sugar! That's how powerful cognitive bias is with design patterns. "Builder pattern" is a name that lends a sort of artificial aura to what is essentially stupidly dividing up a constructor into 50 setters for no good reason.
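A minimal sketch of the trade-off described above, shrunk from 50 parameters to 3 (all names hypothetical):

```javascript
// Constructor style: an instance cannot exist without every required field.
class Widget {
  constructor(name, size, color) {
    for (const [k, v] of Object.entries({ name, size, color })) {
      if (v === undefined) throw new Error(`missing required field: ${k}`);
    }
    Object.assign(this, { name, size, color });
  }
}

// Builder style: nothing stops a caller from building half an object.
class WidgetBuilder {
  setName(name) { this.name = name; return this; }
  setSize(size) { this.size = size; return this; }
  setColor(color) { this.color = color; return this; }
  build() { return { name: this.name, size: this.size, color: this.color }; }
}

const ok = new Widget('knob', 3, 'red');           // fully realized
const partial = new WidgetBuilder().setName('knob').build();
console.log(partial.size);                         // undefined -- an "invalid" object
```

In fairness, a builder's build() step can also validate completeness; the point made above is that the constructor gets that guarantee for free, with no extra discipline required.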



