
Writing good code: how to reduce the cognitive load of your code - SarasaNews
https://chrismm.com/blog/writing-good-code-reduce-the-cognitive-load/?2
======
agentultra
Whenever I read articles like this and the ensuing discussion that follows,
I'm reminded of this quote from Dijkstra:

 _Don't blame me for the fact that competent programming, as I view it as an
intellectual possibility, will be too difficult for "the average programmer" —
you must not fall into the trap of rejecting a surgical technique because it
is beyond the capabilities of the barber in his shop around the corner._

 _Dijkstra (1975) Comments at a Symposium_

Whenever I read code written in the "so boring it cannot fail" camp I get
exhausted. This code is inherently procedural and rolls itself into a giant
ball of mud -- composition is difficult to achieve so as requirements change
the code accrues more loops and conditionals until it is nearly
incomprehensible.

It might start out neat and clean but rarely will it stay that way.

Good, non-leaky abstractions are key. This can even be achieved with
procedural code but I think functional programming techniques like pure
functions, immutable values, and a sound type system help a great deal... even
at the expense of the initial "cognitive load" it takes to learn how to
employ these tools.
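
A minimal Java sketch of the contrast (all names here are mine, not from any real codebase): an immutable value plus a pure function leaves the caller free to compose results, where a mutating version would quietly accumulate shared state.

```java
import java.util.ArrayList;
import java.util.List;

public class Orders {
    // An immutable value: all fields final, no setters.
    static final class Order {
        final String id;
        final int cents;
        Order(String id, int cents) { this.id = id; this.cents = cents; }
    }

    // A pure function: the result depends only on the inputs, and nothing
    // (including the input list) is mutated.
    static List<Order> discounted(List<Order> orders, int percentOff) {
        List<Order> result = new ArrayList<>();
        for (Order o : orders) {
            result.add(new Order(o.id, o.cents * (100 - percentOff) / 100));
        }
        return result;
    }

    public static void main(String[] args) {
        List<Order> original = new ArrayList<>();
        original.add(new Order("a", 1000));
        List<Order> sale = discounted(original, 10);
        System.out.println(original.get(0).cents); // 1000 -- untouched
        System.out.println(sale.get(0).cents);     // 900
    }
}
```

Because nothing is modified in place, the caller can hold both the before and after values at once, which is the kind of composition the mutation-heavy style makes difficult.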

~~~
mybrid
Functional programming is inscrutable by most. Berkeley uses LISP as a
flunk-out class to weed out freshmen.

I find it odd that given in the physical world people require different shoe
sizes for different feet that in the intellectual world people believe one
size fits all, or that there is ultimately a single style of code that is
comprehensible. Do you really believe our brains are all the same?

The functional programming crowd will always play a minor role in human
programming, and not because, as you suggest, you are all superior.

But rather it has to do with how brains differ. In fact I think the very real,
emotional, visceral reaction people have to functional programming is key.
People who love it, love it. Everyone else is completely demotivated by it.
Lost in sets of parentheses, as they say. There is little middle ground. The
very emotional reaction people have speaks volumes about how the brain views
programming.

Spoken language changes every day because people use words differently due to
brains being different. This vexes those who are compulsive about adhering to
fixed definitions and grammar rules because of some imagined ideal. The same
is playing out in software. We will always be creating new languages and
tweaking them, and there will be those who believe in an imaginary
ideal of what readable code is.

Ultimately I think software writing needs to mature to where written language
has matured: editors. Code review has some semblance as does paired
programming. But editing is far more involved. Editors can reject entire
writings.

So why do we need editors in the first place? Because in order to have happy
consistency, a set of rules is not enough. It comes down to style: a word here,
a name change there. It may not seem like much but in fact it is.

Given a language is Turing complete, meaning it can exercise the full
capability of the CPU and computer, then the choice of language and style is
about the human condition, a condition which is as varied as the people in it.

If one cannot write code that one finds usable then one needs to keep
searching in earnest for a better style. However, once you have a style in
which you can appreciate your own code from years ago, then it is a matter of
arbitrary standards when interacting with others. You have to draw the line
somewhere, but let's not pretend for a second that a single writing style is
ideal for everyone, or even most. It's arbitrary because without any standards
you have an unmanageable chaos.

~~~
fauigerzigerk
_> Do you really believe our brains are all the same?_

One thing that all brains have in common is that they can learn. One thing
that all feet have in common is that they cannot learn. That's why your
analogy doesn't work here.

 _> Functional programming is inscrutable by most._

As is musical notation, maths, foreign languages, CAD drawings and everything
else that has to be learned before use.

I am not a huge proponent of FP as I'm not entirely convinced that the
benefits of immutability always justify the restrictions imposed on algorithm
and data structure design, both in terms of simplicity and performance.

But one thing I am convinced of is that whatever newbies may find inscrutable
is entirely irrelevant unless you plan to cut costs by having interns write
all your code for free before replacing them.

~~~
mcherm
> But one thing I am convinced of is that whatever newbies may find
> inscrutable is entirely irrelevant unless you plan to cut costs by having
> interns write all your code for free before replacing them.

I completely agree. My standard in choosing a design or syntax is whether a
reasonably competent (but not brilliant) programmer will understand this well
if they are already generally familiar with the codebase (but unfamiliar with
this particular module) and are in a particularly big hurry.

As a specific example, I was horrified the first time I saw Java's convention
for setting fields in a constructor:

    
    
      public class SomeClass {
          private final int shoeSize;
          private final String favoriteColor;
      
          public SomeClass(int shoeSize, String favoriteColor) {
              this.shoeSize = shoeSize;
              this.favoriteColor = favoriteColor;
          }
      }
    

The constructor has a local variable with the same name as the instance
variable. The syntax works because local variables shadow instance variables
but not when explicitly referenced via "this." -- a moderately obscure part
of Java syntax.

Normally, I would object if a developer created a variable name that shadowed
another and expected the reader to keep it straight and not get confused. But
when you do this REGULARLY, it becomes just another standard idiom. Most
readers today (now that the practice has been standard for over a decade)
wouldn't even blink. A complete novice might be confused, but the complete
novice isn't my target audience.

------
ser0
Sometimes I find myself writing in reviews for less experienced developers the
comment: this is clever but not clear.

I think as developers we get too enthralled with the problem solving and forget
that in the long run we are more like journalists noting business rules at a
snapshot in time, which a future maintainer of our software must act as
historian/archaeologist in order to understand.

What's funny is that often our future self is the maintainer of our
software. However, as we lament choices in the past, we continue to write
intricate code in the name of elegance/conciseness.

These days I'm pretty pleased when I can say a piece of code utilises only
syntax and statements taught in an introductory programming course.

~~~
TallGuyShort
One of the most valuable things I got from university was my professor's
saying, "When somebody tells you your code is clever or interesting, that's an
insult."

~~~
TeMPOraL
And in my experience, like most insults, this usually says more about the
person saying it than about your code.

------
reikonomusha
Some good previous discussion: Simple Ways of Reducing the Cognitive Load in
Code
[https://news.ycombinator.com/item?id=11992684](https://news.ycombinator.com/item?id=11992684)

My comment there is still relevant here:

I've noticed recently that especially in online discussions, the term
"cognitive load" is used as a catch-all excuse to rag on code that someone
doesn't like. It appears to be a thought-terminating cliché.

There's definitely room to talk about objective metrics for code simplicity,
which are ultimately what many of these "cognitive load" arguments are about.
But cognitive load seems to misrepresent the problem; I think it's hard to
prove/justify/qualify without some scientific evidence over a large population
sample.

With that said, the article presented fine tips, but they seem to be stock
software engineering tips for readable code.

~~~
gravity13
I think cognitive load is a perfectly acceptable term here. Taken within the
context of Cognitive Load Theory, we assume that any individual can only
maintain a few items in their working memory. These items are portions of
the code that you need to think about at once in order to achieve a task, and
good code partitions off the logic so that in order to understand individual
components you only need to reserve a few slots of your working memory.

Of course this is a bit contrived, but I would argue that pretty much
everything we've come to understand as "easy to read code" reduces down to
how effectively it organizes itself given the limitations of our working
memory. And in that case, it's one of the first things you should be sure to
understand on your path to becoming a better programmer.

------
awinter-py
Hire programmers who can read.

Lines like "Don’t use tools that are still too hard to get a grip on" are code
for 'your team will get discouraged if they have to do any homework at all to
understand your project'. If that's true, how do you expect them to understand
the business requirements?

Every large project has embedded tools and legacy tricks so the author is
implicitly saying 'don't let projects scale'.

Simplicity is hard to achieve, and people who can't understand complex code
can't write simple code.

If a book hits you on the head and makes a hollow sound, it may not be the
fault of the book.

~~~
gaastonsr
This is my thought. I LIKE to read clever code. More often than not it teaches
me something. Some neat way to do x. It teaches how to write terse code and
I'm probably going to learn how to apply that to other areas of my code.

I would be worried if my code is being reviewed by somebody who doesn't
appreciate clever code.

Now, clever code is different from complex or confusing code. Clever code is a
neat way to do something. Complex and confusing code can be spotted
immediately because it does one or many of the following things:

- Functions get too nested

- Tries to do too much

- Function is too long

- Function is not broken into logical parts

- Confusing parts don't have their own function

- Long conditionals

- Non-descriptive variable names

- Modifies state all over the place

And I could go on. Those are the things I would watch out for, and they really
do impose a cognitive load on me.

Also, clever code is different from tricky code. Trying to use a programming
language quirk is crazy: you're asking for your code to be hard to read. Using
a little-known useful feature is a good way to extend your team's knowledge of
a PL.

------
jacquesm
It starts with picking the right language. Some languages help you to clarify
your thoughts and some seem to do their best to obscure whatever meaning there
was.

Then, after you've picked your language it's up to you, the programmer. And
'good code' to me translates into 'whatever is on the screen is enough to
understand the code'.

If you have to page back-and-forth all the time between different parts of a
function or between different functions or even different files then your code
will be hard to maintain, hard to read and probably buggy.

So work hard on reducing scope as much as you can.

------
mikulas_florek
I expected some science; however, it's just subjective rules backed by nothing
substantial.

~~~
klibertp
That's because no one even knows how to start doing "science" in this
direction.

When you read the code there are so many different things influencing your
understanding that it's hard to impossible to even list them all. And if you
take a look at some of the things that might influence your understanding
you'll notice that most of them are very hard to impossible to measure.

Off the top of my head, things which may have an impact:

    
    
        * your level of skill in a given language
        * your level of familiarity with the style of a particular programmer who wrote the code
        * the tools you have at your disposal (go to definition, see docs functions of IDEs)
        * your familiarity with a particular framework used
        * your preference and expectations regarding the identifiers
        * your knowledge of what the system as a whole (or its part) is supposed to do
        * your familiarity with the project structure
    

And so on, and that's even _before_ we start talking about concrete examples
of readable code and trying to get some metrics on it!

Writing code is no more susceptible to scientific analysis than writing prose
when it comes to other people reading the code (and not machines executing
it). To write good code you need to first assume something about your readers
(their level of skill, prior experiences, etc.) and then optimize the form of
the code so that it doesn't confuse them (too short) or bore them (too long).

Seriously, writing prose and code (the latter only if meant for human
consumption) is very similar: you need structure, things following one
another, sentences of appropriate length and "density" and so on in both kinds
of writing. Programmers could learn a lot from writers, but they most often
refuse to do so. Literate Programming should be the default by now, yet is
still used very rarely...

~~~
cessor
> That's because no one even knows how to start doing "science" in this
> direction

I am sorry to disagree, but this is just not true.

[http://www.ptidej.net/courses/inf6306/fall10/slides/course8/...](http://www.ptidej.net/courses/inf6306/fall10/slides/course8/Storey06-TheoriesMethodsToolsProgramComprehension.pdf)

[http://www.cs.kent.edu/~jmaletic/papers/EMSE12.pdf](http://www.cs.kent.edu/~jmaletic/papers/EMSE12.pdf)

[https://link.springer.com/journal/10664](https://link.springer.com/journal/10664)

[https://scholar.google.com/citations?view_op=view_citation&h...](https://scholar.google.com/citations?view_op=view_citation&hl=de&user=2WeXBokAAAAJ&citation_for_view=2WeXBokAAAAJ:k_IJM867U9cC)

I have to say this:

[https://brains-on-code.github.io/shorter-identifier-names.pdf](https://brains-on-code.github.io/shorter-identifier-names.pdf)

My supervisor has to say this:

[http://pi.informatik.uni-siegen.de/stt/34_2/01_Fachgruppenberichte/WSRDFF/wsre_dff_2014-18_submission_w10.pdf](http://pi.informatik.uni-siegen.de/stt/34_2/01_Fachgruppenberichte/WSRDFF/wsre_dff_2014-18_submission_w10.pdf)

These are just the ones off the top of my head that I can google quickly; if
you want, I would be happy to share my Zotero database or a large BibTeX file.

> When you read the code there are so many different things influencing your
> understanding that it's hard to impossible to even list them all.

You are completely right: Psychological research on programming shows that it
is a very complex cognitive task, best done by experts, and poorly understood.

You describe knowledge and experience, and they in fact matter a lot. The
above studies, for example, show (often just as a side effect) that experts
are impacted less severely by badly written code (however that was
operationalized).

> Literate Programming should be the default by now

I agree.

~~~
klibertp
There's nothing to be sorry about, actually, I'm really happy to learn that
such research is being done! I tried searching for similar studies quite a few
years back and came up empty-handed, so I assumed it was either not done at
all or very niche.

As you seem to be knowledgeable about the field, how relevant/applicable do
you think the studies you linked to are in the general case? The study about
identifier length, for example, seems to be very specific, and I'm not
convinced at all that the results would be the same in a different language,
with different people, or even with slightly different identifiers
(abbreviating start to str vs. beginning to beg, for example).

EDIT: another thought on the study: does it control for presence or absence of
widely known conventions? For example in Haskell, OCaml and others it's
customary to write `x :: xs` - would writing `element :: list` instead improve
the time needed to comprehend the code? On the other hand, in Smalltalk, you
frequently write `add: aNumber to: aList` - the identifiers are longer, but
they provide additional (type) information which is otherwise not present. So
how long the identifiers need to be may depend heavily on the language (the
study used C#, I think) -- is that accounted for in the paper?

Still, all the papers you mentioned look interesting and I will read them once
I have some time. Thanks for posting! :)

~~~
cessor
klibertp,

> another thought on the study: does it control for presence or absence of
> widely known conventions?

I am very happy to encounter other critical thinkers - your question is a
really good one :) You are right, the study is not capable of explaining this
effect (that is, how commonplace / conventional some abbreviations are), but
it was considered in the design. I am sure that this plays a role, but I
wouldn't dare to give a definitive answer based on the data from my study.

For example, config or cfg are arguably so common that they don't hurt
comprehension. Similar for single-letter variables: point.x and point.y are
easily identifiable as coordinates, and the variable name i in a for loop may
not be problematic, as it becomes almost meta-syntactic (much like foo and
bar). However, i, j, k, l index names may really hurt comprehension when you
have a complicated looping structure with many lines in between, as they are
likely to strain your working memory. As for the point.x example: I would
explain this as a priming effect. The name of x is fine, because point already
preactivates the right direction. X in isolation might be worse, and if you
encounter new MessageBrokerInstance().X() you might as well read your code in
base64... Thus, based on my experiment, I can talk about variables in
isolation, but usually, code is mixed and here, other effects might be
relevant.
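
A tiny Java illustration of the index-name point (example mine, not taken from the study): in a single short loop `i` is harmless, but once loops nest over different dimensions, named indices carry the information that `i`, `j`, `k` would otherwise force the reader to hold in working memory.

```java
public class Indices {
    // With nested loops over different dimensions, i/j/k make the reader
    // remember which axis each letter stands for; named indices carry that
    // information themselves.
    static int sum(int[][][] cells) {
        int total = 0;
        for (int layer = 0; layer < cells.length; layer++) {
            for (int row = 0; row < cells[layer].length; row++) {
                for (int col = 0; col < cells[layer][row].length; col++) {
                    total += cells[layer][row][col];
                }
            }
        }
        return total;
    }

    public static void main(String[] args) {
        int[][][] cells = {{{1, 2}, {3}}, {{4}}};
        System.out.println(sum(cells)); // 10
    }
}
```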

In the longer versions of my experiment, I considered the effect of common
abbreviations as well. Psychology lists several word frequency effects. Common
words can be immediately accessed (from the so-called mental lexicon, a
mind-dictionary if you will), but uncommon words have to be synthesized on the
fly through their phonetics (see, for example, the dual-route cascaded model,
Coltheart 2001,
[http://www.cogsci.mq.edu.au/~ssaunder/files/DRC-PsychReview2001.pdf](http://www.cogsci.mq.edu.au/~ssaunder/files/DRC-PsychReview2001.pdf)).
Thus, high-frequency words (= often occurring, common
words or strings) are quickly read and their meaning is understood, whereas
uncommon words or strings do not have a representation in the mental lexicon
and you have to synthesize their meaning first, thus slowing down
comprehension.

My argument is simple: it is always possible to understand code, no matter how
mangled or obfuscated it is (after all, reverse engineers are doing amazingly
hard work). The question is how easily the code can be comprehended.
Abbreviations that are common to some (e.g. experts), may not be common to
others (e.g. novices in their first job). Of course, the newbies will get
there eventually, but abbreviations have a higher learning curve, thus new
people will be unproductive for a longer time.

Think about yourself, you surely know this effect:

1. Write code.

2. Problem solved.

3. Don't touch it for 4 months.

4. Changes needed; need to fix a bug, add a feature.

5. How does this work?

6. Wtf, what was I thinking?

For the sake of all newbies, your company, or even your own, I encourage the
use of identifiers that can be read, because you can READ and know LANGUAGE,
and not because of arbitrary conventions. There are many conventions (e.g.
x:xs, for i=0;i<10;i++, point.x) that can surely be considered domain language
and usually don't impede comprehension, but they might still hinder it for
novices, or for yourself in 4 months.

> how relevant/applicable you think the studies you linked to are in the
> general case?

This is really hard to say. Many processes take place when programming, and
many programmers have theories about why it is hard and how to make it easier
(like the submitted article citing cognitive load, which is a good metaphor, imho).
So far, I know of many such scientists who are trying to isolate the different
effects. For example, I am focused on identifier names, as I find them to be
impactful. Their meaning can't be analyzed automatically (even with sound nlp
techniques, which are relatively limited), and the programmer is totally free
to name their variables whatever the hell they want. I am sure that in
comprehension of programs, identifier names play a big role, but when I
encounter "clever code", with weird recursions, counterintuitive measures, or
plain magic
([https://en.wikipedia.org/wiki/Fast_inverse_square_root](https://en.wikipedia.org/wiki/Fast_inverse_square_root))
the value of identifiers is limited, or, in other words, there are other
things going on that impact my comprehension BESIDES identifiers. How they
interact, I cannot say for sure, but if complex code has no clear identifiers,
it becomes complicated.

I believe that each of the effects in isolation is relevant, but I am not sure
which one is the most dominant, or, for that matter, whether there is ONE
thing that will solve all problems.

------
pmarreck
The largest improvement you can make to reduce the cognitive load of your code
is to move to a functional language with immutable data and optionally static
typing. Empirical data about bug tendencies by language
[http://macbeth.cs.ucdavis.edu/lang_study.pdf](http://macbeth.cs.ucdavis.edu/lang_study.pdf)
is IMHO indirect evidence of cognitive load issues.

~~~
lacampbell
I'm a big fan of immutable objects. Best of both worlds in my opinion.

~~~
pmarreck
Well, there's somewhat of an argument to be made that immutable data
structures are a bit less performant than mutable ones... but it still seems
to be the way forward, especially when you take any sort of concurrency into
account

~~~
lacampbell
The real distinction I am trying to make is that objects are a higher level,
cleaner and more composable way of organising code - much more so than the
"module and record" nonsense most FP people seem to push.

------
falsedan
Writing good technical articles: if a part is probably the most important
part, put it first (or nearly first) in the doc, and don't waste the reader's
time congratulating them for spending 2 minutes reading.

------
mrlyc
When I write code, I keep the target audience in mind. That target audience is
a maintenance programmer doing their first programming job.

~~~
JustSomeNobody
What's a maintenance programmer? Sounds pretty derogatory.

~~~
dragonwriter
A "maintenance programmer" is a programmer whose job is _maintaining_ a system
(bug fixes and adaptation to relatively minor environmental changes) that is
viewed as complete and basically static, either without major new development
expected or with major new development done by separate teams in as-needed
projects. Often, maintenance programmers are employed by the organization
owning the system, while new development is done by contracted teams.

It is a particularly common role in government and other enterprise shops,
particularly those that still use a pre-Agile, civil-engineering-metaphor
approach to software development.

------
bipson
This (arguably nice) post covers a small number of the points made in the book
_The Art of Readable Code: Simple and Practical Techniques for Writing Better
Code_ [1]

I can recommend this to every programmer, even experienced ones, because even
if they might know most of the things mentioned, it is presented in a very
approachable, structured way, and I think it is always helpful, never boring,
and a diverting, easy read.

It also tackles these issues of "Gurus say" and "Everybody knows..." and tries
hard to refrain from subjective matters, clearing up a few misunderstandings
and old habits.

1: [https://www.amazon.com/Art-Readable-Code-Practical-
Technique...](https://www.amazon.com/Art-Readable-Code-Practical-
Techniques/dp/0596802293)

~~~
reacweb
IMO, all the points come from 2 basic rules: 1. KISS (keep it simple,
stupid) 2. DRY (don't repeat yourself)

KISS being higher priority than DRY.

~~~
jiaweihli
DRY isn't something you should blanket apply. There are cases where you do
want to repeat yourself, e.g. to improve readability (avoiding
metaprogramming, which is hard to reason about and will break your IDE) or
because two things aren't actually semantically related (trying to force an
abstraction means you need to undo it later anyway when the implementations
diverge).

~~~
reacweb
Yes, that is my point in giving higher priority to KISS.

------
11thEarlOfMar
The highest compliment I ever heard, not paid to me, alas, was: "Your code
reads like a story!"

------
stevecalifornia
I like code that reads like a Dick & Jane book ("See Dick. See Jane. See Dick
run. See Jane run.").

However, it appears to be trendy to write insanely difficult to read code. To
use the analogy, Shakespearean code. Instead of one line of code doing one
thing, the developers will write a ton of functionality into one line of code
by using fluent interfaces and method chaining. As someone reviewing the code,
I have to
keep this mental stack of what the code is doing and it just becomes too much
to process.
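
A contrived Java sketch of the two styles (data and names mine): the chained form packs several transformations into one expression and one mental stack; naming the intermediate steps lets a reader, or a debugger breakpoint, stop at each one.

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class Chaining {
    // "Shakespearean" style: several transformations in one expression,
    // one mental stack for the reviewer.
    static String dense(List<String> words) {
        return words.stream().filter(w -> w.length() > 3).map(String::toUpperCase)
                .sorted().collect(Collectors.joining(", "));
    }

    // "Dick & Jane" style: one step per line, each intermediate result named,
    // so a reader or a debugger can inspect any of them.
    static String plain(List<String> words) {
        List<String> longWords = words.stream()
                .filter(w -> w.length() > 3)
                .collect(Collectors.toList());
        List<String> shouted = longWords.stream()
                .map(String::toUpperCase)
                .sorted()
                .collect(Collectors.toList());
        return String.join(", ", shouted);
    }

    public static void main(String[] args) {
        List<String> words = Arrays.asList("see", "dick", "run", "jane");
        System.out.println(dense(words)); // DICK, JANE
        System.out.println(plain(words)); // DICK, JANE
    }
}
```

Both produce the same result; the difference is purely in how much the reader must hold in their head at once.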

It's a personal opinion, but I had to share.

~~~
TimJYoung
I can't up-vote this enough. As someone who started doing development in the
late 80's, I find this style of coding to be infuriating. It completely
destroys the ability to map lines of code directly to function calls without
resorting to manual coding rules that require each .function() to be on a
separate line, and makes debugging way harder than it needs to be. And for
what purpose? To avoid a local, temporary variable? Do these developers
think that their code is faster this way, or... what?

It's "write-only" code - good luck to the next guy that has to read it.

And, don't get me started on: if <Constant> = <Variable>... :-)

~~~
Falling3
When learning C in college, I was specifically taught to use CONSTANT ==
variable. Accidentally omitting an '=' is common, and putting the constant on
the left side ensures the compiler will catch the mistake.
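
The parent is about C, where `if (x = 5)` compiles silently for any int; in Java the same trap survives only for booleans, which is enough to sketch the idea (names mine):

```java
public class Yoda {
    static String check(boolean done) {
        // Typo: '=' instead of '=='. This still compiles, assigns true to
        // 'done', and the branch always runs.
        if (done = true) {
            return "always taken";
        }
        return "never reached";
    }

    static String yodaCheck(boolean done) {
        // Constant on the left: the same typo, 'true = done', would be a
        // compile error, so the mistake cannot survive.
        if (true == done) {
            return "taken only when done";
        }
        return "not done";
    }

    public static void main(String[] args) {
        System.out.println(check(false));     // always taken
        System.out.println(yodaCheck(false)); // not done
    }
}
```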

~~~
kennu
I would argue that by using CONSTANT == variable, you are reducing the
cognitive load of the compiler, but increasing the cognitive load of the human
reading the code (who has to mentally flip the expression around). I think the
original article intended to say the same thing.

~~~
thyrsus
I don't get it. A test for (in)equality is commutative unless you're inducing
side effects (i++) - and then side effects are going to be the big "cognitive
load", regardless of which side of the comparison they occur. Am I dyslexic
because (null != foo) is exactly as easy for me to read as (foo != null)?

~~~
douche
It reads awkwardly for a lot of people, to the point that some call these
constructs Yoda Conditionals.

I'm quite happy that all the languages I use regularly disallow assignment in
ifs, at least.

------
sqldba
"Comments are closed" says everything.

The if ($null -ne $blah) one is STILL RELEVANT. In PowerShell, for example,
putting $null on the left matters: if $blah holds an array, $blah -ne $null
filters the array (returning its non-null elements) instead of testing whether
the variable itself is null.

------
kccqzy
I don't understand the first example.

    
    
        if (null != variable)
    

If this was C, it should be NULL, and it is almost always better to just write
`if (variable)` to check for NULL pointers instead.

If this was JavaScript, this check includes undefined too. Not sure why it
didn't use triple equal.

If this was just talking about placing a constant value to be compared before
a more complicated expression, I really don't see a convincing argument for
either. Does switching the order reduce the cognitive load that much?

~~~
flohofwoe
I also don't understand this; there's nothing wrong with Yoda conditions, and I don't
understand the 'cognitive load' argument since the condition is just as
readable. And: Visual Studio 2015 does _not_ warn in the case of "if (a = 5)",
not even at the highest warning level, only the VS2015 static code analyser
catches this.

~~~
maccard
It also probably shouldn't (warn)

    
    
        if(Foo* a = GetAFoo()) { /* Do something with a non-null foo */ }
    

is perfectly valid code.

~~~
gnaritas
Depends on the language, C# won't allow that, if's don't accept null, they
accept bool.

~~~
maccard
True. I was speaking of C++, which also accepts bool [0] (or more
specifically, something that can be converted to a bool, which a null pointer
can be [1]).

Actually [0] has a good example of when the if (Foo* a = ...) syntax is useful
(dynamic cast)

[0]
[http://en.cppreference.com/w/cpp/language/if](http://en.cppreference.com/w/cpp/language/if)
[1] [https://ideone.com/im4uVn](https://ideone.com/im4uVn)

------
scandox
Stone Soup: all you need to code well are just these few little pebbles of
wisdom. Oh plus 15 years of learning how to apply them intelligently in any
given situation, and subject to any given constraint. At which point you don't
really need my advice.

These articles are addictive but I suspect reading Code Complete one page a
day might be more useful and ultimately more enjoyable.

~~~
jiaweihli
Why stop at 1 page a day? Code complete is an easy read that deserves to be
enjoyed multiple times.

------
rb808
Code Complete is an old book now. Is it still good advice?

In my new project I see a lot of code like below. I hate it but I'm not sure
if I'm old-fashioned or correct in thinking it should be four or five lines.
Should I reject a code review for stuff like this?

    
    
        return (HadoopSummary)ScopeCoordinator.getInstance().findObject(Scope.getFirst(), new Path<String>(SCOPE_PATH.split("\\.")));
    
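
One way to read the four-or-five-lines preference above, sketched in Java with hypothetical stand-ins for the snippet's classes (none of these stubs are the real API):

```java
public class SummaryLookup {
    // Hypothetical stand-ins for the classes in the snippet above.
    static class Scope { static Scope getFirst() { return new Scope(); } }
    static class HadoopSummary {}
    static class Path<T> {
        final java.util.List<T> segments;
        Path(T[] segments) { this.segments = java.util.Arrays.asList(segments); }
    }
    static class ScopeCoordinator {
        static final ScopeCoordinator INSTANCE = new ScopeCoordinator();
        static ScopeCoordinator getInstance() { return INSTANCE; }
        Object findObject(Scope scope, Path<String> path) { return new HadoopSummary(); }
    }
    static final String SCOPE_PATH = "a.b.c";

    // The one-liner, unpacked: each step gets a name, so a reader (or a
    // debugger breakpoint) can stop at any of them.
    static HadoopSummary findSummary() {
        ScopeCoordinator coordinator = ScopeCoordinator.getInstance();
        Scope scope = Scope.getFirst();
        Path<String> path = new Path<>(SCOPE_PATH.split("\\."));
        return (HadoopSummary) coordinator.findObject(scope, path);
    }
}
```

Whether the unpacked form is worth four extra lines is exactly the judgment call being debated here.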

~~~
Jach
I still hear recommendations for Code Complete so I assume it's still useful,
but haven't read it myself. A somewhat old book I like is _Working Effectively
With Legacy Code_ , I think its main premise and prescriptions can help a lot
of codebases out there even if not all of them, especially those in functional
languages. (Namely, your codebase will be better the more you get it under
test and the more you leverage OOP design principles.)

Your code snippet looks like normal Java code to me. :) Not great that it's so
common but it's at least not unusual... Being one line or more lines for that
piece of code doesn't really matter to me but I'd prefer the single line in
this case: I'm viewing it in an IDE, I've got more than 80 columns, and the
pieces of syntax are easy enough to spot I don't need vertical cues.
(Unfortunately rainbow parens still seem to be a minority preference.)

My own quick context-free review of that: is it testable in junit? Can you
substitute a mock (without using something like PowerMockito) for
ScopeCoordinator.getInstance() and for Scope.getFirst()? May be better to make
the instance a member variable that you can mock by just passing a different
one in the constructor. Why is it Scope.getFirst(), unless this method is
explicitly about finding the first of something so it's clear in context? For
the 'new Path<String>(SCOPE_PATH.split("\\."))' part, that looks like it's
going to be the same every time and not dependent on any runtime code so why
not make it a static member? (Or an instance member you can mock, or maybe the
enclosing method can take a Path as an optional param with the default being
the static one.) Can the design be redone to avoid the type conversion or is
it too late?

------
Vinnl
> Libraries and tooling can also be a barrier for new developers. I recently
> built a project using EcmaScript 7 (babel), only to later realize that our
> junior dev was getting stuck trying to figure out what it all meant. Huge
> toll on the team’s productivity.

While I agree that libraries and tooling can be a barrier[1], I think getting
junior devs up to speed with the (latest version of a) language is part of the
toll you're accepting when hiring a junior.

[1] Although this still holds true if you write the library yourself, so be
reluctant to get rid of libraries that save you more than they cost you.

------
hasbot
The same holds true for APIs: too many concepts make for a hard-to-understand
API.

------
sirmoveon
The day the cognitive load is at an ideal point is the day you will lose your
relevance to society, because it will be easily automated. I know that day
will come; but I'm in no hurry given the current state of affairs with
capitalism and a global oligarchy trying to enslave the rest of the species.

Meanwhile, I'll be happy writing obscure and complicated code that I won't
have to maintain, giving the industry leverage for the near future.

------
kuharich
Previous discussion:
[https://news.ycombinator.com/item?id=11992684](https://news.ycombinator.com/item?id=11992684)

------
mywittyname
> Actual code from a makefile I wrote. Junior devs can’t handle overuse of new
> tech.

Most devs can't handle overuse of new tech. It seems like the _junior_
qualifier was put into place in order to get away with this bit of hypocrisy.

The introduction of new tools will absolutely lead to increased cognitive load
until the entire team is familiar with it.

------
Jach
At the risk of making too much over the section (and ranting risks in general
:)) I get the sense that "Keep your personal quirks out of it" strays
dangerously close to "don't learn anything new, don't encourage the team to do
it either". Of course I agree with not being clever for the sake of cleverness,
and not going against the overall style of the team with your preferred style.
But the article goes on:

"The problem is that people just want to fix their bugs and move on." Yeah,
screw that, maybe if people spent some time learning better coding techniques
they wouldn't have so many bugs? Take for instance this "trick":

    
    
        String blah = Optional.ofNullable(foo.bar()).map(Clz::doZap).map(OtherClz::extractZorp).map(OtherOtherClz::toString).orElse("");
    

The normal "just leave me alone and let me code and fix bugs and get on with
life" equivalent is:

    
    
        String blah = foo.bar().doZap().extractZorp().toString();
    

Problem: any of those method calls can blow up with NPE, because they were
written long ago by other not so careful devs and you can't simply rewrite. I
see this all the time. Or a variant where foo.bar() is null-checked so they
can call doZap(), but they still do (or edit it to do later) the rest of the
chaining of doZap().extractZorp().toString(). When something inevitably does
blow up, you get your bug to fix and then move on, but wouldn't it have been
better to not have the bug in the first place?

It's not even that devs don't realize the code could blow up with an NPE; a
lot of the time they do. They just don't want to write the ugly "solution" up
front (the one someone will end up writing anyway when they fix the bug and
move on): all the intermediary variables and if-blocks checking for nulls (or
an NPE exception handler in the middle of their logic). So they convince
themselves it probably won't ever be null. The Optional 'trick' lets them be
both lazy (low syntax overhead once you understand what map() and flatMap()
can do) and safe.
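
A self-contained version of that Optional chain, for the skeptical. `Foo`, `Zap`, and `Zorp` are hypothetical stand-ins for the classes chained in the snippets above; `bar()` deliberately returns null to simulate the NPE-prone call:

```java
import java.util.Optional;

public class OptionalChainDemo {
    static class Zorp { @Override public String toString() { return "zorp"; } }
    static class Zap { Zorp extractZorp() { return new Zorp(); } }
    static class Foo { Zap bar() { return null; } } // the NPE-prone call

    public static void main(String[] args) {
        Foo foo = new Foo();
        // A null anywhere in the chain short-circuits straight to the
        // orElse default instead of throwing a NullPointerException.
        String blah = Optional.ofNullable(foo.bar())
                .map(Zap::extractZorp)
                .map(Object::toString)
                .orElse("");
        System.out.println("[" + blah + "]"); // prints [] since bar() was null
    }
}
```

The naive `foo.bar().extractZorp().toString()` version would throw an NPE on the same input; here it quietly becomes the empty-string default.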

Without even bringing up streams and lambdas, a nifty trick that appeared in
Java not that long ago is the for-each syntax (which prevents the all-too-easy
off-by-one errors in a loop counter). I keep up with language
developments, I'm going to use new expressive capabilities in my code (when
they're helpful -- again I'm on board against cleverness-for-cleverness'-sake)
and anyone who has a cognitive load with it ought to learn it well enough so
there is no load and we can develop more solid code. Ultimately I concede the
point I've heard from Haskell or Scala advocates that as you practice all that
Type power becomes less troublesome, I'm just not willing to invest the
cognitive effort up front to get to that point since I think the tradeoffs
aren't worth it for my use cases. The fact that I find a lot of Scala to be
incomprehensible is a fact about my state of mind, not a fact about Scala or
the developer who wrote the code.
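
The for-each point above can be shown in a few lines; this is just an illustrative sketch, not anyone's production code:

```java
import java.util.List;

public class ForEachDemo {
    public static void main(String[] args) {
        List<String> names = List.of("ant", "bee", "cat");

        // Indexed loop: easy to botch the bound (i <= names.size() throws).
        StringBuilder indexed = new StringBuilder();
        for (int i = 0; i < names.size(); i++) {
            indexed.append(names.get(i));
        }

        // For-each: no index variable at all, so no off-by-one to get wrong.
        StringBuilder forEach = new StringBuilder();
        for (String name : names) {
            forEach.append(name);
        }

        System.out.println(indexed); // antbeecat
        System.out.println(forEach); // antbeecat
    }
}
```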

In the end these aren't even huge issues. The worst bugs often aren't the
result of the presence or absence of good code or capabilities (security bugs
are probably a big exception); they often happen before coding even begins,
and they accumulate over time with more and more edits to a system without
stepping back to ask whether the original design still makes sense for the
current system or whether we've just been stapling things together. We focus
too much on small details, like how it takes 30 extra seconds to parse a
too-terse line of code that would have been easier to swallow as 5 lines, and
ignore the fact that we've got 30 classes for this feature (so modular and
testable) that could have been done in maybe 30 terse lines of a more powerful
language, with perhaps some extra cognitive overhead upfront.

------
tln
I always feel guilty when using idiosyncratic code. A couple of jobs ago I
briefly used this questionable Python idiom:

    
    
        # The right-hand side of %= is a tuple, so this is equivalent to
        # var = "%s code %s are bad" % ("Stupid", "tricks"),
        # leaving var == "Stupid code tricks are bad".
        var = "%s code %s are bad"
        var %= "Stupid", "tricks"

------
JelteF
Can this title be edited to say (2016)?

------
brilliantcode
holy hell that interactive video/code editor is pure fucking genius!!!!!

Is the mouse cursor an actual HTML element moving according to a pre-recorded
session?

Really would like some more details on this. I've never seen anything like it.

