
Clear is better than clever - UkiahSmith
https://dave.cheney.net/2019/07/09/clear-is-better-than-clever
======
flowerlad
The ability to make complex things simple is what sets a great programmer
apart from an average one. So why do most programming interviews at companies
such as Google focus on writing performant code instead?

Most professional programmers only occasionally need to look into performance
issues, but need to take complex things and simplify them with every line of
code they write. And yet most programming interviews don’t evaluate this
ability. I think this should change.

~~~
james_s_tayler
We have this discussion on HN every month.

So far the answer I've worked out is:

Because it's the optimal solution for letting talent flow through BigCos in
SV. There is no domain- or tech-stack-specific stuff, so studying for the
interview process allows the applicant to apply to multiple companies. At the
same time it's a pretty hardcore process, so the company knows it is getting
engineering talent that is a) smart and b) obedient.

For the rest of the world it absolutely doesn't make sense, which is why,
where I live, you don't see that style of interview at all.

~~~
wpietri
One thing I'd add: One possible function of an interview process is to find
qualified interviewees. Another is to make _interviewers feel smart_. I think
typical BigCo interview processes are optimized for the latter.

David Epstein's new book _Range_ talked about academic research splitting
domains into kind vs wicked. Kind learning domains are ones where "feedback
links outcomes directly to the appropriate actions or judgments and is both
accurate and plentiful", while wicked ones are "situations in which feedback
in the form of outcomes of actions or observations is poor, misleading, or
even missing". [1]

The hard parts of real-world software development are generally in the
"wicked" bucket. Schoolwork and puzzle questions are both generally "kind" in
the sense that there's a right answer and you're expected to figure it out.
It's impossible to be too smug working on "wicked" problems because you get
your ass kicked often enough to stay humble. But in "kind" domains it's quite
easy to indulge one's desire to feel superior by dragging people through
things you know well.

Personally, when I interview people I try to set things up so that there's no
right answer; the goal is to see how well they get to good answers, and how
well they collaborate during that process. I'd love to see more people do
that.

[1]
[https://pdfs.semanticscholar.org/5c5d/33b858eaf38f6a14b3f042...](https://pdfs.semanticscholar.org/5c5d/33b858eaf38f6a14b3f042202f1f44e04326.pdf)

------
kosievdmerwe
The concept of clarity is why I don’t like the consequence of SOLID, where you
have lots of tiny classes.

It’s easier to understand complex functions within a simple class structure
than the other way around, because jumping between multiple files/classes
incurs a high understandability cost, whereas a complex function typically
fits on your screen.

And reading some more about it, I found this good article:
[http://qualityisspeed.blogspot.com/2014/08/why-i-dont-teach-solid.html?m=1](http://qualityisspeed.blogspot.com/2014/08/why-i-dont-teach-solid.html?m=1)

~~~
azeirah
I typically find it very difficult to understand complex functions (100+
lines of code, ~3 or more nesting levels deep), and even simpler complex
functions (mixing 2-3 concepts into a 15-line function).

I've only just started doing TDD the "Growing Object Oriented Software guided
by tests" way, and I find it incredibly helpful that each and every class does
just _one_ thing, even splitting up those 15 line functions into two or three
separate classes implementing an interface -- single responsibility -- helps
me a lot in reasoning about the code.

I _have_ experienced the dependencies issue myself already though, it's very
annoying to click on a method in my IDE, and then get shown the interface
definition of that particular method. I'll then have to trace my way through a
couple of files to find the dependency, very annoying.

~~~
dang
This is widely believed and repeated, but empirical evidence actually runs the
other way: according to studies cited in the book _Code Complete_, functions
in the range of 100 to 150 LOC are more maintainable than shorter ones.

[https://news.ycombinator.com/item?id=6229801](https://news.ycombinator.com/item?id=6229801)

[https://news.ycombinator.com/item?id=3876434](https://news.ycombinator.com/item?id=3876434)

[https://hn.algolia.com/?query=by:cpeterso%20%22code%20comple...](https://hn.algolia.com/?query=by:cpeterso%20%22code%20complete%22%20function&sort=byDate&dateRange=all&type=comment&storyText=false&prefix=false&page=0)

~~~
BurningFrog
At the end of the day, a simple measure like LOC can never capture
readability, good or bad, and you get eaten by Goodhart's Law if you focus too
much on it.

[https://en.wikipedia.org/wiki/Goodhart%27s_law](https://en.wikipedia.org/wiki/Goodhart%27s_law)

~~~
dang
No one would argue otherwise. Indeed, you can trivially take any readable
function and transform it into an unreadable one of exactly the same length.
But this doesn't seem like a valid reason to dismiss specific findings of
specific studies. Don't you think it's interesting that such research as we
have contradicts the most often-repeated claim about this aspect of
programming?

~~~
BurningFrog
I don't know. I've learned over the years that there is _always_ a study
confirming or contradicting whatever point you want to make. "Beware The Man
Of One Study".

I say that without even looking at those studies, which is perhaps unfair. But
there are So Many Studies...

My personal experience is that when I was exposed to shorter, simpler, and
(so important!) well-named functions, my work became so much better. And that
is now the school I subscribe to.

That's not - at all - to say you can't also find very good practices doing
different things. But that's not where I found it.

~~~
TeMPOraL
Still, one study is an important piece of evidence to consider when all you
had before was no studies and a gut feeling.

My personal experience differs from yours somewhat. I believe it's not the
length or the number of methods that matter, but what language (i.e.
abstraction) they create. You try to subdivide the function into functions
that are a natural fit for the task being done, but _no further_. If you still
end up with a long block of code - as you very well might - consider comments
instead. A comment saying what the next block of code will do is kind of like
an inlined function, except you don't have to jump around in the file and you
don't lose the context. Much easier to read.

I used to write code where essentially every piece of code longer than 3-5
lines got broken out into its own private function. The amount of jumping I
had to do when reading the code, and the amount of work maintaining and de-
duplicating small private functions, was overwhelming.

~~~
BurningFrog
We may not be that different.

When I was shown that you can break out a function that's only used once,
_just_ in order to _name_ it (2005, or so), it was one of the greatest
revelations in my career.

It also serves as a way to tell you _what_ that code does, without you having
to know details of _how_ it does it, until the rare day when it's important.

But I only do it when that code is genuinely hard to follow, not because my
function is "over 10 lines, and that's our policy".
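A sketch of that extract-to-name move, with a made-up order type and made-up
tax rates: the once-used helper exists purely so the call site reads as
_what_, not _how_.

```go
package main

import "fmt"

// order is a hypothetical type for illustration.
type order struct {
	subtotal float64
	country  string
}

// total reads as an overview: subtotal plus duty.
func total(o order) float64 {
	return o.subtotal + importDuty(o)
}

// importDuty is called exactly once; it is broken out just to *name*
// this block of logic. The rates here are invented for the example.
func importDuty(o order) float64 {
	if o.country == "NZ" {
		return o.subtotal * 0.15
	}
	return o.subtotal * 0.20
}

func main() {
	fmt.Println(total(order{subtotal: 100, country: "NZ"})) // 115
}
```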

~~~
TeMPOraL
agumonkey's mention of cyclomatic complexity in a parallel comment made me
remember yet another realization wrt. breaking functions out: if you work with
languages without local functions and start breaking your large function into
smaller functions or private methods, you run into a readability/maintenance
problem. The next time you open the file and start reading through all the
little helper functions, you start wondering - who uses which one? How many
places use any one of them?

With IDE support answering the question _for a single function_ is just a
matter of a key combination, but that still adds friction when reading. I
found that friction particularly annoying, and a file with lots of small
helper functions tends to be overwhelming for me to read (it's one reason I
like languages with local functions). Whereas if you didn't break the code
out, and only maybe jotted a comment inline, you can look at it and know it's
used only in this one place.

~~~
BurningFrog
Sure, there are both costs and benefits to this, which is why you should only
do it when the benefits clearly are bigger than the costs.

Not every time your function line count is > 10, as I heard from some crazy
company a friend worked for...

I prefer to put the broken out function(s) immediately below the main one for
a logical reading experience: Overview first, details below if needed.

Comments are of course good when they are current and correct. But they rarely
stay that way for long...

------
RcouF1uZ4gsC
The issue is that in Go-speak, "clever" means any programming language
technique invented after the 1970s. We are fortunate that the creators of Go
felt comfortable with structured programming; otherwise they might have felt
that function calls, with their compiler-maintained stacks and local
variables, were "clever", and that we should all use goto statements since
those are much clearer about what is actually happening.

------
ngngngng
The difficulty of giving examples to illustrate what "clarity" means really
shows in this article.

For the most part I evaluate software clarity by how many times I had to hit
"goto definition" to see what was actually happening. But this takes us away
from what the author was attempting to say. In my opinion, 95% of clarity
comes down to writing good abstractions, and it is next to impossible to
articulate what a good abstraction is.

It's like describing the taste of salt.

~~~
audience_mem
And even if one could articulate what a good abstraction is, someone would
come along and disagree.

~~~
zzzcpan
Because "good", just like "clear", is not universal. But we can actually
estimate how good abstractions are in specific circumstances for specific
users if we think of them in terms of familiarity, simplicity, consistency,
flexibility, and universality.

------
stcredzero
There was that recent post about innovation being a limited resource. Clever
is a limited resource. Save your clever for good, insightful architecture, or
for when you really need the algorithmic chops. Don't expend it just to
impress your coworkers.

~~~
deogeo
> Clever is a limited resource. Save your clever for good, insightful
> architecture

I'd sooner say clever is a skill that gets better with practice. Compare it
to solving tricky math problems - the more you solve them, the more clever you
'expend', the better you get.

~~~
runevault
Some of A, some of B. But there are plenty of ways to expend clever that
don't make the code harder to read (clever architectures that aren't obvious
before you create them, but are still readable once built). Clever tricks
that require being clever to read as well as to write are the dangerous ones.

~~~
deogeo
Never meant to imply otherwise :)

~~~
stcredzero
Over the years, I have encountered dozens of programmers in their 20's and
30's who seem to prioritize impressing fellow programmers over the clarity of
the code base as a whole. In fact, I'd say there's something about programming
education which seems to produce these attitudes.

------
AngeloAnolin
I think one reason people like to write supposedly _clever_ code, especially
in the realm of software development, is that many developers want
acknowledgement of what they produce. Hearing someone say "Oh, this is a very
clever implementation" sort of satisfies that inherent need to be recognized.
I have rarely heard (particularly in corporate environments) praise along the
lines of "wow, this was a very clear and simple implementation" trump what
managers and people deem superior once the term clever is attached.

I've challenged quite a lot of implementations where understanding a piece of
very domain-specific functionality required the developer to jump between
more than 23 files across 8 different projects. Splitting code into single
independent parts introduces simplicity only if you are reading each part by
itself; when you layer the parts together to get the functionality they
deliver and it becomes a tangled web of code, then that clever solution was
not really clever after all.

~~~
hinkley
I have an intuition about library design that I’ve been slowly trying to
formulate into a set of guidelines. I have extremely high standards in this
area and not being able to state them concretely makes communicating them a
struggle.

One of the ways I complain about particularly bad decomposition (the sort of
practices that lead to parodies like Enterprise FizzBuzz) is the
ridiculousness of stacktraces for errors in these systems.

We tell people to use delegation but many have trouble differentiating
delegation from indirection. You know things have gotten particularly bad when
you have traces with the same sequence of three or more functions appearing
three times. Debugging this is a nightmare. It’s literally a maze of logic.
This type of code has to be memorized to be understood, which in turn makes a
saner person’s attempt to refactor it an existential threat - moving things
around to be discoverable and debuggable comes at a cost to the people who
already memorized it.

There is also DAMP vs DRY and “desertification” of code, which is related to
the good versus bad indirection problem.

When you get a prolific “clever” person who suffers from these problems, the
whole team suffers with them (which is why I need a new job...).

Someone above mentioned flame graphs, which are a trace of every call in the
system, typically used to visualize where the CPU spends its time. In
thinking about this thread, I now want to look into using them as a measure
of time spent by the _reader_.

My overall philosophy on code is that we should use our best days to protect
ourselves from our worst days. I expend most of my clever on trying to make
things look easy, which is a bit of a challenge come review time because one
of the hallmarks of really clever reasoning is that people react by saying
things like, “well of _course_ it works that way”.

------
cgrealy
Debugging code is more difficult than writing it. If you write the cleverest
code you can, you are, by definition, not smart enough to debug it.

/shamelessly stolen from somewhere I'm too lazy to look up.

~~~
steveklabnik
"Everyone knows that debugging is twice as hard as writing a program in the
first place. So if you're as clever as you can be when you write it, how will
you ever debug it?" - Brian Kernighan, "The Elements of Programming Style"

~~~
jodrellblank
Writing "for (int i=0; i<limit; i+=step) {}" is easy. Debugging the off-by-one
error requires that you understand the alignment issue you ignored in the
first place. That would make debugging harder, not because debugging is
inherently hard, but because that's the place you're forced to deal with the
hard bit you skipped earlier. I wonder how often that applies, compared to
well-understood-but-implementation-mistake code?
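A hedged sketch of the point (chunkStarts and its clamping rule are invented
for illustration): the loop itself is trivial to write, and the hard bit that
was skipped - what happens when limit isn't a multiple of step - only
surfaces when a caller must handle the final partial chunk.

```go
package main

import "fmt"

// chunkStarts steps by `step` up to `limit`, returning the start
// index of each chunk. Easy to write; the alignment question is
// deferred to whoever consumes the last chunk.
func chunkStarts(limit, step int) []int {
	var starts []int
	for i := 0; i < limit; i += step {
		starts = append(starts, i)
	}
	return starts
}

func main() {
	// 10 items in chunks of 4: the final chunk starts at 8 but only
	// holds 2 items, so the caller must clamp 8+4 down to 10 - the
	// "hard bit you skipped earlier" when debugging off-by-ones.
	fmt.Println(chunkStarts(10, 4)) // [0 4 8]
}
```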

------
tombert
My rule of thumb has always been "the amount of comments I leave is directly
proportional to how clever my code is".

Code that is trivial really needs no elaboration, but occasionally I feel like
I gotta go crazy with a bunch of hashmaps of lambdas and all that jazz, and I
don't think that there's inherently anything wrong with that.

However, when I do that, I make sure I document it like crazy with comments,
so that when I have to look at the code two weeks later, I at least can figure
out what I was doing.

~~~
hinkley
I will agree with that but only if you put air quotes around “clever”.

There is a point beyond which writing accurate documentation is more
difficult than improving the code so that some of that documentation becomes
unnecessary. That makes the code cleverer still (without air quotes). This is
not far off from Antoine de Saint-Exupéry’s remark that perfection is
achieved when there is nothing left to take away.

------
DogLover_
I have to really disagree with the "comp" function example. There is no need
for "else" statements; the version with early returns and only "if"s is
succinct and good.

Also, what is up with the comparator function being used this way in so many
articles nowadays? If I am not mistaken, "return a-b" is the much better
solution - and don't say that it is considered too clever :)
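For reference, the early-return shape being endorsed might look like this in
Go (a sketch; the article's exact code may differ):

```go
package main

import "fmt"

// comp in the early-return style: each case exits immediately,
// so no else branches are needed.
func comp(a, b int) int {
	if a < b {
		return -1
	}
	if a > b {
		return 1
	}
	return 0
}

func main() {
	fmt.Println(comp(1, 2), comp(2, 1), comp(3, 3)) // -1 1 0
}
```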

~~~
panopticon
"return a-b" is susceptible to overflows and may lead to subtle bugs depending
on your language.
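A small Go demonstration of that failure mode, using a fixed-width int32
comparator (the names are made up; note that Go wraps on signed overflow
rather than invoking undefined behavior, which at least makes the bug
reproducible):

```go
package main

import (
	"fmt"
	"math"
)

// compSub is the "clever" subtraction comparator. With fixed-width
// integers the subtraction can overflow and flip the sign.
func compSub(a, b int32) int32 { return a - b }

func main() {
	a, b := int32(math.MinInt32), int32(1)
	// a < b, so a correct comparator must return a negative value,
	// but MinInt32 - 1 wraps around to MaxInt32: "greater".
	fmt.Println(compSub(a, b)) // 2147483647
}
```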

~~~
audience_mem
In a sense it is the perfect example of "clear is better than clever", or
perhaps "explicit is better than having buggy edge-cases"

------
o_nate
I agree with the sentiment of the article, though I don't really like the
example given. To me, using a switch statement instead of if-else-if is not
any clearer.

~~~
kosievdmerwe
I think for that particular example it's bad, but it might work better for a
more complicated example with many more cases.

------
miccah
Code is written and read by humans, therefore it should be clear and concise.

Cleverness should be reserved for constrained situations like performance
(fast inverse square root [1] comes to mind), and comments explaining the
cleverness are important.

I should note [1] did a terrible job at commenting.

[1]
[https://en.m.wikipedia.org/wiki/Fast_inverse_square_root](https://en.m.wikipedia.org/wiki/Fast_inverse_square_root)
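For reference, a Go transliteration of that routine (the original is C from
Quake III; this port uses math.Float32bits and keeps the single
Newton-Raphson step):

```go
package main

import (
	"fmt"
	"math"
)

// fastInvSqrt approximates 1/sqrt(x). The magic constant 0x5f3759df
// picks a good initial guess by treating the float's bit pattern as
// an integer; one Newton-Raphson iteration then refines it.
func fastInvSqrt(x float32) float32 {
	i := math.Float32bits(x)
	i = 0x5f3759df - (i >> 1) // the infamous "what the ..." line
	y := math.Float32frombits(i)
	y *= 1.5 - 0.5*x*y*y // one Newton-Raphson step
	return y
}

func main() {
	fmt.Println(fastInvSqrt(4.0)) // ~0.499, vs the exact 0.5
}
```

Here the comments at least say what each line is for, which, as noted above,
the original famously did not.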

------
jodrellblank
> Here we have a comp function that takes two ints and returns an int;

Ah, "comp"utation. "comp"lex numbers? "comp"licated function? "comp"onent?
"comp"anion numbers, is that a thing? "comp"rehensive example?

Saving three characters of typing is more important than being clear?

------
thsealienbstrds
All these quotes remind me of another quote that I like:

Simple and dirty beats complex and clean any day. - Learn C the Hard Way

~~~
stcredzero
Unless simple and dirty is O(n^2) and simple and clean is O(n).
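A toy illustration of that gap, using duplicate detection (both functions are
invented for the example):

```go
package main

import "fmt"

// The "simple and dirty" O(n^2) way: compare every pair.
func hasDuplicateQuadratic(xs []int) bool {
	for i := range xs {
		for j := i + 1; j < len(xs); j++ {
			if xs[i] == xs[j] {
				return true
			}
		}
	}
	return false
}

// The O(n) way: one pass over a set of seen values.
func hasDuplicateLinear(xs []int) bool {
	seen := make(map[int]bool, len(xs))
	for _, x := range xs {
		if seen[x] {
			return true
		}
		seen[x] = true
	}
	return false
}

func main() {
	xs := []int{3, 1, 4, 1, 5}
	fmt.Println(hasDuplicateQuadratic(xs), hasDuplicateLinear(xs)) // true true
}
```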

~~~
mercer
and unless that matters.

~~~
stcredzero
I originally wrote O(n^4) vs. O(n^2), which is something that actually
happened to me two days ago, but thought I'd go more basic. I should've gone
with a real-world example.

But yes, O(n^2) vs. O(n) can matter for large n.

~~~
mercer
I don't disagree though. Even though in much of my work performance is usually
not a concern, I still run into plenty of situations where it _really_ is. But
I run into more situations where I probably spend more time optimizing when I
don't need to. YMMV, obviously.

~~~
stcredzero
I'm forced to optimize unnecessarily in code reviews at work. Gotta save that
extra microsecond when the user pushes that button!

~~~
mercer
Ha, I spend a decent amount of time trying to explain to clients how it's
worth their while to decrease a page's load time from 20+ seconds to < 5 in
just a few hours of work...

~~~
stcredzero
I'd feel much more job satisfaction from that.

~~~
mercer
it's web development and it pays well enough...

------
makecheck
If you have only a final “else” instead of a bare “return” in a function that
returns a value, you are making it _worse_. Now when I read the code, my first
instinct is “seems to be a bug, undefined behavior” and I have to read more
carefully.

This is a terrible example and a code change I would never approve. The
clarity-over-cleverness goal is good but not with these kinds of cases.
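For concreteness, the two shapes under discussion might look like this in Go
(a sketch, not the article's actual code):

```go
package main

import "fmt"

// With a final else: every branch visibly returns a value and
// nothing follows the chain.
func signElse(n int) string {
	if n < 0 {
		return "negative"
	} else if n > 0 {
		return "positive"
	} else {
		return "zero"
	}
}

// With early returns and a bare trailing return.
func signReturn(n int) string {
	if n < 0 {
		return "negative"
	}
	if n > 0 {
		return "positive"
	}
	return "zero"
}

func main() {
	fmt.Println(signElse(-3), signReturn(7), signElse(0)) // negative positive zero
}
```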

------
peter_vukovic
Creating anything is a craft, and creating software is no different. While
everyone should strive to learn how to write programs well so the intent
isn’t obfuscated, it ultimately boils down to two factors: the programmer’s
experience and talent. Most programs, like most works of art, will be utter
crap and nonsense - with rare notable exceptions. This is why I heavily
support frameworks and a prescriptive style of programming, or “opinionated”
systems as some would call them. They are usually invented by people much
smarter than the average Joe and ultimately generate better long-term
results. It would benefit our productivity much more if we invested effort
into translating these brilliant minds’ insights into compiler features, so
the compiler checks for style as well, not just “spelling”. We need Grammarly
for code.

------
apta
> If software cannot be maintained, then it will be rewritten; and that could
> be the last time your company invests in Go.

Maybe he's on to something there: the golang code bases I've seen are a
complete mess because of how "simple" the language is. Hopefully more people
start realizing this and move on to better languages.

~~~
zzzcpan
In my experience Golang code bases are messy because of two things: OO and
CSP. People all too often resort to goroutines, channels, objects and
interfaces where a simple side-effect free function could do.
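A made-up miniature of that pattern: a goroutine and channel wrapped around a
computation that a plain side-effect-free function handles just as well.

```go
package main

import "fmt"

// sumCSP: a goroutine and a channel for something that needs neither.
func sumCSP(xs []int) int {
	ch := make(chan int)
	go func() {
		total := 0
		for _, x := range xs {
			total += x
		}
		ch <- total
	}()
	return <-ch
}

// sum: the simple side-effect-free function that could do.
func sum(xs []int) int {
	total := 0
	for _, x := range xs {
		total += x
	}
	return total
}

func main() {
	xs := []int{1, 2, 3}
	fmt.Println(sumCSP(xs), sum(xs)) // 6 6
}
```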

~~~
apta
It's ironic because the golang authors say that golang is not "OO".

Of course, not that there's anything wrong with properly done OO, it's just
that golang's implementation of it is botched.

------
smacktoward
This put me in mind of a passage from the (excellent) 1982 book _Inside the
Soviet Army_, by the Soviet defector Vladimir Rezun (which can be read in
English in its entirety here:
[http://militera.lib.ru/research/suvorov12/index.html](http://militera.lib.ru/research/suvorov12/index.html)),
which explained why ammunition for Soviet weapons hadn't been standardized
across a common set of calibers:

 _The calibre of the standard Soviet infantry weapon is 7.62mm. In 1930, a
7.62mm 'TT' pistol was brought into service, in addition to the existing
rifles and machine-guns of this calibre. Although their calibre is the same,
the rounds for this pistol cannot, of course, be used in either rifles or
machine-guns._

 _In wartime, when everything is collapsing, when whole Armies and Groups of
Armies find themselves encircled, when Guderian and his tank Army are charging
around behind your own lines, when one division is fighting to the death for a
small patch of ground, and others are taking to their heels at the first shot,
when deafened switchboard operators, who have not slept for several nights,
have to shout someone else's incomprehensible orders into telephones - in this
sort of situation absolutely anything can happen. Imagine that, at a moment
such as this, a division receives ten truckloads of 7.62mm cartridges.
Suddenly, to his horror, the commander realises that the consignment consists
entirely of pistol ammunition. There is nothing for his division's thousands
of rifles and machine-guns and a quite unbelievable amount of ammunition for
the few hundred pistols with which his officers are armed._

 _I do not know whether such a situation actually arose during the war, but
once it was over the 'TT' pistol - though not at all a bad weapon - was quickly
withdrawn from service. The designers were told to produce a pistol with a
different calibre. Since then Soviet pistols have all been of 9mm calibre. Why
standardise calibres if this could result in fatally dangerous
misunderstanding?_

 _Ever since then, each time an entirely new type of projectile has been
introduced, it has been given a new calibre..._

 _[West Germany and France] have excellent 120mm mortars and both are working
on the development of new 120mm tank guns... [W]hat happens if, tomorrow,
middle-aged reservists and students from drama academies have to be mobilised
to defend freedom? What then? Every time 120mm shells are needed, one will
have to explain that you don't need the type which are used by recoilless
guns or those which are fired by mortars, but shells for tank guns. But be
careful-there are 120mm shells for rifled tank guns and different 120mm shells
for smoothbore tank guns. The guns are different and their shells are
different. What happens if a drama student makes a mistake?_

 _The Soviet analysts sit and scratch their heads as they try to understand
why it is that Western calibres never alter._

(This specific chapter can be read here:
[http://militera.lib.ru/research/suvorov12/06.html](http://militera.lib.ru/research/suvorov12/06.html))

------
DiseasedBadger
If it's both clear and conventional, gcc should be able to make it clever. If
it can't, I'd rather pay a human to profile it than to make it clever.

There are too many developers who are "super good at writing clever code", and
not enough who are comfortable with profiling and analytics.

------
ksaj
Nice. I said exactly this for the same reason yesterday. In that case it was a
thread about Lisp. Here's the comment:

> Even the open source Common Lisp compilers, written by arguably the lispiest
> of Lispers, don’t have a lot of “cleverness”.

To which I replied my agreement:

> Don't write clever. Write clear.

It's a sentiment that you find attached to Lisp programming style fairly
often, although ironically there is a whole lot of barely readable Lisp code
out there.

Personally, I think the code is (nearly) worthless crap if someone with skill
has to spend as much time parsing it as the writer did writing it.

[https://news.ycombinator.com/item?id=20376344](https://news.ycombinator.com/item?id=20376344)

------
ufo
That switch-case transformation at the end is just syntactic sugar for the
equivalent if-else-if chain, so I am not sure it is actually an improvement.

That said, I still agree with the basic idea. An if-else-if chain is easier to
reason about than if-return, and representing information with enum or
algebraic data type can be more robust than using a combination of booleans.
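A small sketch of that last point (the state type here is invented for
illustration): modeling a state as one enum-like type instead of a
combination of booleans keeps impossible combinations, such as "done and
running", unrepresentable.

```go
package main

import "fmt"

// state is an enum-like type; a value is exactly one of these,
// whereas two booleans (done, running) would admit done && running.
type state int

const (
	pending state = iota
	running
	done
)

func describe(s state) string {
	switch s {
	case pending:
		return "pending"
	case running:
		return "running"
	default:
		return "done"
	}
}

func main() {
	fmt.Println(describe(running)) // running
}
```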

~~~
mdpopescu
The whole point of syntactic sugar is that it makes a code fragment easier to
read / understand. That's why we're not calling it syntactic poison.

------
SolaceQuantum
As a fiction writer, I find this a strange read. This lesson is learned quite
early in prose writing, precisely because written works must be read. Maybe
because of this, I've never felt the need to write 'cleverly' in my
programming either. Can other prose writers corroborate?

~~~
steveklabnik
I believe that reducing writing, including programming, to this is
reductive.

At its base level, this is missing an important thing: context. Clear _to
who_? Something can be very clear to one person, yet opaque to another. In
writing, and in programming, you need to decide which audience that you're
talking to, and write something that they will understand.

This often comes up in discussions about jargon. Jargon is a way to increase
the density of communication. This is often perceived as a loss of clarity,
but the question again is, clarity for who? For two experts, discussing
complex things in their field of expertise, jargon can _increase_ clarity, by
referring to shared context. Higher bandwidth communication allows for more
discussion of more complex topics, because you're not wasting time and mental
energy re-explaining things from first principles.

Put another way, there is _always_ some shared context going on; that's what
language actually is in the first place. I have used a number of words in
writing this comment, but I haven't set out any definitions; that's because
I'm assuming that you know English in order to read my comment. If I were
trying to communicate to a child, I wouldn't be using all of the words that
I'm using here, because it is too complicated for them to comprehend. But,
trying to explain the topic of this comment to that child would take _much_
longer, and be _much_ more difficult.

So yeah, that's just _one_ way in which discussions like these tend to
frustrate me. Writing is a rich, wonderful thing, that has a huge variety of
uses. Pigeon-holing it in this way makes me feel, well, dispirited. Or should
I say "sad"...

(I do believe that, for both commercial development of software, as well as
commercial development of writing, "keeping things simple" can be important,
for various reasons. But not everything we do in life must be in the service
of business needs.)

~~~
audience_mem
> _This often comes up in discussions about jargon. Jargon is a way to
> increase the density of communication. This is often perceived as a loss of
> clarity, but the question again is, clarity for who? For two experts,
> discussing complex things in their field of expertise, jargon can increase
> clarity, by referring to shared context. Higher bandwidth communication
> allows for more discussion of more complex topics, because you're not
> wasting time and mental energy re-explaining things from first principles._

> _Put another way, there is always some shared context going on; that's what
> language actually is in the first place. I have used a number of words in
> writing this comment, but I haven't set out any definitions; that's because
> I'm assuming that you know English in order to read my comment. If I were
> trying to communicate to a child, I wouldn't be using all of the words that
> I'm using here, because it is too complicated for them to comprehend. But,
> trying to explain the topic of this comment to that child would take much
> longer, and be much more difficult._

Thanks for this. I always struggle to articulate it.

------
hyperpallium
> The competent programmer is fully aware of the strictly limited size of his
> own skull; therefore he approaches the programming task in full humility,
> and among other things he avoids clever tricks like the plague.
> [https://wikiquote.org/wiki/Edsger_W._Dijkstra](https://wikiquote.org/wiki/Edsger_W._Dijkstra)

> Everyone knows that debugging is twice as hard as writing a program in the
> first place. So if you're as clever as you can be when you write it, how
> will you ever debug it?
> [https://wikiquote.org/wiki/Brian_Kernighan](https://wikiquote.org/wiki/Brian_Kernighan)

~~~
jodrellblank
Those quotes come up several times in this thread; but what about the
experience of finding that something feels "clever", and then, after a while
of using it, finding it no longer feels clever but instead feels normal?

> Every weightlifter is fully aware of the strictly limited size of his own
> muscles; therefore he approaches the weight lifting task in full humility
> and among other things he avoids heavy weights like the plague.

~~~
levythe
Your analogy assumes that the goal of code is to make your code more and more
clever over time, just as a weightlifter seeks to lift heavier and heavier.
The goal of code, however, is simply to communicate a process to a computer.
Or rather, when that process is subject to change over time, to communicate a
process to a computer, simply.

~~~
groovy2shoes
> the goal of code, however, is simply to communicate a process to a computer

While we're throwing fun quotes around:

"programs must be written for people to read, and only incidentally for
machines to execute"

— Abelson & Sussman

;)

~~~
jodrellblank
Then why aren't they written on paper, in English? Because that's how people
used to read things when the A&S quote was from, in 1979. And natural language
is still how people read things today, even if on screens. People don't code
as if code were primarily for people to read.

~~~
groovy2shoes
> Then why aren't they written on paper, in English?

On the other hand, why aren't programs all written in machine language, in hex
or octal? Why invent assembly language? Why invent macro assemblers? Why
invent high-level languages?

Programmers are not unique in this regard. Mathematicians and logicians do not
write all their dealings in English. They've developed a highly specialized
notation for writing compact and _precise_ descriptions of their ideas.

Furthermore, many layfolk might even say that the language of jurisprudence
isn't quite English, despite how it looks. The jargons of many fields, like
“legalese”, serve the same purpose as mathematical notation, which is itself
the same purpose as programming languages: to enable ease, brevity, exactness,
and precision in their respective domain-specific communications.

You can see a little of all that in the same preface by Abelson & Sussman,
which goes on to say:

 _These skills are by no means unique to computer programming._ … _We control
complexity by establishing new languages for describing a design, each of
which emphasizes particular aspects of the design and deemphasizes others._ ¶
_Underlying our approach to this subject is our conviction that “computer
science” is not a science and that its significance has little to do with
computers. The computer revolution is a revolution in the way we think and in
the way we express what we think._ … _Mathematics provides a framework for
dealing precisely with notions of “what is.” Computation provides a framework
for dealing precisely with notions of “how to.”_

> Because that's how people used to read things when the A&S quote was from,
> in 1979.

Clearly it's not how people always read things back then, as it's not how
people always read things now. People read programs, sometimes on screens,
sometimes on paper, just like they read mathematical formulas. In some
cases, programs have been written on paper in some formal language that hadn't
actually been implemented, simply because that language was seen as an
effective means to communicate them. We usually identify it as pseudocode,
ranging from “pidgin algol” to “plausibly python” to the M-expressions of the
early LISP manuals.

M-expressions are still used in the LISP 1.5 manual of late 1962, even
though, 2.5 years after the LISP 1 manual, the LISP system remained
incapable of reading M-expressions—the programmer had to translate them to
S-expressions by hand before entering them. Appendix B of the 1.5 manual
gives the code for the interpreter, along with some rationale:

 _This appendix is written in mixed M-expressions and English. Its purpose is
to describe as closely as possible the actual working of the interpreter and
PROG feature._

(It turns out to be possible to get an even closer description with a formal
notation for the semantics, as was done with the definition of Standard ML,
but such formalism has yet to catch on).

This emphasis on the importance of notation for the exact expression of
thoughts and the precise description of “ideal objects” is not particularly
new, and it certainly predates the invention of the computer:

… _I found the inadequacy of language to be an obstacle; no matter how
unwieldy the expressions I was ready to accept, I was less and less able, as
the relations became more and more complex, to attain the precision that my
purpose required. This deficiency led me to the idea of the present
ideography._ …

 _I believe that I can best make the relation of my ideography to ordinary
language clear if I compare it to that which the microscope has to the eye.
Because of the range of its possible uses and the versatility with which it
can adapt to the most diverse circumstances, the eye is far superior to the
microscope. Considered as an optical instrument, to be sure, it exhibits many
imperfections, which ordinarily remain unnoticed only on account of its
intimate connection with our mental life. But, as soon as scientific goals
demand great sharpness of resolution, the eye proves to be insufficient. The
microscope, on the other hand is perfectly suited to precisely such goals, but
that is just why it is useless for all others._ ¶ _This ideography, likewise,
is a device invented for certain scientific purposes, and one must not condemn
it because it is not suited to others._

(from the preface of «Begriffsschrift» by Gottlob Frege, 1879, translated by
Stefan Bauer-Mengelberg).

In 1882, Frege further explained: “My intention was not to represent an
abstract logic in formulas, but to express a content through written signs in
a more precise and clear way than it is possible to do through words.”

> People don't code as if code was primarily for people to read.

I agree. I am often guilty of this too, although I usually forget about it
until I try to read a program I'd written some time ago and discover that it
requires some careful study to figure it out.

It's a shame, really, because we _should_ be writing readable code. But after
I'd read this statement, I was thinking: how _do_ people code, then? And I was
reminded of this little bit from Paul Graham's essay “Being Popular”:

 _One thing hackers like is brevity. Hackers are lazy, in the same way that
mathematicians and modernist architects are lazy: they hate anything
extraneous. It would not be far from the truth to say that a hacker about to
write a program decides what language to use, at least subconsciously, based
on the total number of characters he'll have to type. If this isn't precisely
how hackers think, a language designer would do well to act as if it were._

 _It is a mistake to try to baby the user with long-winded expressions that
are meant to resemble English. Cobol is notorious for this flaw. A hacker
would consider being asked to write_ `add x to y giving z` _instead of_ `z =
x+y` _as something between an insult to his intelligence and a sin against
God._

;)

------
snarf21
I wonder, though: isn't it even more clever to be simple (which is also
clearer)?

~~~
papito
Being simple is _hard_.

~~~
mercer
[https://www.youtube.com/watch?v=34_L7t7fD_U](https://www.youtube.com/watch?v=34_L7t7fD_U)

------
makapuf
I often cite this quote: "Everyone knows that debugging is twice as hard as
writing a program in the first place. So if you're as clever as you can be
when you write it, how will you ever debug it?" (B. Kernighan)

------
k__
Often I only need clever for performance reasons.

I write a filter/map, which is slow in some languages but reads well, and if
I need better performance I rewrite it as a for loop.
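
As a sketch of that trade-off (in Python, with illustrative names that
aren't from the thread): the same step written first as a readable
filter/map pipeline, then as a single fused loop.

```python
# Readable version: a filter/map pipeline. In some languages this style
# allocates intermediate collections and runs slower than a plain loop.
def doubled_evens_clear(nums):
    evens = filter(lambda n: n % 2 == 0, nums)
    return list(map(lambda n: n * 2, evens))


# Hand-fused version: one explicit loop, no intermediate collections,
# at some cost to at-a-glance readability.
def doubled_evens_fast(nums):
    out = []
    for n in nums:
        if n % 2 == 0:
            out.append(n * 2)
    return out
```

Both return the same result; the point of the comment above is to start
with the readable form and only hand-fuse the loop once profiling says it
matters.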

------
bryanrasmussen
brevity is the soul of wit and the veil of clarity.

~~~
jodrellblank
But... verbosity is the veil of clarity?

> verbiage: excessively lengthy or technical speech or writing.

> synonyms: verboseness, padding, superfluity, redundancy, long-windedness,
> protractedness, digressiveness, convolution, circuitousness, rambling,
> meandering; waffling, wittering, " _there is plenty of irrelevant verbiage
> but no real information_ "

------
nightcracker
I think clever stuff should be behind clear APIs.
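
A minimal sketch of that idea (hypothetical function, not from the
article): the "clever" part, here Kernighan's well-known n & (n - 1) bit
trick, stays hidden behind a function whose name and contract are plain.

```python
def count_set_bits(n: int) -> int:
    """Return the number of 1 bits in the binary representation of n (n >= 0)."""
    count = 0
    while n:
        n &= n - 1  # clever bit: clears the lowest set bit each iteration
        count += 1
    return count
```

Callers only ever see the clear API; the trick is contained, documented,
and easy to swap out.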

------
mrkeen
Two men enter a bar. One is taller than the other. Which one is taller? -1.

~~~
mrkeen
I guess the downvoter didn't like me pointing out int is a shitty way to
answer the question "which is bigger?" in an article supposedly selling
clarity over cute tricks.
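
The complaint can be sketched like this (Python, illustrative names): a
C-style comparator answers "which is taller?" with a bare int, while a
small enum names the three outcomes explicitly.

```python
from enum import Enum


# The style being criticized: -1 / 0 / 1 says little at the call site.
def compare_height_int(a, b):
    return -1 if a < b else (1 if a > b else 0)


class Ordering(Enum):
    LESS = "less"
    EQUAL = "equal"
    GREATER = "greater"


# A clearer alternative: the result names what it means.
def compare_height(a, b):
    if a < b:
        return Ordering.LESS
    if a > b:
        return Ordering.GREATER
    return Ordering.EQUAL
```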

------
vemv
<subjective thing> is better than <subjective thing>.

Oh thanks, that was so useful.

~~~
dang
Please don't be snarky and please don't post shallow dismissals here.

[https://news.ycombinator.com/newsguidelines.html](https://news.ycombinator.com/newsguidelines.html)

