
You Don't Want to Think Like a Programmer - platz
http://prog21.dadgum.com/190.html
======
dlss
OP says that a masterful understanding of coding in general necessarily leads
to a disregard for features that users want.

I think this is a false dichotomy -- developing a masterful understanding of
what things cost must be coupled with a masterful understanding of what makes
things valuable.

If OP shared what caused him to think this, perhaps a good conversation
would happen. I could see myself writing something similar about CFOs after
encountering one at a past startup who kept turning the air conditioning off
to the point where coding in the office was a sweaty experience.

I think the correct title of the article is "You want to think like a
programmer and a customer" or "You don't want to think like a bad
programmer"

~~~
coldtea
> _I think this is a false dichotomy -- developing a masterful understanding
> of what things cost must be coupled with a masterful understanding of what
> makes things valuable._

It might be a false dichotomy in theory ("well, no reason why these two
concepts should not be orthogonal") but it sure is a real dichotomy for very
many developers.

People get lost in the details of coding, best practices, fancy new
languages, monads et al, while not producing anything of substance for real
users -- or they get lost in the process and never release anything.

There's even a term for these people: "architecture astronauts" (coined by
Joel Spolsky, IIRC). And the even older "worse is better" notion is also
related to the same issue.

~~~
ENGNR
There's an even worse kind of architecture astronaut, almost the opposite
of this article. It's the architects who don't even know about
encapsulation, loose coupling/tight cohesion, or even that you shouldn't
duplicate data anywhere without rules for resolving mismatches.

They've never written a line of code and yet somehow are dictating
spaghetti enterprise architecture. Their business requirements are things
like "must comply with all applicable standards" (isn't it your job to at
least work out which ones, or are you above that?) and they never stop to
think whether any of this will actually work.

I'll take a language hipster and a clueless business user who just wants it
done any day.

------
jeffdavis
It's really _hard_ to keep moving all the way up and down the stack. If you
spend a few weeks hacking on a few key low-level routines, that might enable
you to write the rest of the code in Python and use some library you really
want. In turn, that might enable you to release many more innovations faster
than you would otherwise, and create something awesome.

But it's hard to remember all of that when you're in the middle of writing
some C code so that it can be optimized with vectorized instructions when
targeting newer chips.
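
For illustration, the kind of routine I mean might look like this (a
made-up example, not from any real project) -- a simple counted loop over
restrict-qualified pointers that the compiler can auto-vectorize:

        #include <stddef.h>

        /* Hypothetical hot routine: sum of squared differences.
         * With no aliasing and a simple counted loop, GCC/Clang can
         * auto-vectorize this at -O3 -march=native (float reductions
         * may also want -ffast-math). */
        float sum_sq_diff(const float *restrict a,
                          const float *restrict b, size_t n)
        {
            float acc = 0.0f;
            for (size_t i = 0; i < n; i++) {
                float d = a[i] - b[i];
                acc += d * d;
            }
            return acc;
        }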

Hacking is great, but engineering can inspire and enable hacking. How many of
these cool startups would be around without gnu/linux/freebsd?

I think the trick is to go to a higher level once a day, or spend more time
at a higher level once a week. What is the value you will deliver to at
least one other person? Are you making real progress getting there, or are
you building an endless series of platforms on top of each other without
ever having a clear application in mind?

Even if you are working on a generic performance improvement you can still do
that. If 10,000 servers are running your software, and you improve performance
by 2.5%, then you can measure the value you delivered in terms of power saved
per year and it would be a real accomplishment. That will be motivational, and
it will also help you choose a better follow-up project to deliver even more.
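
To put made-up but plausible numbers on that (assuming ~300 W per server,
which is purely a guess):

        10,000 servers x 300 W x 2.5% = 75 kW saved
        75 kW x 8,760 h/yr            = 657,000 kWh/yr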

~~~
erichocean
_It's really hard to keep moving all the way up and down the stack._

The main advantage, to me, is being able to collapse layers of abstraction.
We've done that at one of the startups I CTO for.

On the backend, we eliminated the entire multi-tier stack we've come to know
and love from the late 90s. The database, application server, caching server,
and authentication server were collapsed down into a single executable.

We dropped TCP, and went to UDP + NaCl-based crypto. This in turn changed how
we did session management and logins (we only target mobile devices), and
allowed us to gain further performance by bypassing the Linux kernel entirely,
and talking directly to the Ethernet hardware. Our wire-to-wire latency for a
UDP packet (without any processing, just heading through the LuaJIT app
framework) is ~26 nanoseconds. We measure the entire request/response time in
less than 10,000 nanoseconds for updates, and frequently, less than 3000
nanoseconds for reads.

Nanoseconds! Today's hardware is crazy efficient, but yesterday's software
architectures waste it all.

For caching, we employ a database library (lmdb) that uses the file system and
memory mapping for all data, a lot like Varnish Cache does. We can service
reads without a single malloc() call. No more memcached.
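
A minimal sketch of what that read path looks like against lmdb's C API
(error handling elided; illustrative, not our literal code):

        #include <stdio.h>
        #include <string.h>
        #include <lmdb.h>

        /* Zero-copy read: mdb_get() hands back a pointer straight into
         * the memory-mapped file, so the read path does no malloc().
         * The pointer is only valid while the read transaction is open,
         * so use (or copy) the value before ending it. */
        void print_value(MDB_env *env, MDB_dbi dbi, const char *k)
        {
            MDB_txn *txn;
            MDB_val key = { strlen(k), (void *)k };
            MDB_val val;

            mdb_txn_begin(env, NULL, MDB_RDONLY, &txn);
            if (mdb_get(txn, dbi, &key, &val) == 0)  /* val.mv_data -> mapped page */
                fwrite(val.mv_data, 1, val.mv_size, stdout);
            mdb_txn_abort(txn);                      /* releases the read snapshot */
        }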

Finally, the entire system is built around an event streaming approach (see
Disruptor/LMAX). For that to work, we wrote our own cross-platform Core Data
alternative on the client, and for this particular application (a social
network), the end result is that the user experiences ZERO network latency for
everything but search.

Everything is local by the time the user is made aware of it. For updates
like making a post, we make it happen instantly on the client,
speculatively, and eventually the update makes its way to the server and
back, and then on to everyone else. The user doesn't even need a live
network connection to post, and "offline" usage works as expected.

And also (since this is a speciality of mine), our object graph syncing is
multi-device from day one. Lots of projects (e.g. Vesper) struggle with that
one. We didn't bolt it on, it's a fundamental property of our data model and
overall system.

After collapsing the crypto, database, and authentication, we were able to
introduce object capabilities that require zero memory lookups, instead of
the usual RBAC, which in my experience is difficult to implement efficiently
and hard for people to manage.

And because we have our own network protocol, and are targeting native clients
we control, we can easily do client-side load balancing. We use consistent
hashing to contact read-only servers (that also handle crypto), mostly so we
can spread out the bandwidth without having a crazy expensive load balancer.
Thanks to our custom database/app server, we can service all writes on a
single machine, although like LMAX, it's designed to run three in parallel,
discarding the results of two of them at any given time.

Point is, there are HUGE gains to be made when you (a) understand the whole
stack, (b) have the skills and understanding to rewrite any part as you need
to, and (c) have the guts to ignore 15 years of received wisdom on how to
scale an app and are willing to collapse layers down in pursuit of speed and
simplicity.

It may be harder, but the payoff is enormous. I did all of the above myself in
just over four months, including building a cross-platform app framework for
iOS (porting to Android in March), and the app itself.

~~~
pjungwir
> UDP + NaCl-based crypto.

I would love to hear more about this. I was looking for a way to send UDP that
is encrypted and authenticated, with private contents and no replay attacks.
The best option I could find is DTLS, but that has spotty support on Windows
and no API in common scripting languages. Can you say more about your
approach? If it is roll-your-own, what makes you confident it is secure?

~~~
erichocean
_I was looking for a way to send UDP that is encrypted and authenticated, with
private contents and no replay attacks._

I use crypto_box/unbox as is, although we do the
crypto_beforenm()/crypto_afternm() split, as (a) our clients are known, and
(b) it's ridiculously faster. The docs on NaCl mention it, sort of, but it's
much, _much_, faster. If you can do it, you should always do it.
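
A sketch of the split using libsodium's version of the NaCl API
(illustrative only, not our production code):

        #include <sodium.h>

        /* crypto_box_beforenm() does the expensive Curve25519 shared-key
         * derivation once per peer; crypto_box_easy_afternm() then does
         * only cheap symmetric work (XSalsa20-Poly1305) per message.
         * The ciphertext buffer must be mlen + crypto_box_MACBYTES. */
        static unsigned char shared[crypto_box_BEFORENMBYTES];

        void handshake(const unsigned char peer_pk[crypto_box_PUBLICKEYBYTES],
                       const unsigned char my_sk[crypto_box_SECRETKEYBYTES])
        {
            crypto_box_beforenm(shared, peer_pk, my_sk);  /* once per peer */
        }

        int seal(unsigned char *c, const unsigned char *m,
                 unsigned long long mlen,
                 const unsigned char nonce[crypto_box_NONCEBYTES])
        {
            /* per message: no scalar multiplication happens here */
            return crypto_box_easy_afternm(c, m, mlen, nonce, shared);
        }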

As for replay attacks, we use what I'll call "hash chaining" (if someone
knows the real name, I'd like to know). We have two kinds of messages,
ordered and unordered. Unordered messages can be replayed at any time, as
they are read-only or some other kind of status message, such as a
heartbeat.

For ordered messages, we hash the previous message and use it as input for
the next message when it is hashed. Both hashes are sent with each message
(we use SipHash-2-4, highly recommended BTW). (One analogy would be how git
works, where the previous commit's hash is hashed as part of the new
commit's hash.)

The chain of messages from the device to the server is called the "device
stream", and we have a second chain going back to the device, the "update
stream". The system supports multiple device streams for a single user (e.g.
an iPhone and an iPad), but there's only one update stream for all devices,
so they stay in sync.

Both ends of the connection test the hashes before applying, which trivially
prevents replay attacks, since every ordered message can only be "played"
once. We store the current device stream hash when the database is updated, in
the same transaction, on both the client and the server. This provides
resiliency in the face of crashes.
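
A rough sketch of the receiving side in C (siphash24() is a stand-in
declaration for a keyed 64-bit hash, and the fixed payload cap is just for
the sketch; our actual code differs):

        #include <stdint.h>
        #include <string.h>

        uint64_t siphash24(const void *data, size_t len, const uint8_t key[16]);

        /* Each ordered message carries its predecessor's hash plus its
         * own hash over (prev_hash || payload) -- like a git commit
         * naming its parent. */
        struct msg {
            uint64_t prev_hash;
            uint64_t hash;
            uint32_t len;
            uint8_t  payload[1024];
        };

        /* *chain_head is the hash of the last message applied, persisted
         * in the same transaction as the database update. */
        int accept_msg(const struct msg *m, uint64_t *chain_head,
                       const uint8_t key[16])
        {
            uint8_t buf[8 + sizeof m->payload];

            if (m->len > sizeof m->payload)
                return 0;                   /* malformed: drop */
            if (m->prev_hash != *chain_head)
                return 0;                   /* replay or out-of-order: drop */

            memcpy(buf, &m->prev_hash, 8);  /* hash covers prev_hash || payload */
            memcpy(buf + 8, m->payload, m->len);
            if (siphash24(buf, 8 + m->len, key) != m->hash)
                return 0;                   /* forged or corrupted: drop */

            *chain_head = m->hash;          /* advance the chain */
            return 1;                       /* safe to apply exactly once */
        }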

We also use the hash chain to detect out-of-order messages. In our
application, they're pretty rare, so I simply drop out-of-order messages and
request the correct one from the client.

This could be bad, because playing back old messages would elicit a
response. For that reason, and also because I don't want to store update
streams in the database permanently, the messages include their offset in
the stream. The in-memory, wire, and on-disk formats of each message are
identical, so we can lay them end-to-end on disk. When a message comes in,
we can trivially determine if it is old, and drop it that way, too. If we
need to write an update stream to disk, because the client hasn't connected
in a while and we want to free up space, it's conceptually a single
sendfile() + known offset call to send the update stream since they last
signed back on. That way, we don't need a separate index for streams on
disk.
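
Conceptually, that replay path is just this (Linux sendfile(2); a sketch
only -- real code would loop on short writes and handle errors):

        #include <sys/types.h>
        #include <sys/sendfile.h>

        /* Because messages share one layout in memory, on the wire, and
         * on disk, replaying an update stream from a known byte offset
         * is a single sendfile(2) from the stream file to the socket. */
        ssize_t replay_stream(int sock_fd, int stream_fd,
                              off_t last_acked_offset, size_t bytes_left)
        {
            return sendfile(sock_fd, stream_fd, &last_acked_offset, bytes_left);
        }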

Hope that helps!

~~~
pjungwir
Wow, thank you for such a detailed writeup! I'll need to digest this a bit and
see if it makes sense for our application.

------
einhverfr
I disagree with the author over software engineering vs problem domain issues.
It's not that the author is wrong. It's that he doesn't articulate the role of
the software engineering concerns. It isn't about _whether_ to obsess about
these. It is about _when_ to.

The ideal is to cycle. Focus on the problem domain. Get something working.
Come back, look at the software engineering domain. Obsess about that for a
while. Then move on to the next project. The cycle itself means you get to be
productive, get to focus on solving real problems, and get a chance to
evaluate your code style and experience, improving it along the way too.

I have maintained a 200k-line Perl program written by someone who never
stopped to ponder anything outside the problem domain. It was a total mess.
The program was basically a textbook in how not to program, particularly how
not to write Perl. Matt Trout said it was "written by the programmer that
Matt's Script Archive would have fired for writing bad code."

If you want your software to last, then software engineering has to be an
aspect of it. But the author is also right that you can't let this detract
from solving the problems at hand. Consequently you have to be able to really
focus on learning each side in cycles.

~~~
fuzzix
> I have maintained a 200k-line Perl program written by someone who never
> stopped to ponder anything outside the problem domain

Ditto, though I think I have a unique element to my story.

I spent a week deleting code I couldn't detect a use for. I nuked nearly
40,000 lines and could not discern a difference in functionality (though I
couldn't be certain, since I hadn't completed the test suite).

Writing a test suite for software with no spec and only others' vague
recollections of meetings to go on (Agile!) is fun - it basically ends up a
regression suite enforcing current broken behaviour.

Perl takes no part of the blame for this; some of the Rails code needs to
be seen to be believed -- it looked like an early PHP tutorial in places.

What was great was that the Perl was full of comments about how much easier
it would have been in Ruby/Rails -- because he never bothered to read the
docs on references.

~~~
einhverfr
> I spent a week deleting code I couldn't detect a use for. I nuked nearly
> 40,000 lines and could not discern a difference in functionality (though I
> couldn't be certain, since I hadn't completed the test suite).

I don't think we deleted 20k lines. Probably more like 5k.

However the kicker was when deleting something that seemed harmless enough
broke something. In that case, it was a _comment._ We're not talking about
smart comments here. We are talking about comments that are parsed and used by
the application at run-time for actual logic :-P.

Oh, and when we added test cases for rounding numbers.... the test cases
failed....

Regarding the role of language, I would much rather be maintaining bad Perl
code than bad PHP code.....

------
fsloth
I think the problem is not that you shouldn't think things through with
academic rigor.

The problem is that there is so much cargo cult junk in programming as a field
that it's impossible for a novice to discern the pseudo-engineering crap from
the truly beneficial concepts. Beyond the core algorithms and data structures
whose mastery should be kinda obligatory if you want to implement anything in
production, it is impossible for a novice to tell what he should ingrain in
his daily work and what he should not.

The advice given here is therefore very good. Get practical experience in
building things, so you identify the beneficial abstractions when you learn
of them, and have the courage to abandon those that do not provide value.

The downside is the "expert beginner" who learns one way to do a thing and
thinks it's well and good to do it that way for all eternity. Higher-level
languages are fantastic because they offer efficient and practical
abstractions out of the box for the "patterns" you would implement yourself
in a half-assed way in a lower-level language like C.

People infatuated with solutions in search of a problem are kinda bad from
a communication point of view. They are so plentiful that when one tries to
offer practical higher-level solutions to existing problems, the listener
often thinks this is motivated by love for the solution and not by love for
the problem.

------
fsloth
As a counterpoint, there are very good engineering reasons for standardized
code layout and enforcing const correctness.

They are there to optimize the use of the mental energy of the programmer.

Const correctness makes it much easier to read and reason about code
someone else has written. Or that I wrote myself, months ago.
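
A trivial C example of what const buys at a glance:

        #include <stddef.h>

        /* The signatures alone tell the reader (and the compiler) that
         * summarize() only inspects the samples, while normalize() is
         * allowed to rewrite them. */
        double summarize(const double *samples, size_t n)  /* read-only */
        {
            double sum = 0.0;
            for (size_t i = 0; i < n; i++)
                sum += samples[i];
            return n ? sum / n : 0.0;
        }

        void normalize(double *samples, size_t n)          /* mutates */
        {
            double mean = summarize(samples, n);
            for (size_t i = 0; i < n; i++)
                samples[i] -= mean;
        }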

Standardized code layout should be strict enough that you can identify scope
at a glance when browsing through a 10 MLOC engineering CAD application and
hunting for that one bug or strange call path.

But - these are necessary features for _production_ code. For _prototyping_
code you should optimize for maximal mutation rate and not necessarily
legibility. Although the code should not be garbage (if you can't read it,
it's kinda worthless), strict conventions often hinder fast iteration, and
you should really figure out which ones are necessary constraints for the
problem at hand and which are not.

------
connor
I'd take this further and add that you should always be doing something
wrong in the pursuit of more than just code. As a novice programmer you can
spin your wheels learning every best practice without building much of
anything. And HN often compounds this problem - look at every 'You're doing
it wrong ...' post. Instead, I think you should make something crappy and
make the rookie mistakes. Over time, simply look to improve bit by bit.

I imagine someone will protest this idea, saying that there's already
enough crappy software out there, that the world doesn't need any more
technical debt. But this fails to recognize that technical debt is a good
thing in some ways - it's a by-product of learning and crashing through
boundaries.

In a way, technical debt is analogous to carbon emissions - yes, by itself
it's a bad thing. But in the larger scope, carbon emissions are a by-product
of industrialization, a process which has vastly improved our quality of
life. Technical debt, like carbon emissions, is a sign of progress.

~~~
bad_user
I once worked on a project that pivoted the business logic so many times,
with one hack after another, that technical debt went through the roof - at
which point it became next to impossible to evolve it further. Because of
legacy and tight coupling, simple features would take longer and longer to
implement. And because the architecture was completely based on assumptions
of low traffic that would grow organically (it was a web service), the shit
really hit the fan when we started getting tens of thousands of requests per
second; the architecture was so bad that it seemed as if any optimization we
did was worthless and actually made matters worse. The web service actually
had to be stopped for a while, because as a startup we couldn't afford to
rent a thousand frontend servers, and even if we did, you're only moving the
problem up the chain - which in our case was the data storage. Again,
because of tight coupling, it was unfeasible to move it to something that
scales horizontally.

So the solution was to basically rebuild this web service from scratch. This
decision also came with business logic simplifications - as we realized that
certain features were unfeasible at that scale, so many features ended up
being dropped based on technical considerations. Which IMHO was OK, because
the best way to approach problem solving is to build _less_ and sometimes we
end up building features instead of solving the problem that we set out to
solve in the first place.

I wish I could say that our initial approach allowed us to iterate fast.
But it didn't. I was there from the start, I've built the prototypes, I did
the pivots, I made all the wrong choices, I managed the team that came after
and I could've made all the right choices without sacrificing fast
iterations. All it takes is experience and tools that do "the right thing"
with regard to your problem, without sacrificing productivity - being able
to recognize the best tool for the job, being able to simplify problems,
being able to tell the _cost_ of something - now those are traits of good
software developers.

The article does have a point though - you can learn all day about tools, best
practices, algorithms, paradigms or whatnot, but in the end the goal isn't and
shouldn't be to write "beautiful code", but rather to produce artifacts for
solving problems that people have.

~~~
einhverfr
> I wish I could say that our initial approach allowed us to iterate fast. But
> it didn't. I was there from the start, I've built the prototypes, I did the
> pivots, I made all the wrong choices, I managed the team that came after and
> I could've made all the right choices without sacrificing fast iterations.

As long as you learned a lot from it.... We all make architectural errors. One
can't learn without making mistakes though.

------
singingfish
> You come to understand why every time a useful program is written in Perl or
> PHP

Mate, I agree PHP sucks (aside from the legions of masochists who do useful
stuff with it), but a lot of Perl (when used in a disciplined manner, in
communication with one's peers) is a useful and productive orthogonal take
on computer science.

------
ycmike
As someone leaving the beginner stages and moving to intermediate
development, I found this to be very true, and I wish I had done a lot fewer
tutorials and just started to hack stuff together. I learned far more doing
something crappy from scratch than getting 100% through some "X" clone!

------
kh_hk
I think the article stretches the "think like a programmer" motto too far
by including design patterns and TDD in there. You will have trouble
learning if you do not think like a programmer, as in basic CS 101.

Thinking like a programmer is really close to mathematical thinking, just
the basic stuff: logic, invariants, and program correctness, on paper, and
using pseudo-code. After a year one will be ready to throw that stuff out
the window, not because it is not useful, but because it is already engraved
in your brain, and somehow, even if you do not use assertions at the start
of your functions or do not write an invariant down for each loop, it will
be there and will make your code cleaner and free of copy-pasted code.

~~~
GarvielLoken
I guess you like math a lot, am I correct? Programming is in some ways like
math. Programming is in other ways more about organisation, expression, and
language skills.

But the real truth is that people are different. Some people are mechanics
in the head: they like to optimize small functions and break down systems,
and loved the assembly/C era. Some people see the world as states and
attribute facts to static objects; these people love OO, an entire paradigm
that conforms to their thought process! Some people are rigorous about how
the world moves; they like Haskell and logic programming and always bring
these aspects to their code base.

Some people are dynamic and see the world as a continuous flow of time (not
pinning attributes to things, since a thing can change along its timeline,
thus bringing focus to the core of things and their purpose); they like
languages like Lisp, where transformation of data is the center, not classes
with fixed static attributes, which is contrary to their real-life thought
processes. These are the people who develop non-deterministic programming,
because the focus is on time, not space. They like macros and other
efficient expressive tools because they allow you to express your solution
in a better language and automate transformations.

tl;dr / Summary: People think differently, and programming is merely a
means to express yourself; ergo your view of programming will be a view of
how you like to think and function, and to say that your own definition is
the universal truth is arrogant, narrow-minded, and wrong.

[http://wikisocion.org/en/index.php?title=Statics_and_dynamic...](http://wikisocion.org/en/index.php?title=Statics_and_dynamics)

~~~
kh_hk
Maybe I am wrong, and I am confusing programming with programming for
maths. Then again, at the low level it's really easy to find maths
everywhere, while at the high level one just interacts with development
libraries.

As a note, maths is not a field I am really strong in, and I had a hard
time with it at university. But even when it was hard, it helped wrap my
brain around programming and provided me with elemental tools. Yes,
programming might be drawing pixels in a grid, or could be just throwing
words at the computer until the output is what you want.

I do not consider myself a proficient programmer, and yet I pity the
programmers that look confused at a chunk of code in Eclipse and just add
whatever a set of libraries provides to get something to work, which often
ends up with larger chunks of code that basically do nothing, but are there
just because (with a special guest: more chunks of commented-out code that
are waiting for someone to delete them). I experienced this feeling the
first time I programmed something for Android. And no, in this case there's
no 'some people might have a different view on the issue'. There are people
who code that have no clue of what is happening, and that is saddening (and,
I guess, stressful to them).

As an exercise in what I mean by basic tools, let's take the simplest
example I can think of: define a function that returns the factorial of a
number. We know the answer already, but instead of jumping to it, let's
think first, and generate the code later [∗]

What defines a factorial?

    
    
        n! = 1 * 2 * 3 * ... * (n-1) * n
    

Any particular case?

    
    
        0! = 1
    

Let's pick a strategy for our loop:

    
    
        - At each iteration, we will maintain that f = i!
    
        - For all iterations, i ≤ n
    

The program finishes when i = n

Ok, now code

    
    
        // Pre: n ≥ 0
        // Post: returns n!
        int factorial(int n) {
            int i = 0;
            int f = 1;
            // inv: f = i! and 0 ≤ i ≤ n
            while (i < n) {
                i = i + 1;
                f = f * i;
            }
            return f;
        }
    

[∗] = And that's, for me, the main reason this works. This is a silly
example; nobody would think of invariants when iterating through a set of
records from a database to calculate the median age of your employees, but
at least it has trained you to think before cranking out code. And this is
just the first step on the path.

It is not "the universal truth of programming", and there are many many
different "states of mind" that one will need to learn in order to do
something (OpenGL and state machines, SQL and declarative languages, etc). But
before learning something (unless you are really talented, I guess), one must
learn to learn.

~~~
GarvielLoken
I am having a hard time seeing how one could have gotten that impression if
one has worked in ecommerce or business software; I promise you that a major
ecommerce site is not just interacting with development libraries.

I really can't see the point of your third paragraph; what are you arguing
for? That the people who threw code around were people who didn't share your
view on programming? Or is the paragraph to be interpreted as saying that
anyone who tries to use leverage in code is a fool? And how are
inexperienced, untrained programmers' skills really relevant to different
views on programming?

Let me give an exercise in return. Add to the code base a module that
allows the skin of the site to be changed depending on the user's device, in
a way that makes it easy to add skins, plus the capability to randomly give
users different skins. How much math do you really need beyond
Math.rand(0,100)?

This instead is a task that is solved by being able to formulate an
expression that is general enough, can communicate with the existing system,
and fits the existing organization without breaking the big structure. No
computation is really taking place (in the sense that we want to get some
mathematical result from different values); it's only flag checking at the
bottom line.

You learn to learn when you grow up. The things you mention are different
ways to look at a problem and thus different ways to express solutions.

~~~
kh_hk
I do not think our views on the issue contradict at all.

Maths may not relate on how you structure code, but (as I see it) an early
start with a little dose of maths on understanding algorithms, may make your
brain understand the elementary stuff easily, and make you think like a
programmer. Are there any other introductory ways to get there? Sure, why not.

My third paragraph was referring to what I have seen in people that do not
think as programmers. Not as a matter of how they got there, but of how they
never got there.

------
arocks
I agree and disagree here. Disagree because you need to master the rules
before you break them. This is surprisingly true for most skills, say
cooking, novel writing, or sketching. Though you might think that it is best
to be completely creative and unconstrained, there are written or unwritten
rules in every artistic endeavour which are inherent in the medium.

I agree with him that working on a project is often the best way to learn a
new skill. But for heaven's sake, heed what the experienced folks say and
deliberate on whether to follow them or not.

~~~
coldtea
> _I agree and disagree here. Disagree because you need to master the rules
> before you break them._

This is a truism, but I don't think it's all that applicable to everything.

There are lots of musicians who paved their own way, breaking the rules,
without first having "mastered" the previous ruleset. Same for painters (there
are pioneer modern painters who could not paint anything Renaissance style for
example).

And, yes, there are programmers who created successful software, even
multi-million businesses, without really knowing the whole craft of software
engineering.

Isn't, for example, Derek Sivers (who frequents HN from time to time) a
classic example of "fake it till you make it"?

Or even this Zuckerberg guy. I don't think Facebook was the result of a
master software craftsman.

~~~
girvo
Some early FB code got leaked a while back... Master software craftsman,
Zuckerberg was not.

------
georgemcbay
I agree with the basic idea but would advise most people who are in it for the
long haul to still go through the process of learning to "think like a
programmer", just always be prepared to reach a point where you get good
enough that you can throw some of what you learned out the window some of the
time, when it benefits whatever it is you are actually trying to accomplish.

While some people see that middle bit, where you teeter on the precipice of
becoming a full-blown architecture astronaut, as wasted time or effort, I
see it as a journeyman phase that most people need to go through to become
true masters.

Sort of like photographers should fully understand the rule of thirds and
other classic rules of composition because really understanding them gives
them better insight into when and why they are better off breaking them.

~~~
einhverfr
I think the larger issue is that it is hard to learn how to use rules
effectively if you haven't learned hard lessons by breaking them.

Part of the problem is that human language is by nature imprecise as is human
knowledge. The rules have very fuzzy edges and people need to be introspective
about these sometimes. But you can't be introspective about the rules if you
aren't pushing the limits. Otherwise you are in safe territory and there's
nothing to be introspective about.

------
dromidas
While the article does make a good point... I find it happens very rarely.
Or if it does, it's almost never vocalized.

It's a little like saying don't think like an English speaker or you're
going to pedantically correct everything. Sure it happens, but it's not a
lot of people that do it, and among those it's 1 in a million that actually
do it to the point where it gets in their own way.

------
mimog
If you make your living as a programmer you most certainly want to be thinking
like one. Those patterns, best-practices, design principles, preference for
static typing etc. all matter when you are paid to develop systems that
perform well, are easy to maintain, have few bugs and are delivered on time.
If however you are not burdened by such obligations, by all means, go full
cowboy.

------
jaimebuelta
Don't we tend to discuss a lot about semantics and words?

"Don't think like a programmer, think like a developer"

"Learn to code, not to program"

In most cases we have words that mean basically the same thing, and we
almost randomly say, more or less, that X means "a good Y".

~~~
jaredklewis
Agreed! I liked the OP, even though it was quite guilty of this. But what is
it about programmers (hackers/coders/developers/whatever it's called in your
idiolect) that makes us so pointlessly passionate about diction?

------
pjmlp
Fully agree; even though I bash C and Go design decisions regularly, I
wouldn't mind using them if that is what customers require me to do.

What matters at the end of the day is delivering working solutions to a
problem, not how beautiful such solutions are.

------
bomatson
Understanding the software of your domain will allow you to be more
informed on how to improve it. A mastery of MIDI to make better and new
musical instruments...

The foundation for these abilities comes from habits that the OP encourages
you to avoid.

~~~
coldtea
> _Understanding the software of your domain will allow you to be more
> informed on how to improve it._

Or it could result in you getting lost in its minutiae and losing the big
picture.

After a while you might not even care to improve it at all, just end happily
exploring niche advanced areas.

------
dutchbrit
Related: I once had a girl in my bed and I'd done so much programming that day
that I started to think of her in code whilst half asleep... Didn't make any
sense at all...

------
tick113
It sounds like a strong parallel with people who like to talk about doing
great things, and people who actually do things.

------
benburton
> Take time to line up your equal signs so things are in nice, neat columns.

Anyone else have a gag reflex when they see code like this?

~~~
delinka
If I've got an enum where all values require assignment, lining up the = makes
it easier for my human eyes to scan names and values later.
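
For example (values made up):

        /* With every value assigned, aligned '=' puts names and values
         * in their own columns for scanning. */
        enum http_status {
            HTTP_OK                    = 200,
            HTTP_MOVED_PERMANENTLY     = 301,
            HTTP_NOT_FOUND             = 404,
            HTTP_INTERNAL_SERVER_ERROR = 500
        };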

~~~
benburton
I don't buy this argument at all. Let's start with what might be a fairly
typical set of variable declarations in Javascript:

    
    
        var firstVar = "one";
        var secondVar = "two";
        var thirdVar = "three";
    

I think you'd be hard-pressed to find anyone that would say this is
significantly more difficult to read than the lined-up alternative:

    
    
        var firstVar  = "one";
        var secondVar = "two";
        var thirdVar  = "three";
    

But, of course, variable names aren't always going to be short. You might have
something like this instead:

    
    
        var firstVar = "one";
        var secondVar = "two";
        var thirdVar = "three";
        var thisVarHasAReallyLongName = "four";
        var thisIsAnotherVarWithALongerName = "five";
    

My eyes don't have any problem scanning the variable names to see what's what.
I also find it very natural to scan the variables by-value to determine which
value is assigned to which variable (which I believe is typically the argument
for the lined-up-equals alternative). Lined up, this looks like the following:

    
    
        var firstVar                        = "one";
        var secondVar                       = "two";
        var thirdVar                        = "three";
        var thisVarHasAReallyLongName       = "four";
        var thisIsAnotherVarWithALongerName = "five";
    

I don't know about you, but I find it significantly more difficult to visually
parse these by-value to determine the variable corresponding to values "one",
"two" and "three". The whitespace gap means that visually I may forget the
vertical alignment of what I was looking at on the right when I try to find
the variable name on the left.

I also think that it's at least a little bit harder to consistently enforce a
convention of this kind. If a variable name gets too long, and you're not
using some sort of auto-formatting, you have to manually space-out the
variables to a new name length every time you introduce a newer, longer-named
variable.

I think the real impulse people have for this kind of convention is that it
actually just looks a little bit neater, because things are aligned in
columns. In reality I think it's much more difficult to parse and maintain.

~~~
prakashk
Your last example could be aligned a little bit differently to make it easier
to visually parse:

    
    
        var                        firstVar = "one";
        var                       secondVar = "two";
        var                        thirdVar = "three";
        var       thisVarHasAReallyLongName = "four";
        var thisIsAnotherVarWithALongerName = "five";

~~~
vinceguidry
My solution:

    
    
        var firstVar  = "one";
        var secondVar = "two";
        var thirdVar  = "three";
    
        var thisVarHasAReallyLongName       = "four";
        var thisIsAnotherVarWithALongerName = "five";
    

The different length of the variable names should be derived from the fact
that the variables themselves are different. So they should be grouped
separately.

------
Fasebook
I would love to agree with this, and I do in principle, but the reality of IT
technology today means we can't make such assumptions in our abstractions
without knowing the implementation details to extreme degrees.

