What Makes Code Hard to Understand? (arxiv.org)
97 points by tchalla on April 25, 2013 | 88 comments



We need to have more research like this. Programmers spend a lot of time arguing about features, syntax, paradigms, methodology, etc., but no one really has any scientific proof. I laud the researchers who are coming up with new paradigms and extending current ones, but we really need researchers following behind and finding out what's beneficial and what's not in terms of human factors.


I agree, and I was astounded to find out, while I was at MSFT, that they had never done an end-to-end study of how people use Visual Studio.

It surprises me how little we really know about the human aspects of our own field. The practical/pragmatic side of understanding how we really program, what's hard, where we stumble, what we do easily, etc. is something that we're trying to address with Light Table. LT happens to be the perfect playground to do this sort of experimentation, so at the very least we'll be making it easier for researchers already in the space. But as we get further along and have a few more hands to spare, we'll start picking off research questions ourselves.

The more the merrier, right? :)


> I agree, and I was astounded to find out, while I was at MSFT, that they had never done an end-to-end study of how people use Visual Studio.

As someone who uses Visual Studio every day, I am not surprised by this at all. Don't get me wrong, Visual Studio is one of the best IDEs out there, but sometimes it's in spite of itself.

Like removing functionality from the Test Explorer in VS 2012, which they have finally been patching back in. Dumb, dumb, dumb. If they HAD done end-to-end user testing, they never would've removed the functionality in the first place.


FWIW, while I was there I ran the first end-to-end usability study. It was... enlightening :)


Wow, what did you learn?


There is tons of research like this. There are multiple conferences dedicated to research like this (e.g.: http://www.program-comprehension.org/ ). You just need to go out and find it instead of waiting for it to appear on Hacker News.


It's cool to see this research, but it really needs to become more accessible and widely applied. I'll start digging in, thanks for the pointer.


>We need to have more research like this.

There is a lot of research done in this and other areas of Software Engineering. Problem is that it's usually locked away behind a huge paywall. If you're not affiliated with some university, you essentially have no access to it.


Another problem is that the research hasn't been collected into a manager-friendly tome like the Gang of Four book. The software industry is overflowing with snake oil, and we need to replace it with science if we want to see this turn into a real engineering discipline. I'm tired of bikeshedding and holy wars -- show me the science!


If it involves studying human behaviors, it's probably not going to be science. I would prefer the holy wars over some manager telling me it's been "proven" that one side is correct.


Good point. The thought of a manager saying that sent a chill down my spine.


There's "Making Software" by Oram and Wilson: http://shop.oreilly.com/product/9780596808303.do


> Problem (as with all academic papers) is that, it's usually locked away behind a huge paywall.

This may be true for research in other domains, but as a researcher in Computer Science I can assure you that the number of academic papers locked behind a paywall is negligible in CS. Only once did I find a paper that wasn't available, and I promptly received a copy by e-mailing the authors.


My experience is the same. I work in NLP, and I've found that at least everything administered by ACL (probably our largest professional organization) is pretty much always available.


I love that about ACL. Not only is open-access good and nice and all that, but it also means I don't have to screw around with VPNs and such when I need to get an article while I'm at home.


The only thing I do not like about ACL is that it is not indexed in DBLP. I have a paper in one of the ACL-affiliated conferences and it does not show up. I have since shifted away from core NLP, so it doesn't really matter, but it would be nice to be consistent with other indexing mechanisms.


Almost everything I need I find in CiteSeer or the ACM Digital Library (which does have a very small paywall).


ACM charges $200/year or $15/paper. A paywall is a paywall.


Out of curiosity - which area do you work in? Can you also post a couple of examples (from your area) of academic research papers not available for free download?


I've been out in the wilds of industry for a while now, so it's been a while since I needed to pay for something professionally... back in the halcyon days of academia, I worked in computational medicine and had (still have) an interest in linguistics/neuroecon/neurosci, and I still run into walls there occasionally. I can't cite specific examples for you; if I can't find something, somebody I know can get me a copy.


> I worked in computational medicine and had (still have) an interest in linguistics/neuroecon/neurosci, and I still run into walls there occasionally.

Like I said earlier, areas like computational medicine, biology, and such run into paywalls. CS research areas like theory, data mining, NLP, etc. will rarely run into such problems.


Thanks! We'll be following up soon with an analysis of our eye-tracking data. I'm getting permission to post the data, so I hope to have the complete data set and videos up when the paper comes out.


Research is great except when its results are misinterpreted, its tests are poorly designed, etc.

I could easily redesign the horizontal whitespace test to show that horizontal whitespace matters.

Compare: spot the error

    class FruitCollection 
    {
    pubic:
      FruitCollection()
          : num_apple(0),
            num_apricot(0),
            num_avocado(0),
            num_banana(0),
            num_breadfruit(0),
            num_bilberry(0),
            num_blackberry(0),
            num_blackcurrant(0),
            num_blueberry(0),
            num_currant(0),
            num_cherry(0),
            num_cherimoya(0),
            num_clementine(0),
            num_cloudberry(0),
            num_coconut(0),
            num_damson(0),
            num_date(0),
            num_dragonfruit(0),
            num_durian(0),
            num_elderberry(0),
            num_feijoa(0),
            num_fig(0),
            num_gooseberry(0),
            num_grape(0),
            num_grapefruit(0),
            num_guava(0),
            num_huckleberry(0),
            num_honeydew(0)
            num_jackfruit(0),
            num_jettamelon(0),
            num_jambul(0),
            num_jujube(0),
            num_kiwifruit(0),
            num_kumquat(0),
            num_legume(0),
            num_lemon(0),
            num_lime(0),
            num_loquat(0),
            num_lychee(0),
            num_mandarine(0),
            num_mango(0),
            num_melon(0) 
      {
      };
    };
Now spot the error here

    class FruitCollection 
    {
    public:
      FruitCollection()
          : num_apple(0)
          , num_apricot(0)
          , num_avocado(0)
          , num_banana(0)
          , num_breadfruit(0)
          , num_bilberry(0)
          , num_blackberry(0)
          , num_blackcurrant(0)
          , num_blueberry(0)
          , num_currant(0)
          , num_cherry(0)
          , num_cherimoya(0)
          , num_clementine(0)
          , num_cloudberry(0)
          , num_coconut(0)
          , num_damson(0)
          , num_date(0)
          , num_dragonfruit(0)
          , num_durian(0)
          , num_elderberry(0)
          , num_feijoa(0)
          , num_fig(0)
          , num_gooseberry(0)
          , num_grape(0)
          , num_grapefruit(0)
          , num_guava(0)
          , num_huckleberry(0)
          , num_honeydew(0)
            num_jackfruit(0)
          , num_jettamelon(0)
          , num_jambul(0)
          , num_jujube(0)
          , num_kiwifruit(0)
          , num_kumquat(0)
          , num_legume(0)
          , num_lemon(0)
          , num_lime(0)
          , num_loquat(0)
          , num_lychee(0)
          , num_mandarine(0)
          , num_mango(0)
          , num_melon(0) 
      {
      };
    };

Similarly with the rectangle-and-tuples test. I'm only guessing on this one, but I suspect that if you change it to vectors, then

    a = Vector(1, 2, 3, 4)
    b = Vector(3, 4, 5, 6)
    c = mul(a, b);
is far more understandable than

    a = Vector(1, 2, 3, 4)
    b = Vector(3, 4, 5, 6)
    c = mul(a.x, a.y, a.z, a.w, b.x, b.y, b.z, b.w);
and that the more complex the math, the worse it would get. Even for rectangles, I suspect the results would change with a larger example that did various intersections and unions.


I haven't had time to brush up on C++11 yet. Is pubic a new keyword?

In all seriousness, you make some good points, but I don't think they invalidate the findings so much as they demonstrate avenues for further inquiry.


Interesting paper.

The following code is probably my favorite, "What will this program output?" example. This is taken from the Quake III Arena code in "q_math.c" [1].

Note line 561 [2]. Non-obfuscated code, and one is left wondering... just what, exactly, is going on at that line?

I understand that the point of the paper is to analyze code without having comments to help, but it serves as a reminder to me that commenting is important in helping not only other developers understand the code, but also myself when revisiting code whose particulars have faded from memory.

I find this to be a great example when I hear, "I don't comment; the code itself is self-documenting."

  552 float Q_rsqrt( float number )
  553 {
  554     long i;
  555     float x2, y;
  556     const float threehalfs = 1.5F;
  557
  558     x2 = number * 0.5F;
  559     y = number;
  560     i = * ( long * ) &y; // evil floating point bit level hacking
  561     i = 0x5f3759df - ( i >> 1 ); // what the fuck?
  562     y = * ( float * ) &i;
  563     y = y * ( threehalfs - ( x2 * y * y ) ); // 1st iteration
  564 //  y = y * ( threehalfs - ( x2 * y * y ) ); // 2nd iteration, this can be removed
  565
  566 #ifndef Q3_VM
  567 #ifdef __linux__
  568     assert( !isnan(y) ); // bk010122 - FPE?
  569 #endif
  570 #endif
  571     return y;
  572 }
[1] https://github.com/id-Software/Quake-III-Arena/blob/master/c...

[2] https://en.wikipedia.org/wiki/Fast_inverse_square_root


I would probably write it this way today (well, actually I wouldn’t write it today, I would instead use the reciprocal square-root instructions provided by all the common architectures, but let’s pretend):

  #include <stdint.h>  // uint32_t
  #include <string.h>  // memcpy

  static uint32_t toRepresentation(float x) {
    uint32_t rep;
    memcpy(&rep, &x, sizeof x);
    return rep;
  }

  static float fromRepresentation(uint32_t rep) {
    float x;
    memcpy(&x, &rep, sizeof x);
    return x;
  }

  static float rsqrt_linearApproximation(float x) {
    uint32_t xrep = toRepresentation(x);
    uint32_t yrep = 0x5f3759df - xrep/2;
    return fromRepresentation(yrep);
  }

  static float rsqrt_newtonRaphsonStep(float x, float y) {
    return y*(1.5f - (0.5f*x)*y*y);
  }

  float Q_rsqrt(float x) {
    float y = rsqrt_linearApproximation(x);
    y = rsqrt_newtonRaphsonStep(x, y);
    y = rsqrt_newtonRaphsonStep(x, y);
    return y;
  }
The only semi-cryptic thing about it, to someone versed in numerics, is the line with the magic number 0x5f3759df. While I would expect a professional to be able to quickly re-derive it and understand the intention, I would still write a comment, along the lines of “We approximate the floating-point representation of 1/sqrt(x) with a first-order minimax polynomial in the representation of x.”

All that said, I tend to write a lot of comments. Much of the code I write professionally is math library code, and it’s not uncommon for me to have a ratio of several paragraphs of error analysis or correctness proofs to a few lines of code.


It's not just semi-cryptic, it's magic. Your example above includes "xrep/2" instead of i>>1, but this does nothing to increase the understanding of what the fuck is going on here.

This is a floating point number and some deep juju bit twiddling is happening with it; it's not just being divided by 2. The exponent is being effectively divided by 2 and the fractional part of the mantissa is also being effectively divided by 2, except when the representation of the exponent ends in a 1, in which case that 1 is shifted down into the fractional part of the mantissa. Exactly why this is OK and doesn't hurt the approximation is a subject that could fill a report.


Floating point representations are approximately logarithms. It should be clear to anyone with working knowledge of floating point numerics that dividing by two and negating the representation of a floating point number corresponds to taking the reciprocal square root of the number. That's not magic, that's math.

The constant simply accounts for the bias in the floating point representation and balances the error over the domain of interest. Again, not magic.

There is no "bit-twiddling" at all.
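For anyone who wants to poke at this, here's a minimal Python sketch (my own, not Quake's code) of just the linear-approximation step, using the standard struct module to reinterpret the bits:

    import struct

    def float_to_bits(x):
        # Reinterpret a 32-bit float's bytes as an unsigned integer.
        return struct.unpack('<I', struct.pack('<f', x))[0]

    def bits_to_float(rep):
        # Reinterpret an unsigned 32-bit integer's bytes as a float.
        return struct.unpack('<f', struct.pack('<I', rep))[0]

    def rsqrt_approx(x):
        # Halving and negating the (approximately logarithmic) representation
        # corresponds to 1/sqrt(x); the constant corrects for the exponent bias.
        return bits_to_float(0x5f3759df - float_to_bits(x) // 2)

    print(rsqrt_approx(4.0))  # about 0.48 before any Newton-Raphson refinement; exact answer is 0.5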


Explaining the details of the bit-twiddles does not make it any less bit-twiddling. And explaining the details behind the opaque steps does not make them any less opaque to the uninitiated.

You're using bit operations on a data type with a non-trivial bit structure. That's pretty much the definition of bit-twiddling. The fact that blunt bit operations on such a finely-defined structure can correspond to useful mathematical operations is non-obvious.


The fast inverse square root trick is clever, but this is right on. It's bypassing the provided interface for FP operations and mucking around with implementation details. That's not to say that that's a bad thing, or that floating point numbers aren't a leaky abstraction, or that the real world doesn't demand this sort of optimization from time to time. But it's still true.


>And explaining the details behind the opaque steps does not make them any less opaque to the uninitiated.

Emm, by definition it does.


> You're using bit operations on a data type with a non-trivial bit structure.

It’s using integer operations on an encoding that has a meaningful (approximate) interpretation as an integer.


I think a lot of people use the "self-documenting" excuse without knowing what "self-documenting" really is, and what it brings to the table. Self-documenting code is a good fit for a reasonably small and straightforward function. If you have a function called get_thingy() in the context of some sort of "thingy container", then you probably don't need to go out of your way to explain "This method gets a thingy from the thingy container."

This can even be expanded to more complex single-purpose functions that are not exposed directly to any external APIs. For instance, if you have a function called restart_thread() which first tries to stop a thread, then starts a new one, then you might be safe.

However, as soon as you leave the realm of the obvious, comments are a must. Clearly in situations where you're trying something clever, like that Q_rsqrt function, missing comments will boggle all but the most specialized professionals.

Unfortunately people often forget about another important type of comment: those that explain what function a piece of code serves in the program. I've lost count of the number of "Context"s or "Interface"s or "get_data"s I've seen when trying to read someone's code. In those situations a single line like "This context links the rendering pipeline to the physics simulation" would go a very long way.

Instead, when I see this sort of code it will either have no comments at all, or it will be a dissertation on all the theoretical uses of the piece of code (omitting what it's actually used for in the current context).
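For instance (a hypothetical sketch; the class and its collaborators are invented for illustration), a single orienting comment does what the generic name cannot:

    class Context:
        # Links the rendering pipeline to the physics simulation, so that
        # the renderer always draws the simulation's current world state.
        def __init__(self, renderer, simulation):
            self.renderer = renderer
            self.simulation = simulation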


> In those situations a single line like "This context links the rendering pipeline to the physics simulation" would go a very long way.

Why not rewrite your code so there is a rendering_pipeline variable and a physics_simulation variable and a function called link? Then the comment is redundant.

That's what refactoring for self-documentedness is to me.
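A minimal sketch of that refactoring (all names are the hypothetical ones from the comment above):

    class RenderingPipeline:
        def __init__(self):
            self.scene = None

    class PhysicsSimulation:
        def __init__(self):
            self.world = {"bodies": []}

    def link(pipeline, simulation):
        # The function name now carries what the comment used to say:
        # the pipeline renders whatever world the simulation maintains.
        pipeline.scene = simulation.world

    rendering_pipeline = RenderingPipeline()
    physics_simulation = PhysicsSimulation()
    link(rendering_pipeline, physics_simulation)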


While that is a perfectly reasonable solution to the minimal example I provided, there may be reasons to abstract away the concept.

Personally, I like having some sort of structure in memory that allows me to access and query contextual information. It's amazingly useful if you are trying to link a variety of different elements. In fact it's practically unavoidable if you are writing highly dynamic code. Trying to manage this sort of system with a few variables and functions would become a nightmare.

In fact, those dynamic situations are where comments describing layout are absolutely critical. In these situations you may be using your class as a generalization, and tracking the full extent of this sort of interaction could take a ridiculous amount of time.


Indeed. Document the Whys, and let the code itself document the Whats.


That is a great example. Some code just absolutely needs extra documentation around it. IMO, this is mostly true with more complicated or non-obvious algorithms like your example. There aren't really any more intelligible names you can give those variables; the algorithm just needs them.

Still, I find that with 90% of the code I write for work, it's not so difficult to come up with variable and method names that make it pretty clear what's going on. I take the approach that I'll add comments anywhere I think that naming might not be enough, but that very rarely happens.

Granted, I'm not writing low-level, math heavy code. I think it really just depends.


So beautiful. And yet, sadly, "for every 0x5f3759df we learn about, there are thousands we never see"[1]

[1]: http://www.xkcd.com/664/


I love that one, such a simple yet profound commentary on the life of an engineer.


As I think others have pointed out, code that is confusing on an algorithmic level is not the common case. Also, I do not think that 'normal' methods of documentation would work all that well in these cases either. What you would really want as documentation for this type of function is more along the lines of a proof of the algorithm being used. Then you just implement the algorithm in as close a way as possible to the construction in the proof and document deviations.

At one point I was browsing through a crypto library (I forget which one). Almost all of the crypto functions used short variable names and had only a one-line comment. That comment was a citation for the paper that the algorithm came from. Ignoring the fact that the code authors were not the source of the crypto (so putting the explanation in the source may be plagiarism or such), source code is not a good format for this type of thing.


"We were surprised to find a significant effect of Python experience on the probability of making a very specific error (base = 0.13, OR = 1.44, p < 0.5). Instead of [8, 9, 0] for their last line of output, 22% of participants wrote [8] (for both versions). After the experiment, one participant reported they mistakenly performed the “common” operation on lists x btwn and y btwn instead of x and y because it seemed like the next logical step"

I made this mistake too. In fact, even after reading that most people wrote [8] instead of [8,9,0], it took me a solid minute or so to figure out how anyone could possibly think the correct answer was [8,9,0].

This is something I've noticed happens very often while debugging. I make certain assumptions about the code and even when staring it right in the face it's hard to see where my assumptions differ from the code as written.

My assumption in this case was "why would you calculate the between lists if you weren't going to use them right away".


I like the ﬀ => ff ligature that you've managed to paste from the .pdf. :)

EDIT:

I also made the mistake, but in their defence the between lists are used straight away: they're printed. But the code seems to do three unrelated things in a row, so I'd expect an implementation like theirs to have three separate function calls if there's no dependency between them.


My "between" implementation would be like this:

    def between( a, low, high ):
        r = []
        for x in a:
            if low < x < high:
                r.append( x )
        return r
or even

    def between( a, low, high ):
        return [ x for x in a if low < x < high ]
instead of theirs:

    def between(numbers, low, high):
        winners = []
        for num in numbers:
            if (low < num) and (num < high):
                winners.append(num)
        return winners
The more local a variable is, the shorter its name should be. Also, by using mathematical notation you display "functionality" instead of distracting readers with the terms you coined. "Winners"?


Honest question: why? What do you think you gained? Why use short variable names in this day and age, given text editor auto-completion, refactoring tools, etc.?

The very plurality of `numbers` conveys so much more meaning than `a` in the method signature. Is that a Python idiom, to use `a` to mean array as a parameter? What happens if you don't use `a` until line 23 of the function, far from its original declaration?

Being someone who likes reading random code in random languages, I find single- or two-letter variable names a horrible curse to the uninitiated in a problem domain. It means you have an extra step in figuring out a program: what the abbreviations even mean in that particular domain.

The only exception being when it's the standard way of using the syntax of a language (i in a for loop for example).

Obviously these code examples are trivial, but when you start getting a slightly longer function, it starts becoming hard to remember what's what half-way down a function or where they even came from if they're just virtually meaningless single or dual letters.


> What happens if you don't use `a` until line 23 of the function, far from its original declaration?

Then you have horribly violated the previous poster's note that the more local a variable is, the shorter its name should be.

> I find single- or two-letter variable names a horrible curse to the uninitiated in a problem domain. It means you have an extra step in figuring out a program: what the abbreviations even mean in that particular domain.

It is a trade-off. If you master that domain, you'll be using that all the time, and you'll get the savings all the time.

You need to amortize the cost of memorizing the notation over the expected usage of that notation. Programmers, by profession, have to use a lot of notations from a lot of fields, and therefore benefit from using notations that require a minimum of memorization. But that is not always the right trade-off.

As I pointed out in https://news.ycombinator.com/item?id=5157539, the rule is not that you need long or short variable names. You need meaningful ones. A short variable name is inherently ambiguous, which can lead to confusion and mistakes. Thinking through anything with a long variable name abuses your working memory, limiting how complex your thoughts can be.


I felt the same way about short variable names until I started using Haskell (which tends to have extremely local scopes). The idea is that if a variable is only used in 2 lines, it is not much mental effort to keep in your head what the variable is doing, but it is a relatively large amount to read the long name. However, most languages make it almost impossible to use variables with small scopes (in terms of spacing), so we only see good use of short variables in idiomatic places, such as for(i=0; i<10; i++).

For example, consider the parent's original code:

    def between( a, low, high ):
        r = []
        for x in a:
            if low < x < high:
                r.append( x )
        return r

Even as someone who 'likes' small variable names, the r makes that difficult because of all the distance between its definition and use.

Compare that to this Haskell implementation:

    between a low high = filter f a
      where f x = x > low && x < high

(Actually, I would probably use 'as' instead of 'a')


In the given program "a" doesn't have to be an array, just something that can be iterated with "for x in a". So whenever you name it something definite you degrade the quality of your expression and obscure the potential for more uses. It goes so far that I time and again observe projects where the same constructs are copied and pasted and the variables renamed, once as "cows" and another time as "horses." (It's even worse when people introduce different classes or types or whatever.) Just like when you'd write:

2 * cows ^ 2 + 3 * cows = 48

instead of 2 * x ^ 2 + 3 * x = 48. In the history of mathematics there was a time when x didn't exist, just "cows" - you weren't able to write an expression involving "the one unknown, no matter what it's supposed to represent." That was very long ago, yet a lot of programmers today still don't recognize the power of abstraction.


Yes, I thought so too at one time, but when your numbers become more complex things, and someone has overridden gt(), and you need to debug in a hurry because what you are trying to do is not even remotely related, then the verbose version is much easier.


If the code is small, then short variables are easier to debug, because they are easier to manipulate, have less visual clutter, and are scoped tightly enough that you do not get a benefit from using a longer name. Not to mention the fact that you do not have a descriptive name obscuring what the code is actually doing.


> descriptive name obscuring what the code is actually doing.

Sorry, but descriptive names clarify what the code is doing, or they are not descriptive.


>Sorry, but descriptive names clarify what the code is doing

Generally speaking this is true. However, when you are debugging, it no longer is. When variables say what they are supposed to be, it is more difficult to see that they are something else.


Descriptive names clarify what the code is supposed to be doing. Sometimes there's a world of difference between that and what it's actually doing.


> The more local a variable is, the shorter its name should be.

I disagree. As someone with little Python experience, I found their example much more understandable than both of yours.


It's a tradeoff between readability for novices and readability for experts.


I don't agree with that either; even in the languages I am well versed in, not having to guess intentions or deduce variables is much easier. I think the tradeoff is really just keystrokes and laziness.

Even if I'm wrong, how often is it an expert, rather than yourself, who is left to maintain your old code?


Well, that's the beauty: however you define "between", once you have it, you can just say "if between(...)" and it becomes very readable.

If you want, you can subdivide this further and more clearly define the concepts used in the between function. But we don't generally choose to do that if the result is already a scant 5 lines.

Point: you should also take into account good modular design, something you have control over.


For me, the middle one is the most readable, but it's the hardest to figure out what data structure is being returned.


I agree with the middle one being the most readable (assuming you have python experience).

But isn't it obviously returning a list? It's a list comprehension!


Ahh, that's what it's doing! Yeah, I have no Python experience.


When reading the paper's verbose versions, I have to act as a human compiler and keep track of state, which leads to mental errors. Contrast this with a shorter version, reducing the LOC count from 21 to 6:

    x = [..]
    y = [..]
    between = lambda ls, low, high: (n for n in ls if low < n < high)

    x_between = between(x, 2, 10)
    y_between = between(y, -2, 9)
    xy_common = set(x).intersection(y)
Experienced users made mental mistakes because they skim code and make assumptions from previous sections. This can be solved by taking advantage of a language's expressiveness to clarify intent.


I've come to believe that it is the cognitive dissonance between the programmer and the machine. The language used often obscures what is actually executed. Often we find this useful, but without proper cues we can write programs that make the divide between program and process really wide.

Personally I find declarations about properties, relationships, and categories of things easier to understand than descriptions of the tedious processes that calculate them. The former give me the ability to reason about the program elements in a logical way. The latter requires me to become a stack machine and execute the program in my head... a process that is error-prone and full of false assumptions.

e.g.: how many people believe that "all arrays are just pointers" in C? When is an array not like a pointer?

Research like this is good. Are there any studies that go beyond trying to trick programmers with trivial programs in the Algol family and look at the difference in performance between categorically different styles? I mean languages that are declarative vs imperative vs functional vs concatenative in nature. I think that would be very interesting to read.

updated for clarity


> eg: how many people believe that "all arrays are just pointers," in C? When is an array not like a pointer?

Ok, I know this wasn't your point, but I got stuck on this and am just curious to know: what were you thinking of here? One thing that occurs to me is that in a recursive function you will run out of stack space a lot faster if you are using arrays rather than allocating from the heap. I don't think that is what you were referring to, though, hence the question.


Well my example was meant to support how programming languages can create a cognitive dissonance between what the programmer thinks will be executed vs. what is actually executed.

One area where I think there is a high dissonance is in how C uses the same syntax for defining arrays, indexing into arrays, and referencing pointer offsets. Other examples are the array decomposition rules, array function parameters, etc. Even experienced programmers get tripped up by them:

http://c-faq.com/~scs/cgi-bin/faqcat.cgi?sec=aryptr


If you're asking about differences between pointers and arrays, one example is:

  int a[10] = { 7 };
  int* p = a;
  
  assert(a[0] == p[0]);
  assert(sizeof(a) == sizeof(p)); // FAILS


Interesting study. Here's a summary of their results:

1: "between". Experienced programmers were more likely to incorrectly assume that the results of earlier calculations would be used in a later calculation.

2: "counting". All participants, regardless of experience, were more likely to assume that a statement separated by vertical whitespace was outside of a loop, when it was actually still inside the loop. (Note that the programming language is Python, which doesn't have a loop termination token.)

3: "funcall". Whitespace between operators (e.g., 3+4+5 vs. 3 + 4 + 5) had no effect.

4: "initvar". No interesting result. (This one seems to have been mis-designed.)

5: "order". Respondents were slower when functions were defined in a different order than they were used. Experienced programmers didn't slow down as much.

6: "overload". Experienced programmers were more likely to be slower when faced with a "+" operator used for both string concatenation ("5" + "6") and addition (5 + 6), rather than used for just concatenation.

7: "partition". There was a result, but I don't think we can draw meaningful conclusions from it.

8: "rectangle". Calculating the area of a rectangle using a function that took tuples (e.g., "area(xy_1, xy_2)") took longer for programmers to understand than a function that took scalars (e.g., "area(x1, y1, x2, y2)"). Calculating the area using a class (e.g., "rect = Rectangle(0, 0, 10, 10); print rect.area()" took the same amount of time as using scalars, despite being a longer program.

9: "scope". No conclusive result.

10: "whitespace". Using horizontal whitespace to line up similar statements had no effect on response time. (There was another result relating to order of operations that deserves further study, but it wasn't thoroughly tested here, so I don't think it's conclusive.)

Note that all programs were exceedingly simple (the longest was 24 lines), so be cautious applying these conclusions to real-world work.
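To make result 2 concrete, here is a sketch of the kind of program involved (my own reconstruction, not one of the paper's actual stimuli):

    for i in [1, 2, 3]:
        print(i)

        print(i * 10)  # the blank line suggests "after the loop", but indentation keeps it inside

Readers who skim the vertical whitespace expect 1, 2, 3 followed by a single 30; the program actually interleaves 1, 10, 2, 20, 3, 30.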


Forgive my ignorance - IANAP.

Why can't/don't we have more tools along the lines of App Inventor [1] that provide a visual structure for programming?

The vast majority of time I've spent writing anything has been spent struggling with specific syntax, not any particular function of the code.

Coding feels like trying to manipulate a clear model that exists in the machine by spelling my intentions out one letter at a time while peering through a straw.

1: http://appinventor.mit.edu/


It sounds to me like you never got over the hump. At some point, there is an inflection point where you simply become fluent in the syntax of a language, and the relationships between your modules take shape in your head.

Someday, I do think there will be tools that let non-programmers create things that only programmers can create today, but by that time, I think there will also be _other_ tools that let the "real" programmers still do the most powerful stuff. Maybe no typing though.


Possibly because, while we have a rich ecosystem of tools for dealing with text in myriad forms and ways, and have conditioned ourselves over the years on how to read text in a general form, I don't think the same can be said for more graphical representations of programs.

While a representation that uses graphical cues in lieu of textual ones may be easier to learn and understand initially, the disadvantage is that the thousands of utilities for searching, refactoring, and examining code are then less useful, if not entirely useless.

P.S. In a way, idiomatic structure to code puts a graphical representation on the textual form. Programmers learn to recognize this. Python goes as far as to enforce it.


>Possibly because, while we have a rich ecosystem of tools for dealing with text in myriad forms and ways, and have conditioned ourselves over the years on how to read text in a general form, I don't think the same can be said for more graphical representations of programs.

Yes, I completely agree.

To go off on a bit of a tangent, I wonder if it's just a matter of the right technology and implementation? Is there an analog to the rise of touch interfaces waiting to happen in the way we record, manipulate and relay information?

>While a representation that uses graphical cues in lieu of textual ones may be easier to learn and understand initially, the disadvantage is that the thousands of utilities for searching, refactoring, and examining code are then less useful, if not entirely useless.

That's a good point. I was thinking of this purely from the standpoint of creating as an individual, not maintaining or sharing.

>P.S. In a way, idiomatic structure to code puts a graphical representation on the textual form. Programmers learn to recognize this. Python goes as far as to enforce it.

Certainly, learning that word (idiomatic) was a boon to my understanding of code.


When you've spent more time programming the feeling reverses, and visual structures become tedious, because you know exactly what you want, and hunting for it in a visual environment slows you down.

It's like smileys. In the beginning it may be nice to have a pop-up "insert smiley" list, but in the end it's easier just to type >:-O or whatever you're trying to convey than find it in a list.

Admittedly the more verbose programming languages do end up being fairly IDE-dependent, so they get a lot closer to the visual programming you talk about.

See also: http://www.catb.org/esr/writings/unix-koans/gui-programmer.h...

DISCLAIMER: I haven't used AppInventor


Time spent wrangling syntax goes down with time, dependent on language. With experience, coding does start to feel more like painting a picture. You spend less time worrying about getting the brush strokes just so, but you still have to apply each and every single brush stroke to get the painting you want.


"Programmers often feel like the physical properties of notation have only a minor influence on their interpretation process. When in a hurry, they frequently dispense with recommended variable naming, indentation, and formatting as superficial and inconsequential."

It's ironic that the authors of this statement can't be bothered to typeset their opening quote marks in the right direction-- a typical noobie mistake in TeX.


I'm not a TeX "noobie". This was a result of collaborating with multiple authors, one of whom preferred Word. Looks like I missed the quotes when pasting in his contributions.


I am not sure if I understand you correctly. Is it ok if I ask you for an example?


At the beginning of the second paragraph on page 2:

    The second question is: ''How are programmers
In LaTeX, `` is used for open quotes, and '' for close quotes. [0]

[0] http://www.maths.tcd.ie/~dwilkins/LaTeXPrimer/QuotDash.html


Thanks, I do know how open quotes are used in LaTeX, though. I couldn't understand the initial comment. Thanks for the clarification. Upvoted.


From the paper: "This program prints the product f(1) * f(0) * f(-1) where f(x) = x + 5. ... Most people were able to get the correct answer of 60 in 30 seconds or less."

Unless I'm losing my mind, the correct answer for the code as stated is 6 * 5 * 4 = 120, not 60. It would be 60 if f(x) = x + 4.
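A quick check of both versions (my snippet, not the paper's):

    f = lambda x: x + 5
    print(f(1) * f(0) * f(-1))  # 6 * 5 * 4 = 120

    g = lambda x: x + 4
    print(g(1) * g(0) * g(-1))  # 5 * 4 * 3 = 60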


You're correct. This was a (very) unfortunate typo. We had changed from x + 5 to x + 4 during the design phase to try and lessen the mental burden without making it too easy.

Thanks to you and a few others, I'm making corrections and will be uploading a fixed version of the paper soon.


Lots of great discussion here! If anyone is interested, I added a blog post about the paper (http://synesthesiam.com/posts/what-makes-code-hard-to-unders...) and put the data up on github (https://github.com/synesthesiam/eyecode-tools).


"expectation-congruent programs should take less time to understand"

This statement feels tautological; I don't think there's a single correct set of expectations.


It's not a tautology. If the brain's benefit from "branch prediction" is low, then some other factor (e.g. length of code) could overwhelm the effects of predictability. So they're making a genuine claim, which is that expectation-congruence is the most important factor.

My experience bears this out. In most of my professional life, I've abided by coding standards, even if I consider the standards imperfect. That's because of the very real improvements to the team's productivity when code does what it looks like it does.


Not quite a tautology; I was being a bit facetious... imho the concept of understanding is tied up with the meaning of expectation, so much so that by definition the code I don't understand doesn't do what I expect - if it did, then I'd understand it!


I was thinking along those lines the other day.

I guess the biggest hurdle is that people essentially think "functionally", not procedurally.

Because we fundamentally think x = x*2 in terms of "the value is being doubled", without thinking of all the steps needed and everything that can go wrong there.

The article is certainly interesting; from a quick glance, naming is very important.


I think functional programming and immutability go hand in hand, so I wouldn't call x = x*2 an example of functional thinking.

With the same logic, C is functional because you don't care about the bit fiddling in x++...


Yes, x = x*2 is not (y = x*2 would be better). But expecting a while loop to exit as soon as the condition is met (as opposed to when the loop block finishes) is a common case.
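A minimal sketch of that mismatch (my own example): the condition is only tested at the top of each iteration, so the body always runs to completion first.

    x = 0
    while x < 3:
        x = 10                  # the condition is now false...
        print("still running")  # ...yet this line executes anyway

This prints "still running" once before the loop exits.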


Interestingly, I remember reading an article saying that people who have never programmed before could learn Haskell faster than they could learn Java. But people who have already learned Java take longer than someone with no experience to learn Haskell. I will see if I can find the link.


Code is hard to understand because most programmers only learn how to write it and never take the time to learn how to read it. If you can't read something with ease, it will always be hard to understand.


Also, if you never learn to read it, you can't begin to learn to write for the reader.



