

How language gives your brain a break - user_235711
http://newsoffice.mit.edu/2015/how-language-gives-your-brain-break-0803

======
ThePhysicist
The concept of dependency distance is actually equally useful in programming
languages. For example, instead of writing

    
    
        if condition:
          #...a lot of code...
          value = process()
          return value
        else:
          #...return error information (e.g. in a web app)
          return error_information
    

I always try to write something like this instead

    
    
        if not condition:
          return error_information
    
        #...a lot of code...
        value = process()
        return value
    

This makes reading and understanding the code much easier, since things that
belong together are close together. This seems trivial, but a lot of code is
written like the first block and is therefore harder to understand. Sometimes,
small things can make a large difference.

That said, I am probably a bit over-sensitive to this kind of problem as a
German, since we also have the habit of holding back the vital information in
a sentence until the very last word.

~~~
vog
This fits quite well with the broader advice (expressed e.g. by Martin Fowler)
that a function should not contain deeply nested if/then/else constructs, but
rather a flat series of "if" statements, each finishing with a "return"
statement. And if that is impossible, break the function into smaller
functions such that each of them is structured that way.

This advice is especially helpful when coding complex conditionals in business
logic (complex enough to be messy, but small enough that you don't need or
want a generic rules engine and rules DSL). A minimal sketch of the shape is
below.
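
A minimal Python sketch of that shape, with hypothetical names (Order,
validate_order, and the error strings are all made up for illustration):

    
    
        from dataclasses import dataclass, field
        from typing import Optional
    
        @dataclass
        class Order:  # hypothetical example type
            items: list = field(default_factory=list)
            total: float = 0.0
            customer: Optional[str] = None
    
        def validate_order(order: Order) -> str:
            # Flat guards: each check returns immediately, so the reader
            # never keeps a half-finished branch on their mental stack.
            if not order.items:
                return "error: order has no items"
            if order.total < 0:
                return "error: total is negative"
            if order.customer is None:
                return "error: no customer attached"
            # Only the happy path reaches this point.
            return "ok"
    
        print(validate_order(Order()))  # error: order has no items
        print(validate_order(Order(["book"], 10.0, "Ana")))  # ok
    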

------
kazinator
Dependency distance is why it is good to have programming language constructs
like

    
    
      for (init; test; increment)
        body-statement
    

and their ilk. Though the evaluation is out of order (test, body-statement,
then increment), certain related things are kept together.

It also explains why infix notations (and 2D math notation) are effective:
they physically clump semantically near things.

Postfix notations with no parentheses are absolutely the worst; you don't know
what goes with what at a glance:

    
    
      1 2 + 3 4 + 5 6 + * 7 8 / - *
    

If my eyes land on, say, the inner *, it's not immediately clear what the
left operand is.

Prefix with parenthesization lets us fold and indent the darn thing for a 2D
visualization:

    
    
      (* (+ 1 2) (- (* (+ 3 4) (+ 5 6)) (/ 7 8)))
                     ^__ orig. distance__^
    

The (- ...) expression jumps down and gets indented. Thus it is visually lined
up with its sibling, and _near_ the connecting operator:

    
    
      (* (+ 1 2)
         (- (* (+ 3 4) (+ 5 6)) (/ 7 8)))
    

We recursively apply this:

    
    
      (* (+ 1 2)
         (- (* (+ 3 4)
               (+ 5 6))
            (/ 7 8)))
    
      (* (+ 1 2)
         (- (* <---
                ... reduced distance, and visual alignment
            (/ <---

~~~
jonsen
Prefix (with a fixed number of arguments) needs no parentheses:

    
    
      * + 1 2
        - * + 3 4
            + 5 6
          / 7 8
    

Postfix can be indented similarly:

    
    
          1 2 +
      3 4 +
      5 6 + *
        7 8 / - *
    

Or parenthesis-free infix:

    
    
        1+2
      *     3+4  
          * 5+6       
        - 7/8

------
mherrmann
One thing I find interesting about the example in the article:

(1) John threw out the old trash sitting in the kitchen.

(2) John threw the old trash sitting in the kitchen out.

Yes, (1) is easier to parse (and I prefer it) but (2) is less ambiguous: (1)
could equally well mean that John threw the trash out _while he_ was sitting
in the kitchen.

I also find the title of the post a little confusing. I thought the article
would be about how learning languages helps your brain recover.

~~~
almightysmudge
Wouldn't there be a comma if he was sitting in the kitchen? Isn't there an old
adage as well: "don't end a sentence with a preposition"?

edit: Apparently that's an old wives' tale, the preposition one.

~~~
jessaustin
That's the kind of grammar prescription up with which we will not put.

~~~
igravious
While I agree that the rule "Thou shalt not end a sentence with a
preposition" seems to be a borrowed Latinate prescription and is at variance
with modern English usage, _inhale, deep breath_, I think the phrasal verb you
used to demonstrate your point (put up with), though common, is not a good
example. :)

If we consider put-up-with as a verb that just so happens to be a compound
verb-postposition-postposition _unit_, then the sentence "That's the kind of
grammar prescription which we will not put up with" actually ends in a verb.
If you see what I mean. Does this make sense? I'm just punting this
intuitively.

I'm trying to think of an actual case where a bare adpositional word
terminates a sentence... which is not in question form... and I can only think
of poetic uses... and after having Googled a bit... verb+prep, phrasal verb,
or question are the only examples I've _come across_. :)

~~~
jessaustin

      s/which/that/
    

Sorry, but that drives me crazy. I think your "verb+prep" gives the game away,
as that is a very common idiom. What about this: "That's the store we got the
candy from." Perhaps a sentence using "where" would be more stylish, but you
can't very well throw out a prescription of grammar just to immediately
replace it with one of style.

~~~
igravious
Do you mean where I said, "which is not in question form..." I should've said,
"that is not in question form..." My inner ear is not ringing any alarm bells
there. Care to explain?

I like your example. Never mind about style. I can riff on that: "There's the
table we put the apple on.", "Here's the bag I keep my laptop in." And so on.
Nice!

~~~
jessaustin
I would have sworn that your "verb that just so happens" was once something
like "verb which just so happens". Hmmm.

------
mziel
_> That means language users have a global preference for more locally grouped
dependent words, whenever possible._

YMMV, but for German (I'm not a native speaker, although fluent) you can't
figure out the meaning of the sentence until the very end.

The authors even acknowledge this:

_> German, which has some notoriously indirect sentence constructions, is far
less optimized, according to the analysis._

_> And the researchers also discovered that "head-final" languages such as
Japanese, Korean, and Turkish, where the head word comes last, show less
length minimization than is typical._

German, Japanese, and Turkish seem like quite a big chunk of languages
(population-wise), so I wonder whether the initial statements are not too
English-specific.

~~~
rndn
_Gödel, Escher, Bach_ has a great paragraph on this in the context of
recursive structures and processes (i.e. things that can be modeled with
stacks and their push and pop operations):

    
    
        Recursion in Language  (p. 138):
    
        Our mental stacking power is perhaps slightly stronger in
        language.  The grammatical structure of all languages involves
        setting up quite elaborate push-down stacks, though, to be sure,
        the difficulty of understanding a sentence increases sharply with
        the number of pushes onto the stack.  The proverbial German
        phenomenon of the "verb-at-the-end", about which droll tales of
        absentminded professors who would begin a sentence, ramble on for
        an entire lecture, and then finish up by rattling off a string of
        verbs by which their audience, for whom the stack had long since
        lost its coherence, would be totally nonplussed, are told, is an
        excellent example of linguistic pushing and popping.  The
        confusion among the audience that out-of-order popping from the
        stack onto which the professor's verbs had been pushed could
        engender, is amusing to imagine.  But in normal spoken German,
        such deep stacks almost never occur; in fact, native speakers of
        German often unconsciously violate certain conventions which
        force the verb to go to the end, in order to avoid the mental
        effort of keeping track of the stack.  Every language has
        constructions which involve stacks, though usually of a less
        spectacular nature than German.  But there are always ways of
        rephrasing sentences so that the depth of stacking is minimal.
    

Here is an example of such a (grammatical) beast:

    
    
        Der Mechaniker,
            auf den wir wegen dem Unfall,
                den wir,
                    bevor wir die Stadt,
                        die im Norden liegt,
                    erreichen konnten,
                hatten,
            gewartet haben,
        ist endlich angekommen.
    

This can’t really be translated to English:

    
    
        The mechanic,
            for whom because of the accident,
                which,
                    before the city,
                        which lies in the North,
                    we could reach,
                we had,
            we have waited for,
        has finally arrived.
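
The pushing and popping Hofstadter describes can be made literal. Here is a
toy Python sketch (the clause segmentation and labels are my own) that walks
the nested sentence above, pushing each clause whose verb is still pending and
popping when the verb finally arrives:

    
    
        # Toy model of a listener's clause stack for the sentence above.
        # "push" opens a clause whose verb is still pending; "pop" closes
        # the most recently opened clause when its verb arrives.
        events = [
            ("Der Mechaniker",          "push"),  # main clause opens
            ("auf den wir",             "push"),  # verb pending: gewartet haben
            ("wegen dem Unfall",        "noop"),
            ("den wir",                 "push"),  # verb pending: hatten
            ("bevor wir die Stadt",     "push"),  # verb pending: erreichen konnten
            ("die im Norden liegt",     "noop"),  # opens and closes in place
            ("erreichen konnten",       "pop"),
            ("hatten",                  "pop"),
            ("gewartet haben",          "pop"),
            ("ist endlich angekommen",  "pop"),   # main verb; stack empty again
        ]
    
        stack = []
        for words, action in events:
            if action == "push":
                stack.append(words)
            elif action == "pop":
                stack.pop()
            print(f"depth={len(stack)}  {words}")
    

The depth column mirrors the indentation above: the listener is holding four
open clauses before the first verb lands.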

~~~
to3m
The mechanic the accident affected is well known in the city he was late to.
The people the city the mechanic the accident affected was late to contains
sing songs about it, in fact. The tourist guide calls them the songs the
people the city the mechanic the accident affected was late to contains sing.

The buildings the money the tourists the songs the people the city the
mechanic the accident affected was late to contains sing attracts spend paid
for are also a big draw.

------
tmalsburg2
The way the study is presented makes the reasoning sound somewhat circular:
language A is optimal because the dependencies are short; the dependencies are
short because that's optimal. Not sure what we learn from that. I also find
statements like "language A is more optimized than language B" questionable
because (i) it's not clear how to measure optimality and (ii) there is no
good reason to assume that some languages evolve to be flawed; the pressure
to perform well is just too high. Anyway, I suppose that these are just
problems of the dumbed-down press release. Looking forward to seeing the
actual paper.

~~~
jxramos
I think the criterion for optimization came about from this line: "If there
is a large amount of time between one word and another related word, that
means you have to hold one of those words in memory, and that can be hard to
do." They're basically saying you can pop things off your mental stack more
readily in languages with shorter dependency lengths.
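
To make that concrete, here is a toy Python calculation of total dependency
length for the article's two orderings (the head-dependent arcs are a
simplified hand-made parse, not the paper's annotation scheme):

    
    
        # Sum of distances between syntactically related words.
        # "the-kitchen" is glued into one token so every word is unique.
        def total_dep_length(words, arcs):
            pos = {w: i for i, w in enumerate(words)}
            return sum(abs(pos[head] - pos[dep]) for head, dep in arcs)
    
        arcs = [("threw", "John"), ("threw", "out"), ("threw", "trash"),
                ("trash", "the"), ("trash", "old"), ("trash", "sitting"),
                ("sitting", "in"), ("in", "the-kitchen")]
    
        v1 = "John threw out the old trash sitting in the-kitchen".split()
        v2 = "John threw the old trash sitting in the-kitchen out".split()
    
        print(total_dep_length(v1, arcs))  # 12: "out" sits next to "threw"
        print(total_dep_length(v2, arcs))  # 17: "out" is 7 words from "threw"
    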

~~~
tmalsburg2
Just had a look at the paper. They indeed assume that long dependencies are
hard and that languages therefore tend to optimize for short dependencies.
They also argue that some languages (like German) may have longer dependencies
because other aspects of their grammar make it easier to process those
dependencies than would be the case in short-dependency languages like
English. This suggests that languages that are more optimized for dependency
length may be less optimized with respect to some other aspect of processing.
Languages may thus make different tradeoffs with respect to the various
sources of processing difficulty. This means that German may have seemed hard
to Mark Twain not because it is intrinsically harder but simply because it
makes different tradeoffs than English.

------
tmalsburg2
Link to the (paywalled) paper:
[http://www.pnas.org/content/early/2015/07/28/1502134112.abst...](http://www.pnas.org/content/early/2015/07/28/1502134112.abstract?sid=39bcf418-68c6-474c-a21a-a6ceec737679)

Abstract: Explaining the variation between human languages and the constraints
on that variation is a core goal of linguistics. In the last 20 y, it has been
claimed that many striking universals of cross-linguistic variation follow
from a hypothetical principle that dependency length—the distance between
syntactically related words in a sentence—is minimized. Various models of
human sentence production and comprehension predict that long dependencies are
difficult or inefficient to process; minimizing dependency length thus enables
effective communication without incurring processing difficulty. However,
despite widespread application of this idea in theoretical, empirical, and
practical work, there is not yet large-scale evidence that dependency length
is actually minimized in real utterances across many languages; previous work
has focused either on a small number of languages or on limited kinds of data
about each language. Here, using parsed corpora of 37 diverse languages, we
show that overall dependency lengths for all languages are shorter than
conservative random baselines. The results strongly suggest that dependency
length minimization is a universal quantitative property of human languages
and support explanations of linguistic variation in terms of general
properties of human information processing.
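
A toy Python version of the comparison the abstract describes, assuming a tiny
hand-made dependency tree: keep the tree fixed, shuffle the word order, and
check whether the attested order beats random linearizations (the paper's
real baselines are more carefully controlled, e.g. for projectivity):

    
    
        import random
    
        # Tree for "John threw out the trash": dependent index -> head index;
        # the root ("threw", index 1) maps to itself and is skipped below.
        words = ["John", "threw", "out", "the", "trash"]
        heads = {0: 1, 1: 1, 2: 1, 3: 4, 4: 1}
    
        def dep_length(order):
            # order: a permutation of word indices; each arc's length is the
            # distance between dependent and head in that permutation.
            pos = {w: i for i, w in enumerate(order)}
            return sum(abs(pos[d] - pos[h]) for d, h in heads.items() if d != h)
    
        attested = dep_length(range(5))
        random.seed(0)
        baseline = [dep_length(random.sample(range(5), 5)) for _ in range(1000)]
        print(attested, sum(baseline) / len(baseline))
    

In this miniature version the attested order (total 6) comes in under the
random-baseline mean (about 8), mirroring the paper's cross-linguistic
finding.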

------
UhUhUhUh
Effort carries meaning. It shifts focus around and introduces exquisite
nuances or even a sub-text (e.g., what if John has mixed feelings about his
current girlfriend?). It boosts thinking. It works visually as well with, for
example, what R. Barthes called the "punctum."

