
Study: An aspect of language looks to be hardwired in our brains - prostoalex
http://qz.com/529865/study-a-fascinating-aspect-of-language-looks-to-be-biologically-hardwired-in-our-brains/
======
tokenadult
As a reader of the article kindly submitted here who speaks and reads multiple
human languages from more than one language family, I'm not convinced that
this article passes the linguist's or polyglot's smell test. The sample size
reported in the article is wholly inadequate for showing that there is a real
effect here. That the underlying study was reported in a rather minor journal
is a further indication that there isn't any real-world significance behind
the statistical significance claimed by the researchers. For good background
reading on why you shouldn't look merely to statistical significance claims
for evaluating research on human subjects like this, see the articles by Uri
Simonsohn[1] and his statistically astute co-authors, who have identified
phenomena called "researcher degrees of freedom" (there are a lot of
researcher degrees of freedom here) and "p-hacking" (a news release that
mentions "more accurately than by random chance" is almost always a sign of
p-hacking) that allow many weak, underpowered studies of human behavior to be
published in journals whose editorial staffs lack statistical expertise.

[1] [http://opim.wharton.upenn.edu/~uws/](http://opim.wharton.upenn.edu/~uws/)

[http://www.p-curve.com/](http://www.p-curve.com/)

~~~
canjobear
> That the underlying study was reported in a rather minor journal

Cognition is one of the top journals in cognitive science. Definitely not a
minor journal for this kind of work, and plenty of their reviewers are
statistically competent. So my prior would not be that the stats here are
bogus.

I share your skepticism of their result about more synesthetic people being
more able to guess the meanings of foreign words. The effect size there looks
tiny, the motivation is a bunch of hand-waving, and I wouldn't be surprised if
it doesn't replicate.

On the other hand I'm inclined to believe their result that people can guess
the meanings of certain foreign words with above-chance accuracy
(about 60% for big vs. small). It's just a version of the well-replicated
bouba/kiki effect[1], plus the idea that this can influence words in a
language, which doesn't seem far-fetched at all.

What I'm most concerned about is that people might be picking up on
similarities between words beyond what the authors identify as "cognates". The
sample languages are Albanian, Dutch, Gujarati, Indonesian, Korean, Mandarin,
Romanian, Tamil, Turkish, and Yoruba. 4 of those are Indo-European and might
have obscure etymological relations that show up as a few overlapping letters.

[1]
[https://en.wikipedia.org/wiki/Bouba/kiki_effect](https://en.wikipedia.org/wiki/Bouba/kiki_effect)

------
ekidd
The article makes some claims about childhood language learning:

 _But some researchers argue that synesthesia, which appears in 4% of the
general population, is actually an exaggerated manifestation of associations
we all make from an early age—an ability most of us lose over time, and one
that may help explain why children are so good at picking up other languages._

I clicked through to the linked paper, and I could find no evidence that
synesthesia provided any specific, significant advantages during childhood
language acquisition.

Now, other research supports that children _do_ have real advantages when it
comes to learning accents, and that they may have lesser advantages when
learning grammar. But beyond that, the evidence becomes controversial. And
adults who are socially and professionally immersed for, say, 5 years will
often acquire a very solid command of a new language, especially if they're
also voracious readers.

In general, English-language media often overrates the ease with which
children learn languages[1], and it's insufficiently critical about scientific
theories which claim to explain this advantage.

[1] Trying to raise bilingual children, for example, can be surprisingly
challenging unless you live in a bilingual society.

~~~
TazeTSchnitzel
> Now, other research supports that children do have real advantages when it
> comes to learning accents

Childhood seems to be the only time when people can acquire certain new
phonemes, in fact. Japanese speakers cannot properly learn to distinguish
English /l/ and /r/ in later life. Other phonological distinctions are also
difficult to learn in later life: tones, for instance.

~~~
brobinson
>Japanese speakers cannot properly learn to distinguish English /l/ and /r/ in
later life

Not sure I agree with the absoluteness of "cannot" here as I've successfully
taught a few Japanese (and a Korean!) to properly enunciate the /l/-based
sounds from English. It's simply training your brain to have two new pieces of
muscle memory with regard to tongue movement. Learning the Japanese /r/ sound
is the same process, although we have it a bit easier than they have it. :)

The fact that they try to approximate the English /l/ sounds using their roof-
of-mouth-tongue-flicking /r/ sound is a failure of how English is taught in
their schools. Every Japanese person I've spoken to has said their English
courses in school had no focus on pronunciation, only on reading/writing, and
that none of their teachers were native English speakers.

As far as tones go, I've been living in Taiwan for about six months and I'm no
closer to being able to hear them in everyday speech. I can pronounce them
decently though, according to a friend here.

~~~
TazeTSchnitzel
> taught a few Japanese (and a Korean!) to properly enunciate the /l/-based
> sounds from English

You can teach people the physical motions of producing the sound, but that's
not the same as acquiring the phoneme. They will have to consciously produce
the /l/ and /r/ sounds and will still not be able to distinguish them when
listening.

~~~
riffraff
Is there a reason for being unable to tell the difference between "r" and "l"
that makes them special?

Otherwise, as an Italian, I have gone through the motions of learning to

* distinguish long and short vowels in English. Ship vs sheep took me a looong time.

* go from ~7 vowels in Italian to 14 in Hungarian

* learn a few more consonant sounds in Hungarian, Spanish, and English (I still suck at "h")

While I would agree they all still require conscious effort for me to produce,
distinguishing them is not a major issue with enough practice.

~~~
alphonsegaston
Again, the "cannot" here is probably imprecise. In Japanese, r and l exist as
a single sound, making differentiation difficult for the average Japanese
speaker in other languages. The original research by a Japanese linguist into
this phenomenon in the 70s suggested it was impossible because of the way
phoneme and allophone acquisition happens in childhood. But that claim has
been softened by subsequent research suggesting the effect is context
dependent, and there is now less certainty about the phoneme vs. allophone
distinction.

------
erispoe
The main problem with the nativist hypothesis is that it violates Occam's
razor. Is there a simpler explanation than being hardwired with information,
which would be a fairly complicated thing to explain given what we know of
the brain today? Yes, there is: languages are related, and sound symbolism
could be passed from language to language; or there is the perfectly
reasonable explanation offered by Christine Cuskley at the end of the
article. In the world we live in, some sets of sounds are associated with
small animals, others with big ones.

Nativist hypotheses suffer from a lack of imagination in looking for
alternative explanations. I cannot explain it? Must be innate! Children do
it? Must be innate!

Cognitive research points to a much more promising explanation: we are really
good at learning, and we can learn abstract concepts pretty quickly, even as
very young children. See Stanislas Dehaene's work for instance.

Abstracting the concepts of big and small from our environment and
associating sounds with them is much more plausible than passing around
information about abstract concepts in genes through natural selection based
on random genetic mutations.

~~~
copsarebastards
_That’s the danger of Occam’s razor and of simple answers. “Simplicity” is
often a hiding place for our biases. We reach the conclusion we want to reach
and then call it simple._ --Plaza Garabaldi

I'm not saying you're wrong, I'm saying that I don't actually agree with you
that brains being hardwired with information is that hard to explain.

~~~
erispoe
Great, I want to know more about it. Do you have some references on that?

~~~
briantakita
Try interpreting the observation of animals & humans through a lens of their
behavior being similar given certain contexts.

~~~
erispoe
I don't want to solve a riddle here, but I am genuinely interested in the
evidence supporting the nativist hypothesis in this context, especially on how
you'd pass abstract information through genes. Do you have any reference, a
good scientific paper that could be an entry point?

------
sawwit
This recent publication about evidence for a proposed model for basic
grammatical processing in the neocortex, was also fascinating:
[http://www.pnas.org/content/112/37/11732.long](http://www.pnas.org/content/112/37/11732.long)

 _An architecture for encoding sentence meaning in left mid-superior temporal
cortex_

    
    
        Human brains flexibly combine the meanings of words to compose
        structured thoughts. For example, by combining the meanings of
        “bite,” “dog,” and “man,” we can think about a dog biting a man,
        or a man biting a dog. Here, in two functional magnetic resonance
        imaging (fMRI) experiments using multivoxel pattern
        analysis (MVPA), we identify a region of left mid-superior
        temporal cortex (lmSTC) that flexibly encodes “who did what to
        whom” in visually presented sentences. We find that lmSTC
        represents the current values of abstract semantic
        variables (“Who did it?” and “To whom was it done?”) in distinct
        subregions. Experiment 1 first identifies a broad region of lmSTC
        whose activity patterns (i) facilitate decoding of
        structure-dependent sentence meaning (“Who did what to whom?”)
        and (ii) predict affect-related amygdala responses that depend on
        this information (e.g., “the baby kicked the grandfather”
        vs. “the grandfather kicked the baby”). Experiment 2 then
        identifies distinct, but neighboring, subregions of lmSTC whose
        activity patterns carry information about the identity of the
        current “agent” (“Who did it?”) and the current “patient” (“To
        whom was it done?”). These neighboring subregions lie along the
        upper bank of the superior temporal sulcus and the lateral bank
        of the superior temporal gyrus, respectively. At a high level,
        these regions may function like topographically defined data
        registers, encoding the fluctuating values of abstract semantic
        variables. This functional architecture, which in key respects
        resembles that of a classical computer, may play a critical role
        in enabling humans to flexibly generate complex thoughts.

------
elijahz
Here's a fascinating documentary on the phenomenon:

[http://www.youtube.com/watch?v=-gwXJsWHupg](http://www.youtube.com/watch?v=-gwXJsWHupg)

------
hacker_9
I would have thought all living creatures with ears would assign the same
meaning to loud/quiet or long/short sounds (i.e. loud sounds are more
important because they signify a big predator or a large object running or
falling nearby, while a small sound just confirms that two twigs collided,
etc.). With this common basis for sound, I don't think it is too surprising
that our use of vocalisation relates in some way, at a very basic level,
across languages.

------
titzer
Half the languages in the list are branches off the Indo-European tree. I have
a hard time believing this is a result of "language hardware" in the brain as
opposed to a much simpler explanation like the shared heritage of the words.

They don't seem to have controlled for native language; e.g. Dutch has a lot
of similarities with English...

------
jack9
The reason I thought ca-chook meant small is that it's fairly difficult to
say. When saying small vs. big, there are a few incentives (physical,
contextual, etc.) for one to be easier to say than the other.

e.g. We have a big problem. We have a small problem.

~~~
danblick
I thought küçük ("coo-chook") sounded small because it reminded me of baby
cooing.

I'm not really surprised by this result. Some words are imitative.

Words that are more imitative are probably evolutionarily selected to remain
in human languages because their speakers find them easier to recall. To me
that seems like a perfectly good explanation for the phenomenon, one that has
little to do with "hardwiring into our brains".

------
dreamfactory2
Did they exclude ESP?

------
craigjb
Sample size of 76....

~~~
iopq
What does that have to do with anything? Statistical significance depends on
both effect size and sample size, so even a small sample with a large effect
can reach statistical significance, while a very small effect may fail to,
even at a large sample size.
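To make that concrete, here's a quick sketch using an exact one-sided binomial test against a 50% chance rate. The numbers (16/20 and 510/1000) are made up for illustration, not taken from the study:

```python
from math import comb

def binom_p_one_sided(k, n, p0=0.5):
    """Exact one-sided binomial test: P(X >= k) under chance rate p0."""
    return sum(comb(n, i) * p0**i * (1 - p0)**(n - i) for i in range(k, n + 1))

# Small sample, large effect: 16/20 correct (80%) vs. 50% chance
p_small = binom_p_one_sided(16, 20)    # ~0.006, significant at 0.05

# Large sample, tiny effect: 510/1000 correct (51%) vs. 50% chance
p_large = binom_p_one_sided(510, 1000)  # ~0.27, not significant
```

So 20 trials at 80% accuracy beats the 0.05 threshold comfortably, while 1000 trials at 51% doesn't come close.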

~~~
colordrops
Could you point to a good article or tutorial on statistical significance,
sample size, confidence intervals etc? I'd like to know how to analyze
experimental results myself.

~~~
cmarschner
Start here (and its references).
[https://en.m.wikipedia.org/wiki/Statistical_hypothesis_testi...](https://en.m.wikipedia.org/wiki/Statistical_hypothesis_testing)

