
My problem with upvoting is that I have read the underlying paper, and I'm not impressed at all. This paper takes a hugely nurturist approach, and in parts verges on dishonesty.

I consider myself - if not an expert, at least very well informed - on the topic of dual n-back, and I was flabbergasted when I read pg11 of the PDF. There is not a single mention of the many failures to replicate, methodological critiques, or skeptical meta-analyses about n-back. Some of that can be explained by the January 2012 date, but only some of it, because much of that work long predates 2011 or 2012; and it can't be explained by the people involved being ignorant, because they thank Jaeggi, Duckworth, Jonides, and Sternberg - who certainly would know!

Someone reading this paper would come away with a very different understanding and expectation for dual n-back training than someone reading my http://www.gwern.net/DNB%20FAQ . So all I can think is, "if this is how Nisbett et al act on the one section where I know almost as much as them, how are they misleading me in all the other sections?"




Labeling the paper with the epithet "hugely nurturist" tells nothing about the paper but everything about what you are doing. You are attempting the early dismissal of some viewpoint you disagree with on more or less political grounds, but this has little to do with science.

Most cognitive tasks used inside an MRI are about as meaningless as benchmarks. It seems gratuitous to turn this into evidence of intentionally dishonest malfeasance and frame the whole thing as some kind of pseudo-political debate. Just engage the data, please?

Setting aside the political tone you've taken here: the idea that nature and nurture are dichotomous (or can even be meaningfully separated) is not supportable.

It has been observed many times that because heritability includes a genes x environment interaction term, it does not mean that something is 'genetic' - and real-world examples have been provided in this thread.

You cannot build an adequate understanding of development on the caricature that there is a dichotomy between 'nature' and 'nurture'. DNA and RNA cannot do anything without a developmental environment. You do not have any organism without development, which necessarily occurs in an environment.

There are strong adaptive reasons for learning in big primates and it is well known to occur. On the other hand, you do not have development or learning without machines to implement them; these are not built without genes, but neither can genes build them in a vacuum. It is a perpetual interchange, and the components are viciously complementary and embedded in the same feedback loops, to the point where it is nearly meaningless even to argue about nature vs. nurture.


> Labeling the paper with the epithet "hugely nurturist" tells nothing about the paper but everything about what you are doing. You are attempting the early dismissal of some viewpoint you disagree with on more or less political grounds, but this has little to do with science.

Not relevant to my point about the one-sided presentation.

> Most cognitive tasks used inside an MRI are about as meaningless as benchmarks. It seems gratuitous to turn this into evidence of intentionally dishonest malfeasance and frame the whole thing as some kind of pseudo-political debate. Just engage the data, please?

I don't see what MRIs have to do with this... the DNB studies that matter are, at core, very simple: you give someone a Raven's or BOMAT test, randomly train them on DNB or not, and give them a second test.
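To make that design concrete, here is a minimal sketch of such a randomized pre/post experiment, simulated with made-up numbers (the zero training effect, noise levels, and sample size are purely illustrative assumptions, not figures from any actual DNB study):

```python
import random
import statistics

random.seed(0)

def iq_test(true_ability):
    """A noisy measurement of latent ability (hypothetical numbers)."""
    return true_ability + random.gauss(0, 5)

# Simulate 200 subjects; assume (for illustration only) DNB adds 0 IQ points.
TRAINING_EFFECT = 0.0
subjects = [random.gauss(100, 15) for _ in range(200)]

gains = {"dnb": [], "control": []}
for ability in subjects:
    group = random.choice(["dnb", "control"])   # random assignment
    pre = iq_test(ability)                      # e.g. a Raven's / BOMAT test
    post_ability = ability + (TRAINING_EFFECT if group == "dnb" else 0.0)
    post = iq_test(post_ability)                # second test after training
    gains[group].append(post - pre)

# Difference in mean gain scores between trained and control groups
effect = statistics.mean(gains["dnb"]) - statistics.mean(gains["control"])
print(f"estimated training effect: {effect:.1f} IQ points")
```

With no true effect, the estimate bounces around zero by a point or two from measurement noise alone, which is exactly why replications and meta-analyses matter for small samples.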

> Setting aside the political tone you've taken here: the idea that nature and nurture are dichotomous (or can even be meaningfully separated) is not supportable.

Everyone knows that in IQ research, and psychometrics in general, there is a continuum of positions from nurture to nature; while only ideologues take a 100% position and everyone sensible acknowledges substantial contributions from both, there's still a very big difference between someone like Nisbett and, say, Arthur Jensen.

I don't know what you're talking about with your dichotomies and caricatures, and I don't really care, because it's not relevant to my point about how accurately this Nisbett et al 2012 paper presents the evidence.


From the article: "The American Psychological Association created a panel of eminent researchers to write a summary of what was known about intelligence, which would presumably contradict many of these claims."

"...would presumably contradict" is right, of course, because the APA is one of the most political organizations in America. An article published in The American Psychologist (the APA's own journal) describes how "a past APA president urged psychologists to advocate radical leftist positions and 'explicitly blend our data and values in order to make strong arguments for the kinds of [radical] change we think is necessary.'"

The people the APA chooses as "eminent researchers" to summarize "what is known" just aren't going to include anyone who doesn't promote their political agenda, leading Willingham to the "would presumably contradict" comment. What they would "find" would be a foregone conclusion, by policy, given the nature of the APA.

A study was conducted in which a liberal, a conservative, and a centrist independently rated the political leaning of articles published by the APA in The American Psychologist. The results, published in The American Psychologist itself, found the APA's published articles to be 97% liberal vs. 3% conservative, with a 93% correlation among the different raters.

This doesn't make any particular claim about IQ right or wrong, but it does mean that you might want to take the APA's word about "what is currently known" about a politically controversial issue with a grain of salt. If other things had been found (and they have), would the APA's hand-picked "eminent scientists" be sure to tell you?

[ See a lot of Philip Tetlock's articles on the topic (use Google Scholar, I'm lazy.) Also, see this in The American Psychologist on the APA's own website: http://psycnet.apa.org/index.cfm?fa=fulltext.journal&jco... ]


Surely you're aware that a tremendous amount of ink and (social-)scientist hours have been spent doing "science" with the presumption that there "are no genes for IQ" etc., so it makes sense to be on guard against biased writing (at least by publication/file-drawer bias) on the topic.

You have gwern wrong. He's an unbiased scientist on this topic.


At best this is "the other guy swung first." At worst it is just a straw man. What you claim the other guys do is irrelevant. Territory defense is irrelevant. Facts are relevant and fist-fights do not uncover them. Why can't the critique proceed on a higher level than "those Xists are deliberately lying again"?

If your way of judging a paper is to look at who wrote it, then conclude that the paper is bad based on some kind of guilt by association with the "wrong school" then you are not being honest with yourself.

The nobler objective is not to discredit and guard against "nurturists" but to do justice to reality. Cognitive science shouldn't be so much like a kung fu movie...


[To gwern, unprovoked] You are attempting the early dismissal of some viewpoint you disagree with on more or less political grounds, but this has little to do with science.

[Next reply] Why can't the critique proceed on a higher level than "those Xists are deliberately lying again"?

You seem to be suffering from some cognitive dissonance here.


I'm more worried about this:

> There is not a single mention of the many failures to replicate, methodological critiques, or skeptical meta-analyses about n-back.

Can you refute that?


If you think I am defending the paper because I am its ideological ally, you have not understood what I said


Oh, this wasn't a jab at you. Rather, that's the part of grandparent's post that I think is the most concerning. I just don't have the knowledge and skills to see if it's true, so I was asking you.

Usually, part of the science process is to analyze results in the context of similar work. If the authors are indeed cherry-picking and ignoring counter-results instead of acknowledging them, then that's bad.


To check your biases: what percentage of someone's height is based on DNA vs. environmental factors? OK, now compare that with IQ.

In the US, environmental factors represent ~20-point shifts in IQ. However, 50-point differences (75 vs. 125) are not all that uncommon, even among family members, which suggests huge differences based on DNA.

PS: When the average nutrition level increases, the importance of DNA also increases.


Please step back a bit.

Let's say you are measuring IQ (whatever it indicates) in some giant between-subjects design or meta-analysis which also includes some number of "independent variables" outside your experimental control. Then you do some kind of multiple regression.

What you are doing is building a very high-level, black-box model finding maximum likelihood parameters to fit some observed data. Each underlying data point is a slice of a snapshot of the behavior of a very complex system.

This is worthwhile insofar as we can account for variance that we might see in future samples and can therefore predict and confirm that we have an understanding of the system. It doesn't actually give us visibility into the complex system because regression terms rarely correspond to anything concrete. This offers only the vaguest possible constraint on attempts to decompose the overall effect into causal factors.
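A tiny numerical sketch of why the individual weights don't carry meaning outside the model: below, two correlated predictors (the names "ses" and "schooling" and all the coefficients are invented for illustration) are regressed against a simulated outcome. The weight assigned to one predictor changes substantially depending on what else is in the model, even though the data-generating process never changed.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Two correlated "independent variables" (names and numbers are invented).
ses = rng.normal(0, 1, n)
schooling = 0.8 * ses + rng.normal(0, 0.6, n)
iq = 100 + 5 * ses + 5 * schooling + rng.normal(0, 10, n)

def fit(predictors, y):
    """Least-squares (maximum-likelihood) weights for a linear model."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    return np.linalg.lstsq(X, y, rcond=None)[0]

b_simple = fit([ses], iq)            # SES alone absorbs schooling's share
b_joint = fit([ses, schooling], iq)  # adding schooling shrinks SES's weight
print("SES weight, alone:", round(b_simple[1], 2))
print("SES weight, with schooling in the model:", round(b_joint[1], 2))
```

The fitted weights account for variance either way; what changes is only how the credit is split between correlated terms, which is why reading a single weight as "the causal contribution of X" is unwarranted.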

The weights in the regression do not, individually, have a meaningful interpretation outside of the model. (The implication and meaning of a term in a theory depends on the meanings of other terms in the theory).

But for some reason we decide to disregard this, and are primarily interested in which weight is bigger. And the reason we are interested is as a way of showing which ideological school is right or wrong: where the ideological schools are some version of "DNA is so important" and "environment is so important". If one school is vindicated - what? Everything they say is true?

We are testing hypotheses, not people. To badly paraphrase Popper: we send our theories to die in our stead...

Are you getting an idea of what I am saying about nature vs. nurture here or are we just talking past each other?


The value of such a simple model is not that it directly tells you anything, but that when it stops working you have a strong hint that something interesting is going on. Suppose you find a town where the IQ is 7 points above what you would expect after accounting for the children's socioeconomic situation AND their parents' IQ scores. Sounds interesting, right? Except without the proper weights you can't do that analysis in the first place.


I hope most people want to identify the influences on IQ so they can try to change them, not so they can vindicate their preferred school of thought.


At least you're not succumbing to the Gell-Mann Amnesia effect, gwern.


Heh. And I started reading the paper with such high hopes when it came out months ago, too. For those not familiar, realitygrill is referencing a classic Crichton quote:

"Media carries with it a credibility that is totally undeserved. You have all experienced this, in what I call the Murray Gell-Mann Amnesia effect. (I call it by this name because I once discussed it with Murray Gell-Mann, and by dropping a famous name I imply greater importance to myself, and to the effect, than it would otherwise have.) Briefly stated, the Gell-Mann Amnesia effect works as follows. You open the newspaper to an article on some subject you know well. In Murray's case, physics. In mine, show business. You read the article and see the journalist has absolutely no understanding of either the facts or the issues. Often, the article is so wrong it actually presents the story backward, reversing cause and effect. I call these the "wet streets cause rain" stories. Paper's full of them. In any case, you read with exasperation or amusement the multiple errors in a story, and then turn the page to national or international affairs, and read with renewed interest as if the rest of the newspaper was somehow more accurate about far-off Palestine than it was about the story you just read. You turn the page, and forget what you know."


I never knew it was called the "Murray Gell-Mann Amnesia effect," but I've been using this exact line of reasoning for many years when discussing the validity of stories read and watched in the media. Interestingly, I discovered it as a kid reading a newspaper article about video games, something I was very much into at the time. They were talking about the Sega Genesis and the Super Nintendo, and at least half of the article's "facts" were just flat wrong, and many of the opinions were far out of left field from what a legitimate gamer would really think. That's when I made the connection: if this story, about something I am intimately familiar with, is so utterly wrong, then... what about the things I don't know about? That was about the time I stopped paying so much attention to the general media.

I didn't know it had a fancy name though.


> I didn't know it had a fancy name though.

Well, by giving it that, I imply greater importance to myself, and to the effect, than it would otherwise have!


It sounds like you are hijacking the paper to talk about a pet theory. Did you find a cite on p11 about DNB? They refer to cognitive exercise, but not using that particular method. Whether that method is effective or not is orthogonal to their point.


Huh? They spend almost an entire page talking about DNB and WM training! (And BTW my points about DNB apply just as well to WM training in general.)

> Whether that method is effective or not is orthogonal to their point.

You are wrong. It is very germane whether it works.


Okay, I found the reference on page 10. Somehow my keyword search didn't work and we didn't number pages the same way.



