Dual N-Back Meta-analysis (gwern.net)
99 points by gwern on Sept 30, 2013 | 14 comments

For those who haven't yet taken notice, Gwern's blog is a treasure trove of thorough, intelligent, compelling analysis and meta-analysis on this and many other topics. It's one of my favorite weekly reads.

It's not always the easiest reading in the world, and the formatting and structure often require a dive down a fairly deep rabbit hole of terminology, methodology, and sourcing. But it's fascinating stuff.

I was actually coming here to comment on the sheer wonderfulness of this blog. I would personally rate this as better than many psychology papers I've read, and over the course of a PhD, that's a large sample of psychology papers.

Additionally, the exposition is clear, the graphs are simple but effective, and it's just wonderful in most respects.

The only (minor) criticism I would have is that I would have liked to have seen more of the input code (I didn't spot a link to it anywhere).

> The only (minor) criticism I would have is that I would have liked to have seen more of the input code (I didn't spot a link to it anywhere).

What do you mean by input code? If you mean the original R code, the entire analysis is provided in http://www.gwern.net/DNB%20meta-analysis#source - when I add new studies, all I do is add an entry to the table of data, re-run the R code, and then copy-paste the relevant output. (I could probably automate it even more with knitr, but it doesn't seem worth the work of integrating knitr with Hakyll.)
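To sketch what that add-a-row-and-re-run loop amounts to, here is a minimal inverse-variance (fixed-effect) pooling in Python. All the study rows are made-up numbers for illustration; the actual model and data live in the linked R source.

```python
import math

# Hypothetical (label, effect size d, variance of d) rows.
# The real data table and model are in the linked R code.
studies = [
    ("Study A", 0.30, 0.04),
    ("Study B", 0.10, 0.02),
    ("Study C", -0.05, 0.03),
]

def pooled(rows):
    """Inverse-variance (fixed-effect) pooled estimate with a 95% CI."""
    weights = [1.0 / v for (_, _, v) in rows]
    d_bar = sum(w * d for (_, d, _), w in zip(rows, weights)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    return d_bar, (d_bar - 1.96 * se, d_bar + 1.96 * se)

# "Adding a new study" is just appending a row and re-running:
studies.append(("Study D", 0.00, 0.05))
d_bar, (lo, hi) = pooled(studies)
print(f"pooled d = {d_bar:.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```

The update workflow is the point here: the summary estimate and interval are pure functions of the data table, so each new study is one appended row and one re-run.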

Ah, I see. I had assumed it was LaTeX converted to HTML with Sweave et al., in which case I would have enjoyed seeing all the code. Nonetheless, thank you for the link (and for some of the other articles, which I thoroughly enjoyed).

I know you say in the comments that this is still in progress, but it would be really helpful if you had a conclusions section that ties the results together. From your HN headline, I see that you conclude that the evidence points to DNB being ineffective for improving general intelligence, but that was not the sense I got from actually reading the analysis; rather, I had the sense that evidence supports an improvement in intelligence, but that this evidence is much weaker than it appears to be at first glance.

So it would be helpful if you had some explanation of your final conclusion.

My own basic conclusion is included in the first section, the bits starting with "The confidence interval excludes zero, so one might conclude that n-back does increase IQ scores. From a Bayesian standpoint, it's worth pointing out that this is not nearly as conclusive as it seems, for two reasons:..."

As it happens, the active-control-group studies seem to be converging on zero, so I don't even have to appeal to priors. But if the result had been, say, d=0.4, I still wouldn't believe it: it would be more likely that the IQ tests were being corrupted, or that publication bias had caused it, or that fraud was involved, than that we had genuinely found a simple WM exercise which increases IQ after countless failures.
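A toy version of that prior argument, with every number purely illustrative (none are taken from the meta-analysis): when the prior probability of a genuine IQ-raising intervention is low and mundane explanations like bias or test corruption are live, even a striking positive result leaves the posterior well under 50%.

```python
# Toy Bayesian update; all probabilities are illustrative, not estimated.
prior = 0.02            # P(n-back really raises IQ), given many past failures
p_data_if_true = 0.8    # P(observe d ~ 0.4 | real effect)
p_data_if_false = 0.1   # P(observe d ~ 0.4 | bias, fraud, corrupted tests)

posterior = (p_data_if_true * prior) / (
    p_data_if_true * prior + p_data_if_false * (1 - prior)
)
print(f"posterior = {posterior:.2f}")
```

Even with the data 8x more likely under a real effect, the low prior keeps the posterior around 0.14, which is the shape of the "I still wouldn't believe d=0.4" claim.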

I do, of course, want to finish up the details (correct the Clouter data, and include the Seidler et al 2010 and Colom et al 2013 data) before I write up any kind of definitive conclusion & do a post-mortem of dual n-back.

Does DNB increase working memory? It need not increase IQ for it to be effective.

Apropos of this article following Gwern's discussion of spaced repetition:

I wonder if a Dual N-Back task that was based on material-to-be-memorized, rather than just positions/letters, would have interesting effects on working or long-term memory. (You'd be stressing two related/entangled recall systems at once.)

Anyone done that test?

I've never heard of that. You could probably script up something. Brain Workshop lets you specify a set of images to use, so I could imagine writing a script to extract the images of the next 30 flashcards and dump them into the folder each time you start up BW, and you would n-back with that...

I would be fascinated to hear how it went.


"Brain Workshop is a free open-source version of the dual n-back brain training exercise."

What exactly does "near WM" mean? Does this improve working memory on a long-term basis, or does near mean near in time? I feel like that might make it worth doing for me as a programmer, as WM feels like a useful thing to improve even if it doesn't directly translate to IQ.

There's a bunch of WM tasks floating around, all slightly different: forward digit span, backward digit span, verbal spans, complex spans using arithmetic, game-like WM tasks designed for children, suites of WM tasks like Cogmed, tests included in quantified-mind.com or Lumosity or Cambridge Brain Sciences, etc. This is what I mean when I say n-back may 'near'-transfer: to other WM tasks which are not themselves n-back.

As to how valuable WM is, I'm not sure. There were a bunch of positive papers from people like Klingberg, but these tend to use subjective measures or WM-related measures, and there seems to be an absence of studies tying the WM training to concrete useful changes like improved grades or standardized exam scores.

This is completely unsupported intuition, but I wouldn't be surprised if programming benefits from working memory more than from other aspects of intelligence. That is, it seems possible that improved working memory improves programming without improving grades or exam scores.

I think that's possible too, but I have no idea how one would study that. The empirical study of programming languages is still in its infancy - we can't even say with high confidence whether dynamic or static languages are better to work in, much less show that WM increases cause programming improvements.
