My personal theory is that solving difficult math and programming problems is a better use of time than doing working memory training exercises. I'd guess that solving math and programming problems improves working memory just as fast as playing 'brain games', and it has the additional benefit of improving one's math and programming skill.
Of course, this is just a guess of mine that is unsupported by any experimental evidence.
Higher math is about logic, abstractions and applying them to solve problems. Basic math is about holding bits of information in your head and manipulating them accurately. I absolutely believe that basic math taxes your working memory system more than advanced math does.
He also brings up the regularity of Eastern number systems, which makes it much easier to do calculations in those languages, to the point that it gives Eastern children a real developmental advantage in math.
I'll have to take a look at that book, sounds fascinating. Thanks.
Naturally, the full article is behind a paywall.
The Chein task can be found here.
Please let me know your suggestions on how to improve this. I need to make a few more changes for it to be playable on an iPad, but for now it works with the keyboard.
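For what it's worth, one low-effort way to get it playable on an iPad is to route touch events into the same code path the keyboard already uses. This is just a sketch of that idea; `handleInput` and the `data-key` attributes are hypothetical names, not anything from your actual game:

```typescript
// Hypothetical sketch: reuse the existing keyboard handler for touch input.
// Assumes each on-screen control carries a data-key attribute naming the
// key it stands in for (e.g. <div data-key="a">A</div>).
function bindTouchControls(handleInput: (key: string) => void): void {
  document.querySelectorAll<HTMLElement>("[data-key]").forEach((el) => {
    el.addEventListener("touchstart", (ev) => {
      ev.preventDefault();          // suppress the simulated mouse events
      handleInput(el.dataset.key!); // feed the existing keyboard code path
    });
  });
}
```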
I'd make the instructions pop up at first, or at least make the link a little more prominent. I struggled with how to play the color game and it took me a minute to find the instructions.
The games are fun, and I could see them being used for brain training or even just as teaching aids for middle-school-aged children.
What most people seem to fail to realize is just how different people are from each other cognitively, and how many different ways there are of getting to a particular conclusion. "Intelligence" is like a country's GDP -- a complicated, non-stationary mess that, taken together and measured in a way our culture deems important, ends up representing our "Gross Cognitive Product".
So, dual n-back probably improves working memory in most people, which will probably improve their problem solving ability, most of the time. But there are, without a doubt, many other subtle, complicated, and idiosyncratic aspects of cognition that are likely to have far more dramatic effects if appropriately tweaked.
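(For anyone who hasn't tried it, the core of dual n-back is simple to state. Here is a minimal sketch of the matching rule, with illustrative types rather than any particular implementation:)

```typescript
// Dual n-back, the scoring rule only: on each step the player should
// respond if the current stimulus matches the one from n steps earlier,
// separately for the visual channel (grid position) and the audio
// channel (spoken letter). Types and names here are illustrative.
interface Stimulus {
  position: number; // cell in a 3x3 grid, 0-8
  letter: string;   // spoken letter
}

function matches(
  history: Stimulus[],
  n: number
): { position: boolean; letter: boolean } {
  const i = history.length - 1;
  if (i < n) return { position: false, letter: false }; // too early to match
  const now = history[i];
  const back = history[i - n];
  return {
    position: now.position === back.position,
    letter: now.letter === back.letter,
  };
}
```

The training effect, such as it is, comes from bumping n up whenever accuracy gets high, so the load on working memory keeps pace with the player.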
A very astute question. IQ scores are specific to a particular test taken at a particular time, even when comparing IQ scores according to today's standard score definition of IQ for test-takers taking several tests very close together in time.
(See the sortable table, with reference to the original publication, at this page anchor location on Wikipedia. The same table appears at the page anchor link below. ALL of the Wikipedia articles on human intelligence and IQ testing need a lot of updates, because they have been subject to frequent edit-warring, but that table is quite useful.)
Nobody has an IQ score from more than about a century ago. The current standard score definition of IQ, performance on a cognitive test with the population median set at 100 and performance two standard deviations above the median being called IQ 130, began with the Wechsler adult tests in the 1950s and spread to child testing by the 1970s, and is now pretty nearly universal. But even with that definitional issue kept straight, an individual person's IQ score can bounce up and down over time, and by any kind of testing theory we can never be completely sure of a person's "true" score, as any score on any occasion of testing is an estimate of the test-taker's behavior on other occasions or with other item content. Honest IQ test-givers report scores with an error band around the score, as has always been done, for example, by the psychologist who has tested my children for appropriate educational placement.
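To spell out the arithmetic in that standard-score definition (assuming the usual Wechsler standard deviation of 15 points, which is what makes two standard deviations come out to 130):

```latex
% Standard-score definition of IQ, with the Wechsler SD of 15 points:
\mathrm{IQ} = 100 + 15z
% where z is the test-taker's distance from the population median in
% standard deviations; z = 2 gives 100 + 15 \cdot 2 = 130.
```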
The excellent book Terman's Kids: The Groundbreaking Study of How the Gifted Grow Up by Joel N. Shurkin gives the full backstory to the silly estimates you see of historical figures who lived before the era of IQ testing, which were mostly made up by Terman's collaborator Catharine Cox. Her procedure was justifiably laughed at by Shurkin as he described it: she counted the lines devoted to different historical figures in biographical reference works, and supposed that the people in history who got the most ink probably had the highest IQs. There were plenty of anomalies in her results from the very beginning, and no one takes them seriously anymore.
Lately I've been thinking about the limitations that being human puts on a programmer. We work very hard (and should) to reduce cognitive load on ourselves through good development tools that act as crutches for memory---REPLs and good debuggers allow us to try something and see what happens, as opposed to simulating a multivariate operation in our heads. IntelliSense and easily available docs allow us to cheat a bit on learning (and what I really mean is memorizing) APIs.
But what if we could do these things without relying on JIT computer aids? What if I could simulate more levels of abstraction in my head? The private dream of many a Lisper (this one, anyway)---writing programs that write programs that write programs---would be a bit more attainable.
I've been using Anki with great success to learn APIs and keyboard shortcuts (Anki+emacs is a match made in heaven), but I despaired at my inability to hold the whole stack, from top to bottom, in my head at once.
So I posted a badly phrased question on Stack Exchange ("How can I increase the number of levels of abstraction I can reason about at once?"), and kept Googling. Eventually I came across Jaeggi's research. It looks promising, but it hasn't passed the wide-replication test yet. I'm glad this came up on HN, because I'm eager to see more research in the area and get some confirmation or refutation of the findings.
In the meantime, the premise of Jaeggi's conclusion raises two questions---if working memory can be trained, can it decay with disuse? In that case, are we, with our fancy debugging tools, mere shadows of the Real Programmers who used to walk the earth? The other question is this---if the brain is likened to a computer, working memory corresponds to RAM. If we are successful at training working memory and making people "smarter," will we in the future face a bottleneck of processing speed rather than space?
One thing I've noticed about Anki is that the pain of memorizing and retaining stuff has been significantly reduced. As such, I tend to be much more willing to "just memorize the whole thing" in a lot of cases. I guess a good analogy is the impact that faster CPUs and more memory have had on programming---when they stop being the limiting factor, we start using languages adapted to us, rather than to them. Similarly, as memorization has become "cheaper" to me, I find myself making choices that involve more memorization. A few days ago I made a deck specifically for all the Gmail keyboard shortcuts. It would not normally be worth my time to commit those to memory, but because the cost has fallen so much, it didn't seem like a bad idea.
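(For the curious: Anki's scheduling descends from SuperMemo's SM-2 algorithm, and that algorithm is essentially why memorization got "cheaper": review intervals grow roughly geometrically, so a card you keep answering well costs only a handful of reviews over years. A simplified sketch, with field names of my own choosing:)

```typescript
// Simplified SM-2 review step. q is the 0-5 self-rated answer quality.
interface Card {
  interval: number;    // days until the next review
  repetitions: number; // consecutive successful reviews
  easiness: number;    // "ease factor", starts at 2.5
}

function review(card: Card, q: number): Card {
  if (q < 3) {
    // Failed recall: restart the repetition sequence.
    // (Simplified: full SM-2 also updates easiness here.)
    return { ...card, repetitions: 0, interval: 1 };
  }
  // Ease drifts with answer quality, floored at 1.3.
  const easiness = Math.max(
    1.3,
    card.easiness + 0.1 - (5 - q) * (0.08 + (5 - q) * 0.02)
  );
  const repetitions = card.repetitions + 1;
  // Intervals: 1 day, then 6 days, then previous interval times ease.
  const interval =
    repetitions === 1 ? 1 :
    repetitions === 2 ? 6 :
    Math.round(card.interval * easiness);
  return { interval, repetitions, easiness };
}
```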
Personally, I think that attaining the state of flow is what makes programming enjoyable, and I haven't experienced it in many years. There are just too many APIs, too much poor documentation, too many bugs, and too many languages I have to switch between for me to ever get into the 'zone'.
I cope by taking every chance I get in my day job to experience these short bursts of pure creation. Do I need an interesting algorithm implemented? I'll hand-code it myself rather than spend an hour or so trying to find and shoehorn the "standard" implementation into my application. I love the rare occasions where I notice a, dare I say, "clever" solution* to a problem. Coming up with and implementing these solutions is a joy. I admit this isn't always the best way to solve a problem from a "software engineering" standpoint. But it keeps me in the game.
*Not clever as in obscure and WTF-worthy, but clever as in elegant and expressive, albeit perhaps inaccessible to less skilled coders.
(My USCF rating did go up fairly significantly around then but it's no proof of any causation, plus I was doing plenty of other things at the same time to increase my chess results anyway.)
Depending on what we mean, "Can you make yourself smarter?" has fairly obvious answers.
Start by looking at children. In one regard, we rarely learn faster than when we are kids. Everything is foreign to us, and we are constantly learning, our brains little sponges in a wet world. But clearly, our 25-year-old selves could solve far more complex problems than our 6-year-old selves. Did we get smarter between 6 and 25?

In the same vein, think about how severely retarded people are described: "He has the mind of a 4-year-old." Whether that description is medically accurate or not isn't the point. We certainly think of children as intellectually inferior, even though all of our brains started out that way.

So what changed? Why is a 25-year-old "more intelligent" than a 6-year-old? Is it the creation of new neural pathways? Is it simply the way they've learned to look at the world, or how quickly they apply answers and processes they already know to fit new problems?
Maybe you can provide more answers? Because it seems to me that the fact that we got to where we are today indicates that you can absolutely make yourself more intelligent, depending on how you define that. But I'm open to objections.
But why? Most people seem to think that this rapid development is, for the most part, genetically driven. Doesn't it seem strange to reach this conclusion when we don't even have a solid idea about what "intelligence" really means (other than "performs above X level on some cognitive test")?
For the same reason that people grow taller until they're in their late teens and get stronger until they're in their early 20s.
>Most people seem to think that this rapid development is, for the most part, genetically driven.
It is. Depending on which studies you cherry-pick, the heritability of intelligence is somewhere between 0.5 and 0.8. The best way to be smart is to choose your parents.
>Doesn't it seem strange to reach this conclusion when we don't even have a solid idea about what "intelligence" really means (other than "performs above X level on some cognitive test")?
People can argue about the definition of strength just as easily. Who is the strongest person in the world? Is it whoever can bench-press the most weight? What about leg press? Clean-and-jerk? Maybe some average of these measures? Maybe we want to factor the person's weight in as well. While "strength" doesn't always have a precise definition, it's usually pretty easy for us to tell weak people from strong ones. It's the same when people talk about intelligence. The precise definition varies, but there are lots of correlating ways to measure it.
A study divided people into two categories:

1. Those who believe that intelligence is fixed from birth, and
2. Those who believe that intelligence can be improved
The finding was that folks in category (1) tended to be fearful of being wrong and had trouble succeeding in life, whereas the folks in category (2) felt it was OK or even good to make mistakes, and tended to be more successful.
EDIT: article is http://www.nytimes.com/2008/07/06/business/06unbox.html
"In addition to working memory, researchers are seeking to improve fluid intelligence by training other basic mental skills — perceptual speed (deciding, in a matter of seconds, whether a number is odd or even), visual tracking (on a shoot-’em-up computer game, for instance) or quickly switching between a variety of tasks."
Sure, but just because it takes half a second or less doesn't mean it's not reflecting mental performance. For example, testing reflexes takes even less time, down to tenths of a second, yet reaction time still correlates with IQ.
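To make the perceptual-speed idea concrete, here is a hypothetical sketch of the odd/even trial the quoted article mentions, written as a Node-style console toy (the name runTrial and the o/e key scheme are mine, not from any study):

```typescript
// One odd/even perceptual-speed trial: show a digit, time the keypress,
// report speed and correctness. Runs in a Node terminal.
import * as readline from "readline";

function runTrial(): void {
  const digit = Math.floor(Math.random() * 10); // the stimulus
  console.log(`Is ${digit} (o)dd or (e)ven?`);
  const start = Date.now();

  readline.emitKeypressEvents(process.stdin);
  process.stdin.setRawMode?.(true); // deliver keypresses immediately
  process.stdin.once("keypress", (_str, key) => {
    const reactionMs = Date.now() - start;
    const saidOdd = key.name === "o";
    const correct = (digit % 2 === 1) === saidOdd;
    console.log(`${correct ? "correct" : "wrong"} in ${reactionMs} ms`);
    process.stdin.setRawMode?.(false);
    process.exit(0);
  });
}

runTrial();
```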
You need a speaker to play.
It should say "Press A when the box appears in the same location _since the start of the game_."
IQ!=genius. Genius is what you do.
So the smart get smarter and the dumb get dumber, so to speak.