Brain training exercises might just make you better at brain training exercises (bps.org.uk)
523 points by ingve on Oct 7, 2016 | 211 comments

This is the best article I've ever read on the subject of how you can improve intelligence: https://blogs.scientificamerican.com/guest-blog/you-can-incr...

Kind of like how people have to be reminded that the best way to lose weight is to diet and exercise, the real answer for intelligence is challenging yourself frequently and putting yourself in uncomfortable situations that you need to think your way through.

At the end of the article there's a beautiful definition:

"Intelligence isn’t just about how many levels of math courses you’ve taken, how fast you can solve an algorithm, or how many vocabulary words you know that are over 6 characters. It’s about being able to approach a new problem, recognize its important components, and solve it—then take that knowledge gained and put it towards solving the next, more complex problem. It’s about innovation and imagination, and about being able to put that to use to make the world a better place. This is the kind of intelligence that is valuable, and this is the type of intelligence we should be striving for and encouraging."

I love reading about Feynman. How he would put himself in these novel situations and get completely absorbed. He learned how to play an instrument in Brazil, joined a Carnival school, and wound up dressing up and competing. He learned how to draw and managed to have art exhibits and sell his art. He'd go to dive bars and just wait for something interesting to happen.

He wouldn't just learn something new. He'd learn it and then navigate in its world.

It's not him but I still love this article about his thought process:


Who's that Feynman? He sounds interesting

Richard Feynman. Here's a video about who he is: https://www.youtube.com/watch?v=JIJw3OLB9sI

And the wiki article: https://en.wikipedia.org/wiki/Richard_Feynman

There are lots of videos of his lectures and interviews on youtube. You should check 'em out.

I'm not sure any introduction to Feynman can be complete without this letter he wrote to his wife: http://www.lettersofnote.com/2012/02/i-love-my-wife-my-wife-...

(Warning: you may end up with something, ahem, in your eye...)

Thank you for this. I've never seen it before.

I'm curious about how the letter was found after his death. Who decided to make it public? I'm glad I had the opportunity to read it, but I doubt it was ever intended to be released to the world.

oh man, the postscript. :(

I've read that letter at least ten times. The last five or so times, I've patted myself on the back for not crying, until I get to the postscript and then... off I go again.

Thanks for looking after the ten thousand.


Oh, I had googled him but didn't expect him to be this guy. He actually accomplished a lot. Definitely not a wasted life.

Here's the book I'm reading about him. Great read! https://www.amazon.com/dp/B003V1WXKU/ref=dp-kindle-redirect?...

--and BY him. Great read.

Does he have a twitter?

Surely you're joking?

I am serious - and don't call me Shirley.

EDIT: It's Friday. Voted down for one of the greatest movie quotes of all time?


Hacker News is not Reddit, and conversation is supposed to stay on topic and not be interrupted by memes.

Get off my lawn!

"Surely You're Joking, Mr. Feynman!": Adventures of a Curious Character is an edited collection of reminiscences by the Nobel Prize-winning physicist Richard Feynman.


There is a twitter. He's not very active these days though https://twitter.com/richardfeynman

Surely you must be joking?

surely you both read the book

He died in 1988 at the young age of 69. He will be missed.


He passed away a few decades ago, but was important in building the Connection Machine[1], and had an astute analysis of computer addiction[2].

One of his really impressive and astounding contributions, despite his having no formal training in computer engineering:

Heavily edited excerpt:

"The router of the Connection Machine was the part of the hardware that allowed the processors to communicate. It was a complicated device; by comparison, the processors themselves were simple. Connecting a separate communication wire between each pair of processors was impractical since a million processors would require 10^12 wires. Instead, we planned to connect the processors in a 20-dimensional hypercube so that each processor would only need to talk to 20 others directly. Because many processors had to communicate simultaneously, many messages would contend for the same wires. The router's job was to find a free path through this 20-dimensional traffic jam or, if it couldn't, to hold onto the message in a buffer until a path became free. Our question to Richard Feynman was whether we had allowed enough buffers for the router to operate efficiently.

"Richard began studying the router circuit diagrams as if they were objects of nature. He was willing to listen to explanations of how and why things worked, but fundamentally he preferred to figure out everything himself by simulating the action of each of the circuits with pencil and paper.

"By the end of that summer of 1983, Richard had completed his analysis of the behavior of the router, and much to our surprise and amusement, he presented his answer in the form of a set of partial differential equations. To a physicist this may seem natural, but to a computer designer, treating a set of boolean circuits as a continuous, differentiable system is a bit strange.

"Our discrete analysis said we needed seven buffers per chip; Feynman's equations suggested that we only needed five. We decided to play it safe and ignore Feynman.

"The decision to ignore Feynman's analysis was made in September, but by next spring we were up against a wall. The chips that we had designed were slightly too big to manufacture and the only way to solve the problem was to cut the number of buffers per chip back to five. Since Feynman's equations claimed we could do this safely, his unconventional methods of analysis started looking better and better to us. We decided to go ahead and make the chips with the smaller number of buffers.

"Fortunately, he was right. When we put together the chips the machine worked. The first program run on the machine in April of 1985 was Conway's game of Life."
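The hypercube wiring Hillis describes is easy to sketch. This is an illustrative toy in Python, not the actual Connection Machine routing logic: in a d-dimensional hypercube, each node's neighbors are the node IDs that differ from it in exactly one bit, so a 2^20-node machine needs only 20 links per node instead of ~10^12 pairwise wires.

```python
def neighbors(node: int, dims: int = 20) -> list[int]:
    """Node IDs directly wired to `node` in a dims-dimensional hypercube:
    flip each of the dims bits in turn."""
    return [node ^ (1 << i) for i in range(dims)]

def route(src: int, dst: int, dims: int = 20) -> list[int]:
    """One shortest path from src to dst: fix the differing bits one
    dimension at a time. Hop count equals the Hamming distance, so any
    message needs at most dims hops."""
    path, cur = [src], src
    for i in range(dims):
        if (cur ^ dst) & (1 << i):
            cur ^= 1 << i
            path.append(cur)
    return path
```

The contention Feynman analyzed arises when many such paths want the same link at the same time, which is what the per-chip message buffers were for.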



I never thought of Feynman in this way. At first those quotes on his view of router mechanics seemed average at best, but the Danny Hillis follow-ups were really interesting and shed new light on Feynman's quotes. Thanks!

I learned a lot from his lectures even though I didn't go to Caltech (only UC Berkeley, but his lectures were still available).

EDIT: I've written Conway's Game of Life in at least 20 languages by now (including oddities like PostScript and C++ templates). I find it an excellent way to view a machine.
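For reference, here's a minimal Game of Life sketch (my own, using a sparse set-of-live-cells representation, not any version from the thread):

```python
from itertools import product

def step(live: set[tuple[int, int]]) -> set[tuple[int, int]]:
    """Advance one generation of Conway's Game of Life."""
    # Count live neighbors for every cell adjacent to a live cell.
    counts: dict[tuple[int, int], int] = {}
    for (x, y) in live:
        for dx, dy in product((-1, 0, 1), repeat=2):
            if (dx, dy) != (0, 0):
                cell = (x + dx, y + dy)
                counts[cell] = counts.get(cell, 0) + 1
    # Birth on exactly 3 neighbors; survival on 2 or 3.
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}
```

A horizontal blinker {(0,0), (1,0), (2,0)} flips to vertical and back every step, which makes a handy sanity check in any new language.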

I'll also add to your comment that the Learning How to Learn course on Coursera is helpful regarding the concept of challenging yourself, as you can enhance your ability to take on challenging topics in a way that is sustainable and truly productive. For example, the idea of working on a problem eight hours a day straight seems efficient, but it's actually better to take breaks and let your mind wander and make neural connections in the background while you exercise or sleep.

And instead of perhaps trying to solve a hard challenge at once in a day, such as learning something very complex, it's helpful to comprehend parts of it and spread it out over the course of a week, not unlike how it's better to learn an instrument by practicing a little bit every day instead of cramming it into one day a week.

Furthermore, instead of just jumping for a quick answer (e.g. StackOverflow, forums, answers sheet), it's helpful to try to solve it on your own even if it takes a much longer time. There's a big difference between 'knowing' something because you saw the solution and reverse engineered the method, vs. coming up with it yourself.

Some of this might be stupidly obvious but it's nice to know there is science/other anecdotes behind it.

That course is something I wish I had known decades ago before I went to University and in the workplace.

The hypothesis that one's progress is directly proportional to the amount of pain experienced is actually a rather strange one and tends to be an expression of the just-world hypothesis, in which I absolutely do not believe.

Maybe I'm missing something here, but the linked article doesn't really seem to present evidence for its claims, either.

The article brings up children, who are already known to have very variable IQ compared to the IQ of adults. From what I've seen, the evidence for IQ as such not being very fluid is more solid than for it being fluid. At the very least, the statement that anyone can increase their IQ would sound really strange in light of current research.

Furthermore, raising the IQ of disadvantaged children and concluding from that that IQ is malleable is like the analogy in another comment: being able to take a broken car from 0 MPH to 100 MPH by fixing it doesn't imply you could take a working Honda and make it run 100 MPH faster. It doesn't work that way. Consider that the average person gets no such help. The more intelligent ones often get negative help.

IQ, though, is not necessarily the same as intelligence, or even the ability to learn new information. I don't disagree with the advice the article suggests, and I think it has a high chance of raising one's intelligence, I'm just not sure if it would raise one's IQ. Given that we don't really know the relationship between IQ, g, and "intelligence" yet, even that claim is definitely suspect. There's nothing obvious about it, unless you sign up for the pain == progress hypothesis, which is the just-world hypothesis.

The greater danger is that we have convinced quite a few people that intelligence == IQ and not being able to raise one means you can't raise the other. I'm pretty sure my IQ hasn't risen one point for the past 6 years but I definitely feel more intelligent per the definition in the italicized paragraph.

And I think the point of brain exercises was specifically to raise the raw processing power of the brain without information aids, i.e., IQ.

The problem with the author's principles is that they are based on a study[0] that has failed to replicate and was faulty to begin with[1].

[0] Jaeggi, S. M., et al. (2008). Improving fluid intelligence with training on working memory. Proceedings of the National Academy of Sciences. doi:10.1073/pnas.0801268105

[1] Courtesy of it_learnses: https://www.scientificamerican.com/article/brain-training-do...

I'm trying to address the issue without relying on how well given studies replicated, since that's a bit of a minefield in general. A lot of the principles in the parent are much older than the studies. While it would be really great to have solid studies that would confirm one or the other position, it doesn't seem like they exist, so I'm addressing it on a more philosophical level.

If the study result cannot be replicated then the "principles" you refer to are nothing more than superstitions, and you can "address them on a philosophical level" all you want; that won't make them any more true.

The fact that those things (like challenging yourself, seeking novelty, etc.) are often quoted in context of IQ increase just comes from observing the behaviour of highly intelligent people and then confusing the cause with effect.

Strongly agree. I identify very strongly with that quote. I'm a geographer with near-zero math or computer science education from university. I work as a software engineer, surrounded by brilliant engineers. I have what I describe as a "swiss cheese practical knowledge of software engineering." I pride myself on being very adaptable, able to understand the components of a problem and the ability to learn how to solve the problem. I also feel exceptionally good at communication and the social aspects of problem solving. I've been doing very well at my job.

But I think if I grew up being taught that intelligence != knowledge + trivia, I wouldn't have spent my earlier years fighting the impostor syndrome on a near daily basis.

this article also references the Dual-N-Back study which suggested that it could be used to increase cognitive skill that was transferable. But that study was debunked. Here's a Scientific American article on that: https://www.scientificamerican.com/article/brain-training-do...

Is there a term for this phenomenon, where a researcher publishes an extraordinary result, somehow gets past peer review, and then avoids pursuing the research further to avoid embarrassing themselves and losing the fame and credit? Often there is a bonus step where they create a company to commercialize a product based on the conclusions.

The scientific method.

I agree that, according to the article you linked, the Dual-N-Back game seems to be no different from any other brain training game, which differs from the author's view of it.

However, I don't think that debunks the main points of the author's article (the last 3/4 of the article).

I think it does. If indeed fluid intelligence cannot be trained, then you could say that people who seek out novelty, challenges, etc. are actually the sort of people who have high intelligence, rather than the other way around. If these 5 principles he states actually did increase intelligence, then Lumosity would work for people, as it exposes you to novel brain training games that are challenging.

Now that I reread the article I see that you are right. The author's whole argument depends on the Jaeggi study, which has not been successfully replicated after multiple attempts.

That article contains no evidence you can raise intelligence, and also references the now discredited n-back brain training games.

Isn't that really a variation on being good at patterns you've encountered before?

Perhaps the only "real" secret to intelligence is avoiding malnutrition or any type of poisoning such as from lead.

I would pay money for a subscription service that would give me a way to challenge myself every... month? (Hopefully the system would tell me how long I should work on each new topic.)

Perhaps part of it is being creative enough to work that out?

Actually diet and exercise are pretty much completely ineffective at long term weight loss.

"Why can’t an average person make those kinds of gains as well? Or even more gains, considering they don’t have the additional challenge of an autism spectrum disorder?"

This betrays a complete lack of understanding of the body of knowledge around intellectual improvement. No one would say "we took this broken Honda and were able to get a 100% improvement in its speed and gas efficiency just by fixing the leaky gas hose; if we can make these kinds of gains with a broken car, imagine what we could do with a new Honda." That would of course be silly.

Of course diet and exercise are effective for long term weight loss.

People stop dieting. That's the problem.

Calling a diet that people can't follow "effective" seems like a fine exaggeration to help maintain a nutritionist practice, but impractical for any other purpose.

That's like saying that quitting smoking is ineffective because many people can't follow through with it.

Limiting caloric intake (diet) below caloric output (exercise, etc) is the only effective way to lose weight, excluding literally cutting it off. Just because people struggle at doing so doesn't mean that it's not effective.

Depends on how you are measuring effectiveness. From the perspective of either sociology or public policy, calling a treatment that doesn't work effective makes no sense. It's like saying that the only reason we have problems is because people are not always completely rational and benevolent in every interaction. That may very well be true, but what use is that? I don't understand what purpose that kind of comment may serve outside of making the commenters feel better that they do not share the problem.

The issue with the human body is that psychological effects are actually physical effects and therefore entirely relevant. The human body is not a mere automobile into which you pour gas, you push a gas pedal, and it just goes. That's the entire problem. You can't even treat a cat like that, let alone a human.

The psychological consequences are part of the equation, and disregarding them and trying to pretend they don't exist is essentially taking all the complex variables out and solving a simple equation nobody cares about. Kind of like programming GPS using Newtonian physics and then being surprised it's off constantly. There are more complex laws at stake here and we don't understand them very well. Moralization is effectively a cop-out. "They're too lazy, stupid, or otherwise inferior to me to do this theoretically simple thing" is the same as: "I have no idea what's going on."

Limiting caloric intake leads to weight loss. So why are people not limiting their calorie intake? Then consider what kind of people are more or less likely to limit their calorie intake. So on and so forth. The equation becomes a lot more complex and also a lot more interesting.

It all comes down to effective in theory vs effective in practice.

>effective in shitty theory

This is meta, but there really needs to be a better phrase for ideas/models that only seem good when ignoring important details. A good theory is also good in practice because it actually factors in the relevant details.

I have no particular problem with you clinging to a falsehood, but repeating it can't make it true. There is ample evidence that carbohydrate restriction is also effective at causing weight reduction, no matter how much people try to deny or ignore it.

Can you point to any evidence? AFAIK, low-carb diets mainly work by increasing satiety (i.e. indirectly limiting calories consumed).

Restricting carbohydrate intake lets you control blood sugar levels and insulin response to meals, which generally leads to weight loss in most people.

I get that the wiki article on low carb is rambling, but why HN :?

Lower insulin spike after a meal results in lower hunger pangs later on, i.e. less hunger and less calories consumed, leading to weight loss.

The problem is quitting smoking is not an intervention. You cannot push a button and cause someone to quit smoking. You can "talk to someone about quitting smoking", or "send them to quit-smoking meetings", or "give them Nicorette". These things are interventions. Quitting smoking is an intermediate result that is linked to positive outcomes.

For simplicity's sake, let's focus on bugs created when developers fail to consider edge cases.

We could say 100% of these bugs are caused by people failing to consider edge cases, and if developers accurately considered all edge cases then we would have 0 bugs.

To then think that this problem is solved because we can tell a developer "consider all edge cases" even though it has 0 effect on the actual bug count caused by edge cases is absurd.

The only effective way for weight loss is making sure that one's caloric intake is less than the amount of calories they burn. And the only way of achieving that is diet and exercise.

Also, comparing the human body/brain to a car/mechanical device would be incorrect and is probably an apples-to-oranges comparison.

Calories in - calories out is true, but it's hard to measure either with much accuracy. You can get close with calories in, but measuring calories out isn't easy: there are charts for different activities, but our bodies are not perfectly efficient machines that always use the same amount of energy for the same tasks.

Additionally, our bodies can make subtle changes in response to caloric intake: If you ingest fewer calories, you may fidget less, possibly lowering your caloric use more than you lowered your input.
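To make the measurement problem concrete, here's a sketch of one widely used resting-energy estimate, the Mifflin-St Jeor equation. The activity multiplier is a crude assumed fudge factor, and the person-to-person variation around these numbers is exactly the commenter's point (illustration only, not advice):

```python
def bmr_mifflin_st_jeor(kg: float, cm: float, age: int, male: bool) -> float:
    """Estimated resting energy use in kcal/day (Mifflin-St Jeor).
    Real individuals can deviate substantially from this estimate."""
    return 10 * kg + 6.25 * cm - 5 * age + (5 if male else -161)

def daily_burn(bmr: float, activity_factor: float = 1.2) -> float:
    """Scale by a rough activity factor (~1.2 sedentary .. ~1.9 very active).
    Picking this number is mostly guesswork, which is the point."""
    return bmr * activity_factor
```

For an 80 kg, 180 cm, 30-year-old man this gives about 1780 kcal/day at rest; the "calories out" side then depends entirely on which activity factor you guess.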

>And the only way of achieving that is diet and exercise

Or increasing the metabolic rate. Or decreasing the calorie storage rate.

Doesn't increasing your metabolic rate require increasing your body temperature? This is something that has always puzzled me when I hear that some people have faster or slower metabolism than others.

Your metabolic rates at birth, adolescence, or your teen years are probably drastically different than your rate now if you're past 30. Is your average body temperature way less than 98.6 now?

Body volume scales roughly with the third power of height, while body surface scales roughly with the second power. Therefore, it is possible for the body temperature to stay constant if the basic metabolic rate (i.e., heat generation) per unit of volume decreases between birth and adulthood.

Besides, the human body has several ways of regulating core body temperature: sweating, restricting blood flow to extremities, goose bumps, and putting on different clothes.
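The square-cube argument above can be put in numbers (a toy geometric model, not physiology):

```python
def surface_to_volume_ratio(height: float) -> float:
    # Surface scales ~h^2, volume ~h^3 (up to constant shape factors),
    # so the ratio falls as 1/height.
    return height ** 2 / height ** 3

# Doubling linear size quadruples surface (heat loss) but multiplies
# volume (heat generation at a fixed per-volume rate) by eight, so the
# per-volume metabolic rate must drop to hold temperature constant.
small, big = surface_to_volume_ratio(1.0), surface_to_volume_ratio(2.0)
assert small == 2 * big
```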

All good points. Do we have measured values for metabolic rate per unit volume as a function of age? This does seem plausible -- building a new body should take more stuff than maintaining an old body.

But I wonder if this translates into different adults saying that they have fast or slow metabolism as an explanation for their weight.

Perhaps I just don't understand what "metabolic rate" means. My body is a metabolic zero-sum game, unless I'm gaining or losing weight. What comes in as food goes out as heat or mechanical work. I suppose a higher heat output could be achieved at constant temperature by sweating more. A larger person (more skin) could of course generate more heat, but the energy has to come from eating more food or being less active.

Do you simply mean that I was gaining weight at birth, adolescence, and teen years? I could agree with that.

"What comes in as food goes out as heat or mechanical work." Or out as undigested food waste. Don't have an answer for you on metabolic rate, just adding the third way food calories can leave your body.

"The only effective way for weight loss is making sure that one's caloric intake is less than the amount of calories they burn."

This is probably true, but a better question is whether this is even important.

One guy said "it's a lot easier for a fat person to become fit than it is for a fat person to become thin."

Generally the press (and maybe scientists, though I'm not so sure about them) seem obsessed with thinness rather than health or fitness.

For people who disagree, I would really love to see a study showing successful long-term (3+ years) weight loss (as an intervention, 5% of people retain the weight loss but 95% fail). To my knowledge no such studies exist, but I would love to revise my opinion.

I played around with Lumosity as part of a research project a few years ago. The games were somewhat fun and it was possible to improve one's score by repeated play.

So little of what humans do is similar to those games, so I don't see how "context" could possibly be similar unless those sorts of brain speed/recognition tasks were part of your day to day work.

I'd be curious about scenarios like:

- Practice identifying the semantic bug in a 20 line snippet of code. How effectively would practicing this help a person identify real bugs in actual code?

- Chess problems with 5 pieces on the board. How helpful would practicing these be to solving problems with 6 pieces?

- Essay writing. Suppose for a moment that a human essay could be judged accurately enough to create a 500 word essay trainer. How effective would training on it be to quickly being able to articulate one's thoughts?

Similarly, I'd be curious about a sunk cost rationality trainer, logical proof trainer, and reading comprehension trainer. These would be harder to build than the simple video games on Lumosity, but I suspect the competency obtained could be a bit more useful in real world tests of ability.

However, there are fairly few cases where specific characteristics of human intellect or rationality are measurable by longer term human performance. If your job entails long term project planning, trade-offs, etc., it's pretty hard to prepare in a meaningful way using short-term training sessions.

> I'd be curious about scenarios like:
>
> - Practice identifying the semantic bug in a 20 line snippet of code. How effectively would practicing this help a person identify real bugs in actual code?

For a couple of years I've been a high school computer science teacher. Having 50 students making the same errors over and over made me very good at spotting syntactical errors in other people's code. Furthermore, I was also able to easily identify "beginners' bad patterns", which are basically the inverse of the usual best practices. The downside of this was that some students tried to use me as a human compiler/linter because I was seemingly able to spot the problematic areas in their code in almost no time at all. Of course, at such moments I had been monitoring their progress for some time. Nevertheless, it helped me enormously to support my students in learning to debug their own code because I could have them reflect on their own practices and design choices.

Just an anecdote.

+1 on what you said.

I was a TA in college for an intermediate C++ object-oriented course, for approximately 16 classes over 2 years (about 500 students). So many basic and core programming mistakes almost always stick out like a sore thumb to me now. I usually graded in a text editor without any IDE support, and towards the end I got to the point where, just glancing through a student's program, I could easily mentally simulate the compile errors and, if those were fixed, the runtime issues/behavior as well.

It definitely helped make me a much better programmer and developer. Now, working in the industry, I still see a lot of things I can't help but wonder "WHY?!" about, though technically they are often not wrong, just unnecessarily complex or convoluted.

> - Essay writing. Suppose for a moment that a human essay could be judged accurately enough to create a 500 word essay trainer. How effective would training on it be to quickly being able to articulate one's thoughts?

This exists. It's called an audience and publishing something every day will get you good real quick.

A lot of famous authors started as journalists and more recently bloggers for exactly this reason.

My college's placement test used some kind of algorithmic essay grader. I found it strange. It checked for typos, vocabulary use, etc. probably.

I would be interested in learning about any algorithm capable of grading an essay. Typos and vocabulary use should only account for a small portion of the grade in my opinion. To me, an essay should be graded based on whether or not it accomplishes its goal, whether that is to inform or persuade. I've always felt that this is subjective and difficult for even a human to put a real grade on.

It was a very basic "essay", only a few paragraphs if I remember right, answering an easy prompt. The goal was probably only to score technical writing ability while disguising it as a normal essay assignment - it wasn't revealed that what you wrote would be graded by an algorithm until afterwards.

I feel like it would be really easy to game an automatic grader. I mean, it can't understand what you're writing, so if you use good vocabulary and complex sentences, it won't know if you're writing nonsense.

More info about the one used by ETS: https://www.ets.org/erater

Gimpel Software used to run "Bug of the Month" magazine ads showing code with subtle C++ bugs that would be caught by their Flexelint linter. Here is their Bug of the Month archive (2002-2012):


>Similarly, I'd be curious about a sunk cost rationality trainer, logical proof trainer,

A strong understanding of cognitive biases, logic and informal fallacies allows one to better reason from first principles. Without this, even an intelligent person will only think derivative thoughts because their truth value will come from their similarity to those of authority figures or what they read in a book.

If one can reason from first principles, one can come up with ideas that haven't been thought before and develop them in a structured way without getting stuck in delusions. It's also important to be able to detect mistakes in one's thinking, especially when there is no one else to ask if you're wrong.

That sounds rather unrealistic. The chance that you're the first one to think thought x is pretty low. Rely on others' work and step beyond it; don't try to ignore it.

> Chess problems with 5 pieces on the board. How helpful would practicing these be to solving problems with 6 pieces?

Another anecdote: I experienced a noticeable improvement in my abilities at Chess by studying and playing Go. I'm still not real good at Chess because I haven't studied it, but I can now beat people I didn't used to be able to beat.

I've had a similar experience where playing Go made me better at living life, and being away from the game and living made me better at Go when I expected to be set back. This probably only applies at kyu levels but I did find it surprising.

A completely arbitrary related concept: I'm much better at running my startup now than before I played StarCraft seriously.

StarCraft made me a more strategic thinker, which is why I think it might not surprise people that Emmett Shear (CEO of Twitch) is really good at StarCraft.

I played casually in the early days, but a few years back I wanted to get decent enough to play online regularly, and it changed our company for the better.

Is it your skill at StarCraft, or could it be that you developed strategies for learning, resilience, etc. while trying to get better, and those are the actual success factors?

Conceptual stretching is my guess. More strategy than normal for me.

Do you micro the workers or focus on the macro game?


Sounds like Go is the Lisp of board games?

"Lisp is worth learning for the profound enlightenment experience you will have when you finally get it; that experience will make you a better programmer for the rest of your days, even if you never actually use Lisp itself a lot."

It will be a similar enlightenment experience with Haskell?

> playing Go made be better at living life

Would you mind talking about that some more, please?

If you want to hear something similar from someone who achieved a very high level in chess, I'd highly recommend reading Josh Waitzkin's The Art of Learning.

Do you know specifically how your play changed?

As I said, I'm not a strong chess player and don't know much about strategies. I think my reading ability is primarily what improved.

Hey - I have been working on a chess game with fewer pieces on the board - www.halfchess.com ... I initially thought it would be a great way to teach people/kids chess. If you have any ideas for my little game - please share them with me - navalnovel at gmail com

My wife teaches math at a community college level, meaning she often needs to teach students who are paralyzed by fear of fractions (or worse, negative numbers). Many of her students are also adults. She tells me things like "If I ask one of these adult students a question involving negative numbers, and I phrase it in terms of money and debt, they answer it immediately. If I ask them using just numbers, they have no clue." These students do not know how to take a concept they've used their entire lives to balance their checkbooks, and abstract it beyond money.

I would conjecture that in most cases of "activities that stimulate your brain", the key to generalization requires another skill: being able to abstract a skill you have learned from one situation, and then specialize it to another situation. And this skill needs practicing. I also conjecture that a small fraction of the population actively does this, and this could explain why the results aren't statistically significant. E.g. meditation actively encourages introspection and generalized mindfulness in any situation, whereas crosswords and Chess are played for their own sake.

>If I ask one of these adult students a question involving negative numbers, and I phrase it in terms of money and debt, they answer it immediately. If I ask them using just numbers, they have no clue.

This doesn't just happen with students who are bad at math. If you sit in on most introductory advanced math classes (e.g. introduction to abstract algebra), you will see the same thing: students immediately understand what is being asked when it is phrased in terms of something concrete, but have more difficulty answering even basic questions when they are asked in the abstract.

The main difference, in my experience, is that in advanced classes a "concrete" example would be something like the set of 2x2 real-valued matrices, so non-mathematicians do not recognize it as being simple.

There is also the difference that experienced mathematicians have learned that "simple" things will become simple with enough exposure, and have honed the ability to brute-force their way through while they get that exposure.

>"If I ask one of these adult students a question involving negative numbers, and I phrase it in terms of money and debt, they answer it immediately. If I ask them using just numbers, they have no clue."

There's an even more extreme example with street kids in Brazil who had de facto knowledge of mathematics (they worked as peddlers) but weren't able to do the same thing they did during daily life in a school setting.

I wouldn't say that concept transfer is a skill, however. The two things that come to my mind when I think of the issue of transfer are: the feeling of a student being hung up on a particular problem, anxiously waiting for a solution to come to his mind; and people who seemed to tackle lots of different problems from lots of different angles (Wheeler, Feynman). So it seems to me to be constrained more by openness to experience on one hand (the willingness to try and fail) and creativity (the willingness to look for and propose solutions) on the other, both of which are more like inclinations than skills.


My belief is that humans are pattern matchers, even in the context of abstraction or logical thinking. The pattern of skill transfer is the experience of transferring skills. That opens the mind to the pattern of applying different learned patterns to situations that seem irrelevant, just as patterning the logic leads to being able to develop logical proofs, because the pattern of doing logic is being recognized.

For example, proving an identity looks like solving an equation. That is why many students naturally write down a proof that looks like solving an equation. And it can be a proof, given the observation that each step is reversible, so the statement holds for each x. Students rarely make that last step, however, because the pattern they are following breaks down: 1=1 has no expression of x. They tend to just move on at that point.

By contrast, someone trained in mathematics would most likely prove an identity by taking one side and transforming it into the other side as they have the experience of that flow of proofs and the goal there is clear: one stops with the right-hand side.
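A concrete illustration of the two styles (my own example, not from the comment above): to prove the identity (x+1)^2 - (x-1)^2 = 4x, the trained approach transforms the left-hand side until the right-hand side appears:

```latex
\begin{align*}
(x+1)^2 - (x-1)^2 &= (x^2 + 2x + 1) - (x^2 - 2x + 1) \\
                  &= 4x.
\end{align*}
```

The equation-solving style would instead manipulate both sides down to 0 = 0, which only becomes a proof once you add the observation that every step is reversible and holds for each x.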

It is not logical thinking, but logical pattern application. Transfer skills are similarly learned. We all have them, after all. I have never met anyone bothered by counting markers being a different skill than counting pebbles. But that is a skill transfer too. It just happens at such a young age and is so ubiquitous that we don't even notice it.

I would be surprised if meditation helps with any of this. It helps with many other things, such as being more aware of events, particularly emotional states, as that is the pattern being developed, but it is hard to imagine it will help bridge the gap of checkbook balancing to general numbers. What one needs is the experience of numbers in multiple contexts.

The pattern in the patterns is the path to abstraction and it no doubt takes a number of patterns to get there. And learning the different kinds of abstractions requires different kinds of patterns.

I highly recommend watching this: https://www.ted.com/talks/james_flynn_why_our_iq_levels_are_...

Flynn believes that the rising IQ scores over the generations are due to us dealing better with abstractions and using logic on these abstractions.

So meditation is actively improving your "introspection and generalized mindfulness", but people doing crosswords and playing Chess are just being silly and wasting time?

I was with you until I read that. Maybe I just don't buy into the whole meditation thing, but from where I stand, at least Chess seems like it actively improves/encourages/practices strategic thinking skills.

Playing chess builds up your strategic thinking in chess and it's barely transferable to other activities, even to other games.

Meditation is about watching your mind and training to recognize your subtle thoughts and emotions. It's universal and can be applied to any activity in one's life because everything is being processed inside our minds.

I don't meditate, actually, I was just including that in re some other thread that was mentioning it. I'm saying that yes, Chess does improve strategic thinking skills, but it probably won't help you be more strategic in other situations unless you're good at the "skill of skill transfer."

Here's a mystery to me: the failure of brain training games vs. the success of meditation.

The science for brain training games is not encouraging. A number of studies have found that they do not produce generalizable mental improvements.

Meanwhile, every time I turn around I run into a study of mindfulness meditation that did produce a generalizable improvement to mental abilities.

For example, I was just reading one about how meditation can reduce pain perception by 40% and that measurement was backed up by MRI imaging of reduced brain activity in pain centers.

Let's call meditation a brain game that works.

The mystery then is why is there only one game that works? Will we ever find a second?

I think I can explain the mystery to you: You have a skewed view of the scientific evidence.

The evidence for mindfulness and meditation is questionable at best. There's a lot of bad science, undeclared conflicts of interest and weak studies in that space.

> For example, I was just reading one about how meditation can reduce pain perception by 40% and that measurement was backed up by MRI imaging of reduced brain activity in pain centers.

This is a completely meaningless statement. What kind of study is that based on? An RCT or observational data? How many participants? A single study or a meta-analysis? Has it been replicated? Was it preregistered? All of that matters in deciding whether that's just "a study" or reliable science.

There are plenty of studies showing that brain games work, too. They're just of low quality.

(warning, violent imagery)



He doesn't move or make a single sound as he's being burned alive. Sure, it's anecdotal, but it certainly seems like there's something to meditation that strengthens conscious control over the body.

I was quoting from this Atlantic article, which easily could have misinterpreted the science (I admit this is common): http://www.theatlantic.com/health/archive/2014/04/treating-c...

Here's the original paper: http://www.psych.uncc.edu/pagoolka/seminar/jofpain2009.pdf

Most of the confidence in my post comes from personal observation. I don't claim that should be convincing to you--but it is convincing to me.

You'll always find studies both ways, and in a field this big you'll also always find junk, but I think you're making rather bold claims (undeclared conflicts of interest, etc.) which seem unlikely to hold true for a significant portion of the large number of studies (e.g. meditation is really not a big business; it's certainly something anyone can learn for free). Can you point to any meta-analysis or other evidence to back up your viewpoint/claims?

Have you ever meditated?

If he had, would the points he made be stronger or weaker? Does meditating today improve the reliability of past studies conducted by other people?

I was just wondering.

I guess it depends on your definition of meditation, since it's not an actual physical or mental state someone can be definitively in or not in. I might say that I meditate every night when I sleep for 5-8 hours.

> since it's not an actual physical or mental state someone can be definitively in or not in

How do you know that?


I cannot know that, and neither can anyone else. The reasons being that meditation does not have a good scientific definition and is not falsifiable. We shouldn't consider it as science until both of those reasons change.

I don't understand why rigorous science is required to answer a simple question like 'Are you meditating?', as long as someone has provided you with a common-sense definition.

Science hasn't unraveled all the mysteries of consciousness, or ethics, or economics either, but that doesn't force us to remain silent on those topics.

You might think that meditation is a bunch of hocus-pocus and isn't real, but science hasn't decided either way.

As a person who loves games and has some experience of mindfulness/zen meditation:

Games are about reasoning and reacting to external stimuli. Meditation usually involves more passive observation of one's inner self.

To put it another way - games are dissociative, while meditation (the mindfulness kind anyway) aims to be anti-dissociative.

One more point of view: I can play a game hours without being aware of the time that has gone by. When meditating I'm aware of each breath I take.

I have no vocabulary to exactly specify how games and meditation are different - I can only state that the experience is different.

I'd be hesitant to call meditation a game in the same sense that other brain training exercise seem to be. Those games tend to have distinct end points (or multiple)/goals or something you are ambling towards. Meditation on the other hand, is based on focused observation of how your mind works. When done right, there is no goal or judgement - no end state that you are uneasily pushing towards.

But is that the important part of meditation that leads to the results? I don't think we understand it well enough yet to know.

We have in fact understood it for millennia, but scientific instrumentation can't probe it the way human inquiry has.

And this is what I find worrying about the time we are living in. Society by and large moved from rejecting science to demanding scientific studies on everything. While this has certainly been a move in the right direction, it has led many people to have a very limited perception of the real world. There is more than words and numbers (and I am not talking about some bullshit magical powers), but this stuff has to be experienced. And why do I say this? Because I believe that this stuff which exists beyond words and numbers is, like words and numbers, a subset of the resources that enable intelligence.

So there is nothing wrong with brain training games and perhaps they do increase intelligence but don't be surprised if training only a tiny portion of what enables intelligence does not lead to enormous results.

In my opinion meditation can be an overwhelming perhaps even painful experience and thus it challenges you more seriously than, say, finding the next number in a sequence. No pain, no gain.

Meditation is about focus, that's it. A person is born with some innate intelligence, but, thanks to things like television, email, Facebook, etc. we are far from our full potential, because we are constantly distracted. Once you regain your focus you hit the ceiling of your possible IQ. Meditating further will not do you much good past that point.

People meditated far before there were "things like television, email, Facebook, etc.".

It's not just about "quiet time".

Yes, agreed. Distractions don't have be television, email, Facebook... We spend a lot of time thinking about hypothetical situations that may occur in the future & how we'll react to them. Or reliving our versions of past events over and over. All of these distract from the present moment. Meditation still helps in these situations.

But there were other distractions. We can always find distractions. This is as good of a hypothesis as any I've seen, but I agree that we don't really know yet why meditation works. Does walking meditation do the same thing? Meditative Tai Chi? Playing basketball "in the zone?" Or even playing video games "in the zone?"

Emotions are the universal distraction.

The practices focus on different parts of the brain.

The games usually focus on frontal cortex development (http://www.webmd.com/balance/features/believe-it-or-not-comp..., admittedly with mixed success: https://www.ncbi.nlm.nih.gov/pubmed/20407435), while meditation seems to lead to an increase of brain cells in the hippocampus: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3004979/

Your referenced study on meditation doesn't suffer from decontextualization. While you are meditating, you are acutely aware of senses like pain and you are managing them.

I currently use language learning and music apps; AFAIK I am actually learning things that are directly applicable to the real world and that supposedly have cognitive benefits correlated with having learned them traditionally. Do those benefits evaporate when I learn them in "games"?

There's actually a simple explanation for this: meditation trains brain functions that operate at a sufficiently general level to see cross-domain impacts, while brain training games don't (or at least the range of domains is much smaller).

Let me give you an example of a popular form of meditation:

You sit up straight in a comfortable place, not doing anything special, just listening to what's going on in your environment and tuning into sensations in your body (this happens automatically as long as you haven't 'assigned' yourself something else to do; once you get into this state, you might recognize it as familiar from when you were younger, or from a time when you were feeling particularly like "everything's okay" and you are just at home). Before starting this, you understand that you are going to watch/listen/feel your breath—and if you've had some practice, you don't try to do anything about it: you just wait and feel whatever comes up as time passes. Eventually it's like the queue of other things clears and you start tuning into your breath more and more until the sensations become very rich and relaxing.

Thoughts will continue emerging for a good long while during this, and meditation teaches a certain response to noticing their arrival: you acknowledge them, as if you'd heard a small sound in the room, turned around and saw what it was, and then returned to what you were doing (oftentimes meditators are encouraged to use a quick label for whatever mental thing comes up, e.g. 'thinking' or 'fear'); then, assuming you haven't got too wrapped up in the thought (either by following it, or by trying somehow to make it go away), the sensations of your breath return to the forefront.

Two big things come from this: you become more relaxed afterward, which aids task performance in most domains directly; and, you learn all these pitfalls and techniques for maintaining focus on what you choose to focus on. One big one, for instance, is that a lot of people approach focus by seeing distractions as problems to be solved (whether they are thoughts, pains, environmental sounds—whatever), but eventually you see that what works is maintaining an equanimous attitude toward distractions, and simply returning your attention to what you're doing, while understanding you might get pulled away again shortly (and it's not enough to just 'know' this; you have to practice it).

Lastly, meditators are typically encouraged to practice being in this 'aware but not trying to do anything' state while doing other things than sitting, which has got to help with generalizing. (Not to mention the desired mental state is just domain agnostic to begin with. When you're doing a task you either believe you know how to do it and go at it confidently, with little meta-thought; or you're observing/critiquing how you're doing it and trying to change routines you've stored for doing it. In the first state you're way more tuned into sense data which is relevant for actually executing the task in the present moment. Meditation helps you learn to enter the first state intentionally.)

That's an interesting hypothesis that brain games are too specific.

Brain training games use too much of the left, 'linear', analytical side of the brain. Meditation brings the right, 'rich', creative side to bear on a problem.

I recommend drawing as a meditative activity. The following books have been helpful:

Pragmatic Thinking and Learning by Andy Hunt

Drawing on the Right Side of the Brain by Betty Edwards

Search Inside Yourself by Chade-Meng Tan

Well, I think there are different types and methodologies of meditation that are believed to work. My guess is studies tend to focus on one format to be consistent and build on each other. Meditation at its core is nothing but practicing focus and self control, so my guess is a whole range of 'games' is possible to achieve the same effect but sitting down and focusing on your breath is basically the most practical. I have not played the brain games and so can't comment on why they appear not to achieve the same effect - perhaps they put too much focus into the abstractions and visuals of the game, and not the introspection of meditation.

> Meditation at its core is nothing but practicing focus and self control

You nailed it. Mental acuity is all about the ability to remain focused. Probably most of what gets written off as stupidity is really just an untrained mind that can't stay focused on a subject, task, conversation, etc.

Meditation has the benefit of being restful as well as enhancing focus. Intensive game playing, while not necessarily relaxing, does inspire a greatly increased measure of focus. I am not convinced that brain games are ineffective. Weren't there recently numerous studies asserting that video game playing provided cognitive benefits?

I would suggest that focus and self-control are not the core of meditation. It can profoundly influence the way you see the world, way beyond what you describe.

The article begins from the premise that psychology has shown that brain training cannot cause generalized benefits, and then goes on to review more than 130 peer-reviewed publications, meticulously finding fault with each one, and concluding that nothing can be learned from the entire field. It's as if the authors went through a forest, finding fault with each tree, but never noticed the forest (of evidence) as a whole.

I'm an author on one of the studies discussed, and for what it's worth, there are two factual errors in their review of my study alone.

The authors then go on to state that people seeking to improve cognitive function would be better served by exercising or going to college. Exercising is an excellent idea, but the evidence for cognitive improvement is certainly no better than for cognitive training, and is arguably worse [1]. College is fine as well, but from a methodological perspective there's never been a randomized controlled trial showing that college improves cognitive function, and arguably all college does is select high-achievers and then further filter them, with low-performers dropping out. Endorsing college (with no RCTs) over brain training (with RCTs) suggests a biased review.

Disclosure: I work at Posit Science, where we make a brain training program. My work is specifically criticized in the article.

[1] http://www.cochrane.org/CD005381/DEMENTIA_aerobic-exercise-t...

You describe the process of meta-analysis as of there is something wrong with it.

The point about exercise or college is they are guaranteed to have some benefit, unlike playing a brain training game, which has at least a 50-50 chance of being a waste of time.

I like meta-analyses quite well, but the APS article was not a meta-analysis - it was more like a broad review. Here are a few meta-analyses that disagree with the APS article:

Computerized Cognitive Training with Older Adults: A Systematic Review http://journals.plos.org/plosone/article?id=10.1371/journal....

Computerized cognitive training in cognitively healthy older adults: a systematic review and meta-analysis of effect modifiers. https://www.ncbi.nlm.nih.gov/pubmed/25405755

I liked college and benefited from it, but in the US (at least) it costs tens of thousands of dollars, and 45% of people don't complete a degree in 6 years: http://www.slate.com/blogs/moneybox/2014/11/19/u_s_college_d...

The entire brain training market seems to be based on a willful failure to understand Goodhart's Law.

The basic pitch is "performance on these games correlates with intelligence, so practice these games to become more intelligent". That's not how anything works! If you influence a metric directly, it stops being a good proxy for all the things it used to correlate with.

It's like fly-by-night companies improving retention by selling at a loss, or running up user count with expensive perks for each new client. Those numbers are important as a reflection of business health, not a cause of it.
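The proxy-breakdown argument above can be shown with a toy simulation (my own illustration, not from the thread): a latent ability drives both game score and the trait we care about, so the two correlate; once everyone grinds the game, practice inflates scores independently of ability and the correlation weakens.

```python
import random

random.seed(0)

def correlation(xs, ys):
    """Pearson correlation of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Latent ability drives both the game score and the trait we care about.
ability = [random.gauss(100, 15) for _ in range(2000)]
game_score = [a + random.gauss(0, 10) for a in ability]

# Now everyone grinds the game: practice raises scores independently of ability.
practice = [random.gauss(50, 20) for _ in range(2000)]
trained_score = [s + p for s, p in zip(game_score, practice)]

print(correlation(ability, game_score))     # strong proxy before training
print(correlation(ability, trained_score))  # weaker once the metric is gamed
```

The numbers are arbitrary; the point is only that optimizing the proxy directly dilutes its correlation with the thing it used to stand in for.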

Thanks to fMRI and other brain scanning technologies, we now know that when we learn, our brains physically change. This fact is the basis for the claims of the "brain training" industry.

The reality is that our brains are constantly changing. I could memorize a thousand fart jokes and my brain would have changed. Does that make me more intelligent?

I think the "brain training" apps are only really good for older or other people who are at risk of dementia, or other cognitive diseases, and even then I'm pretty sure the science is out on most of that.

Instead of games that may or may not map to some useful thing and some cognitive improvement... why not just learn more math or a second language? The time invested here will at the very least provide you with some useful skill or a better understanding of your world.

I know people who just don't make critical thinking a part of their daily routine, and they tend to be sloppy and thoughtless when confronted with any sort of mental exercise. If these "Brain training exercises" can get people thinking critically and dedicating time of the day to using parts of their brain they wouldn't have otherwise, then it seems like a good thing to me.

The weird thing is that there is an initial lift but only because you are learning something new. You have to figure it out, it takes effort and creates new pathways, etc. Once you have learned it, you tend to be on auto pilot. The thing to do is keep learning new things: photography, baking, woodworking, etc. It is the challenge and discovery that are important.

I believe this too: if you don't learn new things or practice exercises (something where you need to think), you will lose IQ (like during a long holiday). So in the long term, brain training exercises are essential.

I'm totally grooving on the Linux/BSD games like galaxies and keen that I've recently discovered. At first there was this "high" from trying to figure out the new patterns. Now even the more difficult stuff is becoming systematic (although still absorbing).

Good. Now perhaps make a study about sensationalized article titles like this one. The conclusion in the title is inaccurate.

FTA: "Overall, Simons and his colleagues conclude that the evidence [...] is “inadequate”."


"it’s possible future research will provide new evidence that is more favourable to brain training"

Sure, but it's impossible to prove that brain training has no benefits, all we can say is that essentially every well-run study has failed to find any benefits. They're just saying that stuff to cover their bases - most neuroscience researchers don't believe that "brain training" is anything more than just wishful thinking.

It is impossible to prove, and yet the title claims it to be true.

Given the evidence, it is a reasonable conclusion. Very little is ever "proved". Rather, we make conclusions based on the available evidence. If the evidence changes, we revisit our conclusions.

Whoever downvoted you has probably never heard of the axiom: "Nothing can ever be proven true, because that means you would have to consider every possibility. The best we can hope for is to prove something is not true by finding exceptions. After many failures at finding exceptions, we just state that some things are true until an exception is found."

The thing is, researchers commonly put in throwaway lines like your latter quote to avoid being embarrassed by some new study that comes out in the future. The key to comprehension is appreciating which portions are fundamental to the message and which parts are throwaway.

I thought it was to ensure on-going funding. If you don't conclude that further research is necessary then that's surely the worst possible thing for your continuing to be paid to do said research.

This is a shoddy write-up which sensationalizes the more rigorous underlying scientific work.

The actual research here appears to find methodological shortcomings in many papers purporting a broader effect on intelligence or problem-solving ability from some popular brain training games. It does not, however, conclude that there is no effect.

It is unfortunate (though very common) that the article oversimplifies and sensationalizes. The headline "brain training exercises just make you better at brain training exercises" is far too definitive. I'm glad the "might" qualifier at least was added here on HN. Similarly, the article definitively states that "the same is not true" for the benefits of physical vs. mental exercise games. Examples abound throughout the article, and it reads as sloppy, biased, and exaggerated. Not uncommon today, but always unfortunate still.

It's a shame many brain-training companies make exaggerated claims themselves. So, in some ways, it's fine to see some counterfire in online press. Unfortunate if that's what it comes down to, though.

Nothing about n-back, which is the one brain training task that had evidence for it.

I wonder if part of the reason brain training appears to have little benefit is similar to the reason most people who visit the gym make little gains in athleticism; because they don't push themselves hard enough. A lot of people feel like simply showing up is all that's required, and they don't break a sweat.

This is supported by the fact that many of these brain training apps/games often have addictive traits, similar to other video games. This might lead to their usage being met with undue reward.

I used one of them, and the goal (unduly rewarded) was simply to come back and, essentially, play each day. It didn't matter how hard I worked.

This could be tested by measuring brain activity during brain training (compared to some control activity) and seeing if the delta positively correlates with increased cognitive function.

I have schizophrenia, and I created a program that combines mnemotechnics with 3D n-back and a facial-expression memory system to help me understand.

http://vernetit.blogspot.com.ar/2016/10/eo-and-peo-memory-sy... With 15 minutes of training, my mood changes and I understand the emotions in sitcom TV series and the intentions and emotions of the people around me. I also experience more willingness to do things. Here is the program link:


Sorry for my poor English. I am from Argentina.

I worked at an academic neuroscience software company back when Lumosity first emerged on the scene. Lumosity was spurred on by, I believe, statistically significant results showing the n-back test improving working memory. There was a lot of hope that other games/tests might be designed to improve other cognitive features. However, the n-back test is the only one I have ever heard of having a significant effect on cognition, and even then it's temporary. There was a lot of disappointment in these results. Nevertheless, Lumosity et al continued to market minigames as health elixirs.
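For readers unfamiliar with the task: in an n-back game you watch a stream of stimuli and report whenever the current one matches the one shown n steps earlier. A minimal sketch of the task logic (my own illustration, not any commercial implementation; the function names are made up):

```python
import random

def make_nback_trials(n, length, symbols="ABCDEF", match_rate=0.3):
    """Generate a stimulus sequence for an n-back task.

    Roughly match_rate of the trials after the first n are forced
    matches (the stimulus repeats the one n steps earlier).
    """
    seq = [random.choice(symbols) for _ in range(n)]
    for _ in range(length - n):
        if random.random() < match_rate:
            seq.append(seq[-n])                # forced match with n back
        else:
            seq.append(random.choice(symbols))
    return seq

def is_match(seq, i, n):
    """True if the stimulus at position i matches the one n steps earlier."""
    return i >= n and seq[i] == seq[i - n]
```

The player's job on each trial is to report whether `is_match` holds for the current position; difficulty scales with n, which is what loads working memory.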

You're right. The number of "exercises" that have been shown to have some degree of transfer is very limited (n-back being the most typical, though not the only one). It's not only a problem with Lumosity et al; it's a problem with researchers too. There's an alarming number of papers that are really bad, both methodologically and in terms of the type of tasks and training duration.

No wonder the brain training industry gets bashed every now and then when a new study debunks most of the alleged benefits. So we have one group of folks who don't understand what they are doing and are publishing bad papers, and another group of folks who read those papers without fully understanding what is going on under the hood and create a "brain training" app. The fine on Lumosity by the FTC was fair and expected.

I've been working on "brain training" stuff for almost 8 years and just recently (around a year ago) decided to make a startup out of it. I'm a psychologist and software engineer and my cofounder is a clinical psychologist and neuroscientist. We've done our own research (and have a published paper), read a lot of the stuff that's out there and have evaluated and kept track of the existing products for years.

The first thing that baffles me about most commercial products is the frequency and duration of each training session. Let's think for a minute. On average you have these short sessions that range from 5 to 15 minutes. Even in something as evident as physical activity... how fit can you get by doing 15 minutes of low-intensity jogging on average 3 times a week? How fair is it to say, based on that example, that jogging is "worthless"?

That said, most critiques of Brain Fitness are correct.

If I could summarize what we've learned so far:

- Transfer effect (i.e. gaining benefits outside of the activity you're doing) is hard to achieve, takes a lot of time, has to be an n-back or heavy working memory activity (the only kind I can think of having a chance of producing transfer), and it doesn't work all the time. In our own experience, probably 20% of the population will never benefit from something like this. Also, transfer is limited to some executive functions, not all of them.

- Training duration and frequency. Think of a physical activity like jogging, but instead of doing 15 minutes of low-intensity jogging 3 times a week, do 35 minutes of high-speed jogging 4 times a week for two months. You will get fit.

- Training activity. Most n-back and working memory stuff is boring and hard to do for players. It's difficult to engage with a "game" like that, and the dropout rate for users is very high.

Most of the games bundled into brain training software are based on stuff that has not been proven to have any transfer effect.

Our mobile app will focus mostly on working memory and sustained attention games (fewer, but with more complex game dynamics than current ones). And I don't think we will advertise it as a "life changer" or a way to "make you smarter". We just want to build a suite of games for people to be challenged and have fun while doing so.

I don't doubt it, but people shouldn't take this to mean that there aren't any possible exercises to strengthen mental ability. You've just got to listen to people who are actually successful, and not some corporate-funded research that might be null in three years. I don't see anyone at the World Memory Championships citing Lumosity.

Yup. Just like being great at Chess just makes someone great at Chess, and not much more.

I taught my kids to play Chess early on. My first-born was beating 12-year-olds when he was 6. They had to move him up the age groupings. I insisted they keep moving him up until he lost. Knowing how to deal with losing is very important.

In all cases I pulled my kids from competitive Chess after a few seasons and a good balance between winning and losing. Past a certain point, getting better at Chess requires becoming a human database engine. That, for me, is when Chess demonstrates this idea that getting better at some of these games teaches nothing useful.

BTW, I apply this to the type of programming puzzles typically used in interviews. It's pure nonsense that says nothing about how creative someone can be about solving new problems. Anyone can become a human database with enough effort. True creative intelligence is quite a different matter.

I would disagree. I am not much into Chess, but I used to play a fair amount and consider myself fairly good.

At the very least, playing a sufficient amount of Chess against sufficiently skilled opponents will teach you things that are easily generalizable.

Just one example, thinking strategically over the long term (several moves ahead), rather than short term (only thinking about your next move).

No, I think we agree. The level of proficiency you are describing can be attained by a kid in six months to a year. Going past that to become a human chess database is a waste of time.

Example: I taught my kids to ask themselves "Is there a better move?" before making a move. I also taught them to apply that idea to non-chess situations. And they do.

Honest question: so if brain training exercises don't work, then what else can genuinely improve brain functions?

For particular brain functions (learning guitar, coding, speaking another language): actual practice. (This is also how training exercises work; the only function they improve is solving the exercises themselves, which makes them useless.)

In general, better physical exercise and diet do genuinely improve brain function.

Hard for the brain to function well in a 500lbs diabetic body...

Physical exercise. I'm not kidding. I recall hearing about a researcher who worked with people aged 65+ to see how they could recover and/or improve their cognitive skills. He tried other things (including brain training) before moving to exercise. His research lab ended up being a gym.

Brain training does work for some people; around 20% of the population won't gain anything from it, regardless of what you make them do. And even for those it works for, it has a limited scope of benefits.

Meditation works. We've known about the usefulness of it for thousands of years, and more recently there is a growing body of scientific proof [1].

Everyone really should try it. I firmly believe there is nothing that you can do in 20 mins a day that will improve your life more.

[1] https://en.wikipedia.org/wiki/Research_on_meditation

I remember reading studies about a few different video games improving cognitive performance: Rise of Nations, Starcraft, and an FPS (maybe Unreal Tournament?).

Could this perhaps just be a rationalization for some to play video games under a doctrine of "I'm getting smarter" when in turn it's just mindless consumption of media? I am all for FPS games improving my cognitive performance, but we should not be delusional about this.

If I wanted to (at least try) improving cognition, I would spend hours reading math textbooks and doing exercises, learning languages, perhaps reading about philosophy in my off-time; not strafing left and right with my WASD keys and trashing on kids all day in Counter-Strike.

Sounds like you already made up your mind.

Actually learn new stuff? Learn a second language, an instrument, math, programming—anything that takes you out of your comfort zone.

Dual n-back training is the only thing that I've read about that shows transference to unrelated tasks.


There are some good apps out there:

Desktop -> http://brainworkshop.sourceforge.net

Android -> https://play.google.com/store/apps/details?id=com.tyrske.dua...

And I make a pitch for my app, IQ boost for iOS -> https://itunes.apple.com/us/app/iq-boost/id286574399?mt=8
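For anyone curious about the mechanic itself: a dual n-back trial pairs a visual grid position with a spoken letter, and the player flags whenever either stimulus matches the one from n trials back. A minimal scoring sketch (the function name and the trial-tuple format are my own invention, not taken from any of the apps above):

```python
def play_dual_n_back(trials, n=2):
    """Score a dual n-back session.

    Each trial is (position, letter, said_pos_match, said_letter_match):
    the stimuli shown, plus whether the player claimed a position match
    and/or a letter match against the trial n steps back.
    Returns (hits, misses, false_alarms).
    """
    hits = misses = false_alarms = 0
    for i, (pos, letter, said_pos, said_letter) in enumerate(trials):
        if i < n:
            continue  # nothing to compare against yet
        prev_pos, prev_letter = trials[i - n][0], trials[i - n][1]
        # check the visual and auditory streams independently
        for actual, claimed in ((pos == prev_pos, said_pos),
                                (letter == prev_letter, said_letter)):
            if actual and claimed:
                hits += 1
            elif actual:
                misses += 1
            elif claimed:
                false_alarms += 1
    return hits, misses, false_alarms
```

The difficulty knob is just n: adaptive trainers raise it when the hit rate is high and lower it after too many misses.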

This article actually points out that the IQ/fluid intelligence increase findings from the original Jaeggi paper about dual n-back failed to be replicated.

> Finally, at the end of the study, we gave everyone different versions of the cognitive ability tests. The results were clear: the dual n-back group was no higher in fluid intelligence than the control groups. Not long after we published these results, another group of researchers published a second failure to replicate Jaeggi and colleagues’ findings.


Does this research also consider actual brain training games like Elevate? It teaches you techniques like how to get better at estimating, improving your WPM, your vocabulary, etc. Nothing to do with memorizing a pattern or serving coffee to customers as fast as possible.

The article quotes the paper, which says “… learning things that are likely to improve your performance at school (e.g., reading; developing knowledge and skills in math, science, or the arts), on the job (e.g., updating your knowledge of content and standards in your profession), or in activities that are otherwise enjoyable.”

Using an app to learn a new language, extend your vocabulary, or learning new arithmetic tricks would seem to fall into this category. If the Elevate app actually does any of these, then I'd call it "learning" instead of "brain training".

I applaud your pinpoint application of the 'No True Scotsman' fallacy.

We've never seen truly flawed studies before right?

It wasn't my intention to train my brain but I started using DuoLingo (it's free) language learning and I find my concentration has improved.

Maybe it's just because I do a different activity at a regular rate per day but I do feel brainy plus I am learning three languages.

Well, any training, it seems, has the same nature. A highly specialized task such as juggling will make you better only at juggling (and perhaps improve the coordination of your hands), while training at, say, triathlon will make one generally stronger and more coordinated, with better stamina and endurance.

Similarly, mindfulness (awareness) and concentration (focusing) meditation techniques will in general make one better at a variety of tasks, while specialized training, let's say math tricks, will only affect specialized areas of the brain.

The training of a musician requires a lot of listening to classical music, not just trying to play "mechanically". All this has been known for ages by the Greeks and the Indians.

Title change suggestion: Bad brain training has limited scope. The question remains what good brain training would be. Paired with electrical or magnetic stimulation? It's almost all fertile ground here, and I believe our great advances as humans this century will come from better ways to grow brains. Consider mobile apps that adapt to your cognitive strengths and weaknesses, to when you haven't slept well or are under stress or need coffee, and where learning becomes lifelong and personalized. By building our intelligence we also become better at developing artificial forms of it. To me it's the problem of our time; these early attempts just help us develop better approaches.

A guy I used to play soccer with always said "train as you play!" Now, he was upset because on cold rainy Thursdays a lot of us were wearing tracksuit pants to training, and he felt we should be wearing shorts... and for some reason he felt that shin pads were exempt from the "train as you play" ethos. Which was funny, because he was a good lad but he was always getting injured. I think he played three matches the whole time I was at the club, and that was a few years, because the rest of the time he was sidelined by injury. But I digress.

I think "train as you play" is excellent advice. We don't learn the training. We learn the playing.

It seems that meta-analysis confirms the usefulness of things like dual-N-back training. Does anyone disagree?

https://www.gwern.net/DNB%20meta-analysis https://www.quora.com/How-does-dual-n-back-actually-increase...

For an n-back like exercise: http://cognitivefun.net/test/22

You need to read gwern's link more closely. Second paragraph:

This indicates that the medium effect size is due to methodological problems and that n-back training does not increase subjects’ underlying fluid intelligence

Most damningly, the review didn't turn up dose-dependent results. If n-back works, I would expect either "more is better" or "flat gain past some minimum". Instead, the effect vs training graph is a shapeless cloud, which seems like a major warning sign.

I think this is the only proven "brain training" that works, and the effect is temporary.

A friend showed me Lumosity and I thought "there's no way this can work": when in life do you have to distinguish the direction moving colored arrows are pointing? The reason I so quickly came to this conclusion is the concept of specificity in strength training: your exercises have to be picked so that they support your goals. I imagine that an approach like this in the brain-enhancing-games space would look a lot more like what we think of as homework.

What caught my eye in your post is that you have accumulated some useful knowledge from strength training that you just applied elsewhere in your life. Don't you think it's possible that the exact same could happen to people solving puzzles? I doubt that a scientific study would find that strength training leads to improved problem solving skills, and yet it has for you.

I doubt that a scientific study would find that strength training leads to improved problem solving skills

Here are some that do.




Heh, well, that'll teach me for not being more precise in my wording. Of course your fitness level will have a positive effect on your mental abilities. What I meant to highlight was his use of specific insights _about_ strength training that he then found could be abstracted and applied elsewhere, a possibility he had dismissed in the puzzle-solving case.

I do think that collecting anecdotes from other fields can help you think better in your own field of work. I'm just not so sure that, for example, doing Sudoku makes you any better at thinking about things. Brain training exercises aren't general enough.

A friend of mine believes that the reason she didn't do her best in academics is because the "subject was presented in a very boring way". I tried to persuade her that overcoming that feeling of boredom is already a good sign that you are learning the material well - granted, it is really hard to read poorly written stuff, but academic material isn't exactly a heart pounding thrill ride and it is quite a task to make it interesting, particularly as the material gets more advanced. But you know that you can rarely change people's views on such abstract things.

I once mentioned a plan to develop a little learning app (based on spaced repetition) for her kids and asked her if she felt it was something worth paying for. She pointed out this new app called Lumosity and asked me "Why don't you make something like this? It is very interesting, and kids will actually want to use it." I sort of gave up at that point because I didn't quite believe that it was all that effective. After a little while, the topic of this article started floating around the internet and last I heard, my friend had stopped using Lumosity.

On a more cheery note, I am surprised to find no one mentioned the book "Make It Stick" by Peter Brown et al., which would probably be a hard pill to swallow for the Lumosity fans. Someone should find a way to "appify" the principles in the book. It would be one seriously boring app, but very effective. :-)
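If anyone does want to "appify" spaced repetition, the scheduling core is small. Here's a rough sketch of the classic SM-2 review step (the algorithm family behind SuperMemo and Anki); the function name is mine, and real apps tweak the constants and add fuzzing:

```python
def sm2_review(interval, ease, repetitions, quality):
    """One SM-2 review step.

    interval:    current gap between reviews, in days
    ease:        ease factor (SM-2 starts cards at 2.5)
    repetitions: consecutive successful recalls so far
    quality:     self-graded recall quality, 0 (blackout) to 5 (perfect)
    Returns the updated (interval, ease, repetitions).
    """
    if quality < 3:
        # failed recall: reset the streak and see the card again tomorrow
        return 1, ease, 0
    # ease factor update from SM-2, floored at 1.3
    ease = max(1.3, ease + 0.1 - (5 - quality) * (0.08 + (5 - quality) * 0.02))
    if repetitions == 0:
        interval = 1
    elif repetitions == 1:
        interval = 6
    else:
        interval = round(interval * ease)
    return interval, ease, repetitions + 1
```

Hard cards drift toward short intervals and an ease of 1.3; easy cards get reviewed exponentially less often, which is where the efficiency comes from.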

Commenting on HN just makes you better at commenting on HN.

Perfect^W Purposeful practice makes perfect.

If you really want to increase your intelligence, just get a good night's sleep, eat well, and go outside and walk.

I've always found that if I want to get good at some endeavor, partaking in that endeavor does the trick. I wonder how much time and brain power is wasted on preparing. Maybe that's why the really good ones leave school early.

Opportunity costs are damning. Even if brain training games were modestly effective, they hold an opportunity cost: the time could have been spent doing cardiovascular exercise, which has demonstrated robust positive effects on cognition.

It's possible to be both physically and mentally active. You don't need to choose.

Yes, yes. Of course, but people have only so much time... and many people are NOT physically active. Sure, do your 'brain training' (if it works) AFTER you've done a decent amount of cardio, but cardio should be the go-to.

Should be similar for IQ tests, I guess. So if new generations have better "IQ", some part of it could come just because of being more prepared/trained for those kind of tests.

Interestingly, this is the primary explanation for the Flynn effect, and why culture-loaded questions do not show the same improvements that completely abstract, non-culture-loaded questions do.

Just like playing chess makes you better at playing chess, no generic Intelligence boost from specific activities with narrow problem spaces

Kinda like practicing math, just makes you good at math?


I had the same reaction. For example, does taking a lot of "tests" make you good at ... tests? To me it calls into question the validity of tests to measure intelligence.

I would say that taking lots of tests may indeed make you better at taking tests.

You may find that working under pressure or under a time limit is something that you get better at with practice.

Practicing math will only make you better at math. Taking the time to truly understand the math in the most general terms will make you good at truly understanding anything in the most general terms.

This has to become some kind of meme...reading books might just make you better at reading books :)

incentives incentivize...

The entire "I'll find a mental analogue that's easier/more fun than real exercise" industry smacks of an industry dedicated to perpetual motion. Extreme scientific discoveries aside, it's not going to happen.

I think this proves that if we practice anything, we will improve at it.

Anyone who has used these could tell you that. If they are mindful enough to notice.

But then ... is life much more than a brain training exercise?

Neither headline nor URL contains the word "might".

Yup, hn has a rule about original headlines except where inconvenient.

In this case, though, they slipped up: "might just make you better" reads as "is likely to make you better", so the "just" no longer conveys "only".

Maybe those brain training exercises are just not very good?

Well, what about spaced repetition?

Just like the SAT.

Wow, shocking.

I knew it :)
