Why Smart People Are Stupid (newyorker.com)
302 points by mshafrir 1642 days ago | hide | past | web | 213 comments | favorite



The research is using SAT score as a proxy for general intelligence... I wonder if this sort of heuristic short-cutting actually correlates with test-taking ability more than it correlates with intelligence.

A lot of "test-taking" training basically consists of saving time by training away from full reasoning, in favor of cheap-and-good-enough heuristics. Furthermore, those heuristics are over-fitted to the particular problem types on standardized tests. I wonder how much of this study is actually measuring how readily those test-taking instincts fire on problem types they're not designed for.


This was my first thought as well. Especially since most standardized tests do not include "trick questions" of these sorts (or if they do, they are often painfully obvious), the more "intelligent" are just answering questions quickly, thinking they are all straightforward.

The test itself aside, I feel as if often in real life people correlate speed with intelligence, and the "smart" people that we know are the ones that are able to come up with (usually) correct answers quickly. Therefore it makes sense that these smart people would have a lot of heuristics that allow them to do so. To be clear though, the causation here is more complex than what is implied in the article. People who have heuristic strategies are characterized as smart, as opposed to the other way around.


I was thinking something along these lines too. I would guess that people who make these sorts of mistakes when they're in test mode might not make them in a real-life situation.

When you're taking a test, you have the expectation that the problem was designed by a human who wants you to demonstrate a particular piece of knowledge. You look for subtle clues in the words that point to which core problem was in the mind of the person who wrote the question.

The experimental setup seems like it would catch out people who make the assumption that the problems were designed to test knowledge.


The research is using SAT score as a proxy for general intelligence.

Despite a number of statements to the contrary in the various comments here, taking SAT scores as an informative correlate (proxy) of what psychologists call "general intelligence" is a procedure often found in the professional literature of psychology, with the warrant of studies specifically on that issue. Note that it is standard usage among psychologists to treat "general intelligence" as a term that basically equates with "scoring well on IQ tests and good proxies of IQ tests," which is why the submitted article has a point.

http://www.iapsych.com/iqmr/koening2008.pdf

"Frey and Detterman (2004) showed that the SAT was correlated with measures of general intelligence .82 (.87 when corrected for nonlinearity)"

http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3144549/

"Indeed, research suggests that SAT scores load highly on the first principal factor of a factor analysis of cognitive measures; a finding that strongly suggests that the SAT is g loaded (Frey & Detterman, 2004)."

http://www.nytimes.com/roomfordebate/2011/12/04/why-should-s...

"Furthermore, the SAT is largely a measure of general intelligence. Scores on the SAT correlate very highly with scores on standardized tests of intelligence, and like IQ scores, are stable across time and not easily increased through training, coaching or practice."

http://faculty.psy.ohio-state.edu/peters/lab/pubs/publicatio...

"Numeracy’s effects can be examined when controlling for other proxies of general intelligence (e.g., SAT scores; Stanovich & West, 2008)."
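For readers unfamiliar with the "g loaded" terminology in the quotes above, here is a minimal toy simulation, not the actual Frey & Detterman data; the four tests, the 0.8 loading, and the noise level are all hypothetical values chosen for illustration. It shows how a single latent factor makes the first principal component of a battery of correlated test scores dominate:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Toy model: one latent factor "g" drives four correlated test scores
# (the 0.8 loading is a hypothetical value, not an empirical estimate).
g = rng.standard_normal(n)
scores = np.column_stack([0.8 * g + 0.6 * rng.standard_normal(n)
                          for _ in range(4)])

# The first principal component of the correlation matrix plays the role
# of the "first principal factor" in the quote above.
corr = np.corrcoef(scores, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(corr)   # eigenvalues in ascending order
share = eigvals[-1] / eigvals.sum()       # variance explained by top factor
print(f"first factor explains {share:.0%} of the variance")
```

With these parameters the single shared factor soaks up most of the variance, which is the signature a factor analyst reads as "g loaded".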

As I have heard the issue discussed in the local "journal club" I participate in with professors and graduate students of psychology who focus on human behavioral genetics (including the genetics of IQ), one thing that makes the SAT a very good proxy for general intelligence is that its item content is disclosed (in released previous tests that can be used as practice tests). What separates one test-taker from another, then, is generally and consistently getting the various items correct, which certainly takes cognitive strengths.

I still think Stanovich's point is interesting: IQ and SAT scores correlate very strongly with much of what everyone regards as "smart" behavior (which psychologists by convention call "general intelligence"), while there are still other kinds of tests, with plainly indisputable right answers, that high-IQ people manage to muff.

(Disclosure: I enjoy this kind of research discussion partly because I am acquainted with one large group of high-IQ young people

http://cty.jhu.edu/set/

and am interested in how such young people develop over the course of life.)


This is a wonderful comment. However, what psychologists know as g is the result of over-interpretation of a descriptive method known as factor analysis (which is very similar to principal components analysis).

http://cscs.umich.edu/~crshalizi/reviews/flynn-beyond/ Major thanks to Cosma Shalizi, who opened my eyes to these issues.

Also, the SAT grew from IQ tests, so it really isn't surprising that the two measures are correlated, given that the questions for SAT were probably kept on the test because they correlated with IQ tests.

That being said, the work of Kahneman (and the new paper, which I should probably read instead of commenting on HN) is pretty rock solid as far as it goes. It is worth noting that there is a converse position in this field, that of Gerd Gigerenzer, who argues that these heuristics exist because they are useful, and only go wrong in artificial situations. www.cogsci.msu.edu/DSS/2007-2008/Todd/environments_that_make_us_smart.pdf

Personally, I incline far more towards the views of Gigerenzer than those of Kahneman, especially given that Gigerenzer and colleagues attempt to model the mind computationally given their theories, which is something psychology could do with more of.

Full disclosure: I'm a psychologist who's very frustrated with the lack of statistical sophistication and interpretation in my field.


Isn't there also another argument that could be made?

Suppose Y is only weakly correlated with X. Y might still be used to show that Z and X are correlated, if we can show that Y and Z are not otherwise correlated.

If only some smart people score high on the SAT, and most who score high on the SAT are easily fooled by certain questions, it might indicate that most smart people are easily fooled by these questions, but you would want to engage in further investigation to be sure...


It seems very strange to me that they would see these biases as inherently harmful, rather than the root difference between smart people (who can think effectively and quickly using non-linear reasoning) and stupid people (who can't.)

In general I would be reasonably satisfied with a definition of intelligence which described how usefully one is able to employ intuitive cognition.


One could also infer that, by your definition, smartness should be measured by how well someone is able to follow rules and structure.


Not especially, although being able to comprehend rules and structure is a byproduct of systems-level thinking. However, the "non-linear" part of my definition also implies an ability to question, circumvent or simply ignore such systems, and includes the ability to reason within a set of rules without believing that set of rules to be accurate or true.

Systems and rulesets can be a useful method to structure memory and reason, but being bound by them is detrimental to long-term correctness and comprehension. Being able to write syntactically-correct programs is useful, but neither necessary nor sufficient to be an excellent programmer. A meta-understanding of the effect such syntax has on the program at hand is more useful than always correctly following it.


I'm surprised to read that people who did better on the SAT did worse with the bat-and-ball question in the article. That sounds like exactly the sort of simple trap I'd expect to find in an SAT math question.


I like that statement:

"...test-taking" training basically consists of saving time by training away from full reasoning...

A quote from one of my professors:

"Tests test what tests test."

Tests end up serving as an observable criterion for identifying intelligence. As long as their proper function is understood, they are useful. They are not necessarily helpful in identifying who will be most successful in business, the political arena, etc. Because of the increased attention on testing, most reasonably informed people develop test-taking ability as a somewhat independent skill.


Even if it does, it would be impossible for that effect to completely obscure the correlation between test scores and intelligence.


It is ridiculous to suppose that one's score on a multiple choice test is an accurate measure of innate ability or real-world intelligence.

The SAT, ACT, IQ tests, and all standardized tests like them are socially constructed concepts that ATTEMPT a method of measuring intelligence. Intelligence (in the real world) reaches far beyond one's abilities to answer multiple choice reading comprehension, basic math and writing questions. Not to mention that problem solving in the real world has no time constraints.

Beethoven would not have gotten a perfect score on his SAT's. However, we all can attest to his innovation, creativity and musical genius. How can a multiple choice test measure the creative abilities of people like Sir Richard Branson, Steve Jobs, or Pablo Picasso?

The idea that "smarter people... were slightly more vulnerable to common mental mistakes" is a nonsensical conclusion. These findings are completely worthless.


How on earth would any test be anything more than a "socially constructed concept that ATTEMPTs [sic] [to be] a method of measuring intelligence". That just sounds like a definition to me, not an indictment. Do you have a better test?

No one claims the SAT, or any other test, is the final word. But at the same time pretty much everyone accepts that "general intelligence" (or something like it) exists, and that tests are a reasonably good proxy for detecting it. To a first approximation, students who do well on the SAT are successful in other ways associated with "intelligence".

And that -- the fact that the SAT correlates with something under study -- is all that is needed for good science. Even poor correlations can be enlightening if the data (and the scientist) are good enough.


The conclusion that "Smart People are Stupid" is wholly inaccurate. My point was that there is no test that can accurately measure intelligence. As we know, intelligence is often intangible and abstract.

Wikipedia's Definition of intelligence:

Intelligence has been defined in many different ways, including, but not limited to, the abilities for abstract thought, understanding, self-awareness, communication, reasoning, learning, emotional knowledge, memory, planning, and problem solving.

Sure, many people that scored well on the SAT are intelligent, but that doesn't mean that those that didn't score well are not just as intelligent or capable.

The findings more accurately reflect the conclusion that: "Those With High Scores on the SAT Are Stupid"


The title and your point are both just playing on the semantics of the words. The study is measuring "general intelligence", which is a better defined (if still poorly understood and somewhat controversial) subject amenable to scientific study. Basically everyone in the field accepts that it's real. Even informally, think back to your school peers: I'm willing to bet good money that, on balance, the ones that everyone called "smart" got the best grades, got the best test scores and ultimately got the best jobs. All those things are correlations, and they can be measured scientifically. And they're real.

Basically, if you want to quibble with the headline of the post, then I'll grant that it's a little confusing (intentionally so, as are most good headlines), even if IMHO that point is a little specious. If you really want to claim "there is no test that can accurately measure intelligence" as a matter of scientific fact, you're just plain wrong, sorry.


I wanted to upvote, but at this point you're just indulging a troll.


First off, I agree that the SAT is not a good indicator of intelligence. That being said, this argument is missing the point:

> Beethoven would not have gotten a perfect score on his SAT's. However, we all can attest to his innovation, creativity and musical genius. How can a multiple choice test measure the creative abilities of people like Sir Richard Branson, Steve Jobs, or Pablo Picasso?

These tests are not meant to measure creative abilities. They are meant to measure the ability to solve math, reading comprehension, and writing problems. The SAT is used as one of several criteria in the college admissions process, NOT to try to predict who will be on a short list of history's most innovative people.


My point was that intelligence isn't easily defined. And that using the SATs as a measure of intelligence is absolutely ridiculous. The title "Why Smart People are Stupid" is misleading. The title "Why Good Standardized Test Takers are Stupid" more accurately reflects the findings of the research.


But how come you accept the claim they're "stupid"? Are you saying that SAT is not a good measure of intelligence but Kahneman's tests are?


I completely agree with the point that any IQ-test-like test is just a proxy for measuring what is commonly called intelligence. And because these tests are just proxies, they carry the risk that any study based on them is analysing the proxy (in this case the IQ test, SAT or whatever) more than the real thing (in this case intelligence).

On the other hand, I completely agree that it is damn hard to measure intelligence correctly. But as long as you don't have a proxy that works as well for "educated" westerners as it does for "uneducated" bush or jungle tribesmen, you still have quite a high risk of error in your studies. Just my 5 cents.

P.S.: Upvoted rstevenson's posts; I don't see any reason to downvote him for being critical of multiple choice tests as a basis for such studies.


The questions, judging from the ones in the article, seem to be about testing one's logical reasoning skills. The SATs, at least the quantitative portion, attempt to measure one's math and logical reasoning skills. It seems entirely reasonable to me to assume that someone who scored highly on the SATs should not be tripped up by these kinds of questions, and the finding that they are actually more inclined to be tripped up surprises me.

It might be better just to look at a student's score on the quantitative portion of the exam, since one could score highly on the SATs overall while still getting a (relatively) poor score on the section most similar to these kinds of questions.


Don't worry, you're both right. No self-respecting researcher uses SAT scores as proxies for intelligence in the social sciences.


The study detailed in the linked article does exactly that...


Which is why the paper used "various cognitive measurements" which together could be taken as a proxy for intelligence, instead of relying only on S.A.T. scores.


warning: reading this post is a waste of time unless you're rstevenson542.

> all standardized test are socially constructed concepts that ATTEMPT a method of measuring intelligence

as opposed to ones that don't attempt and have not been constructed by a society? What are we hoping for here, exactly: some ray of light shone upon us by god almighty which will let us know that, without doubt, those men are smart and those other ones are stupid?

> The idea that "smarter people... were slightly more vulnerable to common mental mistakes" is a nonsensical conclusion. These findings are completely worthless.

> Intelligence (in the real world) reaches far beyond one's abilities to answer multiple choice reading comprehension, basic math and writing.

Good thing I'm in La-La-Land, then, so I can answer multiple-choice reading comprehension all day long.

> Beethoven would not have gotten a perfect score on his SAT's

And you know this because you dug up his skeleton and it wouldn't mark answers? I'm not sure what to make of your assertion.

You haven't quoted any passages from the original study that would display inadequate methodology or statistical error. Your findings are completely worthless.


I don't have access to Kahneman's full paper to scrutinize (yet - I'm asking around) but it seems possible to me that they wanted to place themselves in opposition to the general practices of education at the time. Notice that since his work began, education has changed drastically, several times over. Several examples of approaches tried are the classical lecture being jotted down to study at home; study groups with two-way communication; exam preparation; test preparation; collaborative group work; coaching; a specialist could go on.

Bear in mind that 50 years ago the SAT was still in flux, and a lot of it was new and being experimented with. Given that it was one of the biggest, and most repeatedly renewed, concerns for researchers like Kahneman, there's little reason to doubt that he's not only an expert on the SAT but also knows as many of its downsides as anyone.

Therefore, I believe this paper is a sort of "lessons learned" story which shows that his approach is better than the form of education being undertaken by schools.

I find it ESPECIALLY curious that neither the article nor anyone in this whole thread (and I have tried reading most of it) has commented on the process, and consequences, of being caught out on trick questions like bat-and-ball.

It is my understanding that our intelligence has evolved through use in situations where its impact was more or less immediately visible, and where this feedback could be acted upon.

Example: shaping tools. Is the flint stone sharp? No. Mash it against rocks. Is it sharp now? It's a bit sharper, but not sharp enough. Mash it against rocks some more. Is it sharp enough now? OK, you're done shaping your spear head.

Example: hunting. Approach the prey. It runs away. You don't notice why, you hadn't taken wind into account when looking for clues. No feedback, therefore you couldn't act upon it.

Example: hunting. Approach the prey. It runs away. You notice it did after wind turned and gave away your position. You got feedback from the grass and leaves moving in the wind. Next time you'll be able to act upon this.

Example: trying to shake fruit off a tree. You find a low-hanging branch and try to shake it. First you try this way, then that way, and finally you find the best way to get fruit without making too much fall down.

The last example extends to any sort of experimenting, tinkering, happy-hacking.

It is however notable that the bat and ball question does not test that. There's a question, and you give an answer. There's no feedback before it becomes final, and once it does that is the clear cut-off. This represents the stone-cold, immovable, monolithic machinery displayed by many technical subjects, such as science, mathematics, and some forms of computer programming, but also strategy, and some forms of art. There is no iterative process, you get one try, based on which you can in no way build a tangible mental model of how something works and what parameters of your thinking you need to adjust in order to better yourself. One very stupid example is when someone in a job interview asks you about standard library function names and argument orders. Either you remember, or you don't. I'll call this feedback the feedback gap.

Bear in mind I have mentioned "happy-hacking" above and "computer programming" below. In fact, they're both computer programming. The difference? If I'm presented with a python program where I can use an iterative process, a repl, and its help() command that immediately gives me access to documentation, then I can very easily build up a mental model of what's going on. Exceptions and errors give me constant, constructive feedback which comes immediately. This immediacy is extremely important and even a slight delay makes the learning process slower. Additionally, if some things aren't available as immediate feedback, I can find out. For example, when trying to get at the fruit, I saw immediately where the fruit was in the tree branches. When typing out python, I don't have this, I don't see what the functions are, so I need to use help(). That works well enough. Some people like intellisense for that. Works well too. It all fills the feedback gap.

There's a similar difference in ease of progress between experimental mechanics and similar physics, versus branches where experimentation cannot happen. That's feedback gap again.

It is my belief that this sort of immediate feedback is needed in other technical subjects, especially mathematics and physics. Approaches such as theorem provers are helpful in mathematics, but they're nowhere near being complete, and nowhere near the utility and immediacy of a repl. I am fairly sure there are other ways in which the feedback gap can be filled. Perhaps different methodology, or differently structured theories, can give us more immediate feedback? Perhaps mathematical systems in which theorems are easier to tentatively prove or disprove can become more successful in breeding new results?

Or perhaps the theories are not to blame, but we need more tools. For example, my abstract geometry teacher kept reminding us that we needed to come up with such quick checks. Non-linearities were always useful. His favourite was the binary distance function, which was 0 for two identical points and 1 for different points. A lot of stupid theorems can be disproved by checking some examples with this.
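The "binary distance function" above is what mathematicians call the discrete metric, and it can be turned into exactly the kind of mechanical quick check described. A small sketch (the sample points and the refuted claim are my own illustration, not the teacher's examples):

```python
def discrete_metric(x, y):
    """The teacher's binary distance: 0 for identical points, 1 otherwise."""
    return 0 if x == y else 1

# Check the metric axioms on a handful of sample points.
points = [0.0, 1.0, 2.5, -3.0]
for x in points:
    for y in points:
        d = discrete_metric(x, y)
        assert d == discrete_metric(y, x)   # symmetry
        assert (d == 0) == (x == y)         # identity of indiscernibles
        for z in points:
            # triangle inequality: d(x, z) <= d(x, y) + d(y, z)
            assert discrete_metric(x, z) <= d + discrete_metric(y, z)

# Quick-check a plausible-sounding claim: "every metric on the reals
# scales like a norm, so d(0, 2x) = 2 * d(0, x)". One evaluation
# disproves it, since all nonzero distances here are exactly 1.
assert discrete_metric(0, 2.0) != 2 * discrete_metric(0, 1.0)
```

The non-linearity (every nonzero distance collapses to 1) is precisely what makes it such a cheap counterexample generator.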

Can someone else comment on any such tools?

Writing this comment definitely came with an insight or two for me. If you read it, thanks for going on this trip with me.


> It is my belief that this sort of immediate feedback is needed in other technical subjects ... Perhaps different methodology, or differently structured theories

The traditional feedback method is teamwork: the next student, or colleague, makes a claim about physics, and you exclaim: "Bullshit! As the mass of the pencil reduces to zero, the whole universe gets pulled off course." Traditionally, physicists and hackers were thick-skinned freaks; the triumph you felt more than made up for your bruised ego when the tables were turned. Now, normal people do these things; their feelings, more of resentment than glory, distract them inefficiently. They react the same way when computers call their bullshit.


For the types of testing that he's doing, I suspect he's measuring boredom more than anything else, especially since he's testing largely in a university setting. Intelligent people are accustomed to being bored with endless entry-level evaluation exams, and at first glance this looks like it's just one more of them. And because the stakes here are so low (essentially zero), lots of people will just fly through without really reading and analyzing the question.

What he's seeing isn't something new, it's something so old that it's part of popular culture: the absent-minded professor syndrome. It's the stereotype of the brilliant physicist who forgets what he's supposed to buy at the supermarket because he's thinking about the groceries' quantum properties. Analytic people are horrible at things that don't interest them.

Pay the students $50 for each correct answer, and there's not a doubt in my mind that the results will be the complete opposite of what he's seeing now.


Agreed. Perhaps the "smart" people are taking shortcuts for brainteaser questions. But when it comes down to it smart people are probably smart cuz they put in the effort to sit down and think about the problem at hand at some point. Give an incentive (grades, job performance, $50), and you're right, they'll probably get the right answer.


I also agree with your comment. I did both questions as fast as I could, and got the first wrong and the second right. It reminded me of a brilliant Civil Engineering professor I once had who was showing us his notes on the projector (the kind with the light bulb and the magnifying glass overhead) when someone asked him if he could turn off the lights. The student meant the classroom lights, so he could see the projection better. The professor turned off the projector instead :)


I saw a pound coin and a 10p coin (UK) and got the first one wrong, I saw a bar graph with a lot of doubling bars, and saw the long history of doubling and got the second one right. I saw the money by denomination, and not as a quantity perhaps. Interesting.


Ok, so I posed both questions to my wife, and she answered both correctly, only pausing about 2 to 3 seconds before answering both. She then said the questions are stupid and are too elementary. Maybe this study is not foolproof and is only there to make people like me feel better about not being able to answer simple math questions better :P

Basically, once again, I've learned that my wife is smarter than me and that these studies should be taken with a grain of salt.

P.S. She is not a math person nor is she a tech person.


Just a sec.. where are you seeing that?


Sorry, I was describing the visual images I get when solving problems like this, the article is pure text.


Hmmm. Interesting that you point that out. I think if we were presented with a math formula (symbols), we'd have aced it.

I think the reason why my wife did it so fast is because it was fed to her as text only - in fact, I just read it to her. She's a lawyer and she's much better at interpreting text than most people.


> Pay the students $50 for each correct answer, and there's not a doubt in my mind that the results will be the complete opposite of what he's seeing now.

You've never tried tutoring kids who get kickbacks for good scores then. Absolute fucking nightmare. Motivation to study must come from within for it to be successful in any way.


Why's that? What makes it a nightmare?


The same reason why having a girlfriend that fucks you for your wallet makes it a nightmare. Sell-outs are never good work.


The bat-ball and lily-pad questions are 2 of the 3 questions on a short test called the 'Cognitive Reflection Test' (or 'CRT') meant to measure whether people make the effort to think beyond the obvious (but wrong) answer.

By using those examples, after its headline, this article seems to imply smarter people do worse on these CRT questions. But that is not what I've read elsewhere -- which is that the CRT is positively correlated with other quantitative measures of intelligence (including IQ scores, SATs, and high-school/collegiate grades). 'Smart' people (by those measures) do tend to do better on the CRT.

And if you read this article carefully, you see that while it uses these two CRT questions as examples of tricky questions, when it discusses the results about awareness-of-bias not helping alleviate bias, it isn't necessarily saying smart people do worse on those two CRT questions. It's a bit muddled in what it's saying, and reviewing the linked abstract doesn't help much either. The paper is evaluating some very specific things under the umbrella term 'cognitive sophistication', which might not map to what we usually call 'smart' or even 'test-smart'.

BTW, I personally think the CRT may be especially useful for evaluating software/systems proficiency. The bat-ball question probes understanding of algebra; the lily-pad question probes understanding of geometric growth (and someone accustomed to powers-of-2 will find it easier); the third question probes understanding of parallelism and projected-rates-of-work.

That third question happens to be:

"If it takes 5 machines 5 minutes to make 5 widgets, how long would it take 100 machines to make 100 widgets?"

(A software person might also think of it as: "If it takes 5 cores to compress 5 GB in 5 minutes, how long would it take 100 cores to compress 100 GB?")
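All three CRT answers can be worked out mechanically. A small sketch, using exact fractions to sidestep the usual 0.1-style floating-point pitfalls, contrasting the correct answers with the intuitive ones:

```python
from fractions import Fraction

# Bat and ball: bat + ball = 1.10 and bat = ball + 1.00,
# so 2 * ball = 0.10 and the ball costs 0.05 (not the intuitive 0.10).
ball = (Fraction("1.10") - Fraction("1.00")) / 2
bat = ball + 1

# Lily pads: the patch doubles daily and covers the lake on day 48,
# so it covered half the lake one day earlier (day 47, not 24).
half_day = 48 - 1

# Widgets: 5 machines make 5 widgets in 5 minutes, i.e. one widget per
# machine per 5 minutes, so 100 machines make 100 widgets in the same
# 5 minutes (not the intuitive 100 minutes).
time_for_100 = 5

print(float(ball), float(bat), half_day, time_for_100)
```

The trap in each case is pattern-matching on the surface numbers (0.10, 24, 100) instead of setting up the one-line relation that actually governs the answer.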


This article has been cobbled together from a few threads here and there taken from Kahneman's own recent book popularising his research (Thinking, Fast and Slow, 2011).

The book explores several interesting ideas, but I think I can safely say that "why smart people are stupid" is almost certainly NOT one of the more important themes in the book. Which I think is why, as you've noted, the results from the CRT testing don't line up with the conclusions in the article.

Actually I'll go a bit further and say that in simplifying a fairly nuanced and complex concept down to an attention-grabbing headline, this article ironically falls into the very intuitive-bias trap that Kahneman describes in his book. "Why smart people are stupid" gives us an easy bypass to answer a complex question and saves us the mental effort of actually coming to grips with the problem. The explanation is satisfying, but it's also flat out WRONG.

No, the deeper theme is something of an inconvenient truth for both smart and not so smart alike. Best for you to read the book yourself if you're really interested, but I think it's not too big a stretch to say that PG was approaching the same idea in his wisdom (intuition, fast thinking) vs. intelligence (slow, deliberate thinking) essay:

"And while wisdom yields calmness, intelligence much of the time leads to discontentment.

That's particularly worth remembering. A physicist friend recently told me half his department was on Prozac. Perhaps if we acknowledge that some amount of frustration is inevitable in certain kinds of work, we can mitigate its effects."


Thanks for this. I kept feeling like the article was just completely lacking in depth. Now I know the research isn't. ;-)


> And if you read this article carefully, you see that while it uses these two CRT questions as examples of tricky questions, when it discusses the results about awareness-of-bias not helping alleviate bias, it isn't necessarily saying smart people do worse on those two CRT questions. It's a bit muddled in what it's saying, and reviewing the linked abstract doesn't help much either. The paper is evaluating some very specific things under the umbrella term 'cognitive sophistication', which might not map to what we usually call 'smart' or even 'test-smart'.

Fulltext: http://dl.dropbox.com/u/5317066/2012-west.pdf

No, the smart people essentially never perform worse on the CRT or bias questions than their stupider confreres. What the headline is boasting about is that when asked to guess whether they will do better on the biases than their cohort, the smarter people tend to think they will do much better than their cohort, but actually do only a tiny bit better or identical.

But as the authors admit at the very end, this may be a reasonable generalization from their lives, in which they observe smart people outperforming stupid people at just about everything; these biases were selected from the big heuristics-and-biases literature precisely for being resistant to raw smarts...


Obvious seems to depend on conditioning. For me the correct answer to the lily pad question seemed completely obvious. On the bat and ball problem, I experienced a cognitive dissonance, the knee jerk answer appeared unbidden, yet something also seemed wrong about it so I stopped to actually think about it before answering correctly. Unconscious algebra would appear to be unreliable for most of us.


My guess would be familiarity with binary numbers and powers-of-two in a computer context (bytes, ints, longs, shift-operations, etc.) helps make the lily-pad answer 'obvious' to some people. Experience in a particular domain can hone a different reflexive intuition from the norm.


This was my experience as well. I was very surprised by both: a) my inability to accept my knee-jerk answer (because I knew it was a "trick" question), and b) the 2 minutes it took me to puzzle out why it was 1.05. Which surprises the hell out of me, as I've had more than my fair share of high level math courses.
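For anyone else who stalled on it, the algebra (working in integer cents to dodge floating-point noise) is just:

```python
# Bat-and-ball: bat + ball = $1.10, and the bat costs $1.00 more
# than the ball. Let b = ball price in cents; then
#   b + (b + 100) = 110  =>  2b = 10  =>  b = 5.
total_cents, diff_cents = 110, 100

ball = (total_cents - diff_cents) // 2   # 5 cents, not the knee-jerk 10
bat = ball + diff_cents                  # 105 cents -- the 1.05
assert ball + bat == total_cents
print(ball, bat)  # 5 105
```

The knee-jerk 10-cent answer comes from subtracting instead of solving: 1.10 - 1.00 ignores that the dollar difference is between the two items, not between the bat and the total.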


>>(A software person might also think of it as: "If it takes 5 cores to compress 5 GB in 5 minutes, how long would it take 100 cores to compress 100 GB?")

I don't think that most compression algorithms are Embarrassingly Parallel (http://en.wikipedia.org/wiki/Amdahl%27s_law), so it's not clear to me the software scenario is equivalent, or are you saying that the jobs come in 5 gigabyte chunks?


You could raise the same objections to the original 'machines'/'widgets' formulation. (There are always economies or diseconomies of scale.)

Typically such questions imply the unstated assumption, "answer to the same level of precision/abstraction as the question itself, and assume you have all the info needed to give an answer". With that assumption both questions should be answerable.

Compression tends to be pretty parallelizable, if by nothing else than choosing to break the input into separate chunks. You might lose a little bit of efficiency in output size – more restarts, each compressor has less global information – but those don't mean slowdowns (and in a few contrived situations might even mean speedups at the cost of size). See for example 'pigz' and 'pbzip2'.

If I were asking this question, I'd accept the 'rough, assuming perfect parallelization' answer as correct-enough in the spirit and level-of-precision implied by the question. If the answerer brought up the difficulties in assuming perfect parallelizability or specific to compression algorithms or choice of inputs, that'd be worth some extra credit, and would trigger followups along the lines of, "how would those factors affect the size?" and "what bottlenecks might you expect?" and "could it ever be faster when split among more machines?"
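A minimal sketch of the chunked approach (assuming zlib; real tools like pigz and pbzip2 are more careful about chunk boundaries and output framing). CPython's zlib releases the GIL while compressing large buffers, so even plain threads parallelize here:

```python
import zlib
from concurrent.futures import ThreadPoolExecutor

CHUNK = 1 << 20  # 1 MiB per independent chunk (size chosen arbitrarily)

def compress_chunked(data: bytes, workers: int = 4) -> list[bytes]:
    # Each chunk is compressed independently: slightly worse ratio
    # (no cross-chunk history), but the work splits cleanly across cores.
    chunks = [data[i:i + CHUNK] for i in range(0, len(data), CHUNK)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(zlib.compress, chunks))

def decompress_chunked(blobs: list[bytes]) -> bytes:
    # Because chunks are self-contained, decompression parallelizes
    # the same way; done serially here for brevity.
    return b"".join(zlib.decompress(b) for b in blobs)
```

The size penalty versus one-shot compression is exactly the "little bit of efficiency in output size" trade: each compressor restarts with an empty history window at its chunk boundary.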


Make your own chunks, and the algorithm easily becomes embarrassingly parallel. Give each of the 100 cores 1 GB, let them run independently, and don't sweat the few bits of compression efficiency you might lose. If Amdahl's Law comes into play, it's because you're constrained by memory or some other I/O system rather than the CPU core.

Decompression may not be embarrassingly parallelizable for an existing algorithm, if it relies on state that persists through every bit in the data set. But a codec algorithm can be designed to be embarrassingly parallel for both legs.


My general response to those kinds of performance questions is to measure rather than predict - let me go off, cook up some data sets of appropriate sizes and try it out.


So they are testing whether you are too lazy to think carefully about puzzle questions that have no purpose beyond being puzzle questions.

Yes, you could call that a way of intelligence testing... :-)


5 minutes!


Bear in mind the parent comment isn't saying that the two CRT questions are bad because they probe understanding of mathematics. Understanding of mathematics was presupposed in the subjects under test. Everyone had it in them to answer the question correctly, but a large fraction didn't, and that's the actual kicker of the article.


Link to the study linked in the article (PubMed prepublication abstract):

http://www.ncbi.nlm.nih.gov/pubmed?term=west%20stanovich%20m...

The psychologist Keith R. Stanovich is quite controversial among other psychologists precisely because he writes about what high-IQ people miss in their thinking, but his studies point to very thought-provoking data and deserve to be grappled with by other psychologists. I have enjoyed his full-length book What Intelligence Tests Miss

http://yalepress.yale.edu/YupBooks//book.asp?isbn=9780300123...

which meticulously cites much of the previous literature on human cognitive biases and other gaps in rationality of human thinking.

And here is the submitted article's link to a description of the Need for Cognition Scale:

http://www.liberalarts.wabash.edu/ncs/


The book mentioned, "Thinking, Fast and Slow", despite the boring title, is quite good. If you're the sort to hang around lesswrong.com it won't blow your mind, but those less exposed to those ideas will find it fascinating (and probably a bit more accessible in book form).


If he had his wits about him, he would have titled it "What Intelligence Tests Skip".


> The psychologist Keith R. Stanovich is quite controversial among other psychologists precisely because he writes about what high-IQ people miss in their thinking

Sounds interesting; care to summarize what those things are?


ITT: can the Need for Cognition Scale be used to weed through candidates for programming positions?

Can it help identify candidates who have little experience but will make good programmers once they're taught how to code?


"Although we assume that intelligence is a buffer against bias—that’s why those with higher S.A.T. scores think they are less prone to these universal thinking mistakes..."

This fallacy is at the heart of the matter. Intelligence and resistance against bias are only loosely correlated. Such resistance comes not from intelligence but from careful study and mental exercise, e.g. looking at various important ethical and philosophical arguments and analyzing them.

This is like saying all large people are strong. There is some dependence, but a smaller gym rat can kick a slacker giant's ass. The sad thing is that while it's obvious you have to exercise your body to be healthy and strong, the fact that the same is true for your brain is often overlooked.


Isn't resistance against bias a basic requirement for considering someone intelligent? What exactly is intelligence if not the ability to think clearly?

To me, this looks like a definition game. Smart/stupid is a black & white view of looking at it and hence, misleading. As one overcomes his primitive biases, we call him smart, even though he remains susceptible to other biases.

In other words, people aren't smart or stupid. People's actions are smart or stupid in a particular situation.


Kahneman divides our thinking into two subsystems: type 1 and type 2. Type 1 thinking is fast, intuitive, unconscious thought. Most everyday activities (like driving, talking, cleaning, etc.) make heavy use of the type 1 system. The type 2 system is slow, calculating, conscious thought. When you're doing a difficult math problem or thinking carefully about a philosophical problem, you're engaging the type 2 system. From Kahneman's perspective, the big difference between type 1 and type 2 thinking is that type 1 is fast and easy but very susceptible to bias, whereas type 2 is slow and requires conscious effort but is much more resistant to cognitive biases.

Traditionally, "intelligence" (as colloquially defined) has correlated with type 2 thinking. So, a reasonable conjecture would be that people who are better at type 2 thinking would use it more and, therefore be less vulnerable to bias. However, this research shows that even those who are very good at type 2 thinking (as measured by their SAT scores and NCS scores) are even more vulnerable to cognitive biases. This is a deeply counter-intuitive result. Why is it that people who have a greater capacity to overcome bias have a greater vulnerability to bias?


> Why is it that people who have a greater capacity to overcome bias have a greater vulnerability to bias?

Overconfidence. If you've become accustomed to thinking of yourself as being better able to avoid cognitive bias, you come to be confident in your abilities, to the point where you (perhaps unconsciously) think of yourself as not susceptible to biases.


That's certainly one possible explanation. Another possible explanation is that their brains are just faster in general, so that even though their type 2 systems are faster than others', their type 1 systems are faster yet and manage to override even more consistently than in others. In any case, I don't think it's something that's "obvious" or "expected" by any means, and I do think that it should bear further investigation.


> Why is it that people who have a greater capacity to overcome bias have a greater vulnerability to bias?

Why do you assume that type 2 people have greater capacity to overcome bias?


I think the parent comment uses the term "intelligence" to describe a very different concept than what you have in mind. It probably has to do with high IQ, long and short term memory, problem solving skills, etc.

As a matter of fact, it would be worthwhile to avoid using the term "intelligence" in this discussion altogether. Please read this reference to understand why: http://lesswrong.com/lw/nu/taboo_your_words/

The whole point is that people assume that "problem solving" and "memory" skills automatically protect you against bias. This assumption is false, since "bias prevention" is a different kind of cognitive skill that must be practiced and developed independently.


Agreed. Thinking intelligent people can better resist biases makes as much sense as saying that handsome people can. In both cases the halo effect is at work.


If you take out resistance against bias off it, what exactly do you mean by "intelligence" ?


Intelligence and bias resistance are at least both cognitive traits.


> Intelligence and resistance against bias are only loosely correlated

Have you read the article (rather than skimming)? It says exactly the opposite.


Intelligence is overrated as a metric, from the get-go. Being smart doesn't mean anything - accomplishing something, whether that be writing a book, founding a company, making a new scientific discovery, sculpting a masterpiece, etc., is a much better metric.

Unfortunately everyone seems to be hung up on the "idea" of being smart, as if having a high IQ somehow constitutes an accomplishment.


It does mean something. Specifically it means you learn differently than other people, because you store and process information differently. It is neither accurate to be expected to function the same as other people nor to expect other people to function the same way you do if you are a significant outlier.

Particularly during schooling it can be lonely, isolating and incredibly boring. It is easy, under those circumstances, to either build yourself up as better than people around you or come to hate and hide your intelligence so you can pass as one of them. Both are a loss to society.

It is not an accomplishment, any more than having diabetes is an accomplishment, but it is a fact of life and trying to pretend it doesn't matter is futile.


The point I was trying to make was that intelligence itself is a bad goal - it's a means to an end. Being smart makes things easier, sure, but it's not really something to be proud of by itself. 'Metric' was probably a bad word choice.


Then why does it correlate so highly with income, even after being adjusted for the economic background of the measured person's parents?


Probably because the thing it correlates the most with is reading for pleasure, and that's easily done in a household that could afford lots of extracurricular books.

The myth that IQ tests measure any tangible internal capacity (or that they were even designed to do so, rather than to justify the exclusion of Eastern European immigrants and racism against blacks) will remain the dominant view because the people who set the dominant view are people who get good scores on IQ tests.


> and that's easily done in a household that could afford lots of extracurricular books.

But that is already adjusted for.

IQ tests highly correlate to a number of factors. More likely to read, less likely to get divorced, more likely to go to university, more likely to eat healthily, etc. But at the end of the day, when you keep trying to control for each of these factors you end up at the null hypothesis simply because there is nothing left to measure.

Just because something is politically incorrect doesn't mean it is wrong.


They adjust IQ scores for income? I missed that.

It has nothing to do with political correctness - it was literally the purpose of general intelligence tests to justify exclusionary immigration, racism and eugenics, and they were designed (or redesigned in some cases, such as when blacks in the military scored better than whites) to do that well. It reified a general intelligence concept that has no actual evidence, and justified that with factor analysis, the leading tool of statisticians for creating a single thing out of a maelstrom of complicated, interdependent factors by just pretending that they are linear.

See: https://en.wikipedia.org/wiki/The_Mismeasure_of_Man

IQ tests correlate far higher with pleasure reading in childhood than any other factor IIRC.

This is not sour grapes - I'm a member of MENSA (which is really a boardgaming club); it's just historically accurate. IQ tests could be completely replaced with a tally of books read for pleasure (and the reading level of those texts), and you would end up with all of the same correlations without all of the self-important mathy-sciency tone.

I accept that there's a difference in the amount of knowledge that people have accumulated, and the amount of familiarity about how to evaluate common classes of questions that a voracious reader will have seen a million times before, and that a lack of those things may create a lot of challenges in college. I have no issue with the SAT. My issue is with the completely unjustified leap to a belief of differences in capacity, and the projection of this onto reified folk theoretical (theory-theory) internal states. This mythology is just another cultural construct to separate humans into "us" and "the others" and to alleviate the cognitive dissonance between our proclaimed ethics and open prejudice. Being a measure of the status quo, primarily, it serves solely to perpetuate it, offering no other benefit.

edit: early intelligence tests actually had questions that assumed you knew details about current baseball teams.

edit2: I also noticed that, other than reading for pleasure, the other examples that you listed for high correlation with IQ are degrees of adherence to cultural norms. Are high IQ people simply better at obedience?


Says tau/2.



I think there is a 'situational intelligence' or a set of schemas you develop for dealing with things within a certain activity; e.g. an old school photographer may not need to use a light meter at all to get well exposed negatives; a Unix administrator will know where to look when a system starts to behave oddly; a nursery nurse will know when a child needs attention and when they can be left.

I wonder what the results would look like if you wrote 'bias check questions' within a knowledge domain and some outside the domain and compared the scores of a group of practitioners and a group of non-practitioners, sort of a four way table.


I'm on the same page. IQ is a property of the genetic dice roll, not something that a person earns. Tangible results-based measurements seem more appropriate. Pure intellect is the raw material & needs to be refined/applied to be useful.

edit: grammar


The rest of your personality traits that allow you to accomplish great things (diligence, perseverance, focus, empathy, etc), as well as external factors such as being born at the right place and the right time, are also arguably a genetic / environmental dice roll.


I'd argue all of those areas you list are far more likely to be improved over the course of a lifetime than sheer intelligence. Pure intellect is pretty much set at birth, or at least the ability to improve it isn't statistically significant. I'd argue that genetic dice roll and environmental dice roll are quite different things as well. You have FAR more ability to change your environmental situation than your genetic one. Does everyone have an equal chance to alter their environment? No, but who said life is fair?


The human brain has enormous capacity to develop in a wide variety of areas. Most of these areas are not measured in tests like the IQ tests. IQ tests mainly measure those brain functions we find beneficial in modern western society.

IQ tests, SATs, ACTs and other standardized test like them are improperly named. They should be called "Tests That Predict Success or Failure in the School System From Which the Questions Have Been Derived."


SAT originally stood for Scholastic Aptitude Test...so it seems properly named, even by your own standards. Similarly, ACT stands for American College Testing which doesn't have anything to do with intelligence.

You seem to have a pretty serious ax to grind with standardized testing.


The SAT was created in 1928 with the intention of measuring a student's aptitude, meaning that the test measured an innate ability, rather than knowledge acquired through schooling. Today, the test administered by the College Board is still called SAT, but the name is just an acronym, with the letters no longer standing for anything. According to the College Board, the SAT now does not measure any innate ability.

The SAT and ACT simply provide a common yardstick for comparing grades at different high schools. Educators use them as a way to judge an A at one school or from one teacher versus an A at another. Nothing more. At best the SAT is said to predict freshman year grades in college (somewhat).


I'm not debating what they do or where their name comes from. You suggested they needed to be renamed (presumably because they imply they test intelligence). They are (or were) named appropriately.


If I'm born into a millionaire family, I have more purchasing power than someone born into a poor family in Ukraine. This is a result of the genetic dice roll. Following your logic, does this mean Armani should start pricing their clothes adjusting for parental affluence and/or by how many rungs of the social ladder a person has climbed in their life?


That is the result of a environmental/social dice roll. Not even close to the same thing.


If you'd rather not just accept your current level of cognitive bias, the web site Less Wrong has a bunch of articles by and for people trying to become less wrong about things. Anecdotally, I've noticed that people I know via the Less Wrong community tend to be decidedly less full of crap than average, so it seems to work. For example, here's a series of articles on the subject of avoiding excessive attachment to false beliefs, which I found to be generally entertaining and insightful:

http://wiki.lesswrong.com/wiki/How_To_Actually_Change_Your_M...

Any of those articles are a good place to start, so don't be intimidated by the amount of stuff there.


"Logic is the beginning of wisdom, not the end" - Spock

Anecdotally I found that the Less Wrong community tends to be decidedly more full of crap than average. In the same vein as spiritual materialism[1], many people that engage in a bias witch-hunt seem to be falling prey to "logical materialism", where the whole exercise turns into people deluding themselves into thinking they're somehow "better" than others because they're less full of crap than average.

It's good to know thyself, but it's no use if your knowledge isn't tempered by wisdom, and you're not going to get that by reading blog posts about cognitive biases online, no matter how good the posts.

[1] http://en.wikipedia.org/wiki/Spiritual_materialism


It's nice to know I'm not the only one who thinks this. I think it's really telling that a community which is ostensibly concerned with the science of achieving desires spends so much time focused on the "problem" of akrasia.

It's also heartbreaking to see intelligent people getting so excited about ideas like cryonics and personality uploading. I mean, they're interesting things to talk about, but a lot of people on LW seem to actually think they might get to live forever. It's kinda sad.


> It's also heartbreaking to see intelligent people getting so excited about ideas like cryonics and personality uploading. I mean, they're interesting things to talk about, but a lot of people on LW seem to actually think they might get to live forever.

I wouldn't totally dismiss sci-fi concepts like brain uploading. But then I'm not totally sold on their viability either.

One of the things I always thought was interesting about cryonics was data integrity. It could be a while before you hit the singularity. (Or whatever it is you think will wake you up.) Even with liquid nitrogen I doubt your brain can be 100% preserved. So let's say hypothetically you get yourself some proper Ray Kurzweil recursively improving A.I. And as a common courtesy it decides to revive the Alcor people. If you have 99% of someone's brain image and use statistical inference like Bayes' theorem to fill in the rest, is it still the same person when you wake them up? How about 99.99% of their brain? 99.999999? (Which brings us back to the semantics of labels and reductionism vs. holism.)

People who think they'll live forever have huge logic fails on their hands. Ignoring the heat death of the universe as an obvious one. [0] In fact every time I think of the whole business Isaac Asimov's The Last Question comes to mind. [1]

[0]: (http://en.wikipedia.org/wiki/Heat_death_of_the_universe)

[1]: (http://www.multivax.com/last_question.html)


> ...is it still the same person when you wake them up?

This is pretty much where I land on this. Given any empirical theory of consciousness, it's never going to be "I get to live in the computer," just "I die, but a copy of my brain lives in the computer." And it's pretty hard to draw a bright line between that and "I die, my brain slowly decays for thousands of years, then it's surgically reconstructed and awoken." Still feels like death to me.

Of course, to your transhumanist theorist, there's probably just as much connection between me and the computer as there is between me in the night and me in the morning; it's all just an illusion created by a persistent brain state. But that doesn't help either, because now you're describing a kind of immortality -- "This exact stream of continuous consciousness is a dead end, but something very like it will continue to be and think of itself as part of me and that gives me and it some comfort about the whole thing" -- which the general public has been achieving quite successfully for some time in the form of procreation.

In fact, since every living organism represents a terminal link in a chain of unbroken life dating back to the first self-replicating molecule, it might be quite reasonable to say that we have all been alive for billions of years at least, we just don't remember most of it. But this is changing: I can go on Wikipedia today and recover the memories of our culture dating back for most of its existence.

Naturally, they grow vaguer the further they go back in time, as memories do; but for events that take place today, we have a record which far exceeds human memory in accuracy and exactness of fact, and which will very soon be competitive with it in emotional effect. It is quite realistic for me to expect to create a record of my life which has as much effect on my descendants in a hundred years as my own memories of today would have on me were I to live that long.

So I think it's possible the transhumanists missed the boat. Or rather, they're on the boat already and just don't realize it. The human macro-organism does, starting from now, seem to stand a decent chance of living to see the heat death of the universe.


transhumanists == a bunch of people who are unhappy with their bodies.

I don't mind it though as much -- I am much more disturbed by the prospect of a tyrannical or even "Friendly" AI some of them seem to be fond of.

Ignoring the implications of an AI that may be required to support a transhuman, I believe that one could draw up a very pragmatic argument why transhumanism could be useful even to us, embodied souls. What you described as inter-generational memory via things like history books and Wikipedia is good but not perfect (the classic example is that history gets written by the victors). A transhuman living for thousands of years would potentially bring a fresh perspective to the table, even if that perspective was imperfect too. Just the way both a free market and an ecosystem can benefit from a diversity in their pool of ideas/genes, so can humans.


>I am much more disturbed by the prospect of a tyrannical or even "Friendly" AI some of them seem to be fond of.

Same. Even if such a thing is possible, the probability that it's done right the first time is close to nil. And since a recursively improving singularity A.I can be assumed to irreversibly take control of the balance of power, it's not really something that you could afford to screw up.

Of course, Yudkowsky (To the extent that he's doing anything.) seems to be working off the assumption that if he doesn't do it someone else will.


I don't have a big problem with their ideas about AI just because they're so disconnected with reality. I mean, if we go another thousand years without an apocalypse, it seems more or less bound to happen, so in the abstract it is something that should be investigated; however, it's so far off now that I'd lay serious money that all of their theorizing will be pretty much useless when the first implementations come around. In fact, since I've read and understood the Basilisk and said so publicly, you could fairly say I'm willing to stake my life on that :) I suppose I can only have faith that, when it comes about, people will do their best to make it all work out OK.

The argument I'm making about memory is that when communication has become advanced enough, you can't make a clear distinction between inter-generational memory and meat-memory over a significantly long lifespan. People change over time; give it enough time and you're as different from yourself now as your great grandchildren will be.

I don't think we need long-lived human beings or human personality constructs to gain the societal advantages of "I remember when..." It's feasible today for a person to record and archive audiovisual, geospatial and limited haptic data of their entire life experience, beginning to end. We can't record your thoughts, but if they're important you can write them down. I'd also wager that we'll see almost fully convincing sensory recording, which is a plain prerequisite for uploading, well before any life-extension technology which deserves the title of immortality. It would then be unrealistic for your descendants to say that they remember the things that happened to you only because these recordings would be far superior to memory.

Of course, the only issue is that we've had this sort of thing for a good while now, and it turns out we just aren't that interested in the things that happened a long time ago, just like, aside from the highlights, I don't care that much about what happened to me ten years ago.

Ten years ago, of course, I thought that everything that was happening to me was quite important.[0] That's why I label this idea of immortality "greedy"; it represents the whim of a brain state at the present moment to continue to influence the world long after it has become irrelevant. Just look at the current state of US politics to see where that gets us. (I've never seen a transhumanist argue that every transient state should be preserved in perpetuum, but I'd be curious to know what they tend to think about it.)

The point being that if uploading constitutes a form of immortality, so does having kids; the same theory of consciousness underlies both.

[0] This is a bit of a tangent, but I think this is (most of) the reason that burial rituals are one of the cornerstones of human society. Obviously it doesn't matter to the dead person what happens to them, but it is crucially important for us to have a say in what happens to us; we hope that, if we respect our parents' wishes after they die, our children will respect ours. And we take this so seriously that, in fact, they do.

I suppose I consider transhumanism, especially cryonics and uploading, to be a very highly developed burial practice. If it is the wish of a dying man to have his brain frozen in nitrogen, I will respect his wish, and even humor his beliefs about what that might mean. But I don't believe it means any more in reality than if we stuck him in the ground with everyone else.

And yes, I recognize the irony in writing this much about something I think is silly to spend time thinking about :)


I completely agree with you that having kids is the best way of achieving immortality, and the one patently designed for us, but I'd still love to talk to someone who grew up among the Ancient Greeks. Yes, people change with time, but some memories stay. I would be plain curious to find out which ones do. Reading (perhaps less so with watching recordings) doesn't quite give you that information, as evidenced by the fact that there are plenty of scholars out there who read a lot of material yet who utterly disagree with each other on what the Ancient Greeks (or Hebrews) were really like.


> actually think they might get to live forever

And they aren't horrified at the prospect of that being possible?

Not much thinking going on there, I suppose.


Eliezer Yudkowsky's younger brother died in 2004 and it inspired a lengthy email thread about the subject.[0]

I expect it is not so much about wishing not to die, which even a transhumanist must admit is at least no worse than living, but wishing not to have loved ones die. Truly tragic.

[0] http://yudkowsky.net/other/yehuda


Why would you be horrified at the prospect of living?

I suspect it is you that needs to put a bit more thought into this matter.


Oh wow! A downvote! So, the message I'm getting is that a) it is appropriate to be horrified at the prospect of living; and furthermore b) asking why is inappropriate...?

This is seriously weirding me out.


The message I've taken from this entire subthread is "try to laugh rather than shake my head." The behavior makes a lot of sense if I frame it in status signaling (which helps make sense of so much that I suspect it of being too broad a framework). My own comment on the topic of living is: so it may be physically impossible to live forever, I think shooting for even a "mere" 200 years is doable and would be fucking awesome. At least we're not dogs, they get less than two decades.


"... more full of crap than average..."

Than average what?

I often lurk on LessWrong, and post there very occasionally. I find it to be a very rich source of original ideas, some of which are truly profound. YMMV, of course.


As a frequent LW contributor, I agree that Less Wrong users can be arrogant in a way that's counterproductive. I'm actually planning to write a series of posts presenting a sophisticated argument for this at some point.

I'm curious if you think there is anything else we could stand to work on. I interpreted your use of the word "wisdom" to mean a lack of arrogance, but if you were using it to mean other stuff as well I'd love to know.

(I'm a fan of yours, BTW.)


Anecdotally I've seen some pretty good articles on lesswrong, and some not so good ones.

But the idea "Hey, want to be more rational? Join our community" gives me the willies. If you want to be more rational, then joining a groupthink-ish community is the last thing you should want to do.


Reminds me of my Mensa membership (hey, at 16 you're young and impressionable). Quit that ghetto a week into reading their mailing lists. Imagine the complete opposite of HN.


Given that everyone is wrong some of the time, wisdom suggests that being involved in both communities will lead to being more right than either of them.


Unfortunately, sometimes it's those who are the most logical, or think of themselves as most logical, that fall into the most glaring of logical traps: http://rationalwiki.org/wiki/LessWrong (specifically the basilisk drama)


Actually, one of the coolest things, to me, about LessWrong, is the fact that the community as a whole is arguably more rational than the founder. Eliezer's own LessWrong comments have been downvoted many times, and a number of those were during the "basilisk" debacle. Many of the site's regulars were on the side of sanity in that incident.

So I think it's fair to tar Eliezer himself for that incident, but I don't think it's a good indictment of the site's community as a whole.


> the "basilisk" debacle

Hadn't heard of this, just read the RationalWiki article-- and it is fabulous. LW literally derived religion and then took that seriously enough to cover it up.

That's fantastic.


That was fascinating. "Reading the sequences" also gives off this vibe that feels like ordering Scientology texts.


Much of the practice of Scientology is based on theater exercises designed to do exactly the same thing as LessWrong. The similarities are unsurprising.

Personally, I have yet to see any evidence suggesting rationality is even desirable. The most irrational people, selfless, story-oriented, tolerant of multiple conflicting subjective "realities", interested in the feelings and passions of people around them no matter what they are, are also those I most enjoy interacting with. Everyone I have met who strives for rationality is at least a bit of a prick.


> Personally, I have yet to see any evidence suggesting rationality is even desirable

Interesting idea. I think a certain amount of rationality is desirable -- people of below median or even near-median rationality go round making really stupid decisions which screw up their lives. On the other hand, once you've stopped hiding from imaginary demons, buying magnetic charm bracelets and drinking venti caramel frappucinos, further effort in becoming more rational may be severely diminishing returns.

How would it really help me if I were more rational and less subject to cognitive biases? I don't think it would help me much in making my day-to-day decisions. I honestly don't think it would help me in my work, either. It might well help in tackling really, really difficult questions where it's extremely difficult to disentangle your own feelings from the correct answers -- things like "What is the probability that humans will one day achieve immortality", or "What is the fairest possible tax system?" But would answering those questions actually enhance my life? Humans will achieve immortality, or not, regardless of whether I correctly predict the probability circa 2012, and even if I did come up with the fairest possible tax system I have no chance of actually getting it implemented, so it would just cause me frustration.

The people who did really great things in history -- whoever you might choose as your examples -- did they achieve it by being significantly more rational than everyone else? Not really, no. They did it by achieving some baseline level of rationality and then being extremely good at other stuff.


> The people who did really great things in history -- whoever you might choose as your examples -- did they achieve it by being significantly more rational than everyone else? Not really, no.

Observational bias. Rationality of thought process in non-technical situations is rarely externalized; unless you talk of scientists, you're highly unlikely to remark on how highly rational he is being. In fact, the only way I can come up with to make such a statement fit in literary fashion is when you're making a quip on someone:

"It was highly rational of Nixon to start the Vietnam War."


"It was highly rational of Nixon to start the Vietnam War."

Either you're making some deep meta-quip that I don't get, or...


Wait a fucking second. Did you say theatre exercises?

There's a group of people doing these theatre exercises; they rent a hall down the corridor from me periodically. I've always known there was something really fishy about them. Are you saying they might be scientologists? Because that would really fit the group's MO.


The difference between scientologists and theater majors is that scientologists charge extra and add aliens.


I guess we've reached the nesting limit. The people in question are trying to hang sociology and group theory onto those theatre exercises. The "tutors" are, well, let's just say really odd. How can I find out more about those original theatre exercises?


I'm thinking of something like http://www.amazon.com/112-Acting-Games-Comprehensive-Develop... It's not perfect coverage (I mostly encountered the exercises I recognized first-hand from teachers who had learned them from other teachers), but I think that book does describe some of the overlap. You could also look at the work of Keith Johnstone, especially his chapter in Impro on Mask and Trance. For Scientology, http://www.xenu.net

I was being flip before: there are real differences, primarily in the role of teachers (in theater they should never hold real power over you) and suppression vs. expression of emotion (theater exercises are often about how to feel more, whereas scientology is about brainwashing into feeling less). However, self-hypnosis, presences and detailed mental examinations are shared by both.


As does the pervasive use of acronyms. Not to harp on it too much, since I don't think they're bad people, but there's a cultish vibe for sure.


I'm reading the book Priceless by William Poundstone which discusses the work of Kahneman and Tversky (among others) in great detail as it relates to the psychology of pricing (excellent read, btw).

This is slightly off-topic from what the article is saying, but mildly on-topic, and I'd love to hear HN's opinion on this.

One of the problems presented in Priceless is:

Would you rather have $3,000 as a sure thing, or an 80% chance of $4,000 and a 20% chance of nothing?

versus:

Would you rather have a $3,000 loss as a sure thing, or an 80% chance of losing $4,000 and a 20% chance of losing nothing?

The erroneous path that most people take, in the eyes of these researchers, is that they set their base reference point at the sure thing, ie. they say "well the $3,000 is a sure thing so I can assume I have it".

If you do that, then your answers are different:

In the first instance you keep the $3,000 (because it becomes an 80% chance of winning $1,000 versus a 20% chance of losing $3,000).

In the second instance you take the gamble (because it becomes an 80% chance of losing $1,000 versus a 20% chance of winning $3,000).

However if you don't "rebase" your reference point, then you reason on raw expected values in both cases: you'd take the 80% chance of $4,000 because it's "worth" $3,200 (more than the sure $3,000), and you'd take the sure $3,000 loss because the gamble is "worth" a $3,200 loss.
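A quick sketch of those raw expected values (the `expected_value` helper is just something I'm defining here, not anything from the book):

```python
# Raw expected values of the four options, with no "rebasing" of the
# reference point: just probability-weighted payoffs.
def expected_value(outcomes):
    # outcomes: iterable of (probability, payoff) pairs
    return sum(p * x for p, x in outcomes)

sure_gain  = expected_value([(1.0, 3000)])              # 3000.0
risky_gain = expected_value([(0.8, 4000), (0.2, 0)])    # ~3200.0
sure_loss  = expected_value([(1.0, -3000)])             # -3000.0
risky_loss = expected_value([(0.8, -4000), (0.2, 0)])   # ~-3200.0
```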

As much as I realise what they're saying and they say it's statistically incorrect to do this, it really seems to me the most sensible way to make the decisions (which is, I guess, exactly what they're saying right? I'm human, ergo fallible to this kind of illusion).

The thing that kills me is this: if this is a one time thing, I'd rather be sure of the $3,000. If I'm buying and selling these bets all day, then sure I should take the $4,000 at 80% because even if I lose this round, the next time I take the bet will make up for it (ie. law of large numbers).

But what this problem doesn't address is how often I get this opportunity. Depending on my circumstances, $3,000 could be a life-changing amount, i.e. whether I "win" $3,000 or $4,000, my circumstances are essentially the same, so I should always go for the sure thing. Whether I lose $3,000 or $4,000, I'm equally screwed, so I should take the risk and gamble on losing nothing.

What am I missing?


The issue is that the first $1000 gained is worth more than the second $1000. Similarly, the first $1000 lost hurts more than the second $1000.

On the gain side, the different values should be self-evident. I think we can all intuitively understand why making $100,000 instead of $50,000 is a bigger deal than $150,000 instead of $100,000. So let's assign utility to the numbers:

  first $1000 = 1
  second $1000 = .8
  third = .6
  fourth = .4

Option #1 gives us 2.4 whereas option #2 gives us 2.24.

I'd argue that we can similarly rank the losses the same:

  first $1000 lost = -1
  second = -.8
  etc.

which gives us -2.4 vs. -2.24, meaning we should obviously take option #2.

Now the interesting question is why we can assign similar yet negative values for the losses. I'll give two examples that might show why this is true.

First, consider living paycheck to paycheck. I only have $500 of buffer. In this case, while losing $2000 instead of $1000 is worse, it's not worse by a lot because either way I can't afford rent and I'm evicted.

As a second example, consider bankruptcy. If we take losses to high values, eventually each additional loss doesn't subtract anything from my "happiness". I've already hit the bankruptcy point and nothing worse can happen.

These are of course the two extremes, but I think it's easy to complete the spectrum and show that for any $x the first loss of $x hurts more than the second loss of $x.
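That utility-weighted comparison can be sketched numerically; the 1/.8/.6/.4 weights are the illustrative values from above, not a canonical utility function:

```python
# Diminishing marginal utility: each successive $1000 is worth less,
# using the illustrative weights from the comment above.
weights = [1.0, 0.8, 0.6, 0.4]       # utility of the 1st..4th $1000

def utility(thousands):
    # Total utility of gaining (or, if negative, losing) that many $1000s;
    # losses are assumed to mirror gains, as argued above.
    sign = 1 if thousands >= 0 else -1
    return sign * sum(weights[:abs(thousands)])

# Gains: sure $3000 vs. an 80% chance of $4000
sure   = utility(3)           # 2.4
gamble = 0.8 * utility(4)     # 2.24 -> the sure thing wins

# Losses: sure -$3000 vs. an 80% chance of -$4000
sure_l   = utility(-3)        # -2.4
gamble_l = 0.8 * utility(-4)  # -2.24 -> the gamble hurts less
```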


I'm trying to follow.

So you are saying that with the following scenario:

1. +3000 at P100 or +4000 at P80

2. -3000 at P100 or -4000 at P80

Depending on the context, the answer is basically universally different if you say "1 then 2" as opposed to "just 1" or "just 2"?


The issue is that the answers are (yes basically) universally different regardless of whether both options are provided, but that statistically speaking they are identical.

The foible of humanity that makes the answers different is that we first "rebase" our expectations to the 100% chance, rather than considering our current position to be the baseline.


Yes, statistically they are identical. However, the fact that people consider the significance of the objects and the context of the decision isn't a fault of human intellect.

Really, stating otherwise is a fault of human analysis. It's a game theory problem, I think (likely a variation of the stag hunt: http://en.wikipedia.org/wiki/Stag_hunt). It's a paradox only because armchair mathematical intuition fails to explain it.


No, because the stag hunt family of problems is about trust in other people; the situation in the OP is not. It's about risk aversion. Whether or not that is a 'fault of human intellect' is a matter of definition and up for debate; it's not as clear-cut as you make it out to be.


Ok thanks. I am no specialist in this by any means. But I do know enough to be cautious when dealing with these types of problems. Glad you clarified.


This explains what happens when you look in your git repository for who introduced a bug...

When you find it and it's by someone else, it was obviously a stupid, idiotic error that you would never make.

When you find it and it's your own, it was obviously an understandable mistake that anybody could have made.

Particularly if you consider yourself a great coder.

:)


You're more likely to go back and notice the stupid broken code you wrote, because it disproportionately comes back to bite you later. The good code just works, so you don't notice it as much. Classic selection bias.


I've located incredibly stupid bugs that I wrote enough times to know that I am not a great coder.


Got the ball and bat one right, and the lily pad one. I must not be as smart as I hoped :-(

I think it comes down to having a value system where you'd rather be wrong and corrected (even if you have to do it yourself), as opposed to always projecting yourself as "perfect". Once you accept you aren't perfect, it's easier to work towards perfecting what you've got.


My thought process was approximately, "Ten cents! Wait, that's really stupid. Hold on. Mental algebra. Oh. Five cents."

I catch myself like this all the time. It's a little depressing.


Ah, that's the author flattering the reader. If the author had picked a genuinely tricky example for the first one it would have turned many readers off.

Of course a question like the old bat and ball one is ridiculously simple, after you've been warned that many people get it wrong and hence that you should probably stop and think for a few seconds before blurting out the first answer that pops into your head. Do it without that warning and it's easier to get it wrong.


When I first encountered these, without the "it's going to be tricky" priming, I must admit I got the bat and ball one wrong until I revisited it. The lily pad one I got right with no deep thinking - my explanation is that experience working with bits primed me for treating "doubles" more correctly. That's post hoc and anecdotal, of course, so take it for whatever, but I thought it interesting...
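The bits intuition can be made literal: doubling is a left shift, so "half the lake" is exactly one shift, i.e. one day, earlier (a toy sketch):

```python
# If the patch doubles daily and fills the lake on day 48, then measuring
# the patch in arbitrary units that start at 1:
full_lake = 1 << 48   # patch size on day 48
half_lake = 1 << 47   # patch size one day (one shift) earlier

# Half the lake is one doubling before the full lake, i.e. day 47.
assert half_lake * 2 == full_lake
```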


This. I think many people don't think they are susceptible to these biases because every time they encounter one, it is in the context of reading about biases and thus they are primed to be on the lookout for trickiness.

I guess "many people" includes me - I always thought I was good at these types of questions, maybe I'm not :(


I sort of realized I was already primed for "gotchas"! Of course I can't sing my own praises without showing some bias, since the whole article was about bias.


Excellent point. Also, if you adopt that value system, you may be just as susceptible to making errors, but you will be slower to trust your own thoughts. In terms of the examples given, you may get the ball and bat thing wrong intuitively, but if you know not to trust your intuitive thought, you'll say "hang on, is that correct?" and switch to slow/analytical thinking to arrive at the correct answer. Maybe. Hopefully.


Well, at least people are collectively smarter today compared to 100 years ago - the percentage of people who can answer those questions correctly has gone up considerably :-)...

Also, I just hate these kinds of questions - they've always been used to prove that I'm stupid by those who knew the answers, and they don't solve anything useful - I need the problem to solve something I care about in order for my brain to fully focus on it and "do the math"...


These questions are not designed to demonstrate intelligence if you get them right. They are designed to show how our cognitive biases trip us up, often without us even noticing. You're not stupid if you get them wrong, you're completely normal.


FYI the tallest tree in the world is ~ 116 m or 379 feet.

http://en.wikipedia.org/wiki/Hyperion_(tree)


That question sort of bothered me - I have no idea how big a tree is, and even if I saw one, I have no reference for how many feet it is without doing some fairly exhaustive mathematics (and at the scale of "largest redwood", I'd likely be wrong). Given some information about redwood trees, of course people are going to use that information in the subsequent guess. They're not going to imagine a redwood tree, then imagine a building next to it, then count the floors and estimate the height. Or estimate the girth, then guess a height/girth ratio that makes sense given the composition of a tree, and then estimate the height. They're lazy.

If I asked you, "Will a frooble fit in my pocket/the Empire State Building?", and then asked you to estimate the average size of a frooble, you'd certainly take into account my earlier question.

See http://lesswrong.com/lw/k3/priming_and_contamination/ for some better examples. IMO, the more insidious form of anchoring is contamination (vs sliding adjustment).


For all of the high and mightiness of this article, this bugged me:

In a lake, there is a patch of lily pads. Every day, the patch doubles in size. If it takes 48 days for the patch to cover the entire lake, how long would it take for the patch to cover half of the lake?

If a lilypad is 20 square inches (which is probably conservative), and you started with 1 lilypad, after 48 days of doubling it would cover 1.4 million square miles. That is 44 times the surface area of Lake Superior.

I get the point of the question, but if you're trying to play "gotcha" on people, at least ask a reasonable question.
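The arithmetic in that complaint checks out; a quick sketch using standard unit conversions (the Lake Superior figure is approximate):

```python
# Area after 48 doublings, starting from a single 20-square-inch lily pad.
PAD_SQ_IN = 20
INCHES_PER_MILE = 63360
SQ_IN_PER_SQ_MILE = INCHES_PER_MILE ** 2        # ~4.01e9

area_sq_in = PAD_SQ_IN * 2 ** 48
area_sq_miles = area_sq_in / SQ_IN_PER_SQ_MILE  # ~1.4 million sq miles

LAKE_SUPERIOR_SQ_MILES = 31_700                 # approximate surface area
ratio = area_sq_miles / LAKE_SUPERIOR_SQ_MILES  # ~44 Lake Superiors
```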


It was 20 days when I was a kid.

Inflation, I suppose.


So change the 48 to 15 or whatever, it makes absolutely no difference. Come on.


My takeaway is that smart people are in fact fairly dumb, in other words even fairly bright specimens of homo sapiens make stupid mistakes and irrational decisions quite often because of this shortcutting.

I also think that on the other hand those types of shortcuts are actually probably very useful aspects of our human intelligence.

I think that within 50 years or so we will see new species/upgraded humans or AIs that actually don't have those problems, because they will have built-in checks and alternative types of intelligence that rely on those shortcuts less.


I highly recommend reading "Predictably Irrational" by Dan Ariely if you thought this article was interesting. It covers exactly this subject and makes for fascinating reading. I picked up the book about a week ago via some other post linked here on HN and I'm loving it.

http://www.amazon.com/Predictably-Irrational-Revised-Expande...


Got the lily pad one. I still cannot believe I said 10 cents on the first one with a completely non-sarcastic chuckle.

This article reminds me of pg's reasons to have a co-founder to avoid being delusional. Better be proven wrong on the inside than on the outside.

edit: Although on second thought, I think this bias theory probably extends to organizations as well. Probably that's why big companies sometimes can't see the obvious which a startup does.



Well I got both those questions right by following the heuristic that the most obvious answer off the top of my head would not be the answer.

To my mind, on any test that is supposed to be hard, the appearance of an obvious answer triggers me to check for the proverbial trick question.

On the other hand, most brain puzzler type questions that get discussed on HN (for example interview questions at Google) I find to be damn hard. I can't imagine that "smart" people would do worse than "stupid" people on truly hard problems. I guess that is the area of bias being pointed to in the OP.


All of this is covered (much better) in Kahneman's 'Thinking, Fast and Slow'.


Here's a good mental test for the author:

When you're done with an article for the "Frontal Cortex" section, read it aloud to yourself and smack yourself in the head with a frozen herring for every time you use the word "we", "us" or "our" in your article. If you have a headache when you're done, burn the draft and rethink the whole thing, b/c your article obviously suffers from a "smug we" bias.


I don't see why, when smart people are trained to be lazy, researchers are surprised that smart people are lazy.


I am surprised. I thought cognitive traits had high correlation. I did not expect laziness would factor into it so much.


Just a couple of weeks ago we had an article about why smart people don't think of others as stupid (https://news.ycombinator.com/item?id=3984894), and now they're stupid themselves? I'm puzzled.


A careful reading will tell you that the article you refer to is normative not descriptive.


This finding wouldn’t surprise Kahneman, who admits in “Thinking, Fast and Slow” that his decades of groundbreaking research have failed to significantly improve his own mental performance. “My intuitive thinking is just as prone to overconfidence, extreme predictions, and the planning fallacy”—a tendency to underestimate how long it will take to complete a task—“as it was before I made a study of these issues,” he writes.

Doesn't Kahneman distinguish between intuitive and deliberate thinking? So it could be possible to think better by distrusting our intuitions and deliberating more, right?


Well, the ball one I've heard before, and the lilypad one is obvious if you've been exposed to biology. Is this not more a matter of education not being applied to real-world cases and relying on theoretical teaching?


What, exactly, does the lilypad one have to do with biology...?


Population growth follows an exponential curve.


A reference to mitosis, perhaps?


I think that's the point. Neither of these is hard to work out from first principles with nothing but very basic math. Memorising the answer is all well and good, but that's just another rule of thumb.


There is a simple cure for smart people being stupid: they can detect those errors and biases easily in other people's thinking, but not in their own, because introspection doesn't work. So the cure is to play as if you were an actor in a theater, that is, pretend that you are not yourself when you are thinking. You should imagine you are thinking as a known stupid person, and by miracle you get smart and not so stupid.


Easier said than done.


This isn't big news for me. It took me about 7 years to understand two courses of high school while finishing my master of science. If you ask me the right question or try to teach me just the right matter in just the right way, a donkey will get it sooner than I do, and I'm talking about possibly years sooner. I might just not see the problem or I might think the wrong way, I don't know. There are things I just don't get.


New Yorker articles (that get posted here, anyway) always have some sort of take on things that attempts to bring down the good. Same for The Atlantic. I'm not saying they're always false, but there is a certain kind of thing that these publications are interested in, and it's a kind of thing that makes me feel dirty---or as if they're trying to make me feel dirty. Anyone else noticed this?


What did you think of first when you read about the bat and the ball problem? Also, what's your background (e.g. CS, Maths etc.)? As someone who has a relatively strong background in maths, I quickly saw the outlines of a simple algebraic substitution problem. I'm quite interested in how people analyze problems, so I'd love to see how the HN community approached this.


I think this is more an issue of the English language. English is not a good way to speak math or logic. In the bat and ball question I (and, I'm guessing, everyone who got it wrong) mistakenly ignored the word "more". That word represents an operator and is therefore crucial to the question, but it is easy to overlook in English.


Possibly. I immediately substituted one dollar plus ball for bat, then saw I had 2 balls and a dollar making up $1.10, or 5 cents a ball. I honestly didn't pull out the algorithm sheet; it just sort of worked itself out in my head before I had time to think: what are they looking for...

Not to say I don't have biases, just not for word-number problems.

Poker on the other hand is another matter, I still chase straights and flushes in games with wild-cards, even though I know those hands are almost worthless.


I answered the question correctly, which I'm sure the majority here probably did too. When someone asks a simple mathematical question, I always seem to give it more thought since I always know it must be a dodgy question. 4 years ago, I'd probably have answered the question incorrectly. But the baseball question is an obvious mindfuck.


So because I'm suffering from a deep case of the derp today, how are the first guess answers to those questions wrong?


I hate myself. I can never answer questions like this, and I exhaust myself from trying so hard.

Throughout school, I was absolutely HOPELESS. I couldn't do anything when it came to basic math, except some algebra. These questions (even faced with the formula to solve it) still bugger my head. I tried to read books, get tutors, do everything to better myself (who ever heard of a computer guy who couldn't do math!).

I can easily write algorithms, do algebra, write complex programs, do anything on a computer but faced with a question like this my brain shuts down very quickly.

I stumbled across this one day... still not sure if I believe it's a thing... and that I have it: http://en.wikipedia.org/wiki/Dyscalculia


Holy smokes, thank you, thank you, and thank you so much for posting this! I had no idea that this had a name called "Dyscalculia". I wonder if there is a way to medically verify if I actually have this condition. To this day, I cannot read an analog watch. I cannot do math in my head, although I do perfectly fine on paper. For even simple calculations, like 23 + 45, I simply cannot keep both numbers in my head. It is truly a curse. I actually physically get dizzy when I'm asked to perform computations. When I was in high school, I still played games like "Number Munchers", asking myself.. "What is wrong with me?", despite the fact that I had successfully made it to calculus and maintained a very high GPA.

It is amazing, then, that I somehow have managed to complete both an undergraduate and Master's degree in Electrical and Computer Engineering. I compensated for my lack of numerical ability by heavily relying on calculators throughout my college education. In fact, one of the reasons I pursued computers in the first place is because I recognized that I could survive, and hence, fake it, by offloading such "trivial" computations to a machine.

Many standardized tests, such as the GRE, don't allow calculators unfortunately.


It is a thing, and I have it.


May I ask how you were able to get it diagnosed? Are there specialists that one can see to identify this? What is the testing procedure?


I was diagnosed in college. My adviser suspected something was wrong and suggested I see a specialist. It's been more than 20 years, so I'm a little hazy on the details - but from what I remember, the tests mostly involved reasoning and abstraction.


If the ball cost $0.10, the bat would need to cost $1.10 (a dollar more than $0.10), which would mean that the pair together would cost $1.20, which we know isn't true. To get the right answer, you have to solve the simple algebra equation in Retric's sibling comment.

For the lily pads, the percentage of the lake covered doubles every day, so the lower bound percentages for the last few days look roughly like this: 12.5%, 25%, 50%, 100%. On the 24th day, you'd have to double in size 24 more times in order to fill the pond, rather than the once it'd take on the 47th day.


Here's how I broke it down.. (which is apparently wrong)

A bat and ball cost a dollar and ten cents.

  Bat + Bal = 1.10
The bat costs a dollar more than the ball.

  Bat = Bal + 1.00
How much does the ball cost?

  Bal = ??
So..

  Bat + Bal = 1.10
  Bat = Bal + 1.00
Therefore

  1.00 + Bal = 1.10
  Bal = .10
  1.00 + .10 = 1.10
This seems logical..

Aside: fuck that script that messes with your copypaste, and the same sentiment to sites that implement it


You're subbing in 1.00 for Bat when you should be subbing in 1.00 + Bal for Bat so that you have 1.00 + Bal + Bal = 1.10


Ahhhhhhh. I see now. Kind of funny how I completely misprocessed the "Bat costs 1.00 more than the ball" bit.

Always hated word problems in school :P


I have the opposite reaction when I see questions from these guys that go something like: how do people get this stuff wrong?

X + Y = 1.10; Y + 1 = X; (Y + 1) + Y = 1.10; 2Y = .10; Y = .05; X = 1.05

2^48 = x; 2^y = .5x; y = 47


Since most people don't get calculus, they use simple math to shortcut to answers that sound right.


Calculus? These problems require elementary school arithmetic or, being generous, basic algebra.


Think it through.

The first problem is operational and the second problem is change on a slope.

People use what they know to deal with problems, so facing these two they use basic math (subtraction and division), being ignorant of higher-level math concepts such as algebra or calculus. The answers they come up with appear right to their known level of logic.

If you were the type of person that got to learn about high-level math concepts, and are the studious type who double-checks answers, then these two problems are condescendingly seen as trivial.


I see it differently. The whole point of the article is that these problems affect smart people.

>For one thing, self-awareness was not particularly useful: as the scientists note, “people who were aware of their own biases were not better able to overcome them.” This finding wouldn’t surprise Kahneman, who admits in “Thinking, Fast and Slow” that his decades of groundbreaking research have failed to significantly improve his own mental performance. “My intuitive thinking is just as prone to overconfidence, extreme predictions, and the planning fallacy”—a tendency to underestimate how long it will take to complete a task—“as it was before I made a study of these issues,” he writes.

It has nothing to do with knowledge of higher-level mathematics, because these problems are easily solvable with arithmetic. Lacking calculus doesn't kill you on this problem. An intuitive gut feeling that you've already arrived at the right answer, plus laziness, is the source of confusion.

I've even read about that damn bat and ball problem and it STILL tripped me up this time. I could easily have double checked my answer, but I wanted to read the article. Even a child knowing nothing other than addition could get it right with a little bit of trial and error. I hope after admitting that you see that I don't find the problems condescendingly trivial.

Personally, I found the second problem much easier... probably because programmers have a better intuitive grasp of powers of 2. Bringing in slope is stretching it a bit. Working in reverse from the completely covered lake, it should be obvious that going back one day halves the lily pads. However, I could imagine how someone more familiar with linear processes would get the wrong intuitive result.

>The answers they come up with appears right to their known level of logic.

A studious habit sure, but checking your answer isn't a higher level math skill.
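The trial-and-error approach mentioned above really is a tiny search (a sketch, in cents):

```python
# Brute-force the bat-and-ball problem with nothing but addition:
# try every whole-cent price for the ball until the totals work out.
solutions = []
for ball in range(0, 111):      # candidate ball price, in cents
    bat = ball + 100            # the bat costs a dollar more
    if ball + bat == 110:       # together they cost $1.10
        solutions.append((ball, bat))

print(solutions)                # [(5, 105)]
```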


It has nothing to do with calculus, or math at all.

My natural reaction to the bat and ball problem was to parse the problem statement verbally and search for a plausible answer among the tokens. The algorithm retrieved "USD$1", and then a second background process took over and said "wait, that sounds a little bit too right". It might have taken me a full 5 seconds before I realized I had to switch to math_mode!!!


Where can I find more of these questions?


I studied "Choice & Behavior" at Penn -- the names Kahneman and Tversky were a common refrain. If you're looking to self-teach, my prof Jon Baron has a great course outline online: http://www.sas.upenn.edu/~baron/p153.html


do we all think about the same thing at the same time or does Jonah Lehrer read HN religiously?

i _just_ watched that talk a couple of days ago because it was posted here: http://news.ycombinator.com/item?id=4082308


Comments here reinforce the research.


These sorts of questions always put me into "hold on, think about it" mode, and statements like "Your first response is probably to take a shortcut" are simply not true. I'm actually more prone to over-thinking a problem than to giving a quick wrong answer.


If the lily pad patch started at a mere 1 square inch, on the 48th doubling the pond would have to be (check my math) about 70k square miles, or roughly 7 times the size of Lake Erie. Those lilies would be consuming a serious amount of CO2 during that last doubling!


Both those questions are trivial and I answered them correctly. This is in line with the article's conclusion: I don't consider myself very smart. I mean I had some moderate successes in my childhood at math competitions, I am a reasonably good programmer, but I am not very smart. I even failed at the on-site Google interview.

But here is the problem with the article: the people who I consider smarter than me (in the mathematical/IQ sense) also answer these kinds of questions correctly. This includes my friend working at Google, some researcher mathematicians who I know from math forums who won serious math competitions as children, etc... These questions are really, really trivial. The researcher mathematician guy doesn't even make mistakes on questions 10x more tricky or hard; it is scary how he makes no mistakes and thinks incredibly fast. Something seems to be wrong with this study.


I remember the SAT as being more about checking one's first reaction to a problem. They often try to trick you with the obvious answer. The GMAT and GRE were quite similar. I would often have to stop myself from taking shortcuts.


A bat and ball cost a dollar and ten cents. The bat costs a dollar more than the ball.

  bat + ball = 1.10
  bat = ball + 1.00
  2*bat + ball = ball + 1.10 + 1.00
  2*bat = 2.10
  bat = 1.05
  ball = 0.05
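The same substitution can be checked in a few lines of Python (a sketch; working in cents to avoid floating-point rounding, and the variable names are mine):

```python
# bat + ball = 110 cents, bat = ball + 100 cents.
# Substituting the second equation into the first:
# 2*ball + 100 = 110, so ball = 5 cents.
total_cents = 110
difference_cents = 100
ball = (total_cents - difference_cents) // 2
bat = ball + difference_cents
print(ball, bat)  # 5 105
```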


My thought process went, "a dollar is 90 cents more than 10 cents, so $1.05 is a dollar more than five cents".

I committed exactly the same fallacy as those in this study; I've just learned to check my work. On the other hand, I suspect the whole process of estimation and refinement was faster than writing out those equations.


I parsed that as:

    bat = 1.1
    ball = 1.1  
    bat = ball + 1
It confused the hell out of me.


I immediately switched to generic variables X and Y, rather than keeping the original names. It helps.


bat - ball = 1

1 - .1 = .9

1.05 - .05 = 1


I think that smart people are also prone to falling for headlines like this. The reality is much more complex.


The last line "The more we attempt to know ourselves, the less we actually understand." is worrying me a bit.


Because they spend all day on HN?


Taking more shortcuts as I get older too - a constant battle to stop and think...


English is a terrible language for formalism: News at 11.


Because they don't ask enough questions.


No, the research didn't show that "we do this" or "our approach is that" or "humans aren't rational" -- what the research showed is that the typical person does this or that.

A similar experiment where people draw the wrong conclusions is the Milgram experiment. Yes, most people are obedient to authority figures and do what they are told. But not everyone acts that way.

This research likes to sweep the best human beings under the rug, as if being virtuous is not something to try to emulate, but is something to hide. This explains why the majority of people act the way they do. Perhaps if they were taught that their "we're only human" vices are not the ideal to emulate, perhaps if the best that humanity had to offer were put forth as the ideal instead, then these lesser human beings who make up the majority would become what they might be and ought to be.


Animals are less irrational than humans. Children are less irrational than adults. Why do you assume rationality is better, rather than maladaptive?


I don't "assume", I induce. See Newton's rules of reasoning. Pay particular attention to Rule IV.

It is clear from many examples that rationality gives us the utmost ability to adapt, prosper, and survive over the long term. And there is no example that truly leads in the other direction. (There are many perverse definitions and applications of "rationality" that seem to trick some people into thinking it does lead in the contrary direction).


> I don't "assume", I induce. See Newton's rules of reasoning. Pay particular attention to Rule IV.

u so smaht


Your deranged sarcasm eloquently sums up what's wrong with our educational system: kids are taught that their opinions matter, regardless of how idiotic they are. Well the truth is, your uninformed opinion doesn't matter.


Now that's the kind of headline I'd give to my article if I wanted it to reach the top of the HN front page.


I notice this all the time, all over the place. It drives me nuts, to the point that I am now extremely skeptical of what we call "intelligence". Taleb's "The Black Swan" really opened my eyes to this. He talks a lot about how we reason in ways that do not correspond to reality.

I don't know what right is, but I know the way we currently think about intelligence is wrong.


Agreed. IQ is a terrible method of evaluation too. For instance, if you practice taking IQ tests, I guarantee you will get better at answering those questions. Even if you gauge only the first attempt, how do you know the test-taker's daily job wasn't very similar to those exercises? In fact, IQ tests are very much like building software to work on only one architecture and then hoping it runs on others. They really only judge speed and word/number pattern recognition, which encourages shortcuts and is hardly applicable to any practical situation.


I find the abundance of "See? Smart people are actually dumber than I am!" posts amusing.



