A lot of "test-taking" training basically consists of saving time by training away from full reasoning, in favor of cheap-and-good-enough heuristics. Furthermore, those heuristics are over-fitted to the particular problem types on standardized tests. I wonder how much of this study is actually measuring their ability to trigger test-taking instincts on problem types they're not designed for.
The test itself aside, I feel that in real life people often correlate speed with intelligence: the "smart" people we know are the ones able to come up with (usually) correct answers quickly. So it makes sense that these smart people would have a lot of heuristics that let them do so. To be clear, though, the causation here is more complex than the article implies. People who have heuristic strategies are characterized as smart, rather than the other way around.
When you're taking a test, you have the expectation that the problem was designed by a human who wants you to demonstrate a particular piece of knowledge. You look for subtle clues in the words that point to which core problem was in the mind of the person who wrote the question.
The experimental setup seems like it would catch out people who make the assumption that the problems were designed to test knowledge.
Despite a number of statements to the contrary in the various comments here, taking SAT scores as an informative correlate (proxy) of what psychologists call "general intelligence" is a procedure often found in the professional literature of psychology, with the warrant of studies specifically on that issue. Note that it is standard usage among psychologists to treat "general intelligence" as a term that basically equates with "scoring well on IQ tests and good proxies of IQ tests," which is why the submitted article has a point.
"Frey and Detterman (2004) showed that the SAT was correlated with measures of general intelligence .82 (.87 when corrected for nonlinearity)"
"Indeed, research suggests that SAT scores load highly on the first principal factor of a factor analysis of cognitive measures; a finding that strongly suggests that the SAT is g loaded (Frey & Detterman, 2004)."
"Furthermore, the SAT is largely a measure of general intelligence. Scores on the SAT correlate very highly with scores on standardized tests of intelligence, and like IQ scores, are stable across time and not easily increased through training, coaching or practice."
"Numeracy’s effects can be examined when controlling for other proxies of general intelligence (e.g., SAT scores; Stanovich & West, 2008)."
As I have heard the issue discussed in the local "journal club" I participate in with professors and graduate students of psychology who focus on human behavioral genetics (including the genetics of IQ), one thing that makes the SAT a very good proxy of general intelligence is that its item content is disclosed (in released previous tests that can be used as practice tests). Because of that, almost the only thing separating one test-taker from another is the ability to consistently get all of the various items correct, which certainly takes cognitive strengths.
I still think Stanovich's point is interesting: there are very strong correlations between IQ scores and SAT scores and some of what everyone regards as "smart" behavior (which psychologists by convention call "general intelligence"), while there are still other kinds of tests with plainly indisputable right answers that high-IQ people manage to muff.
(Disclosure: I enjoy this kind of research discussion partly because I am acquainted with one large group of high-IQ young people
and am interested in how such young people develop over the course of life.)
Major thanks to Cosmo Shalizi who opened my eyes to these issues.
Also, the SAT grew from IQ tests, so it really isn't surprising that the two measures are correlated, given that the questions for SAT were probably kept on the test because they correlated with IQ tests.
That being said, the work of Kahneman (and the new paper, which I should probably read instead of commenting on HN) is pretty rock solid as far as it goes. It is worth noting that there is a converse position in this field, that of Gerd Gigerenzer, who argues that these heuristics exist because they are useful, and only go wrong in artificial situations.
Personally, I incline far more towards the views of Gigerenzer than those of Kahneman, especially given that Gigerenzer and colleagues attempt to model the mind computationally given their theories, which is something psychology could do with more of.
Full disclosure: I'm a psychologist who's very frustrated with the lack of statistical sophistication and interpretation in my field.
Suppose Y is only weakly correlated with X. Y might still be used to show that Z and X are correlated, if we can show that Y and Z are not otherwise correlated.
If only some smart people score high on the SAT, and most who score high on the SAT are easily fooled by certain questions, it might indicate that most smart people are easily fooled by these questions, but you would want to engage in further investigation to be sure...
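A quick simulation (all numbers invented) of why a noisy proxy can still be good enough for this kind of science: if the proxy relates to the outcome only through the latent trait, the correlation you measure through the proxy is just an attenuated version of the real one.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical latent trait X (say, "general intelligence"),
# a noisy proxy Y of it (an SAT-like score), and an
# outcome Z that depends only on X.
x = rng.normal(size=n)
y = x + rng.normal(scale=0.7, size=n)       # proxy = trait + measurement noise
z = 0.5 * x + rng.normal(scale=1.0, size=n)

r_xz = np.corrcoef(x, z)[0, 1]
r_xy = np.corrcoef(x, y)[0, 1]
r_yz = np.corrcoef(y, z)[0, 1]

# If Y relates to Z only through X, then corr(Y, Z) is approximately
# corr(X, Y) * corr(X, Z): the proxy still detects the X-Z
# relationship, just attenuated by its own noise.
print(r_yz, r_xy * r_xz)
```

So a weak-ish proxy dilutes the signal but does not manufacture a spurious one, which is the point about poor correlations still being enlightening.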
In general I would be reasonably satisfied with a definition of intelligence which described how usefully one is able to employ intuitive cognition.
Systems and rulesets can be a useful method to structure memory and reason, but being bound by them is detrimental to long-term correctness and comprehension. Being able to write syntactically-correct programs is useful, but neither necessary nor sufficient to be an excellent programmer. A meta-understanding of the effect such syntax has on the program at hand is more useful than always correctly following it.
"...test-taking" training basically consists of saving time by training away from full reasoning...
A quote from one of my professors:
"Tests test what tests test."
Tests end up serving as an observable criterion for identifying intelligence. As long as their proper function is understood, they are useful. They are not necessarily helpful in identifying who will be most successful in business, the political arena, etc. Because of the increased attention on testing, most reasonably informed people develop test-taking ability as a somewhat independent skill.
The SAT, ACT, IQ tests, and all standardized tests like them are socially constructed concepts that ATTEMPT a method of measuring intelligence. Intelligence (in the real world) reaches far beyond one's ability to answer multiple-choice reading comprehension, basic math, and writing questions. Not to mention that problem solving in the real world has no time constraints.
Beethoven would not have gotten a perfect score on his SAT's. However, we all can attest to his innovation, creativity and musical genius. How can a multiple choice test measure the creative abilities of people like Sir Richard Branson, Steve Jobs, or Pablo Picasso?
The idea that "smarter people... were slightly more vulnerable to common mental mistakes" is a nonsensical conclusion. These findings are completely worthless.
No one claims the SAT, or any other test, is the final word. But at the same time pretty much everyone accepts that that "general intelligence" (or something like it) exists, and that tests are a reasonably good proxy for detecting it. To first approximation, students who do well on the SAT are successful in other ways associated with "intelligence".
And that -- the fact that the SAT correlates with something under study -- is all that is needed for good science. Even poor correlations can be enlightening if the data (and scientist) is good enough.
Wikipedia's Definition of intelligence:
Intelligence has been defined in many different ways, including, but not limited to, the abilities for abstract thought, understanding, self-awareness, communication, reasoning, learning, emotional knowledge, memory, planning, and problem solving.
Sure, many people that scored well on the SAT are intelligent, but that doesn't mean that those that didn't score well are not just as intelligent or capable.
The findings more accurately reflect the conclusion that: "Those With High Scores on the SAT Are Stupid"
Basically, if you want to quibble with the headline of the post, then I'll grant that it's a little confusing (intentionally so, as are most good headlines), even if IMHO that point is a little specious. If you really want to claim "there is no test that can accurately measure intelligence" as a matter of scientific fact, you're just plain wrong, sorry.
> Beethoven would not have gotten a perfect score on his SAT's. However, we all can attest to his innovation, creativity and musical genius. How can a multiple choice test measure the creative abilities of people like Sir Richard Branson, Steve Jobs, or Pablo Picasso?
These tests are not meant to measure creative abilities. They are meant to measure the ability to solve math, reading comprehension, and writing problems. The SAT is used as one of several criteria in the college admissions process, NOT to try to predict who will be on a short list of history's most innovative people.
On the other hand, I completely agree that it is damn hard to measure intelligence correctly. But as long as you don't have a proxy that works as well for "educated" westerners as it does for "uneducated" bush or jungle tribesmen, you still have quite a high risk of error in your studies. Just my 5 cents.
P.S.: Upvoted rstevensons posts; I don't see any reason to downvote him for being critical of multiple-choice tests as a basis for such studies.
It might be better just to look at a student's qualitative portion of the exam, since one could score highly on the SATs while still getting a (relatively) poor score on the section most similar to these kinds of questions.
> all standardized test are socially constructed concepts that ATTEMPT a method of measuring intelligence
as opposed to ones that don't attempt and have not been constructed by a society? What are we hoping for here, exactly: some ray of light shone upon us by god almighty which will let us know that, without doubt, those men are smart and those other ones are stupid?
> The idea that "smarter people... were slightly more vulnerable to common mental mistakes" is a nonsensical conclusion. These findings are completely worthless.
> Intelligence (in the real world) reaches far beyond one's abilities to answer multiple choice reading comprehension, basic math and writing.
Good thing I'm in La-La-Land, then, so I can answer multiple-choice reading comprehension all day long.
> Beethoven would not have gotten a perfect score on his SAT's
And you know this because you dug up his skeleton and it wouldn't mark answers? I'm not sure what to make of your assertion.
You haven't quoted any passages from the original study that would display inadequate methodology or statistical error. Your findings are completely worthless.
Bear in mind that 50 years ago the SAT was still in flux; much of it was new and being experimented with. Given that it was one of the biggest, and most repeatedly renewed, concerns for researchers like Kahneman, there's little doubt that he's not only an expert on the SAT but also knows as many of its downsides as anyone.
Therefore, I believe this paper is a sort of "lessons learned" story which shows that his approach is better than the form of education being undertaken by schools.
I find it ESPECIALLY curious that neither the article nor anyone in this whole thread (and I have tried reading most of it) has commented on the process, and consequences, of being caught on trick questions like bat and ball.
It is my understanding that our intelligence has evolved through use in situations where its impact was more or less immediately visible, and where this feedback could be acted upon.
Example: shaping tools. Is the flint stone sharp? No. Mash it against rocks. Is it sharp now? It's a bit sharper, but not sharp enough. Mash it against rocks some more. Is it sharp enough now? OK, you're done shaping your spear head.
Example: hunting. Approach the prey. It runs away. You don't notice why, you hadn't taken wind into account when looking for clues. No feedback, therefore you couldn't act upon it.
Example: hunting. Approach the prey. It runs away. You notice it did after wind turned and gave away your position. You got feedback from the grass and leaves moving in the wind. Next time you'll be able to act upon this.
Example: trying to shake fruit off a tree. You find a low-hanging branch and try to shake it. First you try this way, then that way, and finally you find the best way to get fruit without making too much fall down.
The last example extends to any sort of experimenting, tinkering, happy-hacking.
It is however notable that the bat and ball question does not test that. There's a question, and you give an answer. There's no feedback before it becomes final, and once it does that is the clear cut-off. This represents the stone-cold, immovable, monolithic machinery displayed by many technical subjects, such as science, mathematics, and some forms of computer programming, but also strategy, and some forms of art. There is no iterative process, you get one try, based on which you can in no way build a tangible mental model of how something works and what parameters of your thinking you need to adjust in order to better yourself. One very stupid example is when someone in a job interview asks you about standard library function names and argument orders. Either you remember, or you don't. I'll call this feedback the feedback gap.
Bear in mind I have mentioned both "happy-hacking" and "computer programming" above. In fact, they're both computer programming. The difference? If I'm presented with a Python program where I can use an iterative process, a REPL, and its help() command that immediately gives me access to documentation, then I can very easily build up a mental model of what's going on. Exceptions and errors give me constant, constructive feedback which arrives immediately. This immediacy is extremely important; even a slight delay makes the learning process slower. Additionally, if some things aren't available as immediate feedback, I can find out. For example, when trying to get at the fruit, I saw immediately where the fruit was in the tree branches. When typing out Python, I don't have this: I don't see what the functions are, so I need to use help(). That works well enough. Some people like intellisense for that; works well too. It all fills the feedback gap.
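A toy illustration of that loop (the function here is made up for the example): the REPL lets you probe an unknown API and correct yourself immediately, exactly like shaking the branch and watching what falls.

```python
import inspect

# A hypothetical, undocumented function you've just imported.
def shape_flint(stone, strikes=1, *, against="rock"):
    """Each strike sharpens the stone a little."""
    return stone + "'" * strikes

# Step 1: ask the REPL what the function expects (what help()
# or intellisense would show you).
print(inspect.signature(shape_flint))   # (stone, strikes=1, *, against='rock')

# Step 2: call it wrong, read the immediate error, adjust.
try:
    shape_flint("flint", "rock")        # oops: "rock" passed as strikes
except TypeError as e:
    print("feedback:", e)

# Step 3: the corrected call; the mental model builds one probe at a time.
print(shape_flint("flint", strikes=3))  # flint'''
```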
There's a similar difference in ease of progress between experimental mechanics and similar physics, versus branches where experimentation cannot happen. That's feedback gap again.
It is my belief that this sort of immediate feedback is needed in other technical subjects, especially mathematics and physics. Approaches such as theorem provers are helpful in mathematics, but they're nowhere near being complete, and nowhere near the utility and immediacy of a repl. I am fairly sure there are other ways in which the feedback gap can be filled. Perhaps different methodology, or differently structured theories, can give us more immediate feedback? Perhaps mathematical systems in which theorems are easier to tentatively prove or disprove can become more successful in breeding new results?
Or perhaps the theories are not to blame, but we need more tools. For example, my abstract geometry teacher kept reminding us that we needed to come up with such quick checks. Non-linearities were always useful. His favourite was the binary distance function, which was 0 for two identical points and 1 for different points. A lot of stupid theorems can be disproved by checking some examples with this.
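For what it's worth, that binary distance really does kill a lot of plausible-sounding claims. A minimal sketch: it disproves "every metric is induced by a norm," because norm-induced metrics scale when you scale the points, while the binary one does not.

```python
# The binary (discrete) metric: 0 for identical points, 1 otherwise.
def d(x, y):
    return 0 if x == y else 1

# Sanity check: it satisfies the metric axioms on a few samples.
pts = [0.0, 1.0, 2.5, -3.0]
for a in pts:
    for b in pts:
        assert d(a, b) == d(b, a)                # symmetry
        assert (d(a, b) == 0) == (a == b)        # identity of indiscernibles
        for c in pts:
            assert d(a, c) <= d(a, b) + d(b, c)  # triangle inequality

# Quick disproof: a norm-induced metric is homogeneous,
# d(t*x, t*y) == |t| * d(x, y).  The binary metric is not:
print(d(2 * 1.0, 2 * 2.0), 2 * d(1.0, 2.0))  # 1 vs 2
```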
Can someone else comment on any such tools?
Writing this comment definitely came with an insight or two for me. If you read it, thanks for going on this trip with me.
The traditional feedback method is teamwork: the next student, or colleague, makes a claim about physics, and you exclaim: "Bullshit! As the mass of the pencil reduces to zero, the whole universe gets pulled off course." Traditionally, physicists and hackers were thick-skinned freaks; the triumph you felt more than made up for your bruised ego when the tables were turned. Now, normal people do these things; their feelings, more of resentment than glory, distract them inefficiently. They react the same way when computers call their bullshit.
What he's seeing isn't something new; it's something so old that it's part of popular culture: the absent-minded professor syndrome. It's the stereotype of the brilliant physicist who forgets what he's supposed to buy at the supermarket because he's thinking about its quantum properties. Analytic people are horrible at things that don't interest them.
Pay the students $50 for each correct answer, and there's not a doubt in my mind that the results will be the complete opposite of what he's seeing now.
Basically, once again, I've learned that my wife is smarter than me and that these studies should be taken with a grain of salt.
P.S. She is not a math person nor is she a tech person.
I think the reason why my wife did it so fast is because it was fed to her as text only - in fact, I just read it to her. She's a lawyer and she's much better at interpreting text than most people.
You've never tried tutoring kids who get kickbacks for good scores then. Absolute fucking nightmare. Motivation to study must come from within for it to be successful in any way.
By using those examples, after its headline, this article seems to imply smarter people do worse on these CRT questions. But that is not what I've read elsewhere -- which is that the CRT is positively correlated with other quantitative measures of intelligence (including IQ scores, SATs, and high-school/collegiate grades). 'Smart' people (by those measures) do tend to do better on the CRT.
And if you read this article carefully, you see that while it uses these two CRT questions as examples of tricky questions, when it discusses the results about awareness-of-bias not helping alleviate bias, it isn't necessarily saying smart people do worse on those two CRT questions. It's a bit muddled in what it's saying, and reviewing the linked abstract doesn't help much either. The paper is evaluating some very specific things under the umbrella term 'cognitive sophistication', which might not map to what we usually call 'smart' or even 'test-smart'.
BTW, I personally think the CRT may be especially useful for evaluating software/systems proficiency. The bat-ball question probes understanding of algebra; the lily-pad question probes understanding of geometric growth (and someone accustomed to powers-of-2 will find it easier); the third question probes understanding of parallelism and projected-rates-of-work.
That third question happens to be:
"If it takes 5 machines 5 minutes to make 5 widgets, how long would it take 100 machines to make 100 widgets?"
(A software person might also think of it as: "If it takes 5 cores to compress 5 GB in 5 minutes, how long would it take 100 cores to compress 100 GB?")
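For reference, the intuitive-but-wrong answers and the correct ones for all three CRT questions can be checked with a few lines of arithmetic:

```python
# Bat and ball: the bat costs $1.00 more than the ball, $1.10 together.
# Intuitive answer: ball = $0.10.  Correct: solve b + (b + 1.00) = 1.10.
ball = (1.10 - 1.00) / 2
assert abs(ball - 0.05) < 1e-9           # ball costs 5 cents, not 10

# Lily pads: the patch doubles daily and covers the lake on day 48.
# Intuitive answer: half-covered on day 24.  Correct: one doubling earlier.
half_day = 48 - 1
assert half_day == 47

# Widgets: 5 machines take 5 minutes to make 5 widgets.
# Intuitive answer: 100 minutes.  Correct: each machine makes one widget
# per 5 minutes, so 100 machines make 100 widgets in 5 minutes.
minutes = 5 * (100 / 100)                # time scales with widgets per machine
assert minutes == 5
```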
The book explores several interesting ideas, but I think I can safely say that "why smart people are stupid" is almost certainly NOT one of the more important themes in the book. Which I think is why, as you've noted, the results from the CRT testing don't line up with the conclusions in the article.
Actually I'll go a bit further and say that in simplifying a fairly nuanced and complex concept down to an attention-grabbing headline, this article ironically falls into the very intuitive-bias trap that Kahneman describes in his book. "Why smart people are stupid" gives us an easy bypass to answer a complex question and saves us the mental effort of actually coming to grips with the problem. The explanation is satisfying, but it's also flat out WRONG.
No, the deeper theme is something of an inconvenient truth for both smart and not so smart alike. Best for you to read the book yourself if you're really interested, but I think it's not too big a stretch to say that PG was approaching the same idea in his wisdom (intuition, fast thinking) vs. intelligence (slow, deliberate thinking) essay:
"And while wisdom yields calmness, intelligence much of the time leads to discontentment.
That's particularly worth remembering. A physicist friend recently told me half his department was on Prozac. Perhaps if we acknowledge that some amount of frustration is inevitable in certain kinds of work, we can mitigate its effects."
No, the smart people essentially never perform worse on the CRT or bias questions than their stupider confreres. What the headline is boasting about is that when asked to guess whether they will do better on the biases than their cohort, the smarter people tend to think they will do much better, but actually do only a tiny bit better, or no better at all.
But as the authors admit at the very end, this may be a reasonable generalization from their lives, in which they observe smart people outperforming stupid people at just about everything; and these biases were selected from the big heuristics-and-biases literature precisely for being resistant to raw smarts...
I don't think that most compression algorithms are Embarrassingly Parallel (http://en.wikipedia.org/wiki/Amdahl%27s_law), so it's not clear to me the software scenario is equivalent, or are you saying that the jobs come in 5 gigabyte chunks?
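For reference, Amdahl's law itself is a one-liner: with serial fraction s of the work, the best speedup on n workers is 1 / (s + (1 - s) / n). A quick sketch (the 5% serial fraction is an invented example) of why the "5 cores, 5 GB, 5 minutes" framing only works if the work parallelizes cleanly:

```python
# Amdahl's law: with serial fraction s, speedup on n workers
# is capped at 1 / (s + (1 - s) / n).
def amdahl_speedup(serial_fraction, n_workers):
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_workers)

# Perfectly parallel work scales linearly with workers...
assert abs(amdahl_speedup(0.0, 100) - 100.0) < 1e-9

# ...but even a 5% serial portion caps 100 workers at roughly 17x.
print(round(amdahl_speedup(0.05, 100), 1))
```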
Typically such questions imply the unstated assumption, "answer to the same level of precision/abstraction as the question itself, and assume you have all the info needed to give an answer". With that assumption both questions should be answerable.
Compression tends to be pretty parallelizable, if by nothing else than choosing to break the input into separate chunks. You might lose a little bit of efficiency in output size – more restarts, each compressor has less global information – but those don't mean slowdowns (and in a few contrived situations might even mean speedups at the cost of size). See for example 'pigz' and 'pbzip2'.
If I were asking this question, I'd accept the 'rough, assuming perfect parallelization' answer as correct-enough in the spirit and level-of-precision implied by the question. If the answerer brought up the difficulties in assuming perfect parallelizability or specific to compression algorithms or choice of inputs, that'd be worth some extra credit, and would trigger followups along the lines of, "how would those factors affect the size?" and "what bottlenecks might you expect?" and "could it ever be faster when split among more machines?"
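A minimal sketch of the chunking approach described above, with independent zlib streams per chunk (pigz and pbzip2 do the real-world version with proper container formats; the helper names here are made up):

```python
import zlib
from concurrent.futures import ThreadPoolExecutor

def compress_chunked(data: bytes, chunk_size: int, workers: int) -> list:
    # Split the input into independent chunks.  Each compressor sees
    # less global context (slightly worse ratio), but the chunks can be
    # compressed in parallel; zlib releases the GIL on large buffers,
    # so threads give real parallelism here.
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(zlib.compress, chunks))

def decompress_chunked(compressed_chunks) -> bytes:
    # Each chunk is a self-contained zlib stream, so decompression is
    # parallelizable the same way; done serially here for brevity.
    return b"".join(zlib.decompress(c) for c in compressed_chunks)

data = b"widget " * 100_000
parts = compress_chunked(data, chunk_size=64 * 1024, workers=4)
assert decompress_chunked(parts) == data
```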
Decompression may not be embarrassingly parallelizable for an existing algorithm, if it relies on state that persists through every bit in the data set. But a codec algorithm can be designed to be embarrassingly parallel for both legs.
Yes, you could call that a way of intelligence testing... :-)
The psychologist Keith R. Stanovich is quite controversial among other psychologists precisely because he writes about what high-IQ people miss in their thinking, but his studies point to very thought-provoking data and deserve to be grappled with by other psychologists. I have enjoyed his full-length book What Intelligence Tests Miss
which meticulously cites much of the previous literature on human cognitive biases and other gaps in rationality of human thinking.
And here is the submitted article's link to a description of the Need for Cognition Scale:
Sounds interesting; care to summarize what those things are?
Can it help identify candidates who have little experience but will make good programmers once they're taught how to code?
This fallacy is at the heart of the matter. Intelligence and resistance against bias are only loosely correlated. Such resistance comes not from intelligence but from careful study and mental exercise, e.g. looking at various important ethical and philosophical arguments and analyzing them.
This is like saying all large people are strong. There is some dependence, but a smaller gym rat can kick a slacker giant's ass. The sad thing is that while it is obvious you have to exercise your body to be healthy and strong, the fact that the same is true for your brain is often overlooked.
To me, this looks like a definition game. Smart/stupid is a black-and-white way of looking at it and hence misleading. As one overcomes his primitive biases, we call him smart, even though he remains susceptible to other biases.
In other words, people aren't smart or stupid. People's actions are smart or stupid in a particular situation.
Traditionally, "intelligence" (as colloquially defined) has correlated with type 2 thinking. So, a reasonable conjecture would be that people who are better at type 2 thinking would use it more and, therefore be less vulnerable to bias. However, this research shows that even those who are very good at type 2 thinking (as measured by their SAT scores and NCS scores) are even more vulnerable to cognitive biases. This is a deeply counter-intuitive result. Why is it that people who have a greater capacity to overcome bias have a greater vulnerability to bias?
Overconfidence. If you've become accustomed to thinking of yourself as being better able to avoid cognitive bias, you come to be confident in your abilities, to the point where you (perhaps unconsciously) think of yourself as not susceptible to biases.
Why do you assume that type 2 people have greater capacity to overcome bias?
As a matter of fact, it would be worthwhile to avoid using the term "intelligence" in this discussion altogether. Please read this reference to understand why: http://lesswrong.com/lw/nu/taboo_your_words/
The whole point is that people assume that "problem solving" and "memory" skills automatically protect you against bias. This assumption is false, since "bias prevention" is a different kind of cognitive skill that must be practiced and developed independently.
Have you read the article (rather than skimming)? It says exactly the opposite.
Unfortunately everyone seems to be hung up on the "idea" of being smart, as if having a high IQ somehow constitutes an accomplishment.
Particularly during schooling it can be lonely, isolating and incredibly boring. It is easy, under those circumstances, to either build yourself up as better than people around you or come to hate and hide your intelligence so you can pass as one of them. Both are a loss to society.
It is not an accomplishment, any more than having diabetes is an accomplishment, but it is a fact of life and trying to pretend it doesn't matter is futile.
The myth that IQ tests measure any tangible internal capacity (or that they were even designed to do so rather than to justify the exclusion of eastern european immigrants and racism against blacks) will stay as the dominant view because the people who set the dominant view are people who get good scores on IQ tests.
But that is already adjusted for.
IQ tests correlate highly with a number of factors: more likely to read, less likely to get divorced, more likely to go to university, more likely to eat healthily, etc. But at the end of the day, when you keep trying to control for each of these factors, you end up at the null hypothesis simply because there is nothing left to measure.
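That last point can be made concrete with a toy simulation (all numbers invented): if the "factors" you control for are themselves just other readouts of the same underlying trait, each one you add soaks up part of the signal and drives the estimate toward zero.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50_000

trait = rng.normal(size=n)                       # unobserved trait
iq = trait + rng.normal(scale=0.5, size=n)       # measured test score
outcome = trait + rng.normal(scale=1.0, size=n)  # some life outcome

def iq_coef(*controls):
    # OLS coefficient on iq, optionally adjusting for control columns.
    X = np.column_stack((np.ones(n), iq) + controls)
    beta, *_ = np.linalg.lstsq(X, outcome, rcond=None)
    return beta[1]

# Each "control" (reading, schooling, ...) is itself another noisy
# readout of the same trait.
proxies = [trait + rng.normal(scale=0.5, size=n) for _ in range(4)]

print(round(iq_coef(), 2))           # ~0.80: raw association
print(round(iq_coef(*proxies), 2))   # ~0.19: signal "controlled" away
```

This doesn't settle which causal story is right; it just shows why regression-with-controls can't settle it either.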
Just because something is politically incorrect doesn't mean it is wrong.
It has nothing to do with political correctness. It was literally the purpose of general intelligence tests to justify exclusionary immigration, racism, and eugenics, and they were designed (or redesigned in some cases, such as when blacks in the military scored better than whites) to do that well. It reified a general intelligence concept that has no actual evidence, and justified that with factor analysis, the statistician's leading tool for creating a single thing out of a maelstrom of complicated, interdependent factors by simply pretending that they are linear.
IQ tests correlate far higher with pleasure reading in childhood than any other factor IIRC.
This is not sour grapes - I'm a member of MENSA (which is really a boardgaming club); it's just historically accurate. IQ tests could be completely replaced with a tally of books read for pleasure (and the reading level of those texts), and you would end up with all of the same correlations without all of the self-important mathy-sciency tone.
I accept that there's a difference in the amount of knowledge that people have accumulated, and the amount of familiarity about how to evaluate common classes of questions that a voracious reader will have seen a million times before, and that a lack of those things may create a lot of challenges in college. I have no issue with the SAT. My issue is with the completely unjustified leap to a belief of differences in capacity, and the projection of this onto reified folk theoretical (theory-theory) internal states. This mythology is just another cultural construct to separate humans into "us" and "the others" and to alleviate the cognitive dissonance between our proclaimed ethics and open prejudice. Being a measure of the status quo, primarily, it serves solely to perpetuate it, offering no other benefit.
edit: early intelligence tests actually had questions that assumed you knew details about current baseball teams.
edit2: I also noticed that, other than reading for pleasure, the other examples that you listed for high correlation with IQ are degrees of adherence to cultural norms. Are high IQ people simply better at obedience?
I wonder what the results would look like if you wrote 'bias check questions' within a knowledge domain and some outside the domain and compared the scores of a group of practitioners and a group of non-practitioners, sort of a four way table.
IQ tests, SATs, ACTs and other standardized test like them are improperly named. They should be called "Tests That Predict Success or Failure in the School System From Which the Questions Have Been Derived."
You seem to have a pretty serious ax to grind with standardized testing.
The SAT and ACT simply provide a common yardstick for comparing grades at different high schools. Educators use them as a way to judge an A at this school or from this teacher against an A at that school or from that teacher. Nothing more. At best the SAT is said to (somewhat) predict freshman-year grades in college.
Any of those articles are a good place to start, so don't be intimidated by the amount of stuff there.
Anecdotally I found that the Less Wrong community tends to be decidedly more full of crap than average. In the same vein as spiritual materialism, many people that engage in a bias witch-hunt seem to be falling prey to "logical materialism", where the whole exercise turns into people deluding themselves into thinking they're somehow "better" than others because they're less full of crap than average.
It's good to know thyself, but it's no use if your knowledge isn't tempered by wisdom, and you're not going to get that by reading blog posts about cognitive biases online, no matter how good the posts.
It's also heartbreaking to see intelligent people getting so excited about ideas like cryonics and personality uploading. I mean, they're interesting things to talk about, but a lot of people on LW seem to actually think they might get to live forever. It's kinda sad.
I wouldn't totally dismiss sci-fi concepts like brain uploading. But then I'm not totally sold on their viability either.
One of the things I always thought was interesting about cryonics was data integrity. It could be a while before you hit the singularity. (Or whatever it is you think will wake you up.) Even with liquid nitrogen, I doubt your brain can be 100% preserved. So let's say hypothetically you get yourself some proper Ray Kurzweil recursively improving A.I., and as a common courtesy decide to revive the Alcor people. If you have 99% of someone's brain image and use statistical methods like Bayes' theorem to fill in the rest, is it still the same person when you wake them up? How about 99.99% of their brain? 99.999999%? (Which brings us back to the semantics of labels and reductionism vs. holism.)
People who think they'll live forever have huge logic fails on their hands, the heat death of the universe being an obvious one. In fact, every time I think of the whole business, Isaac Asimov's The Last Question comes to mind.
This is pretty much where I land on this. Given any empirical theory of consciousness, it's never going to be "I get to live in the computer," just "I die, but a copy of my brain lives in the computer." And it's pretty hard to draw a bright line between that and "I die, my brain slowly decays for thousands of years, then it's surgically reconstructed and awoken." Still feels like death to me.
Of course, to your transhumanist theorist, there's probably just as much connection between me and the computer as there is between me in the night and me in the morning; it's all just an illusion created by a persistent brain state. But that doesn't help either, because now you're describing a kind of immortality -- "This exact stream of continuous consciousness is a dead end, but something very like it will continue to be and think of itself as part of me and that gives me and it some comfort about the whole thing" -- which the general public has been achieving quite successfully for some time in the form of procreation.
In fact, since every living organism represents a terminal link in a chain of unbroken life dating back to the first self-replicating molecule, it might be quite reasonable to say that we have all been alive for billions of years at least, we just don't remember most of it. But this is changing: I can go on Wikipedia today and recover the memories of our culture dating back for most of its existence.
Naturally, they grow vaguer the further they go back in time, as memories do; but for events that take place today, we have a record which far exceeds human memory in accuracy and exactness of fact, and which will very soon be competitive with it in emotional effect. It is quite realistic for me to expect to create a record of my life which has as much effect on my descendants in a hundred years as my own memories of today would have on me were I to live that long.
So I think it's possible the transhumanists missed the boat. Or rather, they're on the boat already and just don't realize it. The human macro-organism does, starting from now, seem to stand a decent chance of living to see the heat death of the universe.
I don't mind it though as much -- I am much more disturbed by the prospect of a tyrannical or even "Friendly" AI some of them seem to be fond of.
Ignoring the implications of an AI that may be required to support a transhuman, I believe that one could draw up a very pragmatic argument why transhumanism could be useful even to us, embodied souls. What you described as inter-generational memory via things like history books and Wikipedia is good but not perfect (the classic example is that history gets written by the victors). A transhuman living for thousands of years would potentially bring a fresh perspective to the table, even if that perspective was imperfect too. Just the way both a free market and an ecosystem can benefit from a diversity in their pool of ideas/genes, so can humans.
Same. Even if such a thing is possible, the probability that it's done right the first time is close to nil. And since a recursively improving singularity A.I can be assumed to irreversibly take control of the balance of power, it's not really something that you could afford to screw up.
Of course, Yudkowsky (to the extent that he's doing anything) seems to be working off the assumption that if he doesn't do it, someone else will.
The argument I'm making about memory is that when communication has become advanced enough, you can't make a clear distinction between inter-generational memory and meat-memory over a significantly long lifespan. People change over time; give it enough time and you're as different from yourself now as your great grandchildren will be.
I don't think we need long-lived human beings or human personality constructs to gain the societal advantages of "I remember when..." It's feasible today for a person to record and archive audiovisual, geospatial and limited haptic data of their entire life experience, beginning to end. We can't record your thoughts, but if they're important you can write them down. I'd also wager that we'll see almost fully convincing sensory recording, which is a plain prerequisite for uploading, well before any life-extension technology which deserves the title of immortality. It would then be unrealistic for your descendants to say that they merely "remember" the things that happened to you, only because these recordings would be far superior to memory.
Of course, the only issue is that we've had this sort of thing for a good while now, and it turns out we just aren't that interested in the things that happened a long time ago, just like, aside from the highlights, I don't care that much about what happened to me ten years ago.
Ten years ago, of course, I thought that everything that was happening to me was quite important. That's why I label this idea of immortality "greedy"; it represents the whim of a brain state at the present moment to continue to influence the world long after it has become irrelevant. Just look at the current state of US politics to see where that gets us. (I've never seen a transhumanist argue that every transient state should be preserved in perpetuum, but I'd be curious to know what they tend to think about it.)
The point being that if uploading constitutes a form of immortality, so does having kids; the same theory of consciousness underlies both.
 This is a bit of a tangent, but I think this is (most of) the reason that burial rituals are one of the cornerstones of human society. Obviously it doesn't matter to the dead person what happens to them, but it is crucially important for us to have a say in what happens to us; we hope that, if we respect our parents' wishes after they die, our children will respect ours. And we take this so seriously that, in fact, they do.
I suppose I consider transhumanism, especially cryonics and uploading, to be a very highly developed burial practice. If it is the wish of a dying man to have his brain frozen in nitrogen, I will respect his wish, and even humor his beliefs about what that might mean. But I don't believe it means any more in reality than if we stuck him in the ground with everyone else.
And yes, I recognize the irony in writing this much about something I think is silly to spend time thinking about :)
And they aren't horrified at the prospect of that being possible?
Not much thinking going on there, I suppose.
I expect it is not so much about wishing not to die, which even a transhumanist must admit is at least no worse than living, but wishing not to have loved ones die. Truly tragic.
I suspect it is you that needs to put a bit more thought into this matter.
This is seriously weirding me out.
Than average what?
I often lurk on LessWrong, and post there very occasionally. I find it to be a very rich source of original ideas, some of which are truly profound. YMMV, of course.
I'm curious if you think there is anything else we could stand to work on. I interpreted your use of the word "wisdom" to mean a lack of arrogance, but if you were using it to mean other stuff as well I'd love to know.
(I'm a fan of yours, BTW.)
But the idea "Hey, want to be more rational? Join our community" gives me the willies. If you want to be more rational, then joining a groupthink-ish community is the last thing you should want to do.
So I think it's fair to tar Eliezer himself for that incident, but I don't think it's a good indictment of the site's community as a whole.
Hadn't heard of this, just read the RationalWiki article-- and it is fabulous. LW literally derived religion and then took that seriously enough to cover it up.
Personally, I have yet to see any evidence suggesting rationality is even desirable. The most irrational people, selfless, story-oriented, tolerant of multiple conflicting subjective "realities", interested in the feelings and passions of people around them no matter what they are, are also those I most enjoy interacting with. Everyone I have met who strives for rationality is at least a bit of a prick.
Interesting idea. I think a certain amount of rationality is desirable -- people of below median or even near-median rationality go round making really stupid decisions which screw up their lives. On the other hand, once you've stopped hiding from imaginary demons, buying magnetic charm bracelets and drinking venti caramel frappucinos, further effort in becoming more rational may be severely diminishing returns.
How would it really help me if I were more rational and less subject to cognitive biases? I don't think it would help me much in making my day-to-day decisions. I honestly don't think it would help me in my work, either. It might well help in tackling really, really difficult questions where it's extremely difficult to disentangle your own feelings from the correct answers -- things like "What is the probability that humans will one day achieve immortality", or "What is the fairest possible tax system?" But would answering those questions actually enhance my life? Humans will achieve immortality, or not, regardless of whether I correctly predict the probability circa 2012, and even if I did come up with the fairest possible tax system I have no chance of actually getting it implemented, so it would just cause me frustration.
The people who did really great things in history -- whoever you might choose as your examples -- did they achieve it by being significantly more rational than everyone else? Not really, no. They did it by achieving some baseline level of rationality and then being extremely good at other stuff.
Observational bias. Rationality of thought process in non-technical situations is rarely externalized; unless you talk of scientists, you're highly unlikely to remark on how highly rational he is being. In fact, the only way I can come up with to make such a statement fit in literary fashion is when you're making a quip on someone:
"It was highly rational of Nixon to start the Vietnam War."
Either you're making some deep meta-quip that I don't get, or...
There's a group of people doing these theatre exercises; they rent a hall down the corridor from me periodically. I've always known there was something really fishy about them. Are you saying they might be scientologists? Because that would really fit the group's MO.
I was being flip before: there are real differences, primarily in the role of teachers (in theater they should never hold real power over you) and suppression vs. expression of emotion (theater exercises are often about how to feel more, whereas scientology is about brainwashing into feeling less). However, self-hypnosis, presences and detailed mental examinations are shared by both.
This is somewhat O.T. from what the article is saying, but only mildly so, and I'd love to hear HN's opinion on it.
One of the problems presented in Priceless is:
Would you rather have $3,000 as a sure thing, or an 80% chance of $4,000 and a 20% chance of nothing?
Would you rather have a $3,000 loss as a sure thing, or an 80% chance of losing $4,000 and a 20% chance of losing nothing?
The erroneous path that most people take, in the eyes of these researchers, is that they set their base reference point at the sure thing, i.e. they say "well, the $3,000 is a sure thing so I can assume I have it".
If you do that, then your answers are different:
In the first instance you keep the $3,000 (because it becomes an 80% chance of winning $1,000 versus a 20% chance of losing $3,000).
In the second instance you go to court (because it's an 80% chance of losing $1,000 versus a 20% chance of winning $3,000).
However, if you don't "rebase" your reference point, then you would make the same decision in both cases: you would take the 80%-chance-of-$4,000 bet because it's "worth" $3,200.
As much as I realise what they're saying and they say it's statistically incorrect to do this, it really seems to me the most sensible way to make the decisions (which is, I guess, exactly what they're saying right? I'm human, ergo fallible to this kind of illusion).
The thing that kills me is this: if this is a one time thing, I'd rather be sure of the $3,000. If I'm buying and selling these bets all day, then sure I should take the $4,000 at 80% because even if I lose this round, the next time I take the bet will make up for it (ie. law of large numbers).
But what this problem doesn't address is how often I get this opportunity? Depending on my circumstances, $3,000 could be a life changing opportunity, ie. if I "win" $3,000 or $4,000, my circumstances are essentially the same so I should always go for the sure thing. If I lose $3,000 or $4,000 I'm equally screwed, so I should take the risk and try and win in court.
What am I missing?
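For what it's worth, the raw expected values in the parent comment are easy to check. A quick sketch (the function name is mine; the amounts are from the comment above):

```python
# Expected values for the two framings: a sure $3,000 vs. an 80% shot at
# $4,000, and the mirror-image losses.

def expected_value(outcomes):
    """outcomes: list of (probability, payoff) pairs."""
    return sum(p * x for p, x in outcomes)

sure_gain  = expected_value([(1.0, 3000)])             # 3000.0
risky_gain = expected_value([(0.8, 4000), (0.2, 0)])   # 3200.0
sure_loss  = expected_value([(1.0, -3000)])            # -3000.0
risky_loss = expected_value([(0.8, -4000), (0.2, 0)])  # -3200.0

print(risky_gain > sure_gain)   # the gamble has the higher EV on gains
print(risky_loss < sure_loss)   # and the lower EV on losses
```

The $3,200 figure mentioned above is just `0.8 * 4000`; whether maximizing EV is actually the right policy for a one-shot, life-changing bet is exactly the question the parent raises.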
On the gain side, the different values should be self evident. I think we can all intuitively understand why making $100000 instead of $50000 is bigger deal than $150000 instead of $100000. So let's assign utility to the numbers:
first $1000 = 1
second $1000 = .8
third = .6
fourth = .4
Option #1 gives us 2.4 whereas option #2 gives us 2.24.
I'd argue that we can similarly rank the losses the same:
first $1000 lost = -1
second = -.8
third = -.6, fourth = -.4, which gives us -2.4 vs. -2.24, meaning we should obviously take option #2.
Now the interesting question is why we can assign similar yet negative values for the losses. I'll give two examples that might show why this is true.
First, consider living paycheck to paycheck. I only have $500 of buffer. In this case, while losing $2000 instead of $1000 is worse, it's not worse by a lot because either way I can't afford rent and I'm evicted.
As a second example, consider bankruptcy. If we take losses to high values, eventually each additional loss doesn't subtract anything from my "happiness". I've already hit the bankruptcy point and nothing worse can happen.
These are of course the two extremes, but I think it's easy to complete the spectrum and show that for any $x the first loss of $x hurts more than the second loss of $x.
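To make the arithmetic above explicit, here's a small sketch; the .2-per-$1000 drop in utility is the parent comment's illustrative assumption, not a measured value:

```python
# Diminishing marginal utility per extra $1000: 1, .8, .6, .4, ...

def utility(amount_thousands, step=0.2):
    """Total utility of gaining `amount_thousands` x $1000."""
    return sum(1 - step * k for k in range(amount_thousands))

sure_gain  = utility(3)          # 1 + .8 + .6       = 2.4
risky_gain = 0.8 * utility(4)    # .8 * (1+.8+.6+.4) = 2.24

# Mirroring the same schedule onto losses, as the comment argues we can:
sure_loss  = -utility(3)         # -2.4
risky_loss = -0.8 * utility(4)   # -2.24

print(sure_gain > risky_gain)    # prefer the sure gain
print(risky_loss > sure_loss)    # prefer the risky loss
```

This reproduces the classic prospect-theory pattern: risk-averse over gains, risk-seeking over losses, from a single concave utility schedule.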
So you are saying that with the following scenario:
1. +3000 at P100 or +4000 at P80
2. -3000 at P100 or -4000 at P80
depending on the context, the answer people give is basically always different if you present "1 then 2" as opposed to "just 1" or "just 2"?
The foible of humanity that makes the answers different is that we first "rebase" our expectations to the 100% chance, rather than considering our current position to be the baseline.
Really, stating otherwise is a fault of human analysis. It's a game theory problem, I think (likely a variation of the stag hunt: http://en.wikipedia.org/wiki/Stag_hunt). It's a paradox only because armchair mathematical intuition fails to explain it.
When you find it and it's by someone else, it was obviously a stupid, idiotic error that you would never make.
When you find it and it's your own, it was obviously an understandable mistake that anybody could have made.
Particularly if you consider yourself a great coder.
I think it comes down to having a value system where you'd rather be wrong and corrected (even if you have to do it yourself), as opposed to always projecting yourself as "perfect". Once you accept you aren't perfect, it's easier to work towards perfecting what you've got.
I catch myself like this all the time. It's a little depressing.
Of course a question like the old bat and ball one is ridiculously simple, after you've been warned that many people get it wrong and hence that you should probably stop and think for a few seconds before blurting out the first answer that pops into your head. Do it without that warning and it's easier to get it wrong.
I guess "many people" includes me - I always thought I was good at these types of questions, maybe I'm not :(
Also, I just hate these kind of questions - they've always been used to prove that I'm stupid by those who knew the answers, and they're not solving anything useful - I need the problem to solve something I care about in order for my brain to fully focus on it and "do the math"...
If I asked you, "Will a frooble fit in my pocket / the Empire State Building?", and then asked you to estimate the average size of a frooble, you'd certainly take into account my earlier question.
See http://lesswrong.com/lw/k3/priming_and_contamination/ for some better examples. IMO, the more insidious form of anchoring is contamination (vs sliding adjustment).
In a lake, there is a patch of lily pads. Every day, the patch doubles in size. If it takes 48 days for the patch to cover the entire lake, how long would it take for the patch to cover half of the lake?
If a lily pad is 20 square inches (which is probably conservative), and you started with 1 lily pad, after 48 days of doubling it would cover 1.4 million square miles. That is 44 times the surface area of Lake Superior.
I get the point of the question, but if you're trying to play "gotcha" on people, at least ask a reasonable question.
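The 1.4 million square mile figure checks out. A back-of-the-envelope sketch, assuming 20 sq in per pad and a single pad on day 0 as above:

```python
# 48 doublings from one 20-square-inch lily pad, converted to square miles.

pads            = 2 ** 48            # pads after 48 doublings
sq_in           = pads * 20          # total area in square inches
sq_in_per_mile2 = (5280 * 12) ** 2   # square inches in one square mile
sq_miles        = sq_in / sq_in_per_mile2

print(round(sq_miles / 1e6, 1))      # ~1.4 (million square miles)
print(round(sq_miles / 31_700))      # ~44x Lake Superior (31,700 sq mi)
```

So yes: as posed, the "lake" would have to be vastly larger than any lake on Earth, which is the parent's complaint about the question's realism.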
Inflation, I suppose.
On the other hand, I also think those types of shortcuts are probably very useful aspects of our human intelligence.
I think that within 50 years or so we will see new species/upgraded humans or AIs that actually don't have those problems, because they will have built-in checks and alternative types of intelligence that rely on those shortcuts less.
This article reminds me of pg's reasons to have a co-founder to avoid being delusional. Better be proven wrong on the inside than on the outside.
edit: Although on second thought, I think this bias theory probably extends to organizations as well. Probably that's why big companies sometimes can't see the obvious which a startup does.
To my mind, on any test that's supposed to be hard, the appearance of an obvious answer triggers me to check for the proverbial trick question.
On the other hand, most brain puzzler type questions that get discussed on HN (for example interview questions at Google) I find to be damn hard. I can't imagine that "smart" people would do worse than "stupid" people on truly hard problems. I guess that is the area of bias being pointed to in the OP.
When you're done with an article for the "Frontal Cortex" section, read it aloud to yourself and smack yourself in the head with a frozen herring for every time you use the word "we", "us" or "our" in your article. If you have a headache when you're done, burn the draft and rethink the whole thing, b/c your article obviously suffers from a "smug we" bias.
Doesn't Kahneman distinguish between intuitive and deliberate thinking? So it could be possible to think better by distrusting our intuitions and deliberating more, right?
Not to say I don't have biases, just not for word-number problems.
Poker on the other hand is another matter, I still chase straights and flushes in games with wild-cards, even though I know those hands are almost worthless.
Throughout school, I was absolutely HOPELESS. I couldn't do anything when it came to basic math, except some algebra. These questions (even faced with the formula to solve it) still bugger my head. I tried to read books, get tutors, do everything to better myself (who ever heard of a computer guy who couldn't do math!).
I can easily write algorithms, do algebra, write complex programs, do anything on a computer but faced with a question like this my brain shuts down very quickly.
I stumbled across this one day... still not sure if I believe it's a thing... and that I have it: http://en.wikipedia.org/wiki/Dyscalculia
It is amazing, then, that I somehow have managed to complete both an undergraduate and a Master's degree in Electrical and Computer Engineering. I compensated for my lack of numerical ability by heavily relying on calculators throughout my college education. In fact, one of the reasons I pursued computers in the first place is because I recognized that I could survive, and hence fake it, by offloading such "trivial" computations to a machine.
Many standardized tests, such as the GRE, don't allow calculators unfortunately.
For the lily pads, the percentage of the lake covered doubles every day, so the lower bound percentages for the last few days look roughly like this: 12.5%, 25%, 50%, 100%. On the 24th day, you'd have to double in size 24 more times in order to fill the pond, rather than the once it'd take on the 47th day.
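Working backwards from full coverage can be sketched directly; the function name is mine:

```python
# Fraction of the lake covered on a given day, doubling daily and
# reaching full coverage on day 48.

def coverage(day, full_day=48):
    """Fraction of the lake covered on `day`."""
    return 2.0 ** (day - full_day)

for day in (45, 46, 47, 48):
    print(day, coverage(day))    # 0.125, 0.25, 0.5, 1.0

print(coverage(24))              # day 24: 24 doublings short, ~6e-8
```

The last line is the point of the parent comment: halfway through the 48 days, the patch covers a vanishingly small sliver of the lake, not half of it.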
A bat and ball cost a dollar and ten cents.
Bat + Bal = 1.10
Bat = Bal + 1.00
Bal = ??
Bat + Bal = 1.10
Bat = Bal + 1.00
(Bal + 1.00) + Bal = 1.10
2 Bal = .10
Bal = .05
1.05 + .05 = 1.10
Aside: fuck that script that messes with your copypaste, and the same sentiment to sites that implement it
Always hated word problems in school :P
X + Y = 1.10; X = Y + 1; (Y + 1) + Y = 1.10; 2Y = 0.10; Y = 0.05; X = 1.05
2^48=x; 2^y=.5x; y=47
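The same two one-liners in code; working in integer cents sidesteps float rounding on the bat-and-ball arithmetic (variable names are mine):

```python
# Bat and ball, in cents: X + Y = 110, X = Y + 100  =>  2Y = 10.
total_c, diff_c = 110, 100
ball_c = (total_c - diff_c) // 2
bat_c  = ball_c + diff_c
print(bat_c, ball_c)        # 105 5 -> bat $1.05, ball $0.05

# Lily pads: 2^y = 0.5 * 2^48  =>  y = 47.
full_day = 48
half_day = full_day - 1     # one doubling back halves the lake
print(half_day)             # 47
```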
The first problem is operational and the second problem is change on a slope.
People use what they know to deal with problems, so facing these two they use basic math (subtraction and division), being ignorant of higher-level math concepts such as algebra or calculus. The answers they come up with appear right at their known level of logic.
If you were the type of person that got to learn about high-level math concepts, and are the studious type to double-check answers, then these two problems are condescendingly seen as trivial.
>For one thing, self-awareness was not particularly useful: as the scientists note, “people who were aware of their own biases were not better able to overcome them.” This finding wouldn’t surprise Kahneman, who admits in “Thinking, Fast and Slow” that his decades of groundbreaking research have failed to significantly improve his own mental performance. “My intuitive thinking is just as prone to overconfidence, extreme predictions, and the planning fallacy”—a tendency to underestimate how long it will take to complete a task—“as it was before I made a study of these issues,” he writes.
It has nothing to do with knowledge of higher-level mathematics, because these problems are easily solvable with arithmetic. Lacking calculus doesn't kill you on this problem. An intuitive gut feeling that you've already arrived at the right answer, plus laziness, is the source of confusion.
I've even read about that damn bat and ball problem and it STILL tripped me up this time. I could easily have double checked my answer, but I wanted to read the article. Even a child knowing nothing other than addition could get it right with a little bit of trial and error. I hope after admitting that you see that I don't find the problems condescendingly trivial.
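The trial-and-error point above holds up: nothing beyond addition is needed if you just try every ball price. A brute-force sketch (names are mine):

```python
# Try every ball price in cents until both conditions hold:
# the pair sums to $1.10 and the bat costs exactly $1.00 more.

def solve_bat_and_ball(total=110, diff=100):
    for ball in range(total + 1):   # ball price in cents
        bat = ball + diff
        if bat + ball == total:
            return bat, ball
    return None

print(solve_bat_and_ball())         # (105, 5): bat $1.05, ball $0.05
```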
Personally, I found the second problem much easier... probably because programmers have a better intuitive grasp of powers of 2. Bringing in slope is stretching it a bit. Working in reverse from the completely covered lake, it should be obvious that going back one day halves the lily pads. However, I could imagine how someone more familiar with linear processes would get the wrong intuitive result.
>The answers they come up with appears right to their known level of logic.
A studious habit sure, but checking your answer isn't a higher level math skill.
My natural reaction to the bat and ball problem was to parse the problem statement verbally and search for a plausible answer among the tokens. The algorithm retrieved "USD$1", and then a second background process took over and said "wait, that sounds a little bit too right". It might have taken me a full 5 seconds before I realized I had to switch to math_mode!!!
I _just_ watched that talk a couple of days ago because it was posted here: http://news.ycombinator.com/item?id=4082308
But here is the problem with the article: the people who I consider smarter than me (in the mathematical/IQ sense) also answer these kinds of questions correctly. This includes my friend working at Google, some research mathematicians who I know from math forums who won serious math competitions as children, etc. These questions are really, really trivial. The research mathematician I know doesn't even make mistakes on questions 10x trickier or harder; it is scary how he never slips and how incredibly fast he thinks. Something seems to be wrong with this study.
bat + ball = 1.1
bat = ball + 1
2 bat + ball = ball + 1.1 + 1
2bat = 2.1
bat = 1.05
I made exactly the same fallacy as those in this study; I've just learned to check my work. On the other hand, I suspect the whole process of estimation and refinement was faster than writing out those equations.
bat + ball = 1.1
bat = ball + 1
guess: bat = 1, ball = .1 -> 1 - .1 = .9 (difference too small)
refine: bat = 1.05, ball = .05 -> 1.05 - .05 = 1 ✓
A similar experiment where people draw the wrong conclusions is the Milgram experiment. Yes, most people are obedient to authority figures and do what they are told. But not everyone acts that way.
This research likes to sweep the best human beings under the rug, as if being virtuous is not something to try to emulate, but is something to hide. This explains why the majority of people act the way they do. Perhaps if they were taught that their "we're only human" vices are not the ideal to emulate, perhaps if the best that humanity had to offer were put forth as the ideal instead, then these lesser human beings who make up the majority would become what they might be and ought to be.
It is clear from many examples that rationality gives us the utmost ability to adapt, prosper, and survive over the long term. And there is no example that truly leads in the other direction. (There are many perverse definitions and applications of "rationality" that seem to trick some people into thinking it does lead in the contrary direction).
u so smaht
I don't know what right is, but I know the way we currently think about intelligence is wrong.