
Best-selling introductory psychology books give misleading view of intelligence - DanAndersen
https://digest.bps.org.uk/2018/03/08/best-selling-introductory-psychology-books-give-a-misleading-view-of-intelligence/
======
tokenadult
I have read Gottfredson's paper, and I see it is being used as the analytical
framework for criticizing psychology textbooks. I do research for popular
writing on psychology, so I have a whole shelf of introductory psychology
textbooks in the office where I am typing this, and I have to agree that
introductory psychology textbooks leave a LOT to be desired in representing
the consensus of modern psychology research. That's true of research on human
intelligence and of any other psychology topic: the introductory textbooks
only do a so-so job.

That said, one might wonder where to find good information about current
psychology research. Sometimes there are review articles that update
practitioners on current research, which are incidentally read by scientists
in other disciplines. I'll note for the record that not all psychologists
agree with the review article I will link here either, but it is a good, readable
account of current issues in the psychological research on human intelligence
and well worth a read for Hacker News participants who are curious about these
issues. It refers to many of the most important papers in the field, most of
which I have read over the last three decades.

Nisbett, R. E., Aronson, J., Blair, C., Dickens, W., Flynn, J., Halpern, D.
F., & Turkheimer, E. (2012). Intelligence: New findings and theoretical
developments. American Psychologist, 67, 130-159. doi:10.1037/a0026699

(Disclaimer: I have met many of the researchers on human intelligence,
including Gottfredson, at professional conferences, but my views of what the
overall research says and who has the best leads on open research questions
are my own.)

[http://people.virginia.edu/~ent3c/papers2/Articles%20for%20O...](http://people.virginia.edu/~ent3c/papers2/Articles%20for%20Online%20CV/Nisbett\(2012\)%20.pdf)

------
RcouF1uZ4gsC
Intelligence studies : Left :: Climate science : Right

The Left views the expert consensus on intelligence in much the same way that
the right views the expert consensus on climate change. The consensus view is
rejected not because of evidence, but because of what it does to their model
of the world and fear that it will open the door to great abuses in the future
(increased government regulation for climate change, and increased racism and
discrimination for intelligence studies).

~~~
danharaj
I'm on the far left, and while I am comfortable with whatever the scientific
consensus on intelligence is at the moment and how it develops in the future,
I am very reluctant to allow that understanding into politics, policy,
or most discussions I want to have about anything.

Looking towards the past on how (pseudo)science about intelligence has been
leveraged to commit atrocities, and looking at the reasons _why_ so many on
the right really really really like bringing up IQ and whatnot, I'll pass. At
best the conversation dissimulates the urge to justify hierarchies of
dominance-- the same fucking reason why the right likes to bring up anything
from biology. At worst you get enthusiastic eugenicists; meeting such people
at, say, conferences or other programmer venues (I'm sure programmers skew
both towards higher IQ and views that the strong should crush the weak) is
like stepping on a cow pie.

Not to mention, I know plenty of people who would score high on intelligence
tests, perhaps much higher than I would. Some of them seem to leverage that
intelligence to be far stupider than anyone dumber than them could manage (us
programmers again...).

The point being-- the gap between scientific research on intelligence and what
it means for A) society, and B) my personal life is a chasm I have no reason
to bridge.

~~~
haberman
> The point being-- the gap between scientific research on intelligence and
> what it means for A) society, and B) my personal life is a chasm I have no
> reason to bridge.

The problem is that well-meaning people on the left come to the table full of
assumptions and intuitions about how reality works and what causes inequality,
and base their policy positions on these. People on the right bring up biology
because it can expose weaknesses in these assumptions and beliefs.

> Looking towards the past on how (pseudo)science about intelligence has been
> leveraged to commit atrocities

You can also look at how blank slatism and Lysenkoism were leveraged to commit
atrocities in the USSR. Science and pseudo-science have been used to justify
all kinds of horrible regimes. That isn't a license to ignore reality and form
policy based on unfounded beliefs.

~~~
danharaj
> That isn't a license to ignore reality and form policy based on unfounded
> beliefs.

Did I say anything about unfounded beliefs? I didn't. I even said I accept
research on intelligence. Not basing policy on X is not the same thing as
basing policy on X.

But I'll bite. Tell me a way that intelligence research should inform policy
that isn't immediately morally repugnant.

~~~
haberman
> Did I say anything about unfounded beliefs? I didnt. I even said I accept
> research on intelligence.

I wasn't commenting on your personal beliefs. I was commenting on your
position that certain true facts should not be spoken in a political context.
To me, that is the same as saying that people should be free to make policy
without being challenged if their beliefs are incorrect.

> But I'll bite. Tell me a way that intelligence research should inform policy
> that isn't immediately morally repugnant.

No Child Left Behind in the USA is (I think) widely regarded as a failed piece
of legislation. It is based on the idea that the only reason kids don't
succeed in school is that schools are failing them. It heavily penalized
schools that couldn't get their achievement scores up. Intelligence research
can provide one leg of a counterargument: kids are different in all kinds of
ways, and prescriptively mandating equality of outcome makes for bad policy.

------
mywittyname
It seems like there's a subset of psychological researchers who want so badly
to debunk the years of research into IQ. I get that there are some
uncomfortable implications that arise from accepting IQ as a measure of
overall intelligence, but, from my understanding, IQ is an excellent predictor
of income, social status, and academic & job performance.

My opinion is that we are doing society a huge disservice by dismissing IQ.

~~~
ggggtez
Yes, but then again, I'll posit that having blue eyes is correlated with
income. But I'd like to see someone try to defend that position
scientifically.

In reality, all I've done is create an artificial group which is heavily
skewed towards richer countries. The color of the eyes had nothing to do with
IQ, and yet it's correlated.

So what, if anything, is the IQ test measuring? How do you know you haven't
simply written a test which measures how much time the person has to think
about brain teasers? Researchers should ask these questions to determine if
the correlation is actually caused by anything real, or if it's better
explained by some other underlying factor.
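The confounder argument can be sketched in a few lines of numpy (the country split, incomes, and eye-color rates below are invented purely for illustration): eye color never influences income in the simulation, yet the raw correlation is large.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical confounder: which kind of country someone lives in.
# Rich countries here have both higher incomes and more blue eyes.
rich_country = rng.random(n) < 0.5
income = rng.normal(loc=np.where(rich_country, 50_000, 10_000), scale=5_000)
blue_eyes = rng.random(n) < np.where(rich_country, 0.6, 0.05)

# Eye color never causes income here, yet the raw correlation is substantial.
r = np.corrcoef(blue_eyes, income)[0, 1]
print(round(r, 2))  # large, despite no causal link

# Conditioning on the confounder makes it vanish.
for grp in (rich_country, ~rich_country):
    print(round(np.corrcoef(blue_eyes[grp], income[grp])[0, 1], 2))  # ~0 within each group
```

Comparing within each country group is exactly the kind of check the comment asks for: the within-group correlations are near zero, revealing the cross-group correlation as an artifact of the confounder.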

~~~
Consultant32452
>How do you know you haven't simply written a test which measures how much
time the person has to think about brain teasers?

Come on man, do you really think a century's worth of psychologists never
considered such trivial questions?

~~~
skj
This kind of response is, in my opinion, anti-academic.

A better response would be to try to find some literature on the subject.

~~~
Consultant32452
One aspect of IQ is your ability to solve certain kinds of abstract problems.
OP asked what if the IQ tests just test your ability to solve certain kinds of
abstract problems.

OP's post didn't seem like a genuine academic inquiry but instead a weak
emotional attempt to dismiss something they don't like.

------
onuralp
Two comments:

- Lay people seem to have a very good intuition about the actual heritability
of intelligence (taken to assess the genetic and environmental determinants of
intelligence). [0]

- If you'd like to read an up-to-date and thorough literature review on human
intelligence, I highly recommend "Intelligence: All That Matters" by Stuart
Ritchie.

[0]
[https://news.ycombinator.com/item?id=16468694](https://news.ycombinator.com/item?id=16468694)

------
skytrue
Psychology is a field that is constantly going out of date. General
consensus amongst psychologists often doesn't reach academic texts quickly,
because of the bureaucracy that exists inside the APA (American Psychological
Association). See the controversy around the most recent version of the DSM.

I've observed that academics in the field will often take theory as most
likely fact, if only because the actual published academic material hasn't
caught up yet, but they've all been to the various conferences that are
relevant to their particular field (social psychology, cognitive, etc).

Any study citing materials from over 20 years ago is grossly outdated.
Many of the materials from even 10 years ago are becoming outdated. And now,
more recently, we've started to examine the racial bias that exists in our
systems of research, so I expect past material to come under even further
scrutiny.

~~~
breck
Some people have researched this.

I remember reading "The Half-Life of Facts: Why Everything We Know Has an
Expiration Date" by Arbesman, and IIRC psychology papers had some of the
lowest half-lives of all the sciences. In math the half-life of a paper was
something like 85 years, but in psych it was closer to 10. I forget the exact
statistics and don't have the book here but it was pretty interesting.

Scientometrics and Eurekometrics are two areas of research that delve into
this sort of thing more.

------
quadrangle
The most remarkable thing BY FAR: That some expert(s) are actually looking at
and critiquing introductory textbooks!

This type of issue is present in nearly every field. Education would be
dramatically improved if more actual experts reviewed more of the introductory
materials (ideally all the way down to elementary school level).

------
smallgovt
>> To identify factual inaccuracies, Warne’s team used as a benchmark a
consensus statement on intelligence research published in 1997 by Linda
Gottfredson et al

Is the best way to define 'fact' really a twenty year old study?

Also, even if the results are believed, it doesn't seem to be a big deal.
After all, they're reporting an average of 1 factual inaccuracy per book. When
each book presents hundreds or thousands of truth claims, is 1 factual
inaccuracy per book noteworthy? I'm willing to bet a much larger percentage of
the 'scientific consensus' they define as 'fact' is in fact wrong.

~~~
tokenadult
I knew eminent IQ researchers back then who declined invitations to
sign on to the Gottfredson consensus statement (with good reason, in my
opinion). On the other hand, it is still a reasonably fair statement of a lot
of the consensus among IQ researchers, but as you say now twenty years out of
date. (I read the consensus statement right after it was published.)

------
nokcha
Link to the actual paper:
[http://psycnet.apa.org/fulltext/2018-07714-001.pdf](http://psycnet.apa.org/fulltext/2018-07714-001.pdf)

~~~
DanAndersen
Thanks! I wasn't sure whether to link the summary blog post or the original
paper. Having both is definitely good for the discussions here.

~~~
DanBC
BPS is hit and miss. Sometimes they're great; sometimes they really mangle the
science.

[https://www1.bps.org.uk/networks-and-communities/member-micr...](https://www1.bps.org.uk/networks-and-communities/member-microsite/division-clinical-psychology/understanding-psychosis-and-schizophrenia)

First response from 2014: [https://www.nationalelfservice.net/publication-types/report/...](https://www.nationalelfservice.net/publication-types/report/understanding-psychosis-and-schizophrenia-a-critique-by-laws-langford-and-huda/)

Newer response to updated document from 2017:
[https://www.nationalelfservice.net/mental-health/psychosis/u...](https://www.nationalelfservice.net/mental-health/psychosis/understandingpsychosis/)

It's a bit worrying that they get so much wrong.

------
closed
This may be a bit off the deep end, but in 2014 the Journal of Intelligence
had a special issue on the state of the art in intelligence research (in
psychology). It's open access :).

[http://www.mdpi.com/journal/jintelligence/special_issues/int...](http://www.mdpi.com/journal/jintelligence/special_issues/int_where)

~~~
wuliwong
Just to make sure I understood, that link is a collection of hand-picked
papers? I expected it to be a review article, summarizing a bunch of past
work. Just wanted to be sure that I was looking at the right thing.

~~~
closed
Yeah, that's right--it was a special issue of the journal, where they invited
many people to write review articles on intelligence.

------
mcguire
* Intelligence is a statistical construct defined based on a group of rather arbitrary tests.

That one is true.

" _This contradicts the 1997 consensus statement which tackles this issue and
concludes that “intelligence tests are not culturally biased”._ "

Well, glad we got that out of the way.

~~~
nokcha
>* Intelligence is a statistical construct defined based on a group of rather
arbitrary tests.

>That one is true.

In what sense are they "rather arbitrary"? It sounds to me like you might be
suggesting that there is little correlation between performance on different
IQ tests or different parts of the same IQ test. But the authors of the paper
claim (with cited evidence) that "a common _g_ factor accounting for about
half of variance on cognitive tasks has been found across many human cultures
(e.g., Carroll, 1993; Dolan, 2000; Dolan & Hamaker, 2001; Frisby & Beaujean,
2015; Gurven et al., 2017; Reuning, 1972)".

~~~
mcguire
"Arbitrary" in the sense of not being chosen based on an understanding of what
"intelligence" is.

Please correct me if I'm wrong, but _g_ is defined by noting that there is a
correlation in performance on loosely defined cognitive tasks and assuming
that there is a single important variable explaining that correlation.

~~~
thraway180306
What you call an assumption actually sneaks in at a slightly different place.
When you design multiple tests that can only correlate positively or not at
all, you get a matrix with nonnegative entries. Of course when you factor it
you find a dominant factor; there has to be one (or several, if the matrix has
zeroes, which is unlikely with noisy data). The Frobenius–Perron theorem
guarantees that dominant factor for you. You can make up any set of traits,
test for positive correlations this way, and always get some dominant factor.
Mathematics guarantees there will always be a _g_ factor, whether from a real
influence or as an artifact of factorization, if your tests can't correlate
negatively.
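A toy numpy simulation of this point (the loadings and noise levels are invented; this is an illustration, not a model of any real test battery): build tests that load nonnegatively on many unrelated latent traits, so there is deliberately no single common cause, and a dominant first factor still emerges from the correlation matrix.

```python
import numpy as np

rng = np.random.default_rng(0)
n_people, n_tests, n_traits = 5000, 10, 10

# Many unrelated latent traits -- deliberately NO single common cause.
traits = rng.normal(size=(n_people, n_traits))
# Each test loads nonnegatively on a random mix of traits, so any two
# tests can only correlate positively or not at all.
loadings = rng.uniform(0.0, 1.0, size=(n_traits, n_tests))
scores = traits @ loadings + rng.normal(scale=0.5, size=(n_people, n_tests))

R = np.corrcoef(scores, rowvar=False)  # nonnegative off-diagonals (up to noise)
eigvals = np.sort(np.linalg.eigvalsh(R))[::-1]
share = eigvals[0] / eigvals.sum()
print(round(share, 2))  # a single dominant "factor" explains most of the variance
```

The dominant eigenvalue falls out of the nonnegativity of the matrix, exactly as the comment argues; the factorization alone cannot distinguish "one real common influence" from "many unrelated ones combined with positive loadings".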

~~~
mcguire
How does one choose between a single _g_ factor and multiple _g_ factors?

~~~
thraway180306
Multiple _g_ factors do not appear in the real data; you get one _g_ stronger
than all the others. But that on its own has no more explanatory power than
noise would: Frobenius–Perron guarantees such a dominant factor will appear
whether there is one strong underlying influence on the data or hundreds of
unrelated ones.

Factor analysis as used to establish _g_ doesn't, and can't, have the power to
tell. There are numerous and intense attempts to paper over this mathematical
fact with circumstantial evidence; I'm not up to date with all the
contortions. I don't think they can make linear algebra into a causal theory:
when you want to replace your data with a linear combination of factors, there
will happen to be a strongest factor.

To somewhat answer the direct question of how to check factors: there is
Confirmatory Factor Analysis, which checks how factors decay. You take out the
factors you see as accounting for most of the variance in the data and check
whether much partial correlation is indeed left. This has a lot of power to
negate an influence but next to none to confirm that it isn't spurious.

These Wikipedia pages don't look great, but I don't want to evaluate that
right now or chase better material:
[https://en.wikipedia.org/wiki/Confirmatory_factor_analysis](https://en.wikipedia.org/wiki/Confirmatory_factor_analysis)
[https://en.wikipedia.org/wiki/Partial_correlation](https://en.wikipedia.org/wiki/Partial_correlation)
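A rough numpy sketch of the "check how factors decay" idea (invented loadings, not a real CFA implementation): with one genuine common influence, only the first eigenvalue of the correlation matrix stands out; with two unrelated influences, a second eigenvalue sticks out as well before the flat noise tail.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 5000, 8

def spectrum(latents, loadings):
    """Simulate p test scores from latent traits; return eigenvalues of the
    correlation matrix, largest first."""
    scores = latents @ loadings + rng.normal(scale=0.8, size=(n, p))
    R = np.corrcoef(scores, rowvar=False)
    return np.sort(np.linalg.eigvalsh(R))[::-1]

# Case 1: one genuine common influence behind all tests.
one = spectrum(rng.normal(size=(n, 1)), np.full((1, p), 0.7))

# Case 2: two unrelated influences, each driving half of the tests.
L = np.zeros((2, p))
L[0, :4] = 0.7
L[1, 4:] = 0.7
two = spectrum(rng.normal(size=(n, 2)), L)

print(one[:3])  # first eigenvalue dominates, the rest decay flatly
print(two[:3])  # a second eigenvalue also stands out before the flat tail
```

As the comment says, this kind of decay check can rule a single-factor model out (case 2's second eigenvalue refuses to vanish), but a dominant first eigenvalue by itself confirms nothing about causes.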

------
franciscop
One of the most shocking things for me when I went through high school a
decade ago was how the new things we learned in maths/physics showed that what
we had learned before was not accurate. Things like relativity and time
dilation, integrals/derivatives, etc.

This was confirmed again (though less shocking by then) when I went through
engineering at university, with fluid mechanics et al.

I think when learning a whole new topic, especially ones experienced at a
practical level, we need to start at a point with some broad
oversimplifications. I'm not saying that is what happens here, since I know
little about psychology, but I would expect the same to apply. Then we will be
ready to dig deeper.

~~~
TipVFL
When I was in elementary school my dad constantly pointed out that they were
teaching me nothing but lies. Sometimes he'd ask me what we were learning and
then he'd give me questions to fluster the teacher. Teachers either loved it,
or hated it (most hated it).

For some topics it makes sense to start with a simplified view, but I don't
understand the general trend of insisting it's actually correct while they do
it. It's already hard to convince some kids to learn; when they later find out
that most of it was lies, it has to sap their motivation further.

The simple solution, when you're dealing with a simplified/incorrect version
of something for educational purposes, is to admit that's what you're doing,
and why.

An example of this madness: in my high school physics/chemistry class the
teacher asked how many states of matter there are. I raised my hand and
said 4. She said, "Wrong, there are three." I said, "Solid, liquid, gas, and
plasma. Four." Her response: "Look in your book, what does it say? THREE:
liquid, solid, and gas."

I told her that plasma is the most common form of matter in the universe and a
textbook wouldn't change that.

I got detention. I got detention for insisting plasma exists, in a high school
science class.

~~~
jacobolus
This is a general problem with organization and counting of abstract
categories. All of my science, history, and social science courses in
middle/high school were pretty much garbage, and as far as I can tell that is
the norm in the US (and many other places; not through special fault of the
teachers, but because the curriculum/textbooks are insipid).

“State of matter” is not a perfectly defined property, but only a simplified
model of high-level aggregate behavior/interactions; individual particles have
simpler interactions which are not categorized the same way. Scientists are
happy to admit a new “state of matter” whenever they manage to observe a
situation that doesn’t behave according to their previous models, and are
happy to admit that the states of matter are just a simplified model, but some
petty bureaucrats (like, sadly, many teachers) can’t handle this.

You can similarly get into trouble arguing about how many “kingdoms” there are
of eukaryotes, or how many Romance “languages” there are, or how many
“continents” there are, or the difference between crops and weeds or a fetus
and a person or plagiarism and research.

Quizzing students on how well they can memorize and recall labels (and even
worse, punishing them for knowing additional labels) is not just an incredible
waste of human time and attention, but is actively harmful insofar as it
disempowers students to ask and investigate their own questions and
discourages them from seeking a deeper understanding of what they are
learning.

I really appreciate the beginning of the MIT electronics textbook
[https://amzn.com/1558607358](https://amzn.com/1558607358) which explicitly
describes the abstractions used and their limitations, up front. Rare
economics textbooks also do a decent job of this.

~~~
franciscop
Oh yeah, I got that in University. If I remember correctly from
thermodynamics, at pressure and temperature above the critical point [1] the
change from liquid to gas becomes a continuum rather than a discrete jump.

[1]
[https://en.m.wikipedia.org/wiki/Critical_point_%28thermodynamics%29](https://en.m.wikipedia.org/wiki/Critical_point_%28thermodynamics%29)

------
drngdds
It's probably worth noting that the "consensus statement" cited here was only
signed by 52 of the 131 researchers it was sent out to.

[https://en.m.wikipedia.org/wiki/Mainstream_Science_on_Intell...](https://en.m.wikipedia.org/wiki/Mainstream_Science_on_Intelligence)

~~~
notahacker
It's also worth noting that, judging from the examples provided, the
methodological approach of Warne et al. is to suggest textbook authors are
guilty of a specified fallacy if they make any statement which appears to
conflict with Gottfredson's listed points in any way (even if that statement
is itself cited, quite possibly true, and not actually presented as a
refutation, fallacious or otherwise, of any claims made about intelligence).

------
epx
Hard right: everybody should take an IQ test
Libertarian right: I take drugs to increase my IQ
Libertarian left: there is no IQ
Hard left: everybody has the same IQ

------
ggm
When laypeople misuse science and maths, (I am in this context a layperson
btw) the consequences depend on the applicability of the subject to ordinary
discourse and events.

Because I don't work in bomb science, power engineering or nuclear medicine,
if I misuse facts about nuclear fission when making decisions to hire a new
staffer, the consequence is low. Nothing about nuclear fission informs what
happens in the workplace. So the cost of the colloquial lack of understanding
on my part as to what science actually says about nuclear physics is low.

When I mis-use understanding of I.Q. and g to make a hiring decision, the
consequences are huge for the person hired and the organisation. Assumptions
about normative behaviour, likely success, future hiring, application of
applied knowledge, ability to learn, all these things have direct meaning and
importance. The impact on the hire, or non-hire is also immense. Future
promotion, reward, positional authority can all be affected by somebody in the
psych space, working in H/R, mis-applying domain-specific knowledge around IQ
and intelligence.

It's not nuclear physics, but it's being misapplied. That's the problem. (That
it's being misapplied, not that it isn't nuclear physics. Although a quality
of its misapplication _is_ about it not being _hard_ science: it's guesswork,
reified through g and other statements of single-value measure against an
imprecise measurement. I don't actually think social science is well named.)

------
abvdasker
This doesn't make any sense. He's taking a study over 20 years old, designed
to measure academic consensus, and using it to try to disprove 29 textbooks
authored by leading psychologists. Couldn't the latter be said to be a (much
more recent) form of academic consensus? Why should a measure of academic
consensus over 20 years old be used as some kind of evidence against newer
ideas?

------
deviationblue
Kinda related to this, probably a good idea to re-evaluate EQ while we're at
it. Although, my gut feeling is that a healthy mix of IQ and EQ makes good
worker bees (if that's the metric we're interested in).

[https://en.wikipedia.org/wiki/Job_performance#Emotional_inte...](https://en.wikipedia.org/wiki/Job_performance#Emotional_intelligence)

------
posterboy
Those books are written by psychologists; maybe they have a good reason to be
... imprecise, underspecific? Be honest (nobody "knows"); instigate thought
(don't serve the truth on a silver platter). That seems to be true for every
advanced topic. Reverse psychology is still psychology.

------
sosuke
I'm so confused now. Who is right about this?

~~~
poster123
For decades, the media have been misreporting what is known about
intelligence, because it is politically incorrect. A good book documenting
this is "The I. Q. Controversy: The Media and Public Policy" by Stanley
Rothman and Mark Snyderman (1988).

~~~
saas_co_de
That "political correctness" as you call it is masking a much deeper problem.

If you believe that there are innate differences in intelligence then what do
you do about it?

If we are not going to discriminate based on innate differentiators like
gender, race, etc, then how can we discriminate based on intelligence, if it
is innate?

The idea of social justice for people with innately low intelligence is very
disruptive, so it's better to pretend they don't exist.

~~~
poster123
It is essential to discriminate in hiring on the basis of qualities that are
relevant for the job. If a person's race or sex does not affect his or her
ability to perform as a programmer, but his or her intelligence does, then of
course you use perceived intelligence as a criterion. Athletic ability is
heritable. How can sports teams not discriminate on the basis of athletic
ability?

~~~
saas_co_de
> How can sports teams not discriminate on the basis of athletic ability?

What applies to a few thousand people playing games does not necessarily apply
to society at large.

> It is essential to discriminate in hiring on the basis of qualities that are
> relevant for the job

Yes, the job has to fit the person, but what about the salary? If a person can
only qualify for a certain position based on their innate intelligence then
they should not earn less because of that.

Jobs might be distributed based on innate intelligence but distributing income
based on innate intelligence would seem to be counter to the
anti-discrimination norms that our society lives by.

------
lallysingh
Sweet! Time to print new editions of all those textbooks! Time to buy a boat.

------
forapurpose
On one hand is, "A researcher in human intelligence at Utah Valley
University". On the other are, "the 29 best-selling introductory psychology
textbooks in the US - some written by among the most eminent psychologists
alive".

Why would I believe the former? I'm not saying that Utah Valley U. researchers
shouldn't raise questions, but that the standard for what I'm going to take at
face value is much higher than one study that contradicts everyone else. One
study is just the start of the start of a discussion. I think I'll let the
body of evidence accumulate and let experts weigh in.

One psychological concept I'm familiar with is confirmation bias. This article
confirms beliefs held by some at HN about political bias, authority, and
accuracy in science, (and unfortunately, an effort to rationalize and justify
racism by finding some basis for it). That doesn't make this report any more
likely to be accurate. Apparently, 29 leading textbooks contradict it; if not
for the bias, why not take 29 publications at face value instead of 1?

EDIT: a bunch of edits; sorry for the mess.

EDIT: I'll add that this one study was published "in an open-access article in
Archives of Scientific Psychology". It's not even peer-reviewed [EDIT2: It is,
in fact, peer reviewed; sorry. I still expect the textbooks to be reviewed
much more carefully than one paper.]

~~~
ahultgren
Well, there is nothing to believe or not believe in this study; it simply
points out that most of the books disagree with what's widely accepted in the
field[0]. Which do you believe, the widely accepted theory or 29 books?

[0]:
[https://en.wikipedia.org/wiki/Cattell%E2%80%93Horn%E2%80%93C...](https://en.wikipedia.org/wiki/Cattell%E2%80%93Horn%E2%80%93Carroll_theory)

~~~
forapurpose
> it simply points out that 29 books disagree with what's widely accepted in
> the field

The study claims that. The textbooks claim otherwise. Why do you believe the
study's claims? (I'd believe Wikipedia less than either.)

~~~
rvern
> I'd believe Wikipedia less than either.

That's where you make a mistake. Whatever the reason, the Wikipedia articles
on the _g_ factor, the intelligence quotient and Cattell–Horn–Carroll theory
represent better what is widely accepted in the field than any other source.
This somehow also turns out to be true for other academic fields.

Anyway, when source A says explicitly that source B is wrong but source B
makes no such claim about source A, you should usually believe source A. But
in this case you can know that the study is right just by learning for
yourself what happens to be widely accepted in the field and why it is
widely accepted.

~~~
forapurpose
> represent better what is widely accepted in the field

How do you know this? Are you a practitioner in the field?

> when source A says explicitly that source B is wrong but source B makes no
> such claim about source A, you should usually believe source A

Yikes. That is scary: "You are wrong." You'd better respond or else that
proves it.

"Newton is wrong." "Darwin is wrong about evolution." "Climate change
scientists are all wrong." "Popper is wrong about postpositivism." "The legal
system is wrong about the right to cross-examination."

