
Lee Jussim Is Right to Be Skeptical about ‘Stereotype Threat’ - tic_tac
https://quillette.com/2020/02/22/lee-jussim-is-right-to-be-skeptical-about-stereotype-threat/
======
virtuous_signal
Results like this make me wonder what news from sociology or psychology can
really be trusted. It seems that every finding I've heard about that has
entered the mainstream (e.g., the Stanford prison experiment, the importance
of "grit" and "growth mindsets") ends up either debunked or shown under
replication to have vanishingly small effects.

During my time in my PhD program in science, I (voluntarily) went to two talks
for my department and for the wider university about stereotype threat. It was
a fascinating, compelling subject. But alas.

Contrast that with results in math or the laboratory sciences. I'll read about
something like the Sensitivity Conjecture in Quanta or Scientific American,
and it might not be as inherently interesting, but at least I can somewhat
trust that the result is "true". Yet that kind of result never reaches the
level of mainstream awareness where literally everyone in-the-know has heard
of it.

------
perl4ever
> black students "consistently scored lower than black students who were not
asked to identify their race before the test."

Why attribute this to students imagining they're inferior instead of them
inferring from the question that the test administrators are biased or racist?
Or just have thoughts about racism interfering with concentration?

So much research I read about seems to assume that the subjects are thinking
precisely what the researchers expect.

~~~
lonelappde
> Or just have thoughts about racism interfering with concentration?

That's what "stereotype threat" is.

~~~
olliej
No, stereotype threat is performing less well because you believe you’re
expected to.

@perl4ever is describing something more direct: stress caused by the exam
setup itself, which makes it explicit that race is being considered when
grading.

Those are wildly different.

My presumption is that stereotype threat isn't "real": these studies find
statistical evidence via "priming," which was super hot for a decade or two
before it was discovered to be complete nonsense (it was a significant
contributor to the start of the "replication crisis" in psych+soc).

E.g., if your study finds statistical significance on the assumption that
priming works, but we know priming doesn't, then your study, your statistics,
or both, are bad.

~~~
perl4ever
It seems plausible to me that _some_ people might be discouraged by thinking
of stereotypes, and _others_ might get angry and do better, and once you
average it out, you don't find an effect.

People often talk about how you can find an effect that doesn't exist by
slicing and dicing your data, but you can also cover up one that does exist by
_not_ dividing it up.
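A quick sketch of that point, with made-up numbers (not drawn from any actual study): if one subgroup loses points under the manipulation and another gains a similar amount, the pooled comparison shows almost no effect while each subgroup shows a large one.

```python
import random

random.seed(0)

# Hypothetical numbers for illustration only: half the treated subjects are
# "discouraged" (score ~5 points lower), half are "motivated" (~5 higher).
baseline = 70
discouraged = [baseline - 5 + random.gauss(0, 2) for _ in range(500)]
motivated = [baseline + 5 + random.gauss(0, 2) for _ in range(500)]
control = [baseline + random.gauss(0, 2) for _ in range(1000)]

treated = discouraged + motivated
mean = lambda xs: sum(xs) / len(xs)

# Pooled comparison: the average treatment effect is near zero...
print("pooled effect:", round(mean(treated) - mean(control), 2))
# ...but splitting by subgroup reveals two large, opposite effects.
print("discouraged effect:", round(mean(discouraged) - mean(control), 2))
print("motivated effect:", round(mean(motivated) - mean(control), 2))
```

The pooled difference hovers around zero while each subgroup difference is close to ±5, which is exactly the "covered up by not dividing it up" scenario.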

------
aaron695
I'm surprised they gloss over priming as if it were real, since it is also
having big p-hacking issues.

https://www.nytimes.com/2013/02/24/opinion/sunday/psychology-research-control.html

~~~
olliej
Right? Priming has mostly been debunked at this point, hasn't it? (That was my
understanding as of last year.) So if your statistical difference is driven by
an experiment based on something that doesn't actually do anything... what are
you actually measuring?

------
raarts
Isn't this a bit off topic for HN?

