
Social Psychology Is A Flamethrower - gwern
http://slatestarcodex.com/2013/06/22/social-psychology-is-a-flamethrower/
======
TeMPOraL
So a friend of mine just shared on Facebook a survey from a social sciences
student. The survey is concerned with the stress levels of kids between the
ages of 13 and 16, and is targeted at them. The data is to be used as the
basis for a bachelor's thesis.

The first thing I learned is that my friend actually took part in the survey
by "putting herself in the mindframe of a 13-year-old and filling it in as the
kid would". After a short discussion about how doing so turns the whole thesis
into a pile of nonsense, she (herself a graduate of a liberal arts college)
told me that the survey's author was actually one of the better students, in
that she actually ran a survey at all - most people fill in the surveys
themselves or just make the data up.

So things like these give me trust issues with all the "soft sciences". I
consider most of sociology nonsense, because I seriously doubt that people
who make up data for their thesis will suddenly turn into honest researchers
after graduation. Yes, the "best research in social psychology" might be "as
well-supported as anything in physics or biology", but I doubt that's more
than a small fraction of the research done, and mostly research from first
principles (where there isn't much chance that the research you're basing
your paper on is crap).

~~~
arrrg
Answering a survey by pretending to be someone else is clearly unacceptable,
and fraud on the student's part if they encouraged others to do it. I’m pretty
sure that would lead to the student failing the class, or worse, if it were
discovered where I study. The same is (even more) true for faking data; that’s
a disgusting violation of all scientific standards. For any somewhat larger
project that might yield actually useful data, our teachers are usually
involved closely enough that they would know if something like that were
happening on a massive scale.

It is also standard practice to disguise these kinds of filter questions in
surveys and to leave possible respondents in the dark about the ultimate
purpose of the survey (insofar as that is possible and it isn’t ethically
necessary to be transparent about certain aspects). That way respondents
outside the filter criteria hopefully couldn’t participate even if they wanted
to fake their way in. The way I do it, no one besides my advisor and a limited
pool of pre-testers (who I make sure to tell not to participate in the main
survey – and I also try to keep the link to the survey away from them) knows
what the survey is about and what the filter criteria are.

That’s students. Most of them won’t go on to do any actually worthwhile
research that will ever be cited by anyone. I would hope that the standards
are even higher for research that’s published in journals.

Social research is tough even if everyone behaves ethically. No doubt about
that. But I have never encountered any culture of fraudulent behaviour outside
of lazy students who will never publish anything anyone cares about.

~~~
return0
I tend to agree that faking data is not so much of an issue. After all,
filling in a survey does not require much effort or equipment. Cherry-picking
the data, and bias, are the main issues with some studies.

------
cinquemb
_Most people are not consequentialists, but most people feel implicitly
uncomfortable making moral arguments on non-consequentialist grounds. “Stop
what you’re doing, it disgusts and offends me” is less noble than “stop what
you’re doing, it will hurt people who can’t stand up for themselves”._

I hear "arguments" like this all the time against data-mining public
information, crowd-sourcing it, and making it open on our site (and through an
API) via our feedback channels, and they make me chuckle, seeing how the
arguers have to engage and co-opt a multitude of psychological/behavioral
theories not drafted by themselves to say such things. Even funnier is when
the things they are protesting have been done in "private" for a very long
time. Kind of reminds me of how the whole Snowden thing plays out on HN and in
the general culture at large, despite the mumblings about such things since
the very beginning… quite amusing, to say the least…

~~~
forgottenpass
_quite amusing to say the least…_

Ah, yes, quite droll that people who don't make money from the thing I make
money from have different moral opinions on it. Very amusing indeed. Because,
of course, they're obviously wrong.

And they use other people's words when arguing their point? I'd point out that
this is just how ideas pass through society, but I'll settle for laughing at
the fact that you seem unaware you're doing exactly that in your post.

~~~
cinquemb
If only we could all conflate stating observed interactions with arguing a
point, the way you just did in your post, we could all laugh at ourselves
more. :D

I guess those who think they are morally right can give themselves a big
round of applause while they continue to engage in behaviors that act against
those moral beliefs, whether or not they are willing to admit it to
themselves, and I'll just keep exploring the infinite. :D

------
gravity13
| The best research in social psychology is as well-supported as anything in
physics or biology

Nope.

~~~
mikevm
I am not sure why you are being downvoted, as the author himself first says:

 _The best research in social psychology is as well-supported as anything in
physics or biology, and much more intuitively comprehensible._

And then he follows it up with:

 _Social psychology experiments in the laboratory tend to throw up spectacular
mind-boggling effects. Many of these fail to replicate and are later
discredited. The ones that do replicate are not always generalizable –
sometimes an even slightly different situation will remove the effect or
create exactly the opposite effect. The effects that remain robust in the
laboratory may be too short-lasting or too specific to have any importance in
real life. And the ones that do matter in real life may respond unpredictably
or even paradoxically to attempts to control them._

~~~
taber
He was downvoted due to lack of evidence and his casual dismissal of a point
in the article. This isn't reddit; as easy as it is to see an unsubstantiated
point in the OP and reply with an equally baseless retort, it doesn't help any
of us learn more. Your comment, on the other hand, does point out apparent
contradictions in the OP, and thus is useful.

------
gopalv
The issue with psychology is that double blind experiments are prone to
observer effect.

The behaviour often has nothing to do with reality - when measured,
everything tends towards the ideal.

And the experiments can often produce the results you want if the double
blinds are removed.

I always fall back on the survey sketch from "Yes Minister" to explain how
that works:

[https://www.youtube.com/watch?v=G0ZZJXw4MTA#t=30s](https://www.youtube.com/watch?v=G0ZZJXw4MTA#t=30s)

~~~
gwern
> The issue with psychology is that double blind experiments are prone to
> observer effect.

You mean, 'non double blind' experiments?

------
pkinsky
>Social psychology experiments in the laboratory tend to throw up spectacular
mind-boggling effects. Many of these fail to replicate and are later
discredited. The ones that do replicate are not always generalizable –
sometimes an even slightly different situation will remove the effect or
create exactly the opposite effect.

Generator-based tests (as in Haskell's QuickCheck) could help here. Standard
unit tests run assertions with hardcoded input values such as [0, 1, -1].
Generator-based tests instead try to show that properties hold over randomly
generated inputs, by replacing 1 with a generator of positive integers, -1
with a generator of negative integers, etc.

Imagine if social psychologists ran generator-based studies as a hedge against
unknown unknowns. The biggest challenge would be generating and executing
valid permutations of the original experiment. Perhaps ad network AB-testing
could be repurposed to run experiments as ads with randomly generated values.
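The hardcoded-vs-generated contrast can be sketched in plain Python (a
hypothetical toy, not QuickCheck itself; `mean` and its bug are invented for
illustration):

```python
import random

random.seed(0)

def mean(xs):
    return sum(xs) // len(xs)  # deliberate bug: integer division truncates

# Hardcoded unit tests in the [0, 1, -1] style: they happen to pass.
assert mean([2, 4]) == 3
assert mean([0]) == 0

# Generator-based test: assert a property over randomly generated inputs --
# here, that the computed mean equals the exact arithmetic mean.
def gen_ints():
    return [random.randint(-100, 100) for _ in range(random.randint(1, 10))]

failures = [xs for xs in (gen_ints() for _ in range(200))
            if mean(xs) != sum(xs) / len(xs)]

print(len(failures), "of 200 generated cases expose the truncation bug")
```

The hardcoded cases pass only because their sums divide evenly; the generated
inputs find counterexamples automatically. QuickCheck adds shrinking on top of
this, reducing each failing input to a minimal one.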

~~~
adrianN
The biggest challenge would be to get funding to run many slight variations of
the same experiment without clear indications that this will produce many
publishable results. Getting data from humans costs many orders of magnitude
more than getting data from your code.

------
autokad
I went to a guest lecture by Michael Macy on network autocorrelation. It's not
fair for me to tell you what he was saying (so look him up if you want), but I
will summarize.

There is a lot of research done with surveys, and most of these surveys carry
an inherent assumption: that each respondent lives on an island, i.e. that
responses are not autocorrelated.

So he created a fake network of people with random, fake political views, etc.,
and added network autocorrelation. The random views clustered with
demographics, and the clusters were highly statistically significant - but the
views were random.
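A minimal sketch of that kind of simulation (hypothetical numbers and network
structure, not Macy's actual model): start with independent coin-flip "views",
add influence along homophilous ties, and the views stop being independent -
they cluster along the network, so demographics can end up "predicting" views
that were assigned at random:

```python
import random

random.seed(1)

N = 200
group = [i % 2 for i in range(N)]                 # two demographic groups
view = [random.choice([0, 1]) for _ in range(N)]  # views start random

def random_neighbor(i):
    # Homophily assumption: 90% of a person's ties are within their group.
    same = random.random() < 0.9
    pool = [j for j in range(N) if j != i and (group[j] == group[i]) == same]
    return random.choice(pool)

# Network autocorrelation: people repeatedly copy a neighbor's view.
for _ in range(50 * N):
    i = random.randrange(N)
    view[i] = view[random_neighbor(i)]

# Within-group agreement starts near 0.5 (coin flips) and rises.
for g in (0, 1):
    members = [view[i] for i in range(N) if group[i] == g]
    share = sum(members) / len(members)
    print(f"group {g}: {max(share, 1 - share):.2f} within-group agreement")
```

Any statistical test that treats these 200 responses as independent samples
will overstate significance, which is the assumption the lecture was
criticizing.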

In a similar experiment, a music-sharing website built to collect data on its
users separated them into 'worlds'. Users could see which songs had been
downloaded by other users within their world, and each world developed its own
popular songs despite being made up of random people.

The problem with networks of people is that even a few missing nodes can make
analyzing the network almost useless.

------
iskander
>If I were a demon from Hell, charged by my infernal masters with increasing
rape as much as possible, I literally could not think of a better strategy
than talking about rape culture all the time.

I don't understand the proposed alternative. There is a widespread disconnect
between the ethical view of consent ("enthusiastic yes") and the equally
common disregard of consent ("well, she only said no the first time but then
got real quiet...seemed OK"). Families and friend circles often fail to
support rape victims or confront rapists. We're facing a structural cultural
problem, and the solution is to...not talk about it?

~~~
DanBC
One alternative (which appears to work with problem drinking) is to tell
people what other people like them are actually doing. All the newspapers and
TV programmes talking about the perils of binge drinking - how everyone is
drinking too much and how harmful it is to them and the nation - just make
people think that excessive drinking is what everyone does. But if you tell
university students that people their age tend to drink about 8 units a week
(or whatever the actual average is), they realise that they drink a lot more
than that and start to cut down.

Telling men that "most men think drunk consent is not consent" would be in
line with this, and is I think what the article is talking about.

~~~
mrgriscom
I remember those binge drinking proclamations in college and reading a few
years later that they basically didn't work at all. People construct their
social norms based on the behavior of those they observe around them, and
particularly the people they care most about: their friends and their friends'
friends, not some vague model citizen they were told about in some quasi-
propagandist fashion. And if I recall the article correctly, the average they
quoted wasn't even real, so in effect it was propaganda.

------
Paradigma11
As someone who has written his thesis in social psychology, I have to say that
social psychology is fundamentally broken, kaput. If a plumber fixed the pipes
in your apartment the same way, your toilet and kitchen would explode. But
because we call social psychology a science, we somehow accept it and hope for
progress.

Issues I have:

1.) What would the practical consequences be if social psychology and its
knowledge were erased? Not many that I can think of. Considering that the area
has immense potential applications, this is pretty telling.

2.) What have been the most important research breakthroughs of the last 20-30
years? There are new fads for sure (behavioral economics, neuroeconomics and
evolutionary psych, I look at you), but they all coexist peacefully. No
falsification is happening; new professors just need new theories for
publication.

3.) There is NO theoretical rigor. With tons of theories, you would expect a
lot of theoretical work clarifying and contrasting these theories to enable
empirical tests of their validity. Not here. The more theories the merrier.
Why? Every researcher needs his own theory, or an uncrowded field, so he
doesn't rock the boat. That has led to the situation where one experiment is
run by two grad students, and a significant result in one direction is
interpreted by one theory and in the other direction by another theory.
Basically, you give a sample of students a questionnaire; if it goes one way,
it proves that XY gave an evolutionary advantage to prehistoric people, and if
it goes the other way, it proves that the utility curves cringle on the right
side. Either way, the researcher has something to publish. I do think that
the two-theories-per-experiment limitation is rather arbitrary and
inefficient, though. There is definite room for improvement here.

4.) Much recent work tries to address this by going theory-free. All these
different effects are the result. This is also practical, because there is
always room for more effects to research, and if you happen to need one to
explain a particular fact, there is always a ton to choose from.

5.) Cargo-cult use of statistics. You could argue that most social science
researchers are not able to understand the whys and hows of statistics and
decision theory, so giving them recipes to follow is better than the
alternative. Might be. It just does not work. It really does not. Also, the
imperative of the researcher is not to generate knowledge but publishable
significant results.

Basically, I do think that we have to start over and scrap the work done so
far. And no, I do not have a better alternative at hand. That is not necessary
to see that the current process is broken, produces nothing useful, and ties
up intellectual potential, though.

~~~
derefr
> No falsification is happening, just new profs need new theories for
> publication.

Much like macroeconomics, the problem with falsifying our hypotheses in the
field is that you can't just build a culture in the laboratory. Even worse,
there's no such thing as a control group: everything is observing everything
and trying to self-modify to copy the good ideas of others, all the time.

------
DanBC
> and studies on child porn show pedophilia is less common where it’s more
> accessible.

There's no link to the research.

Does anyone have any links? I'd be interested to know if this is just more
under-reporting of crime in a country that doesn't do much about that crime.

~~~
gwern
I don't know all the studies Yvain might have in mind, but at least one of
them is "Pornography and sex crimes in the Czech Republic"
[http://www.researchgate.net/publication/49644341_Pornography...](http://www.researchgate.net/publication/49644341_Pornography_and_sex_crimes_in_the_Czech_Republic/file/79e415111ced447bf5.pdf)
\- one of a few papers exploiting various lapses in nations' child porn laws
and observing not an increase in child sex crimes but rather decreases. You
can probably find more studies by looking at related papers
([http://scholar.google.com/scholar?cites=14325274059754433423...](http://scholar.google.com/scholar?cites=14325274059754433423&as_sdt=20000005&sciodt=0,21&hl=en))
and reverse citations
([http://scholar.google.com/scholar?q=related:j-vH2duYzcYJ:sch...](http://scholar.google.com/scholar?q=related:j-vH2duYzcYJ:scholar.google.com/&hl=en&as_sdt=0,21))
of that paper.

(This is broadly similar to other correlations you may've heard about with
more regular porn: porn seems to substitute for sex crimes, and not increase
crimes like rape.)

------
vacri
_They found violent movies decreased crime 5% or more on their opening
weekends, and that each violent movie that comes out probably prevents about
1000 assaults. Further, there’s no displacement effect – the missing crimes
don’t pop back the following week, they simply never occur._

This is a very naive treatment of the idea that violent movies increase
violence in the population - the author acts a little surprised that the
results show crime going down slightly on opening weekend and not changing
from normal in the weeks before and after. The study makes no long-term
analysis of the baseline - whether the _culture_ of having violent movies so
prominent affects levels of violence in the population. It just shows that
violent people like to go to violent movies, not that the baseline level of
violence is increased or decreased by those movies.

Reading over it again, the author is just guilty of bad journalism. The
article he's quoting for this section is "Does opening weekend of violent
films have an effect on violence rates?", which is a very specific question,
but it's painted by the author as an incredibly broad "Violence In The Media
Prevents Violent Crime". The irony is that it's the very first example given
after a screed about how sloppy the field is.

~~~
gipp
That's _the whole point of the article_. He's showing that research can easily
be twisted to support one conclusion or the other, and that the standards for
what conclusions are acceptable to draw from social sciences research are far
too low. It is _not_ that the stuff listed there is in any way the "absolute
truth".

~~~
vacri
Except that he does imply, in the _conclusion_ of the article, that these are
reasonable conclusions to draw. It's not just "twist to meet a predetermined
conclusion"; the author is actually saying that these are reasonable and
plausible arguments. I'm saying that the first example isn't plausible,
because the quoted study is not representative of the argument being made.
It's more a comment on spin-doctoring than on the quality of social psych
research.

