
How not to be a crank: rules for not being a science-dick - fanf2
https://medium.com/@jamesheathers/how-not-to-be-a-crank-819103800502
======
exabrial
All of these are extremely important! Two life skills:

* If you are without doubt, you are wrong

If you can't fathom the other side and where they're coming from, you are, in
fact, biased beyond listening to facts, and are likely wrong.

* Words matter

As my grandma said, "If you don't have anything nice to say, don't say
anything at all."

~~~
ars
> If you can't fathom the other side and where they're coming from, you are,
> in fact, biased beyond listening to facts, and are likely wrong.

This one especially applies to politics. If you are one of those looking for
"sanity", or "common sense", and can't imagine how the other side can possibly
hold some position, then this applies to you.

~~~
Karunamon
I’m not sure that’s accurate. People hold all manner of irrational political
beliefs and there is value in recognizing that irrationality.

~~~
zaphar
Do you know the reasons for their beliefs? Can you articulate them? If not
then you are conflating your own beliefs with the standard for rationality.
You will also be really ineffective at convincing them they are wrong. Not
because they aren't smart enough but because you are unable to communicate
effectively to them.

~~~
TheOtherHobbes
What makes you think people understand the reasons for their own beliefs? How
do you know the reasons they give you when asked are the correct ones? Why do
you believe that challenging those statements respectfully will change their
minds?

The actual requirement for persuasion is a believable narrative with a bit of
moral weight, and a random selection of standard persuasive tricks that
politicians and lawyers have been deploying with outrageous success for
millennia.

Generally, being objectively correct is one of the least effective ways to
change anyone's mind. You might have some hope in a scientific setting. But in
popular debate, facts and rationality are almost completely powerless.

~~~
zaphar
I don't know that challenging them respectfully will change their minds. After
all, what if they are right?

I do know that not challenging them respectfully is unlikely to persuade
anyone, least of all yourself.

------
JasonFruit
A certain amount of this boils down to, "Don't be mentally ill or have an
otherwise disordered mind." I've known people who were the sort of crank he's
talking about, and they were generally — though not always — unhealthily
isolated people who had difficulties managing their personal lives. Most had a
tenuous grasp on reality, especially the motivations of other people.

There are cranks who could respond to these rules, but I believe (though I am
not a mental-health professional) that many would be unable to.

~~~
zaphar
I think we can let our emotions get away from us whether we suffer from mental
illness or not. Many people exhibit the behavior in the article despite being
of perfectly sound mind.

~~~
JasonFruit
Certainly true as well.

------
soVeryTired
Nice article. For the last few years, I've had concerns that parts of
economics have serious problems. But it's good to get a reminder of how to
criticise well: points one and two (calm down, and don't be a dick) are
particularly relevant to me.

For me, the stakes feel high enough that it's easy to get worked up and spit
vitriol. But I've found that I'll lose the interest of anyone I'm talking to
when I do.

~~~
mike_hearn
You definitely don't want to come across as worked up or vitriolic. I would
note that the guy who wrote this article is sometimes a tiny bit dickish
himself, though. Check this article out:

[https://medium.com/@jamesheathers/long-hair-dont-care-5eeba266ec52](https://medium.com/@jamesheathers/long-hair-dont-care-5eeba266ec52)

Sample comments: _If that title leads you to expect off-brand evo psych
exploring massive behavioural changes induced through the complicated medium
of hairdressing… well, you’d be right._

Mmmm.

It's also worth bearing in mind your audience.

This article was written by two guys who by and large are going after junk
science in psychology, nutrition, and so on, where the _worst_ motivation can
only be to advance the careers of the scientists - and quite possibly the
errors are just the result of over-optimism or lack of statistical abilities.

For that sort of work it makes sense to try and fix psychology from within, by
pointing out errors. There isn't much point in trying to spread the word
through the wider world that psychology is full of errors: what's the best
outcome, exactly? A few people stop being invited to give TED talks? Big deal.
Psychology isn't going to stop being studied, nor should it be!

But in the case of economics, there are two problems:

1) Economics is absolutely riddled with not just bad individual papers but
entire sub-fields that are nonsense; it's an epidemic. It probably cannot be
fixed by shaming individuals who make simple errors via their own journals and
hoping for the field to improve. Even when major economists like Paul Krugman
or Paul Romer have gone on very serious attacks against their own field, there
has been no visible impact and in Romer's case he simply lost his job.

2) Economics is routinely used to justify massive government interventions (or
lack of interventions) in markets and society. Bad economics research can have
a very serious impact on people's lives.

In this case, it's probably better to work at a different level and try to
spread the word amongst the general public that economics is not trustworthy.
Thus the audience is not journals or the authors of the papers themselves.
It's the public.

This is not hard because most people who follow the news have already
encountered major cases where consensus economic predictions were simply
wrong. Polling shows that trust in economists is quite low as is. If and when
the field reforms it can try to rebuild trust.

------
cwyers
I don't know how much I agree with the premise of this article. People aren't
resistant to retracting faulty research because the people who point it out
sound like cranks; they resist because their incentives are to maintain the
work they've done. If you look at someone like Brian Wansink[1],
what started off as unearthing methodological errors seems to have found
actual fraud, although I'm sure Wansink wouldn't view it that way. And this
massive peer review process not only didn't discover it, it's been an
impediment to correcting it. At stake are millions of dollars in federal
funding and more than a few careers. Do you think anyone involved is primarily
concerned with tone? And I'm not sure how effective anyone can be at combating
it by working within a crooked and broken system, where extracting rents from
academic institutions for publishing their own work is the main incentive for
publishers, and academics are incentivized to go along because their careers
depend on numbers of publications and citations. You may have a high success
rate at combating individual errors, but you aren't making any headway on the
bigger issues like publication bias and the overall trend away from long,
important work and towards shorter studies designed to provide an eye-catching
conclusion so that you can get published and cited. It's like bailing out a
boat with a drinking cup. You're still sinking.

1) [https://slate.com/technology/2018/02/how-brian-wansink-forgot-the-difference-between-science-and-marketing.html](https://slate.com/technology/2018/02/how-brian-wansink-forgot-the-difference-between-science-and-marketing.html)

------
wonderbear
Currently I'm trying to think of a way to get my father to read this without
overtly telling him he needs to read this. Because that would be being a
crank.

~~~
meri_dian
Open it on his computer and walk away

~~~
ak39
This is bad advice. I hope you are only being funny.

------
Finnucane
If we extended these rules to the internet, Facebook and Twitter would go out
of business.

------
cgiles
I would use Andrew Gelman and John Ioannidis as prototypes for the Good Critic
and Bad Critic, respectively.

Gelman does real, solid work and confines his righteous takedowns to a side
hobby. And his criticisms are directed at individual, specific cases, with
evidence, and he reserves most of his wrath for repeat offenders rather than
one-off mistakes, which are unavoidable given enough projects. He confines
criticisms to his area of expertise, which is statistics.

Ioannidis' claim to fame is writing a hand-waving, philosophical argument in
order to cast doubt on all research at once. He followed it up shortly with a
paper that implied (but didn't directly state!) that regression to the mean on
replication implies error or fabrication in the original. And he has ridden
this sort of tired argument into a lucrative and prestigious career and a
Stanford appointment. Never mind the collateral damage to funding and morale.
My theory is that people pay attention to Ioannidis and wring their hands
about these highly generic issues for the same reason most people pretend to
be highly, highly concerned about sexual harassment/discrimination in the
workplace -- they're praying that if they act concerned enough, the hammer
will fall somewhere else. But indiscriminate criticism like this is highly
toxic to science (or anywhere else, for that matter), and has the side effect
of letting real bad actors off the hook.
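
The regression-to-the-mean point above is easy to demonstrate with a toy
simulation (purely illustrative; the effect sizes, noise level, and publication
threshold are all invented). Published effects shrink on replication even when
every measurement is honest, simply because journals select on extreme observed
values:

```python
import random

random.seed(0)

def observed(true_effect, noise_sd=1.0):
    """One study's estimate: the true effect plus sampling noise."""
    return true_effect + random.gauss(0, noise_sd)

# Many labs honestly study small true effects.
true_effects = [random.gauss(0.2, 0.1) for _ in range(100_000)]

# Journals publish only striking results (large observed effects).
published = [(t, observed(t)) for t in true_effects]
published = [(t, obs) for t, obs in published if obs > 2.0]

# Independent teams replicate the published studies with fresh noise.
original_mean = sum(obs for _, obs in published) / len(published)
replication_mean = sum(observed(t) for t, _ in published) / len(published)

print(f"mean published effect:   {original_mean:.2f}")
print(f"mean replication effect: {replication_mean:.2f}")
# The replications come out much smaller even though no one cheated:
# selecting on extreme observed values guarantees regression back
# toward the (small) true effects.
```

So shrinkage on replication is what selection plus noise predicts on its own;
treating it as evidence of fraud in the original requires more than the
shrinkage itself.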

A further moral of the story is that, for better or worse, I don't trust and
generally ignore criticisms that do not come from someone who has invested
time into a field and made useful contributions to it. 'Full-time critics'
strike me as generally sleazy and opportunistic, and I doubt the sincerity of
their desire to 'improve' a field if all they can do is tear down others' work
and not provide any of their own.

~~~
curuinor
The fact that Ioannidis stakes his claim to fame as attacks on medical
research is no evidence that he is a crank: it's evidence that medical
research is a target-rich environment.

~~~
cgiles
No doubt it is a target-rich environment. However, his arguments are so
generic that they could apply to any field studying anything multifactorial or
with high variance.

Fundamentally, however, I don't believe, for example, that physicists are more
scrupulous/intelligent/honest than biologists, who are more so than
psychologists. The irreproducibility rates in a field have to be primarily
caused by the nature of the system studied in that field rather than some
inherent attributes of the researchers.

So any field will have some constant level of irreproducibility which
essentially cannot be changed. But some researchers do better than others. My
fundamental point is that if you want to change what can be changed, and
genuinely improve a field, you have to focus the criticisms on specific bad
actors rather than target an entire field at once and leverage a near-
tautology (more complex systems yield more irreproducible results) into a
20-year tirade/career.

Also, helpfully for people like Ioannidis, a statement like "field X has lots
of errors and irreproducibility" is non-falsifiable, and indeed will be true
no matter what, so he is taking no risk with his criticisms -- even though
they do more damage to public perception, morale, and funding, and help less
to change things than targeted criticisms would.

If I sound irritated by this sort of thing, it is because my graduate advisor
dabbled in "meta-science" / "error correction" and I stoutly refused to have
anything to do with those projects. I thought, and still think, that it is
considerably more valuable to try to build my own solid lego tower despite the
difficulties rather than knocking down other people's, despite the latter
being considerably easier and a fairly simple and reliable route to a high
citation count. In the end, irreproducible or erroneous papers in science just
get ignored rather than retracted, and I think that's fine.

------
mike_hearn
I've also investigated bad science papers before, and blogged about it a few
times. There's a lot out there. There's nothing special about getting a PhD
that makes you politically neutral or morally superior. The temptation to
abuse maths to make your personal views seem "scientific" is irresistible for
too many. My last public attempt:

[https://blog.plan99.net/did-russian-bots-impact-brexit-ad66f08c014a](https://blog.plan99.net/did-russian-bots-impact-brexit-ad66f08c014a)

I feel some sympathy with fellow Brit Nick Brown after reading this story
about his work:

[https://www.theguardian.com/science/2014/jan/19/mathematics-of-happiness-debunked-nick-brown](https://www.theguardian.com/science/2014/jan/19/mathematics-of-happiness-debunked-nick-brown)

(As an aside, I'm totally unsurprised the guy works in computing; developing
an intolerance for abuse of logic is a job hazard.)

But I don't know if this perspective is helpful to be honest. "Don't be a
dick" is both quite an obvious principle and also rather useless when
criticising people's work: some of the authors, and some of the people who
want to believe the research is true, will inevitably view any criticism as
being dickish. This is doubly true if you take the obvious next step and
speculate as to whether the issues are mere oversights or (far more commonly,
in my view) deliberate deceptions intended to further an agenda.

One problem is that genuine, honest mistakes tend to get picked up already by
the existing peer review process. The really bad papers that get through are
usually bad in support of some wider social mission, usually leading the
public to some policy goal, and they don't get struck down because the peer
reviewers share the same goals. Thus pointing out mistakes has no effect,
because they already knew the science was bad. The only thing that can work is
pointing out to the people who they're trying to influence that something has
gone wrong.

There _is_ a responsibility to be harsh in these cases. Failing to do so can
simply let the issues fester and compound. From the Guardian article on Brown:

 _There were several psychologists, versed in non-linear dynamics, who smelt
something fishy about the maths in the published paper. Stephen Guastello,
from Marquette University, wrote a note of mild complaint to American
Psychologist, which it chose not to publish because "there wasn't enough
interest in the article". Guastello feels now that he should have been more
forceful in his opinions. "In retrospect," he says, "I see how I could have
been more clearly negative and less supportive of what looked like an article
that could move the field forward if someone would follow up with some strong
empirical work."_

The story describes Brown's first debunking of a paper that used lots of
clever looking math to reach obviously absurd conclusions about psychology.
One author of the paper admitted that she had never understood the maths and
the person who did create the maths (Losada) has refused to ever respond to
the debunking in any way. _All_ the reviewers were successfully intimidated by
the maths and were unwilling to criticise it, allowing Losada to get away with
it:

 _John Gottman, a leading authority in the psychology of successful
relationships, wrote to Losada because he couldn't follow the equations. "I
thought it was something I didn't know about, because he's a smart guy,
Losada. He never answered my email," he says._

So the critics weren't critical enough and a paper based on pure mathematical
bullshit racked up 350 citations. Vast amounts of time, money and effort were
wasted. What does this say about the scientific process?

By the way, in case anyone thinks computing is immune to this sort of thing,
it's not. Papers about "Russian twitter bots" are a current streak of bad
science in computing, for obvious reasons: the authors want to undermine
political trends they don't like. The last one I read claimed to have
constructed a neural network that could detect Russian bots with 99%+
accuracy. Nowhere in the paper did it give any examples of what the network
detected, nor what the ground truth they were comparing to was. When I emailed
the authors to ask for the data so the study could be replicated, I was told
the data was not available (despite being just a bunch of tweets i.e. public
data).

It was literally an entire paper that looked scientific but boiled down to "we
made a magic black box that without fail finds evidence of a vast conspiracy,
but we can't show you. Trust us."

~~~
AlexCoventry
I don't understand why this is being downvoted. It seems to be the most
insightful, empirically grounded, high-effort comment in the thread.

~~~
shaki-dora
When an article on how not to be seen as a crank makes you want to post a long
comment about your research into Russian bots, you should probably think
again.

~~~
mike_hearn
It's become a cliche, but that kind of low-effort insult is exactly how people
like Trump win. You can't argue your position, so you resort to calling people
who challenge you "cranks". The lazy assumptions that so often pervade these
debates lead directly into over-enthusiastic trust of pseudo-science.

I've been out all evening, and my post is now at +11. But if it was getting
downvoted I am 99% certain it's because a lot of people _really_ want to
believe in the whole Russia/Trump/Twitter story and can't stand anything that
challenges it, even incidentally. It meets their pre-existing biases and
grants a feeling of moral and mental superiority: the _other_ tribe might be
brainwashed by something as trivial as a retweet, but there's _no way I'd be
like that_.

But that's the whole point of this blog post we're discussing: lots of people
have very, very strong incentives to believe in pseudo-science dressed up as
the real thing. That doesn't just include scientists themselves but also
journalists, politicians and so on.

The psychological paper that Nick Brown did a takedown of was cited by
hundreds of papers and led to lucrative speaking careers. The underlying
equations that made it seem like science vs mere opinion weren't connected to
anything and boiled down to nonsense (a problem I've seen several times). Yet
it took enormous efforts to do anything about it, and in the end the
scientists who got caught are unrepentant: one of them didn't even bother to
respond and the other agreed that the maths was nonsense but doesn't believe
it really matters.

The same thing has happened repeatedly in recent years. See the story of Amy
Cuddy: her 'power pose' research was thoroughly debunked but she simply
refused to accept it, despite her co-authors accepting that the research was
wrong. She continues to milk the speaking circuit and has even been cited by
the BBC as an example of an "inspiring woman", although her career is arguably
built on deception.

At this point, if you believe in Russian twitter bots influencing elections,
why shouldn't you be the crank? Where is the evidence? All claims I've seen on
this topic fall apart under examination. Even the premise is absurd - it
relies on the belief that millions of people are so politically ungrounded
their vote can be shifted merely by seeing a spammy retweet. That is a very
dubious belief.

But if you believe you have a reliable study that proves the existence of
armies of bots capable of swinging elections, please do show it. I will enjoy
examining it. However, it is not sufficient to merely point at high-status
individuals and delegate to them, because - as this entire discussion shows -
bad "science" is capable of spreading and even becoming a consensus far too
fast.

~~~
shaki-dora
The article here is about the mistakes people make when trying to communicate
their doubts in published science: how, _even when they are right_, they can
come across as “cranks” and easily dismissed.

I have no idea if you’re right, and I’m not about to waste time down that
particular rabbit hole. But you’re arguably doing wrong everything the article
mentions. Starting with the impulse of “hold on, I’m not a crank and here are
12 paragraphs and my blog to prove it”. It’s simply off topic in this
discussion.

~~~
AlexCoventry
> But you’re arguably doing wrong everything the article mentions. Starting
> with the impulse of “hold on, I’m not a crank and here are 12 paragraphs and
> my blog to prove it”. It’s simply off topic in this discussion.

I don't think it came across that way to most people. I'm a bit surprised you
got that impression from it, and can't see how you'd read it that way. There's
nothing in there about "I'm not a crank," for starters, and it didn't seem at
all verbose.

------
psyc
I was struck by how every word of this could apply to how not to do politics
and advocacy on social media. (Aside from the part where they actually advised
taking all the terrible behavior to social media. It seems everyone is already
doing that.)

~~~
AlexCoventry
You'd be surprised at how far people get simply by making it unpleasant and
socially destructive to express certain views.

------
keithpeter
_Other would-be data whistleblowers are impressed by the duo’s success in
getting journals to act. Paul Brookes, of the University of Rochester Medical
Center in New York, briefly used his now-defunct blog, science-fraud.org, to
highlight questionable papers anonymously. Brookes, who was outed amid legal
threats in 2013 after only 6 months, says he would “routinely write dozens of
emails [to journal editors], and it was common to have no response at all.”_

Quote from the Science article linked to in the OA. I think the OA is useful
in laying out a less confrontational approach to raising issues.

------
arkem
This is great advice for information security professionals as well,
especially vulnerability researchers and other appsec types!

------
sporkologist
Cranks generally don't think they are cranks....

------
cryoshon
the authors make some good points, but i have a different perspective when it
comes to criticizing science.

1. criticism is easy. penetrating criticism that demands an answer beyond the
evidence already presented lest the criticized appear totally defeated is
hard. it's also the only kind of criticism worth doing, imo.

2. being listened to is a matter of having penetrating criticism and offering
a reasonable alternative. if you destroy, you must offer a chance to rebuild
afterward, or else the criticism will be too uncomfortable to respond to, and
nobody will appear to listen as a result. if you are skeptical of this point
(as you should be) refer to the position of string theory within the world of
particle physics; its criticism of the mainstream is very difficult to recover
from if accepted, and there's no real mental way forward as a result.

3. it sucks to have your work criticized in public -- it's intolerable, even.
but it sometimes gets results. it also sucks to have someone be impolite to
you. but i am sure we can all remember a time that someone (perhaps a teacher
or parent) has corrected us impolitely when we did not deserve courtesy. you
need to retain the ability to dismiss cranks out of hand once you've suitably
proven that you can beat them by making a diligent argument. anything else
would lead to your time being wasted rehashing the same mistakes of others in
detail.

4. scientists are, very frequently, in their professional life, jerks. i am
speaking from my own experiences and my own actions. the OP doesn't exactly
brush on this, but sometimes being a jerk to the right people is what makes
you the right friends. you don't really want to be friends with baddies
anyway. this next point is super important and directly related.

5. don't be a jerk to people because they produced incorrect work. often the
theories that turn out to be totally wrong are the ones that move the field
forward the most by virtue of the work put into disproving them. if they're
fabricating data or doing things that are methodologically unforgivable,
that's a different story.

6. scientists are a stubborn lot, until they break in the face of evidence.
if you are "right" and the world is "wrong," perpetually aggregating more
evidence and showing it off will eventually work, especially if you make a
point to summarize your findings every once in a while. this is just as true
regarding criticism as it is regarding experimental results. here is a secret:
if the subfield where you are directing your criticism has only made
incremental progress for a long time -- which is to say, recently there have
been no major breakthroughs or destructive revolutions which reshape prior
understandings -- time is on the critic's side. incremental progress isn't
stagnation, but it rarely adds enough evidence about fundamental assumptions
to protect those assumptions from a sustained attack. if a field is making
leaps and bounds forward, criticism is going to be shrugged off because there
will be new understandings which are too juicy to slow down to refine. the
later refining stages are where critics are the strongest.

------
lemagedurage
"And death threats (and trolling, and weird online gender-flavoured violence)
are not just deeply unpleasant, they’re CRIMES. If you do this, you are
breaking the law AND being a shit human being, and you should either stop or
walk yourself into the sun."

Death threats towards people that use death threats, really?

I don't like the article. Some people are more knowledgeable on a topic than
others; "if you aren't in doubt you're probably wrong" doesn't apply there.
Not even in general, in my estimation.

A better alternative to "you have to sugarcoat all your thoughts by 30% before
you speak them out" would be to train people to be better at taking criticism.
People get spoiled by a high baseline of positivity. I'd like to bring my
points objectively without sugar coating; if everyone did that there would be
no problem. As it is, I have to either pretend to be nicer than I am (talking
behind people's backs is a consequence of this), be straight to the point and
get disliked by people who are used to sugar coating, or keep silent. It
usually boils down to the latter. Let me work alone if I can't be my critical
self.

Also, this article seems to be mostly aimed at unfair criticism, let me pose
the following.

If the criticism is fair and reasonable, it is justified, doesn't have to be
nice, and the receiver is at fault if they dislike it.

If the criticism is not fair or reasonable, it is not justified, the
criticizer should express uncertainty where applicable, they are at fault as
the article suggests.

~~~
shaki-dora
The article makes the point, quite explicitly, that you should not even call a
vast conspiracy a “vast conspiracy”. Not because it’s “not PC”, but simply
because people on the receiving end of such diatribes have come to associate
certain terms with cranks.

You can disregard such advice, and simply launch more and more over-the-top
screeches against the MSM trying to silence you. But you're not going to
change any minds.

~~~
lemagedurage
What about using evidence instead of manipulating your point in order to
convince people?

~~~
shaki-dora
Yes, it would be great to live in a world where everyone would judge
everything on its merits alone, like you do.

Unfortunately, we don’t. Which is why we still put on pants for interviews,
write in full sentences, and occasionally waste oxygen being nicer to others
than warranted by the cold hard facts alone.

~~~
lemagedurage
Well, I do like warmth, just the genuine kind, not the politeness-peer-pressure
kind I feel is promoted in the article. Real warmth speaks from actions
(patterns) and not from words, because words are forged too easily. An evil
person could sugar coat twice as much and get a better job than their honest
peer, hm? (virus scanners)

When you see through it you can choose to either exploit it or be fair. That's
your call.

