

Signs a Claimed Mathematical Breakthrough is Wrong (2008) - ColinWright
http://www.scottaaronson.com/blog/?p=304

======
Jun8
I reread this list every now and then; it never loses its power. I've never
reviewed papers in math or hardcore CS like the topics mentioned here, but
here's a similar informal list that I mentally use when I review articles for
IEEE:

* Bad English: I'm not talking about a few typos here and there (although too many typos are a problem), but gross grammatical mistakes that make sentences unparseable. The thought here is that sloppiness in language correlates with sloppiness in research.

* Omitting important references or too many self-references: This is a sure sign that the authors have not really studied the area thoroughly.

* Sloppy LaTeX equation formatting: Similar to language sloppiness, this also shows that either the authors have not read enough technical articles to see how it's done, or they don't care (or worse, they cannot learn).

* Glossing over outliers in results: Results tables often contain outcomes that don't fit the model/approach the authors are proposing. I look to see if this is discussed.

~~~
rdtsc
> * Bad English:

Don't know about this one. There is probably a correlation there, but we
should stop at that. I think the temptation to explain it can lead to all
kinds of nasty stereotypes.

~~~
lmkg
There are two types of bad English. There's English from non-native speakers,
which can be hard to read at times, but it's not a red flag. Then there's the
other type of bad English. It's really hard to describe, but after you've seen
it a few times, it's easy to pick up on after a page or so. It's... a
certain type of incoherence and lack of logical thinking that superficially
resembles logical thinking. The difference manifests in the structure of how
they communicate, from low-level grammar all the way up to top-level
organization of the paper.

Since I can't really describe what I'm talking about, I'll give the most
blatant and obvious example I know of: Time Cube[1]. Even if you ignore the
content, and just focus on the sentence structure, it's incoherent, often
failing to parse as valid English, with a variety of ambiguous or undefined
referents, and freely introducing new undefined concepts. If Time Cube is 100,
most papers you would see in this category are never higher than 3 or 4. But,
even at that level, the lack of clarity at the structural level usually
implies a similar lack of coherence at the content level.

[1] <http://www.timecube.com/>

~~~
anonymoushn
You can find some writing at a level much higher than 4 but much lower than
100 at <http://kamouna.wordpress.com/2010/09/03/p-np-iff-p-np/>

~~~
sireat
That was seriously hurtful to my brain.

There is a certain aloofness to most cranks that oozes out of their writing.

Things have not changed much since Gardner's "Fads and Fallacies in the
Name of Science".

------
sopooneo
I struggled with this idea very much a month or so ago when HN had the post
about the guys who claimed to have invented an electric/human-powered
flapping-wing ultralight aircraft.
[http://techland.time.com/2012/03/21/man-flaps-android-powere...](http://techland.time.com/2012/03/21/man-flaps-android-powered-bird-wings-flies-with-wii-remote/)

Even before reading the paper I was 99% sure it was a hoax. Something about
the video clinched it and I was sure. But many others (even here on Hacker
News) were not sure. They (like me) found the idea fascinating and did not
want to dismiss it out of hand. Many people weighed in on the extreme physics
and engineering challenges that would have to be overcome to make it possible.
But the video's defenders pointed out again and again that the nay-sayers
should read and directly address the explanatory technical documents that the
"pilots" had posted on their web site.

I didn't even look at the pilots' web site, and I expect most other skeptics
didn't either. But should we have?

Later, the video makers admitted it was a hoax. So should we, who were
so skeptical, really have spent time reading that paper and picking it apart
point by point? Or was it enough to let time prove us right?

------
auggierose
His argument against computer-checked proofs is wrong, because doing a
computer-checked proof for your claim is NOT NEARLY as common as wearing your
seat belt is. If it were, then all the big proofs would come with a computer-
checked proof, just as wearing a seat belt is normal when you see the police
nearby.

~~~
anonymoushn
I don't think it's an argument against computer-checked proofs. He even seems
to acknowledge that automatic verification is a good idea. Unfortunately, the
world we live in is filled with proofs that cannot be checked by a computer.
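For readers unfamiliar with the term: a computer-checked (machine-checked) proof is one written in a proof assistant, which mechanically verifies that every step follows from the axioms and previously proven lemmas. A minimal sketch in Lean (the theorem and lemma name are just an illustrative example, not from the thread):

```lean
-- The checker accepts this theorem only if the term on the right
-- really is a proof of the statement on the left; there is no way
-- to hand-wave a gap past the verifier.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```

The point of the thread stands either way: most published proofs are written in informal prose and would need to be laboriously reformalized before a checker could say anything about them.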

------
dennisgorelik
_At some point, there might be nothing left to do except to roll up your
sleeves, brew some coffee, and tell your graduate student to read the paper
and report back to you._

You can also delegate initial paper evaluation to your colleagues - ask them
to let you know whether the paper is plausible.

