
A Dig Through Old Files Reminds Me Why I’m So Critical of Science - RougeFemme
http://blogs.scientificamerican.com/cross-check/2013/11/02/a-dig-through-old-files-reminds-me-why-im-so-critical-of-science/
======
ahelwer
From personal experience (anecdote alert!), errors are also common in the
ostensibly stone-cold-hard field of algorithms in computer science. A few
years back I went on a string algorithm kick, and started dredging up old
algorithm papers from the '80s on which to build Wikipedia articles.

Often, the papers would get the _general idea_ right, but if implemented as
described would not work at all or fail on edge cases. The best example I have
is an algorithm to find the lexicographically-minimal string rotation[0]. The
simplest and fastest algorithm to do this is based on the KMP string search
algo, and is tribal knowledge among ACM ICPC competitors. I thought it was
pretty neat and wanted to cement this algorithm in popular knowledge, so I set
about researching and writing the Wikipedia article.

I found the KMP-based algorithm in a 1980 paper[1] by Kellogg S. Booth. The
paper has very detailed pseudocode which does not work. At all. The tribal
knowledge version I inherited had similarities in the general idea of the
algorithm (use of the KMP preprocessing step) but everything else was
different. I scoured the internet for a retraction or correction, but all I
found was a paper written in 1995[2] which mentioned in passing errors in the
1980 paper.
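
For the curious, here is a Python sketch of the KMP-based approach (Booth's algorithm) as it circulates today — this follows the widely known corrected version, not the 1980 paper's pseudocode:

```python
def least_rotation(s: str) -> int:
    """Booth's algorithm: index of the lexicographically minimal rotation.

    Runs in O(n) time using a KMP-style failure function computed over
    the doubled string, so every rotation appears as a substring.
    """
    s += s                      # doubling exposes all rotations
    f = [-1] * len(s)           # KMP-style failure function
    k = 0                       # candidate start of the least rotation
    for j in range(1, len(s)):
        sj = s[j]
        i = f[j - k - 1]
        while i != -1 and sj != s[k + i + 1]:
            if sj < s[k + i + 1]:
                k = j - i - 1   # found a smaller rotation candidate
            i = f[i]
        if sj != s[k + i + 1]:  # here i == -1
            if sj < s[k]:
                k = j
            f[j - k] = -1
        else:
            f[j - k] = i + 1
    return k

# The rotations of "baca" are baca, acab, caba, abac; the smallest,
# "abac", starts at index 3.
print(least_rotation("baca"))  # 3
```

An easy sanity check, in the spirit of the comment above, is to compare against the brute-force `min(s[i:] + s[:i] for i in range(len(s)))` on random strings — which is exactly the kind of test that would have caught the published pseudocode's bugs.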

I do wonder exactly how common this is. I emailed a professor who co-wrote one
of the papers, and he replied that "it seems to me that all the algorithms
(including our own) turned out to have errors in them!" Has anyone done
studies into errors in computer science papers?

[0]
[https://en.wikipedia.org/wiki/Lexicographically_minimal_stri...](https://en.wikipedia.org/wiki/Lexicographically_minimal_string_rotation)

[1]
[http://www.sciencedirect.com/science/article/pii/00200190809...](http://www.sciencedirect.com/science/article/pii/0020019080901490)

[2]
[http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.55.9...](http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.55.9144)

~~~
rrrrtttt
There is a point of view that says that computer science conferences exist for
the purpose of gaming the tenure system. The name of the game is plausible
deniability: you're not supposed to submit papers that contain known false
claims, but everything else is fair game. And this has become such an integral
part of the culture that technical correctness is no longer a necessary
condition for accepting a paper [1]. I think in this light it's quite clear
why many scientists are happy to leave their papers hidden behind the ACM
paywall.

[1] [http://agtb.wordpress.com/2013/04/14/should-technical-errors...](http://agtb.wordpress.com/2013/04/14/should-technical-errors-disqualify-conference-papers/)

~~~
ahelwer
Thank you, that was a fascinating read. It is understandable that technical
errors are given a pass, as they aren't the meat of the paper. In the case of
the Booth paper, I really should state I do not mean to attack him. The idea
of using the KMP preprocess to solve the problem is a wonderful approach and
works very well despite the actual implementation being technically incorrect.
If I recall, the bug had to do with the termination condition; the algorithm
had to run twice as long to terminate correctly. I will say my understanding
of the algorithm improved as a result of debugging it!

------
mturmon
The author says "Petrofsky was a lavishly honored star of the IEEE", but I was
unable to find any honors he got besides an award for a paper in an IEEE
journal ([http://ieee-aess.org/contacts/award-recipient/jerrold-s-petr...](http://ieee-aess.org/contacts/award-recipient/jerrold-s-petrofsky) -- there could be other awards, but they don't show up in Google). I thought
"lavishly honored" would be shorthand for IEEE Fellow, but Petrofsky is not on
the Fellows list.

Saying his work was prematurely made into a biopic starring Judd Hirsch is not
an indictment of science...

~~~
GalacticDomin8r
Also, he seems to lay the failure of any further progress in the area at the
feet of Petrofsky, and even seems to insinuate Petrofsky misled people
intentionally. I don't follow him on point one; it seems a non sequitur. As
for the second, I'm going to quote a section from one of the articles he
linked to:

--------------------------------------------------------------------------------------

“That was just for that event,” Davis, now Nan Huckeriede, said of her brief,
but famous walk at graduation. “It was the computer-controlled electric
stimulation, not me.” Davis had met Petrofsky while she was in college. “I was
attending the WSU Lake Campus, and went to a spinal cord society conference
there,” she says from her St. Marys home. “Jerry (Petrofsky) was a presenter,
and afterwards, I introduced myself and told him I was interested in his
research. “For about a month, I drove back and forth to Dayton to work with
him, and then I transferred to the Dayton campus.” Following her graduation
walk, Davis returned to her wheelchair, stayed in Dayton a few years, married,
and then returned to St. Marys. “Jerry moved to California and stopped his
research — I think he felt that he had gone as far as he could,” said
Huckeriede. “But I still use the equipment he developed to get my exercise.”
Last summer she traveled to Beijing for a procedure to strengthen her back and
stomach muscles. “It didn’t work, but I knew it was experimental. It was worth
a try.”

--------------------------------------------------------------------------------------

Given all this, my takeaway is that John Horgan has had an axe to grind for
almost 20 years now and it's still not sharp.

~~~
mturmon
Nice find. That kind of testimonial from the subject of the work is really
important.

Another relevant item. The magazine he wrote for, _The Institute_ , is a
general-interest magazine of feature stories related to IEEE members. It's not
a technical journal. It's more akin to the feature newsletters published by
universities or engineering schools and sent to their alumni.

The general-interest _technical_ IEEE journal is _Proc. IEEE_ , which is peer-
reviewed and contains research articles and research summaries written by the
experts themselves.

------
tokenadult
I remember some of the same overhyped news stories the science journalist who
wrote the article submitted here remembers. I especially remember the
breathless (and false) reports about a "gene for" this or that human
behavioral trait. The science news cycle[1] frustrates journalists, because
every new study with an incremental finding (which may not even be replicable)
has to be hyped up by research organization press offices, in the interest of
obtaining more funding.

The author's follow-up on a famous science story from early in his career is
thought-provoking. Indeed, editors are more nervous about publishing stories,
even very well reported stories, that question good news and expose hype or
even fraud than editors are about publishing stories on the latest science
hero.

On the whole, it's good news that more and more scientists and journalists are
alert to the possibility that a preliminary research finding may be false and
overhyped besides. Here on Hacker News, we can keep one another alert by
remembering the signs to look for whenever we read a new research finding news
story.[2]

Hacker News readers who want to learn more about how research articles become
retracted may enjoy reading the group blog Retraction Watch[3] compiled by two
veteran science journalists with lots of help from tipsters in the science
community. I think I learned about Retraction Watch from someone else's
comment here on HN.

[1]
[http://www.phdcomics.com/comics/archive.php?comicid=1174](http://www.phdcomics.com/comics/archive.php?comicid=1174)

[2] [http://norvig.com/experiment-design.html](http://norvig.com/experiment-design.html)

[3]
[http://retractionwatch.wordpress.com/](http://retractionwatch.wordpress.com/)

------
powera
There are always people who say "Science is still the best way of determining
true statements" in response to these articles.

But this isn't science. It's pure politics. And politics is probably the worst
way of determining true statements.

~~~
crusso
Exactly. The very tools used to advance and succeed in politics are anathema
to doing good science.

Treating science like you treat politics or marketing is akin to going to war
in the name of Jesus or Gandhi.

~~~
dnautics
Does it not seem silly, then, to use a political apparatus to fund science?

------
thrill
"media hype can usually be traced back to the researchers themselves"

Journalist investigates media hype and lays blame not on the media. Film at
11.

~~~
capnrefsmmat
You might enjoy this paper:

Gonon, F., Bezard, E., & Boraud, T. (2011). Misrepresentation of Neuroscience
Data Might Give Rise to Misleading Conclusions in the Media: The Case of
Attention Deficit Hyperactivity Disorder. PLoS ONE, 6(1), e14618.
doi:10.1371/journal.pone.0014618.t003

It's thankfully open-access:

[http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjourna...](http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0014618)

They argue that a few prominent examples of misreporting in the media come
from researchers misrepresenting their results in their abstracts, and
journalists rarely read past abstracts.

~~~
musicaldope
Interesting. Seems maybe journalists should rely on researchers other than a
study's authors for interpretation? I don't mean that sarcastically (well, a
little), but it seems that there's a fairly simple solution to this problem.

------
makmanalp
> I wrote a puff piece about Petrofsky–based primarily on interviews with him
> and materials supplied by him and Wright State–published in the November
> 1983 issue of The Institute, the monthly newspaper of the IEEE

My, could this be part of the problem?

> It never occurred to me to question Petrofsky’s claims.

Or this?

Maybe, just maybe, it's very hard to get good reporting on something by people
unqualified in a subject?

~~~
anigbrowl
He criticizes himself for this too. But he didn't make the guy famous;
Petrofsky was already a star, the subject of a laudatory movie, and so forth. He
became more skeptical and challenged Petrofsky's claims in print in 1985. I
think that's a pretty good turnaround time.

------
rtpg
>“Academic scientists readily acknowledge that they often get things wrong,”
The Economist states in its recent cover story “How Science Goes Wrong.” “But
they also hold fast to the idea that these errors get corrected over time as
other scientists try to take the work further. Evidence that many more dodgy
results are published than are subsequently corrected or withdrawn calls that
much-vaunted capacity for self-correction into question. There are errors in a
lot more of the scientific papers being published, written about and acted on
than anyone would normally suppose, or like to think.”

I think the fact that people rarely retract can also be more of a practical
issue than anything. Some guy wrote his master's thesis 3 years ago and it's
in fact wrong? The author's too busy with his real life to have been keeping
track of that. Or he's working on some other domain. Or the paper was written
30 years ago.

I've also heard a lot of people say that a lot of research happens in the dark
in many domains. People doing research in Haskell will gladly talk about their
work on mailing lists it seems, but when it comes to chemistry, apparently
it's a whole lotta silence. Pretty depressing.

------
pasbesoin
One of my "pet peeves" (that has actually caused me a fair amount of
aggravation and "expense", personally): Absence of evidence is taken (or,
_insisted upon_ ) as evidence of absence.

I've encountered this particularly in the medical community. As sort of a
TL;DR: Many medical practitioners seem at best to be... "technicians" who are
not capable of much more than following the "current script" that is handed
down to them from blessed authority figures (including particularly if not
only the pharmaceutical companies).

P.S. I'll add insurance companies to the mix of authority figures,
particularly in the U.S. They categorize and dictate what they will and won't
pay for. Better doctors sometimes spend a lot of time finding ways around
these restrictions in order to provide the treatment they think is actually
appropriate and optimal.

I would understand using statistical evaluation to help determine the best
treatment approach. But when the profit motive enters, combined with a more or
less fungible population of insured, the number crunching seems often to put
cost ahead of outcome.

~~~
mikeash
Absence of evidence _is_ evidence of absence, provided that you've actually
looked.

However, it's often not very _good_ evidence, and far from proof. But it's
definitely evidence.
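
This can be made precise with a toy Bayesian update (the numbers below are purely hypothetical): a null result lowers belief in proportion to how hard you actually looked.

```python
# Toy Bayesian update: a null result is evidence of absence, and its
# strength depends on the search's power to detect a real effect.
# All numbers are hypothetical; false positives are ignored for simplicity.

def posterior_after_null_result(prior: float, power: float) -> float:
    """P(effect exists | search found nothing), by Bayes' rule."""
    # P(not found) = P(not found | exists)*P(exists) + P(not found | absent)*P(absent)
    p_not_found = (1 - power) * prior + 1.0 * (1 - prior)
    return (1 - power) * prior / p_not_found

# A thorough search (90% chance of detecting a real effect) cuts belief
# sharply; a cursory search (5% chance) barely moves it.
print(posterior_after_null_result(0.5, 0.9))   # ~0.09
print(posterior_after_null_result(0.5, 0.05))  # ~0.49
```

Which is exactly the point: evidence, yes, but not very _good_ evidence unless the search had real power.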

~~~
pasbesoin
Yes, I think "proof" is a better word for what I was trying to describe.

------
kghose
I did not take this as an anti-science article, but as an article critical of
how academic science/engineering has become hype driven. Perhaps it was always
hype driven, but I find it to be particularly bad now. Before, you could do
'regular' work, i.e. work which was scholarly, did not use terms like 'first
to show', and did not appear in the tabloids 'Science' and 'Nature', and
still advance your career and get funding. Not so much any more.

------
d4vlx
In other words, science is hard, predictions are unreliable, scientists are
humans and humans make mistakes / are motivated by emotions.

As usual with pieces critical of science they focus too much on a very small
number of bad eggs and seem to implicitly assume that scientists should
somehow be superhuman.

~~~
naterator
> a very small number of bad eggs

Even if there are a lot of bad eggs and hype and bullshit, you have to ask
yourself what the alternative is. The non-science-based existence we suffered
through for millennia? I think not. Excuse us for trying to cure cancer and
failing less than 100% of the time.

Part of the reason there is so much hype and bullshit is because, if we
weren't cramming it down everyone's throats for the 30 seconds they'll pay
attention, there would be no money funding science and we'd still be living in
our own filth and praying to god that the plague stops.

~~~
crusso
_Excuse us for trying to cure cancer_

That's a total misrepresentation of the complaint. The complaint is that
claiming to cure cancer or being close to curing cancer to get some funding
hurts the credibility of Science as an institution.

 _Part of the reason there is so much hype and bullshit is because_

How will training the public that Scientists are money-grubbing hucksters who
are full of crap help the matter any?

A lack of humility and self-criticism is a huge problem in any discipline,
especially one that claims to be the best way to learn the "truth".

~~~
dnautics
_How will training the public that Scientists are money-grubbing hucksters who
are full of crap help the matter any?_

Also: It just might teach the public to actually take agency over who gets
funded and encourage them to decide for themselves who is or isn't a huckster.

Incidentally, I _am_ trying to cure cancer, and I've set up a nonprofit to do
so... and am considering writing a piece explaining why you _shouldn't_
donate to me (if you can't take the risk of failure, etc.). What do you think?
Although I'm being genuine, is it too humblebraggy?

------
hacknat
I think it's really important to talk about what Science is. It gets bandied
about like it's this abstract idea when, in fact, it is a real process that is
taking place. A good definition of Science is humanity's current working
knowledge of reality based on the process of lots of people utilizing the
Scientific method to test hypotheses. I think that's a fair definition. On
examination of this definition you will notice it has a large human component.

Science is like the stock market. It's lots of people spit-balling about
what's happening in the market (in Science's case, the market is "ideas about
reality"). In the short term Science can look really ugly, just like the stock
market can; in the long term, however, we'd like to think of it as an accurate
weighing machine.

I generally think this is a fair assessment, but as I get older I start to
see how few people there are who aren't willing to cut corners to get ahead.
This worries me because, like the stock market, Science affects real people's
lives. It's all well and good that over the course of 100 years the Dow Jones
will outperform cash or, really, any other investments, but that's of little
use to the real people who get left behind in periods of great economic
stagnation. Science can go through similar periods of stagnation, and
currently, it seems like we might have hit upon one.

It is possible to criticize the current way we have set up the Scientific
endeavor without criticizing the abstract notion that human beings will
generally discover new things over the long term. I think it is hardly
controversial to say that our current way of doing things is not the best, but
it may even be bad. Money and time are corrupting factors. Postings that used
to require a PhD now require a post-doc; positions that required a post-doc
now require two. The immense pressure of publish-or-perish is becoming greater
and greater, and room for failure, which is an essential part of the
Scientific method, is being squeezed out. This is not a good thing and, I
think, is a large reason, among others, why Science is becoming noisier and
noisier. When the stock market becomes noisy it benefits insiders, but hardly
anybody else. I think Science is currently in a similar place; its efficacy is
being diminished by crap.

~~~
lutusp
> I think it's really important to talk about what Science is. It gets bandied
> about like it's this abstract idea when, in fact, it is a real process that
> is taking place.

But science isn't defined by its process, it's defined by its philosophical
axioms, its foundational rules. The first and most important rule is that a
scientific theory must be potentially falsifiable in practical tests -- if
there's no empirical testability, there's no basis for falsification,
therefore there is no science. The second rule is that scientific ideas cannot
ever be proven true, only false. The third rule is that an idea without
supporting evidence is _assumed to be false_ , not true (this is known as the
"null hypothesis"). The remaining rules are comparatively unimportant -- these
are the big three, without which any discussion of science is pointless.

Science's process can change, and from field to field, it certainly does. But
the rules stay the same.

> Science is like the stock market. It's lots of people spit-balling about
> what's happening in the market ...

That is not science. To call that science is like confusing a spacecraft with
a conversation in a bar about a spacecraft.

> It is possible to criticize the current way we have set up the Scientific
> endeavor without criticizing the abstract notion that human beings will
> generally discover new things over the long term.

Again, that is not science. Science's goal is not discoveries, its goal is to
reliably refute ideas that are false, primarily by comparing them to reality.
This is why science journalism articles that trumpet breakthroughs, with rare
exceptions, do a disservice to both science and journalism.

~~~
calibraxis
Science doesn't have such axiomatic rules; did Galileo lay out these
axioms and proclaim the scientific revolution? As I understand it, science is
a human enterprise with the goal of understanding principles... with
limitations and strengths.

Falsifiability in particular has serious criticisms, in terms of people taking
it as a defining part of science.
([http://en.wikipedia.org/wiki/Falsifiability#Criticisms](http://en.wikipedia.org/wiki/Falsifiability#Criticisms))
I suspect (and it's just pure suspicion for now which I'm mentioning for no
particular reason) it tends to be emphasized in cultures interested in
debunking people's claims in a competitive debating way, rather than
constructive conversation where both parties aim at coming to new
understandings. I don't mean in science, but cultures influenced by science's
success.

------
plg
Nobody ever got a raise from their Dean or an endowed Chair because their work
was celebrated for being careful, thoughtful, measured, balanced and
realistic.

Plenty of science that was uber-hyped at the time has turned out to be
misguided and/or even outright wrong. Plenty of those scientists have led
wealthy, rewarded lifestyles as a result of the hype.

As an academic scientist one has to make a conscious decision to play the game
or not play the game. There isn't a lot of room in the middle. You make your
choices and you live with the consequences.

You see your colleague making double your salary, you read the press office
reports hyping their work, you understand that it's no more innovative,
important, or TRUE than your work or anyone else's in your cohort ... but they
are playing the game.

Wouldn't you like to take your family to Hawaii for vacation? Wouldn't you
like a bigger house? A nicer car? To send your kids to private school? Your
University press office is practically going around begging for science
stories to promote (i.e. hype). It's difficult to resist jumping in with both
feet.

It's a jungle out there people.

------
shadowOfShadow
As with many things, we incentivize the wrong things. We pay for the headlines
with clicks and eyeballs - we will get more headlines.

------
jamesash
Reading through this article reminds me of why I'm so critical of science
journalism: it's in the attention business, not the science business, and
directing due scepticism at so-called "breakthroughs" would kill a lot of
great "stories".

------
enupten
With the reward systems currently in place, what else would you expect?

