
The Decline Effect and the Scientific Method : The New Yorker - edo
http://www.newyorker.com/reporting/2010/12/13/101213fa_fact_lehrer?currentPage=all
======
Udo
I sense some vaguely spiritual concepts (often disguised as pop philosophy)
that are being introduced into public conversations about science of late;
this is probably a result of the growing mistrust and desire to return to "a
simpler, more wholesome world" I've been hearing about. What the article cites
as " _cosmic habituation_ " is nothing more than thinly veiled
anthropocentrism, an attempt to explain the behavior of complex systems with
simple delusions of how human semantics supposedly influence cosmic reality.

The specific meme discussed in the article has been popping up all over the
place in the last two weeks, and to give the New Yorker's editors minimal
credit, they strain to eventually reach a balanced view in an effort not to
offend anyone. However, some viewpoints just don't deserve equal
consideration, and it doesn't matter how many supposed authorities are dragged
into the debate to infuse it with credibility.

I worry about statements like " _Such anomalies demonstrate the slipperiness
of empiricism._ ", because while factually true, they serve as pretexts to
mislead the public into thinking that scientific results are just opinions and
open to debate based on spiritual worldviews. Yes, models are almost always
faulty; yes, there are tons of results that turned out to be spotty or in dire
need of adjustment. Science welcomes these faults, because they serve to point
out errors that need to be corrected.

So some researchers discovered their assumptions turned out to be false; I get
that frustration. But it's not a cosmic conspiracy. Articles like this one
often play with the sentiment that "the truth wears off"; it doesn't. It is
really hard to define models that stand the test of time. If you're a
scientist your goal is to find models that best explain your data. You accept
your models will be incomplete at best, and have a decent chance of turning
out as totally wrong later. No matter the outcome, finding new data
contradicting the old model is always a good thing, because it's a chance to
update it, to make it stronger, or to throw it out in favor of something
better.

We're dealing with a number of complex systems that have complex interactions.
It's hard to isolate factors. And even if you do, many researchers fail
miserably at basic statistical reasoning when it comes to determining the
relevance of variables. It happens even to renowned physicists. It's hard. But
it's not the mysterious work of an anthropocentric cosmos. The universe really
doesn't care what we do or don't believe.

~~~
bambax
> _I worry about statements like "Such anomalies demonstrate the slipperiness
> of empiricism."_

I found the end of the article very honest and open, except for the last two
paragraphs, which are indeed a little sensationalist and relativist (the
sentence you quote is the first of the last paragraph).

The article should have stopped at _"the decline effect is actually a decline
of illusion"_ (perfect conclusion).

------
sabatier
I take from this article that the decline effect is merely an illusion. What
initially appears to be a scientific breakthrough is found out to be nothing
more than (i) random noise or (ii) hype resulting from publication bias
(publish only the positive results and ignore the negative).

~~~
_delirium
I can't seem to find the link at the moment, but someone wrote up a parody
patent on something like, Method and Apparatus for Converting Random Data to
Publishably Significant Results. Basically it involves taking an infinite
stream of random data, sampling N points at a time, checking for statistical
significance at the p < whatever level, and repeating until you get it. Obviously
an unsound statistical procedure, but many corners of science follow something
not that dissimilar from it: collect data, analyze it to see if it's
interesting, and if not, collect different data and repeat.

------
ThomPete
This is not really the kind of problem the article makes it out to be.

Science is model building, nothing more, nothing less.

Models to predict outcomes with. As long as the outcomes are predictable, it's
established as a fact.

But these predictions are always temporary; that is the very basic principle
of the scientific method as put forward by Popper. In other words, you can
never do enough testing to establish something to be universally true.

The problem with Popper's method is his tendency to see the scientific method
as something with a direction, i.e. that we build knowledge to gain an ever
better and truer understanding of our world.

The article shows exactly the limits of testing. You can get the same outcomes
repeatedly, for a very long time, right up until you don't get them anymore.
We have no way of knowing whether our observations of various phenomena are
universal or simply patterns that happen to hold for a limited time (even if
we're talking millions of years).

And that is ok. Just as we make assumptions in our personal lives that hold
for some time until they fall apart (housing market, pension funds, that
dead-sure job, marriage).

Life's complicated and we never know what it will turn up. I find that part
of why it's great to be alive.

~~~
john_horton
I don't think this is what the article is about. At least for physical and
biological phenomena, there is some objective truth. For example, Seroquel
does or does not reduce schizophrenic symptoms---the reason an original
finding is "temporary" and wears off is not because the efficacy has changed
over time---it's because it was never efficacious in the first place (and we
were mistaken because of bad research practices).

~~~
ThomPete
But that is my point. Whether we call it objective or not is based solely on
our practices, our observations. And these can never be completely
satisfactory. The scientific method, no matter how rigorous, does not tell us
whether something is objectively true, but rather that it holds as far as we
have tested it.

More specifically I was commenting on this:

 _...For many scientists, the effect is especially troubling because of what
it exposes about the scientific process. If replication is what separates the
rigor of science from the squishiness of pseudoscience, where do we put all
these rigorously validated findings that can no longer be proved?..._

~~~
john_horton
I see your point---my point is that this article is not a good jumping off
point for discussions about philosophy of science or epistemology (bold title
notwithstanding). It's about bad statistical practice. More importantly, it's
about a bad practice that is well-known (at least among statisticians) and has
remedies.

~~~
ThomPete
Fair enough.

------
john_horton
Andrew Gelman has a great blog post on this:

[http://www.stat.columbia.edu/~cook/movabletype/archives/2010...](http://www.stat.columbia.edu/~cook/movabletype/archives/2010/12/the_truth_wears.html)

In a nutshell, the problem arises from trying to estimate small effects and
then screening results on statistical significance.
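That mechanism is easy to demonstrate with a toy simulation (my own sketch, not Gelman's code): every study estimates the same small true effect with noise, and only estimates that pass a significance screen survive. The survivors badly overstate the effect, which later, unfiltered replications then appear to "decline" from:

```python
import random

def filtered_effect(true_effect=0.1, se=0.2, trials=20000, z_crit=1.96, seed=42):
    """Mean estimate among studies passing the |z| > z_crit significance screen."""
    rng = random.Random(seed)
    significant = []
    for _ in range(trials):
        estimate = rng.gauss(true_effect, se)  # one noisy study-level estimate
        if abs(estimate) / se > z_crit:        # keep only 'significant' studies
            significant.append(estimate)
    return sum(significant) / len(significant)

# The true effect is 0.1, but the average surviving ('published')
# estimate comes out around 0.4 -- several times too large.
```

The smaller the true effect relative to the noise, the worse the inflation among the results that happen to clear the significance bar.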

------
bambax
> _Joseph Banks Rhine, a psychologist at Duke, had developed an interest in
> the possibility of extrasensory perception, or E.S.P. Rhine devised an
> experiment featuring (...) cards printed with one of five different symbols:
> a card was drawn from the deck and the subject was asked to guess the
> symbol. A [test subject] averaged nearly 50% during his initial sessions,
> and pulled off several uncanny streaks, such as guessing nine cards in a
> row. The odds of this happening by chance are about one in two million._

I think this is what happened:
<http://www.youtube.com/watch?v=fn7-JZq0Yxs#t=1m24s> (opening scene from
Ghostbusters).
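(As a quick sanity check on the quoted odds: guessing nine five-symbol cards in a row by chance has probability (1/5)^9 = 1/1,953,125, which is indeed about one in two million.)

```python
# Chance of nine correct guesses in a row, one-in-five each time.
p_streak = (1 / 5) ** 9
print(p_streak)            # about 5.12e-07
print(round(1 / p_streak)) # 1953125, i.e. roughly one in two million
```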

The point of "replicability" is to eliminate bias; but it assumes bias affects
each person independently.

In fact, bias is also a cultural phenomenon and is subject to fashion. Some
results are expected at a certain point in time; replicating an experiment
with different experimenters ("scientists") at the same time doesn't eliminate
the "fashion bias".

Repeating the experiment ten years later does.

------
drallison
There have been several articles of this ilk on HN of late. It seems to me
that science and engineering education needs to focus more on the philosophy
of science, the techniques of experimental design, and statistics. The
scientific method as taught in junior high school is only part of the story.

------
rbanffy
The decrease could be caused by aliens or time-travelers coming from the future
(or the distant past) ;-)

That could be the starting point for a nice sci-fi series...

------
grovulent
"There are more things in heaven and earth, Horatio, than are dreamt of in
your philosophy."

I'm not sure what this makes me - but I'm always quite excited when I read
something which confirms Hamlet's claim. I really want the universe to escape
us - and I'm glad when it consistently does.

Having said that - does anyone understand the research on this issue? I'm
going to remain dubious until I hear about it more than just from the New
Yorker...

~~~
ThomPete
The only problem with Hamlet's statement is that he assumes this "more" is
metaphysical.

The answer, of course, is: of course there are more things than are dreamt of
in our philosophies. But you don't need to go meta for that to be true.

