
No publication without confirmation - ktamiola
http://www.nature.com/news/no-publication-without-confirmation-1.21509
======
beloch
"Confirmatory labs would be less dependent on positive results than the
original researchers, a situation that should promote the publication of null
and negative results. They would be rewarded by authorship on published
papers, service fees, or both. They would also be more motivated to build a
reputation for quality and competence than to achieve a particular finding."

Sounds great, but how would this _actually_ work? Nobody is going to get juicy
grants from existing funding agencies for being a "confirmatory" lab. Nature
sure as hell isn't going to pay for this. Most researchers probably can't
afford to pay an outside lab to duplicate their research. Is Nature going to
suddenly start refusing papers whose results haven't been reproduced
elsewhere? That's basically suicide for their journal because researchers are
frequently in a race with other researchers to publish first, so why publish
with a journal that requires you to double your budget to pay a confirmatory
lab and wait months or years for them to do the job? The pressure will be
intense to publish _elsewhere_ first.

I have a simpler solution.

Don't just slap the names of confirmatory lab authors onto other papers.
Publish original papers _and_ publish confirmatory papers with equal
prominence to the original papers. Hell, devote a portion of Nature to doing
_just_ that. Currently, if you want to publish a paper about confirming
someone _else's_ original findings, not even a third-rate journal will touch
it unless you put at least _some_ kind of novel-sounding spin on it. Nature
should use all that scummy impact factor gaming they do to make confirmatory
papers _respectable_. Only when the work of reproducing results gains labs
respect will funding agencies start supporting "confirmation labs". At
present, such "unoriginal", "hack" work is not respected at all, and Nature is
a big part of the reason why.

~~~
setrofim_
> Most researchers probably can't afford to pay an outside lab to duplicate
> their research.

Even if they could, we probably don't want the researchers paying for their
results to be duplicated. This would create perverse incentives, similar to
what happened with investment banks and credit rating agencies. If the
original researchers _must_ get their results confirmed in order to get
published, and they are the ones paying for the confirmation, they will
naturally tend to choose confirmatory labs that are more likely to confirm
their findings. Since the labs would then rely on the researchers for funding,
that would create pressure on the confirmatory labs to adapt their
methodologies in ways that make it more likely that results get confirmed
(even when the original study may not warrant it).

We want confirmatory labs to have no special interest in either confirming or
disproving a particular study, only an interest in improving the overall
quality of research.

Since a journal's reputation depends (at least in part) on the quality of
research it publishes, the journals would seem to be the natural candidates
for the source of funding of confirmatory labs. Whether they'd actually be
willing to do it is another matter...

~~~
forkandwait
Any confirmatory lab would have to be licensed in order to get grant money for
it. Just like CPAs who do an audit. Sure, there is some corruption and drift
toward hiring more lenient firms, but it basically works.

Side note: it is weird to me that everyone talks about whether researchers can
afford to pay for confirmation, but researchers never pay for anything; grants
pay for everything. The granting institutions might even be excited to try a
confirmation process.

------
caseysoftware
I think we've conflated terms.

The lay public thinks "peer reviewed" means that others have tried it and
validated the results. What it really tends to mean is that a peer looked at
the procedures and results and that they pass the "sniff test" and generally
don't have any glaring errors.

The more subtle problem is that in some circles, it isn't even that. Since
fewer and fewer people want to be the person who damages someone else's work
and/or career, it's a blanket pass.

We're drifting away from scientific study and critical thinking to
"reasonable" approaches and not upsetting doctrine and/or your superiors. That
looks less and less like science and more like religion.

~~~
pyrale
I reject the idea of a religion-like science. I would say it has become what
it is now because of the economic view society has adopted to manage it,
rather than because of irrational thinking.

Apparently, science production doesn't scale well, because scientists, when
asked to compete for their bread-winning, find it easier to fool their
managers than to produce legit science.

~~~
caseysoftware
You challenge my point of "not upsetting doctrine and/or your superiors" by
saying some scientists find it "easier to fool their managers than to produce
legit science."

Sounds like you agree with me.

~~~
pessimizer
In my reading, pyrale was agreeing with you, but instead of sourcing the
problem as some vague social effect, putting the blame specifically on our ways
of funding science.

------
StClaire
I have an idea: if a research study doesn't go the way you thought it would,
put it out there.

We need a central repository like Arxiv where we dump the experiments that
didn't work out so that we can quickly compare a "successful" one to ones done
before. That gives us a better idea of whether the data is just a fluke.

The papers wouldn't have to be super involved. What did you do? What were the
statistical conclusions? Give an upper-level undergraduate or an early master's
student some experience writing up a procedure. It shouldn't take more than a
couple hours, but it could save a lot of time dealing with publication bias.

~~~
chrisseaton
How can an experiment 'not work out'? Do you mean a negative result? Not
getting evidence for your hypothesis is not 'not working'. That's a crazy way
to approach science. It is more information to adjust your hypothesis. Or do
you mean a failure such as broken equipment or an infected sample meaning you
have no data? Well then what would you put in the paper?

~~~
julian_1
An experiment doesn't 'work out', if it doesn't lead to continued grant
funding.

~~~
castle-bravo
Universities are charging hundreds of thousands of dollars in tuition, paying
adjunct professors a pittance to teach classes, and researchers are dependent
on grant funding for their research. Where is the money going?

------
tdaltonc
The obvious question is "who's going to do the confirmation work?"

I think that master's/bachelor's students should be able to handle that work. A
new mechanism for master's/bachelor's training grants that fund replication
would get the job done, with a lot of nice side effects.

~~~
pcrh
The article refers chiefly to repeating mouse or rat experiments.

The obstacle there isn't what level of training a researcher has (as long as
it's sufficient), but who is going to pay for it.

At the scale proposed (a 6-fold greater number of mice per experiment than is
usual), the cost of testing only the core hypothesis is easily over $100K. In
addition, there is the time involved, which can be from months to years,
depending on the experiment.
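
To get a feel for the numbers, here's a rough back-of-the-envelope sketch
(Python; the effect size and power targets are my own assumptions, not figures
from the article) of how group sizes grow once you demand stricter
significance and higher power:

    # Approximate animals needed per group for a two-sample comparison,
    # using the standard normal approximation
    #   n ~ 2 * ((z_{1-alpha/2} + z_{1-beta}) / d)^2
    # where d is the standardized effect size (Cohen's d).
    from scipy.stats import norm

    def n_per_group(d, alpha, power):
        z_alpha = norm.ppf(1 - alpha / 2)  # two-sided significance cutoff
        z_beta = norm.ppf(power)           # desired statistical power
        return 2 * ((z_alpha + z_beta) / d) ** 2

    d = 0.8  # a "large" effect, generous for animal work (assumed)
    print(n_per_group(d, alpha=0.05, power=0.80))  # ~25 animals per group
    print(n_per_group(d, alpha=0.01, power=0.90))  # ~47 animals per group

Tightening alpha and power alone roughly doubles each group, and since n
scales as 1/d^2, assuming a more modest effect size pushes the multiple up
fast.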

~~~
mattkrause
Scaling up non-human primate experiments like this could make them span a
decade, assuming the infrastructure and staff aren't also increased sixfold.

~~~
pcrh
I don't think they're proposing to scale up to that level, which would indeed
result in excessive costs.

Even scaling-up mice experiments would be quite costly, and beyond the size of
most research grants. There would have to be new funding mechanisms in place
for work like this.

The article proposes that this would be more economical in the long run, and
so the NIH et al. should be in its favor. Perhaps they will create a new
facility for just such work? Though that seems unlikely in the current funding
climate.

------
untilHellbanned
No thanks. Papers involving animals are already backbreakingly slow compared
with cell-based or in vitro work. I know because I've been lapped by my
colleagues using simpler systems as I slog through a paper that got rejected
from Nature because the reviewers suggested another 3 years' worth of
experiments. Yep, year 5 into this single project, whose outcome we knew 4
years ago. Not excited about this proposal at all.

Look, I'm all for rigor, but how about the people trying to make money off the
deal pay for all the work and keep people like me out of it. Or don't allow
the people trying to make money to interpret the results of such preliminary
studies so liberally. It's like the education system. Scientists, like
teachers, don't make much money and do all the labor; neither wants more hoops
to jump through.

~~~
jessriedel
Sorry, which parts of the proposal are you responding to? The article is more
specific than "more rigor". Are you objecting to the higher p-value threshold?
The independent confirmation?

The author argues that a single higher quality confirmatory experiment will be
able to replace gathering lots of statistics for exploratory experiments:

> Unlike clinical studies, most preclinical research papers describe a long
> chain of experiments, all incrementally building support for the same
> hypothesis. Such papers often include more than a dozen separate in vitro
> and animal experiments, with each one required to reach statistical
> significance. We argue that, as long as there is a final, impeccable study
> that confirms the hypothesis, the earlier experiments in this chain do not
> need to be held to the same rigid statistical standard.

Do you disagree?

~~~
untilHellbanned
> one that incorporates an independent, statistically rigorous confirmation of
> a researcher's central hypothesis. We call this large confirmatory study a
> preclinical trial. These would be more formal and rigorous than the typical
> preclinical testing conducted in academic labs, and would adopt many
> practices of a clinical trial.

As you can see above and from your quotation, this article (like many others
from folks who come in to save the day) is heavy on plans and short on who is
going to do the work. Of course I support papers where every single experiment
doesn't have to play p < 0.05 games, but other parts of the article wander in
other directions. That's all I'm reacting to.

~~~
jessriedel
Having a higher threshold for publication can be imposed without detailing who
does what work. You might argue that this means less research will get
produced, but it's probably worth it. Practitioners underestimate the
difficulty of transferring knowledge to outsiders because of frictions due to
trust, clarity, and tacit knowledge.

[http://blog.givewell.org/2016/01/19/the-importance-of-gold-s...](http://blog.givewell.org/2016/01/19/the-importance-of-gold-standard-studies-for-consumers-of-research/)

~~~
mattkrause
Eh...

When you call for everyone to scale their experiments up by sixfold, I think
you also need to consider the logistics of doing that. I'm totally in favor of
better, more rigorous experiments, but I know that we couldn't afford the
time, space, or gear needed to do that right now.

------
bloaf
I don't think this is a good idea because it would _increase_ the politicking
in scientific publication. Specifically, no one is going to want to do the
reproduction work, so reproduction work will be seen as a favor from one
scientist to another. Moreover, in specialized fields, scientists see each
other as competitors as often as collaborators. I strongly suspect there would
be a lot of gamesmanship where scientists refuse to do (or drag their feet on)
reproduction work on new studies that threaten to disrupt the status
quo that has made them successful.

~~~
disgruntledphd2
I would absolutely kill to get a job doing replication.

What I always hated about science was that things were never allowed not to
work out. Even if you find something directly opposed to your hypothesis, you
are somehow supposed to pretend that it worked out "just as planned".

It's toxic, boring and leads to bad science.

And so, for me, I would absolutely adore to be in a place where I got to run
well-powered studies and aim to just figure out the right answer rather than
build my career on a bunch of unrepeatable statistical flukes.

That being said, my PhD is in Psychology, so they probably won't be hiring me
to run animal-model studies.

I really like this idea, as long as Nature puts its space where its mouth is
(which it won't, as it runs at least one of these articles per year and none
appears to have made any impact).

------
dorianm
I applaud the P < 0.01.

There are too many non-reproducible results causing real-life harm:
[http://infoproc.blogspot.com/2017/02/perverse-incentives-and...](http://infoproc.blogspot.com/2017/02/perverse-incentives-and-replication-in.html)

~~~
feral
Just to note, there's a tradeoff here - not publishing work until you are
massively certain of it would also cause real-life harm. Lowering the p-value
threshold doesn't automatically reduce harm.

Physicists require extremely low values before confirming a discovery has been
made, but that's different from requiring it before publishing.

The problem is with people interpreting published work as if, once it's
published, it's completely certain.

Maybe each publication should come with a headline 'confidence' stat beside
the title. I guess this is a step in that direction.

------
rdlecler1
The problem here is that publishing a single paper is often the product of
months if not years of work. Saying "now add more work without more grant
money" is going to be difficult to swallow. Even worse, it means departments
need to hire even more PhDs who are unemployable after they graduate.

------
ramblenode
Pre-registration is nice, larger samples/greater power are necessary, and
tightening the p-threshold may indirectly filter out some false positives, but
it kind of misses the underlying issue of p-hacking, some of which would be
solved by pre-registration.

The authors' suggestions are preventative in nature, but what I would like to
see above all else is requiring researchers to publish the raw data and to
make their statistical analyses minimally reproducible--something which could
be satisfied by publishing scripts or Excel macros along with instructions for
any non-automated data stitching. Experiments frequently implode at the
analysis phase which then gets intentionally or unintentionally masked in
ambiguous, poorly written methods sections. Giving others access to the data
allows errors to be spotted earlier after publication and alternative
hypotheses and analyses to be tested against the published results. It's also
sometimes the only way of spotting abnormalities resulting from the data
collection process itself. Again, not a means of preventing errors, but a low
friction way of discovering them. Maybe having everything in the open would
light a fire under some researchers to be more thorough, though.
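
To make "minimally reproducible" concrete, the published analysis could be as
small as this sketch (the file name, column names, and test are hypothetical,
just to show the shape of the thing):

    # A minimal analysis script shipped alongside a paper's raw data.
    # The file and column names here are made up for illustration.
    import pandas as pd
    from scipy import stats

    data = pd.read_csv("raw_measurements.csv")  # the published raw data

    treated = data.loc[data["group"] == "treated", "tumor_volume"]
    control = data.loc[data["group"] == "control", "tumor_volume"]

    # The exact test reported in the paper, so anyone can re-run it and
    # compare the output against the published statistic.
    t, p = stats.ttest_ind(treated, control, equal_var=False)  # Welch's t-test
    print(f"t = {t:.3f}, p = {p:.4g}")

Anything that can't be scripted (manual stitching, exclusions) would go in the
accompanying instructions.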

------
lngnmn
Again, statistics applied to partially observable and partially understood
phenomena yield nonsense. If not all the variables are controlled, or not all
possible causes have been taken into account, the result will be a mere
aggregation of observations.

What's true for coins and dice is not applicable to partially observable
environments with multiple causation and yet-unknown control mechanisms.

Statistics is not applicable to imaginary models based on unproven assumptions
or premises.

------
jeffdavis
Dumb outsider question: why not just mark studies that have been reproduced
versus ones that still have not been reproduced?

The way I view it, there are two tiers of publication: the cutting edge (not
yet reproduced) versus the independently reproduced.

~~~
untilHellbanned
Figuring out which is which is easy enough. The real issue is getting people
not to over-interpret the results.

~~~
jeffdavis
Maybe it could score labs based on how many of their studies have been
reproduced successfully?

If something is not reproduced, and submitted by a lab with a low score,
people would take it less seriously.

------
lutusp
Quote: "Our proposal is a new type of paper for animal studies of disease
therapies or preventions: one that incorporates an independent, statistically
rigorous confirmation of a researcher's central hypothesis."

This probably won't happen right away, but it's a terrific and necessary idea
that we need to move forward. It will revolutionize biology and medicine,
and it will end the field of psychology as we know it.

[http://arachnoid.com/psychology_and_alchemy](http://arachnoid.com/psychology_and_alchemy)

~~~
mattkrause
Why would this end psychology?

It might make it _better_, but we're nowhere near being able to describe
(e.g.) group behavior from first principles or ion channel kinetics. Despite
your link, there is a lot of solid psych research. There are (obviously)
discredited theories and cranks too, but psychologists characterized rods and
cones way before the biologists found them, for one example.

~~~
lutusp
> Despite your link, there is a lot of solid psych research.

Yes, solid, but lacking the dimension of falsifiable theories about the mind.
Tax accounting is also solid research.

> ... but psychologists characterized rods and cones way before the biologists
> found them, for one example.

Those weren't psychological studies. Psychology is the study of the mind and
behavior. Rods and cones are neither. When a psychologist studies something
biological, it's not psychology any more.

~~~
mattkrause
Really?

Psychophysics and perception research is _widely_ considered to be part of
psychology, and has strong, falsifiable theories about how sensory stimuli are
encoded and processed. Using purely behavioural methods, psychophysicists
figured out that there were three color-sensitive "sensors" and pinned down
their properties. I'm not sure it suddenly becomes biology because someone
later found the cellular substrate, nor did it become chemistry when someone
figured out the structure of opsin.

Likewise, I'd argue that a lot of the learning stuff (e.g., reinforcement
learning) also describes the mind's operation in testable and falsifiable
ways.

~~~
lutusp
> Psychophysics and perception research is widely considered to be part of
> psychology ...

Not according to the APA, nor how psychology is formally defined.

[http://www.apa.org/support/about-apa.aspx?item=7](http://www.apa.org/support/about-apa.aspx?item=7)

Quote: "Psychology is the study of the mind and behavior."

> and has strong, falsifiable theories about how sensory stimuli are encoded
> and processed.

Studies that aren't based on empirical evidence, that aren't based on theories
about nature, cannot be falsified. We know a lot about behavior, but we have
no empirical theories about it -- for that, we have to wait for neuroscience.

> I'm not sure it suddenly becomes biology because someone later found the
> cellular substrate, nor did it become chemistry when someone figured out the
> structure of opsin.

Of course it becomes biology/chemistry. But the connection between someone's
ideas about the mind and biology can only be conjecture until neuroscience
produces a physical theory that makes such a connection -- and at that point,
mind studies will be abandoned.

> ... also describes the mind's operation in testable and falsifiable ways.

The mind is not a physical thing, consequently it cannot produce empirical
evidence or falsifiability, two of science's fundamental requirements. If one
psychological experiment asserts that X is so, and another asserts that X is
not so, that's not a falsification, it's a contradiction. The difference? A
contradiction can itself be contradicted in another experiment (something
often seen in psychology), but a scientific falsification is conclusive.

All this talk about empirical evidence, theories and falsifiability may seem
overly philosophical until one realizes this is how we keep religion out of
science classrooms.

~~~
mattkrause
Let me give you an example, from visual perception.

The "Feature Integration Theory" suggests that low-level image features are
processed in parallel: you extract information about the color, orientation,
and movement in parallel across the entire visual field. However, these
representations are separate, and a second, serial process is needed to
combine information across them.

This makes specific, testable predictions. Suppose you're searching for a
red triangle. If this shape is embedded in a sea of green triangles, your
reaction time shouldn't vary with the number of green triangles. The same
thing should happen if the red triangle is surrounded by red circles--reaction
times should be relatively constant regardless of the number of red circles.
However, if you need to find a red triangle embedded in a mix of red circles
and green triangles, you should a) be slower, and b) have a reaction time that
is a function of the total number of shapes.

I'd argue that this theory is empirical (run the experiment, record reaction
times) and about as falsifiable as it gets (it's easy to test the difference
in RT vs. item # slopes).

I'd also argue that this is a computational theory describing _how_ visual
search works, without worrying about the underlying implementation of that
process. Clearly, it would be interesting to know that too, but it's certainly
not necessary. David Marr proposed that cognitive processes could be studied
on three levels: computational (what's the problem), algorithmic (what's a way
to solve the problem), and implementation (what do the neurons do to run that
algorithm), and each level was largely independent of the ones below.

~~~
lutusp
You seem to be missing the point that, no matter how many hypotheses we make
about the inner workings of the brain, we cannot turn them into science
without either confirming or refuting them by direct examination of the brain
itself. As long as we're hypothesizing about mechanisms that remain beyond
direct observation, it's speculation. One cannot falsify a speculation.

Psychology doesn't study the brain, it studies the mind.

> I'd argue that this theory is empirical ...

The observation is empirical but the theory isn't. It cannot become science
without validation by way of empirical evidence.

[https://youtu.be/LIxvQMhttq4?t=32](https://youtu.be/LIxvQMhttq4?t=32)

> I'd also argue that this is a computational theory describing how visual
> search works, without worrying about the underlying implementation of that
> process.

Yes -- and because we cannot directly observe the processes we're
hypothesizing about, we cannot make them a matter of empirical evidence,
therefore we have no basis for falsification.

[https://www.britannica.com/topic/criterion-of-falsifiability](https://www.britannica.com/topic/criterion-of-falsifiability)

Quote: "Criterion of falsifiability, in the philosophy of science, a standard
of evaluation of putatively scientific theories, according to which a theory
is genuinely scientific only if it is possible in principle to establish that
it is false."

> Clearly, it would be interesting to know that too, but it's certainly not
> necessary.

Only necessary for science, otherwise not important.

~~~
mattkrause
On the contrary, the point I'm trying to make is that you can study the
_processing_ done by whatever's in your skull (mind, brain, GPU, nanobots,
whatever) while being totally agnostic about the underlying hardware. You can
develop theories about this, test them, falsify them, and revise them.

Returning to the feature integration theory, it says that during singleton
search (red vs. green), reaction times should be constant regardless of the
number of items, while reaction time should be a linear function of the number
of items when the search involves combining information from multiple feature
channels.

You _can_ test this with a junky laptop or even some drawings and a stopwatch.
In fact, if you really want, I'll send you a script so you can test it
yourself. You _can_ falsify this: just fit lines to the (item count, RT) data
and see if the slopes match the predictions. People have, in fact, done this,
and have shown that this explanation of visual search isn't quite complete--
weird things happen when the target is very rare, for example.
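
Since the analysis really is that simple, here's a minimal sketch of it
(Python, with fabricated reaction times standing in for real data):

    # Fit a line to reaction time vs. set size for each search condition
    # and compare the slopes. All numbers are made up for illustration.
    import numpy as np

    set_sizes = np.array([4, 8, 16, 32])
    rt_singleton = np.array([452, 449, 455, 451])     # red among green (ms)
    rt_conjunction = np.array([510, 605, 790, 1160])  # red triangle among red
                                                      # circles and green
                                                      # triangles (ms)

    slope_single, _ = np.polyfit(set_sizes, rt_singleton, 1)
    slope_conj, _ = np.polyfit(set_sizes, rt_conjunction, 1)

    # Feature integration theory predicts ~0 ms/item for singleton search
    # and a clearly positive slope for conjunction search; a flat
    # conjunction slope would falsify it.
    print(f"singleton:   {slope_single:5.1f} ms/item")
    print(f"conjunction: {slope_conj:5.1f} ms/item")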

Can you explain exactly what some neuroscience data would add here? Look, I'm
not some hardcore dualist. I work in a systems neuroscience group and
completely agree that brain-based theories are more interesting than
phenomenological ones, which is why I put up with the hours, pay, etc.
However, this doesn't make those psychological theories any less _valid_, nor
does it make psychology less of a science.

Anyway, there are lots of other processes that are not directly observable.
Gravity, for example, wasn't directly observed until last year. Evolution
can't be observed directly either.

~~~
lutusp
> Can you explain exactly what some neuroscience data would add here?

Certainly -- it would move the issue from the metaphysical to the physical
realm. That might make it science. There's no science of the metaphysical.

> Anyway, there are lots of other processes that are not directly observable.

Yes, but not scientific ones.

> Gravity, for example, wasn't directly observed until last year.

Orbital mechanics both predicts and observes gravity. Each successful
spacecraft journey represents another successful prediction of the physical
theory of gravity. Dark Energy shows that a physical gravitation theory can be
potentially falsified, a property all self-respecting scientific theories must
have.

Gravitational time dilation represents an empirical confirmation of Einstein's
General Relativity, our present theory about gravity. To be accurate, the
atomic clocks on board GPS satellites must take this time dilation into
account (as well as that from Special Relativity).

Einstein rings represent another direct observation of gravity. I think you
mean that _gravitational waves_ weren't observed until last year. Predicted
long ago, finally observed.

> Evolution can't be observed directly either.

There is copious, empirical evidence for evolution. Start with how and why
antibiotics lose their effectiveness over time. Then move on to laboratory
studies of the evolution of _Drosophila melanogaster_ (fruit fly), chosen for
its short reproduction cycle. Examples abound, all empirical and falsifiable.

These are all examples of experimental confirmation of empirical theories, all
potentially falsifiable by observing nature.

~~~
mattkrause
Great--thanks for the examples.

I'm still a little hung up on the metaphysical part though.

Suppose we ignore all the baggage around the "mind" and just consider an
input/output relationship: a visual stimulus goes in and some response comes
out. We can formulate a theory about the transfer function that maps between
them, and then test it by applying different stimuli and comparing the
observed response with the expected one.

A) Suppose the device is a machine. It initially looks like it responds by
beeping when illuminated. However, we find different patterns of light and
dark, some of which cause the machine to beep and others that don't,
falsifying that theory. This suggests that some feature of the image may
matter, so we test to see if varying the color of the light matters (it
doesn't), or if the relative spacings of light and dark matter (they do). We
try more features and eventually discover that it responds to
every single one of the barcodes we try, but nothing else, and hypothesize
that it's a barcode scanner.

B) Suppose we're recording from a single neuron in a rat or cat's brain while
the animal views a screen. In early experiments, we discover that cells in
this brain area--and this neuron specifically--respond to visual stimuli. We
adjust the stimuli and note that it only responds to _some_ of the stimuli, so
we construct a quantitative model giving the expected distribution of
responses as a function of stimulus features. This suggests more tests of the
model--perhaps the model has very limited spatial support and thus claims that
far apart stimuli have no effect. We present the animal (and thus, the neuron)
with stimuli outside the supported region and, to our surprise, the responses
change. We revise the model to include some suppressive interactions, and try
again....

C) Suppose we're recording the behavioural responses of a human subject. We
show the subject pictures of other humans, and ask them to report whether the
individual shown is a man or woman. We hypothesise that certain features in
the image guide this decision, so we modify the images to enhance or degrade
those features and repeat the experiment. Some of these changes have no
effect, others increase or decrease the speed and accuracy with which they
respond. So, we modify the images more selectively, or only in certain
locations, and repeat the experiment, revising our model as we go.

It seems like you would admit A and B as being "scientific", but think that C
is flawed. Is this right?

~~~
lutusp
> Suppose the device is a machine. It initially looks like it responds by
> beeping when illuminated. However, we falsify this theory by finding
> different patterns of light and dark, some of which cause the machine to
> beep and others that don't, falsifying that theory.

But that's not a theory, it's an observation, and it cannot be falsified, only
contradicted. We observe the machine's outputs without any deep understanding
of the reasons for the behavior or a grasp of why it's acting as it is.
Therefore when we draw a conclusion about a repetitive pattern, and make an
assertion about the pattern, we could easily be contradicted by another
observer seeing a different pattern and drawing a different conclusion. Those
are contradictions, not falsifications.

Say we're an alien, visiting earth, and we see cars moving along a road. By
observation we conclude that the cars have to stay in line, no one can force
their way through all the other cars. It's a "theory".

Then a fire truck appears and does exactly what we asserted could not happen
-- it makes all the other cars move out of the way. But our "theory" is not
falsified, it's contradicted. A falsification would require (a) a deep
understanding of why cars behave the way they do, and (b) a theoretical
falsification based on that deep understanding, not simply a new observation
that contradicts an old one.

Another example. For a while there was a mental illness called "Asperger
Syndrome". It came into being in meetings of psychologists who talked about
it, and who eventually voted it into the DSM.

Everybody liked this new mental illness, it became very popular. Some
psychologists even claimed that a lot of famous people had it -- Isaac Newton,
Thomas Jefferson, Albert Einstein and Bill Gates, to name just a few. This
roster of famous "Aspies" made the mental illness even more popular,
especially among young people.

Then things got out of control, and people were actually proactively demanding
the diagnosis, for themselves and/or their children. The fact that they could
collect Social Security disability payments might have been a factor.

Seeing the clamor about this disease and fearing the consequences of a public
backlash, the psychologists held another vote and voted Asperger Syndrome out
of the DSM
([http://www.nytimes.com/2009/11/03/health/03asperger.html](http://www.nytimes.com/2009/11/03/health/03asperger.html)).

So, was Asperger Syndrome falsified? No, not at all. It wasn't falsified
because it was never more than an observation -- it never had a theoretical
basis. As a result, Asperger Syndrome is neither true nor is it false, and
anyone can contradict anyone else while discussing it. By the way, the same
thing was true about homosexuality about 30 years ago, with the same
controversy and the same outcome -- it was a recognized, listed mental
illness, then it wasn't.

This is not science. And it won't be until we understand the brain.
Understanding the mind is not only not helping, it's an obstacle, because
people have come to think of the mind as a cause of behavior, when it's
clearly an effect of the workings of the brain, and science can't be based on
effects -- it must be based on causes.

------
misnome
How about always having a professional, non-field statistician on the review
panel?

Non-reproducibility should probably be interpreted as criticism of the
reported certainty of the results.

------
Ranlot
A simple interactive demo to find out more about the "statistical power"
mentioned in the article:
[https://p-value-convergence.herokuapp.com/](https://p-value-convergence.herokuapp.com/)
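
The gist, as I understand it (a rough simulation of the idea, not the app's
actual code): with a modest true effect, the p-value of a t-test wanders above
and below any threshold until the sample gets large.

    # Watch the p-value of a two-sample t-test evolve as data accumulates.
    # The true effect size is an assumption chosen for illustration.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    effect = 0.3  # true standardized effect size (assumed)
    a = rng.normal(effect, 1.0, 500)
    b = rng.normal(0.0, 1.0, 500)

    for n in (10, 20, 50, 100, 200, 500):
        _, p = stats.ttest_ind(a[:n], b[:n])
        print(f"n = {n:3d}  p = {p:.4f}")

    # With small samples the p-value is unstable from run to run; only at
    # large n does it settle reliably below a strict threshold.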

------
clamprecht
The parallel between the blockchain (requiring confirmations by peers) and
this discussion is interesting.

------
fiatjaf
We need less published stuff, much less. Much much less.

