
High-energy experiment is generally statistically unsound - scottlocklin
https://kirstenhacker.wordpress.com/2019/09/29/the-walrus-and-the-carpenter/
======
ivalm
Seems like a misguided rant. The whole beginning of "but what about
applications?" makes no sense.

The complaints about what is essentially p-hacking might apply to CERN (but
probably don't), and they certainly don't apply to LIGO, where many events
were detected by its model-free pipeline.

~~~
sprash
Indeed, but if you ignore much of the crack-pottery and the lack of
understanding of fundamental statistics, you can find some valid criticism of
current physics in this piece (though more like needles in a haystack).

Physics is now entering a new era of "exploration". This means lots of data
and few results. The career of the average scientist will be a series of
papers proving that they found nothing. I can see how this is frustrating or
boring to many people, but it is the only way forward. The low-hanging fruit
was harvested decades ago. Measurement methodologies have to adjust
accordingly, and the purpose of experiments has to be advertised accurately.

~~~
vanderZwan
> _Indeed, but if you ignore much of the crack-pottery_

She claims to have studied physics for twenty years and to have a PhD in
accelerator physics. If true (and she seems too well-informed to be lying),
her crack-pottery is at the very least well-informed enough that it likely
contains many useful insights and views worth considering, especially
regarding physicist _culture_.

There is a page where she shares a few reviews of her novels[0], and speaking
as someone who dropped out of physics myself (just a lot earlier than she did)
I certainly recognize more than a few themes in the feedback contained
therein.

[0]
[https://kirstenhacker.wordpress.com/2019/07/30/reviews/](https://kirstenhacker.wordpress.com/2019/07/30/reviews/)

~~~
blix
It's kinda neat that one of her major complaints with physicist culture is
pernicious groupthink, and that some of the primary arguments against her are
prima facie dismissals for not subscribing to consensus.

~~~
ivalm
Not subscribing to consensus is not the argument against her. Her failure to
address the existence of non-model-dependent detections (i.e. ones not in
danger of p-hacking) and her support of the experimentally disproved
tired-light hypothesis are why people don't take her seriously.

There are plenty of things wrong with high energy physics and cosmology; the
fact that most theory is not-even-wrong is a big one. She is, however, often
barking up the wrong tree.

~~~
blix
It sounds like you are more familiar with her beliefs and this subject
material than me. Would you mind elaborating on your disagreements?

Personally, I don't think any measurement is non-model-dependent. It's a
question of which model. You need a model to turn some physical phenomenon
into bits in memory and yet another model to interpret these bits.

I am also not convinced tired light is any more falsifiable than any other
cosmology. I'm sure all its problems could be rectified by adding enough
terms that depend on variables we cannot ever measure. A version of it may be
falsified, but there could always be another version with dark-whatever that
explains any deviation. I feel this is just another way of expressing
dismissal for non-adherence to consensus.

------
Roritharr
Reading this felt like a breath of fresh air.

Her blog is really worth reading; see also this article about the first
picture of a black hole:

[https://kirstenhacker.wordpress.com/2019/08/24/signs-of-the-times/](https://kirstenhacker.wordpress.com/2019/08/24/signs-of-the-times/)

I'm not qualified to judge either way, but I like the style of her reasoning
very much.

~~~
amadsen
The whole thing is just a big straw-man argument. Well, actually the piece is
pretty incoherent, and I found it hard to find a common thread through the
musings on such diverse topics as WIMPs, pentaquarks and the functioning of
drift chambers. But the core of it seems to be that she thinks scientists are
now sitting on massive piles of data, mining through them looking for any
oddities, then jumping on the chance to publish the discovery of a new
physical phenomenon as soon as one is found.

I can't judge whether she is just ignorant or deliberately misleading (she
claims to have worked in physics), but this is just not how experiments are
done. You _almost_ always start with a theoretical prediction from a new
theory, and there is no lack of theories out there, so a theory has to be
well justified to merit the 10+ person-years of effort it typically takes to
conduct a rigorous analysis of a chunk of data from a modern large-scale
scientific instrument. You carefully
demonstrate in various side bands your ability to correctly model every
background process and instrumental effect that could influence the
measurement. After that, if the data very clearly favors the new prediction
over the null hypothesis, you may publish the discovery in order to invite
other scientists to confirm or refute it. This does _not_ mean that you or
your peers accept that the new theory is correct.

And this is all supposedly in support of the thesis that large experimental
facilities are a waste of money, which is patently nonsensical. They have been
estimated to pay for themselves in direct returns alone (training of students
and researchers, spin-off applications in silicon sensors, cryo technology,
vacuum technology, lasers, accelerators, computing, etc.). And the long-term
importance of investments in fundamental research is incalculable. For
perspective, remember that the electron was considered a "useless" discovery
back in 1897.

~~~
blix
It's a bit of a ramble, but it's hardly incoherent. What seem like diverse
topics to a physicist are actually not all that diverse when looked at from a
holistic perspective. It's not that scientists are sitting on huge piles of
data and mining them for interesting results without testing hypotheses
(though we are doing exactly this in many domains); it's that the fundamental
approach to statistical analysis is flawed.

With enough data, good enough filters, and a wide selection of adjustments for
background processes, any model can be made to work. Putting the blocks
together is fundamentally an exercise in bias, and truly limiting this bias
requires significant discipline that is highly disincentivized and therefore
uncommon.

The way she writes invites dismissal from working scientists due to its
imprecise and conversational style, but I think she makes many salient points
about the flaws in the modern scientific institution.

~~~
amadsen
> the fundamental approach to statistical analysis is flawed

This is a very strong assertion. What changes do you think are needed?

> With enough data, good enough filters, and a wide selection of adjustments
> for background processes, any model can be made to work.

Sure. Which is why no scientist will care that a particular signal model "can
be made to work". You try everything you can to explain your data with only
background processes and only if this fails do you consider alternatives. The
more adjustments you allow, the harder it becomes to favor signal over
background.

> Putting the blocks together is fundamentally an exercise in bias, and truly
> limiting this bias requires significant discipline that is highly
> disincentivized and therefore uncommon

Another wild accusation without evidence. Why do you believe this is true? And
if it is, where are all the false discoveries? In a small team with less
oversight, I'm sure cutting corners happens. In a large experiment like those
discussed here? No way. The embarrassment of having to retract a false
discovery is a pretty strong incentive to ensure the integrity of results, and
they have enough internal controls to enforce it.

~~~
blix
It's not a "wild accusation" or a personal attack; it's a fundamental truth
about the nature of modeling. You can guard against bias better if you start
by recognizing that it is there. Nothing is unbiased.

The embarrassment of retraction is also a strong incentive not to challenge
the status quo. If everyone shares the same biases and assumptions, no one has
to worry about retraction. The more small teams you have, the easier it is
for different biases to survive; the more centralized the field, the easier
it is to succumb to groupthink.

------
koalala
>Let’s put aside the fact that one cannot directly observe what is inside of a
proton or a neutron and that it is quite a logical leap to assume that the
extremely unstable particles that come out of a collision exist in a stable
form within the non-collided particles.

This is nonsense; deep inelastic scattering experiments have, for example,
directly shown that protons have the predicted internal structure [1].

[1]
[https://en.wikipedia.org/wiki/Deep_inelastic_scattering](https://en.wikipedia.org/wiki/Deep_inelastic_scattering)

------
buboard
This is the gist, I believe:

> In 2015, CERN’s LHCb found pentaquarks in their data with 9 sigma
> significance!!! This either proved that CERN can find whatever it wants in
> its data or it proved that the experiments which showed the pentaquark did
> not exist were wrong. I suspect that the former is true because I wonder
> what the expected probability of seeing the particle was in their
> application of Bayes’ theorem.

I'm not sure whether that's an indictment of the LHC itself or rather of the
sloppy way in which science attributes importance to whatever is
statistically significant. It's probably true that there is too much data and
not enough good models.
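
On her Bayes' theorem question, a toy update (all numbers invented for
illustration, not taken from LHCb) shows why the prior matters less than she
implies: a likelihood ratio of the size a ~9-sigma result can imply swamps
even a very sceptical prior.

```python
# Toy Bayesian update: posterior odds = prior odds * Bayes factor.
prior_odds = 1e-6      # a deliberately sceptical 1-in-a-million prior
bayes_factor = 1e15    # illustrative strength of overwhelming evidence

posterior_odds = prior_odds * bayes_factor
posterior_prob = posterior_odds / (1 + posterior_odds)

print(f"posterior probability of signal: {posterior_prob:.9f}")
```

The real argument is therefore about whether the quoted significance (the
Bayes factor) is trustworthy, not about the prior.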

~~~
amadsen
> the sloppy way in which science attributes importance to whatever is
> statistically significant.

But we don't. Just because an effect is statistically significant doesn't
mean it could not also be explained by some systematic effect, and physicists
are very aware of that.

The famous five sigma rule prevents a claim of discovery without statistically
significant evidence. It does NOT mean that we automatically accept every five
sigma deviation from the background model as new physics.
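
As a back-of-the-envelope illustration (the bin count here is an invented
example, not from any experiment): five sigma corresponds to such a tiny
one-sided tail probability that the rule stays conservative even across a
huge number of search bins.

```python
import math

# One-sided tail probability of a 5-sigma Gaussian fluctuation.
p_5sigma = 0.5 * math.erfc(5 / math.sqrt(2))   # roughly 2.9e-7

# Even a search scanning a million independent bins would expect
# fewer than one 5-sigma background fake on average.
n_bins = 1_000_000
expected_fakes = n_bins * p_5sigma

print(f"p(5 sigma)                 = {p_5sigma:.2e}")
print(f"expected fakes over {n_bins} bins = {expected_fakes:.2f}")
```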

------
qtplatypus
“A question which does not get asked in physics circles is: ‘do impossible to
measure things objectively exist?’”

The reason the question isn't asked in physics circles is that it is a
philosophical question, not a physics question.

~~~
vanderZwan
Her larger point is that there are still _assumed answers to these questions
anyway_ that physics is building on.

~~~
qtplatypus
I am not sure that physicists do. Many of the physicists I know seem to
subscribe to the "shut up and calculate" school, where they say all of these
things are just models that give results corresponding to the experiments.

~~~
vanderZwan
Yes, and they say this at the same time as they share plenty of stories about
alternative formulations that work just as well but were rejected because
they lack "elegance" or whatever, and somehow they do not see any conflict in
this. It's a weird kind of denial of making first-principles assumptions.

~~~
qtplatypus
True, but if you have two alternatives with equal predictive power, then the
choice between them can be totally arbitrary.

~~~
nixtaken
With the contrast she drew between WIMPs + tired light and Higgs + big bang,
I think she suggested that when both languages have equal predictive power,
one should choose the language that is consistent with existing definitions.
If a new language confuses definitions and creates the appearance of
paradoxes, it is not a good language. This is drawn out in some of her other
articles.

