[flagged] High-energy experiment is generally statistically unsound (kirstenhacker.wordpress.com)
29 points by scottlocklin on Oct 8, 2019 | 26 comments



Seems like a misguided rant. The whole beginning of "but what about applications?" makes no sense.

The complaints about what is essentially p-hacking might apply to CERN (but probably not), but they don't hold for LIGO, where many events were detected by their model-free pipeline.


IIRC the author doesn't believe black holes exist, so not surprised.


Sure, but there are two possible ways to "not believe" in black holes.

One can say "there is an alternative, not-yet-known explanation of gravity, and this explanation will not have singularities/math artifacts." While GR is one of the most verified theories in physics, I have some modicum of sympathy for that view.

What is disingenuous is her view that "black holes don't exist because GR measurements are bad," because the measurements really are good and statistically significant. For the LIGO results you can question the nature of the compact bodies (although the fits are very good), but you can't reject the detection of the event itself.


Indeed, but if you ignore much of the crack-pottery and the lack of understanding of fundamental statistics, you can find some valid criticism of current physics in this piece (though more like needles in a haystack).

Physics is now entering a new era of "exploration". This means lots of data and few results. The career of the average scientist will be a series of papers proving that they found nothing. I can see how this is frustrating or boring to many people, but it is the only way forward. The low-hanging fruit was harvested decades ago. The methodologies of measurement have to adjust accordingly, and the purpose of experiments has to be advertised correctly.


Judging by how many open problems and paradoxes remain in physics and cosmology, I would not agree that the low-hanging fruit was harvested a long time ago. It seems more like they harvested the fruit from a spindly, diseased-looking part of the orchard and missed the healthy part. If physicists and cosmologists were to simply rearrange their models in a more sensible fashion, I think they would quickly figure out nature and productivity would return. First there would be a lot of rewriting of all the current information into a much more sensible theory of nature. Then there would be pushing forward on a much sounder foundation. And biggest of all, the applications that would arise are amazing. Imagine drawing energy from spacetime. Imagine manufacturing with spacetime as your raw material.

Clue: start by taking that Big Bang/inflation picture and overlaying it on each of the jets from every SMBH. Now force yourself to believe that as true, and it won't take long to figure out what is going on.


> Indeed, but if you ignore much of the crack-pottery

She claims to have studied physics for twenty years and to have a PhD in accelerator physics. If true (and she seems too well-informed to be lying), her crack-pottery is at the very least well-informed enough to likely contain useful insights and views worth considering, especially regarding physicist culture.

There is a page where she shares a few reviews of her novel[0], and speaking as someone who dropped out of physics myself (just a lot earlier than she did), I certainly recognize more than a few themes in the feedback contained therein.

[0] https://kirstenhacker.wordpress.com/2019/07/30/reviews/


It's kinda neat that one of her major complaints about physicist culture is pernicious groupthink, and that some of the primary arguments against her are prima facie dismissals for not subscribing to consensus.


Not subscribing to consensus is not the argument against her. Not addressing that there are non-model-dependent detections (i.e., not in danger of p-hacking), and her support of the experimentally disproven tired-light hypothesis, are why people don't take her seriously.

There are plenty of things wrong with high-energy physics and cosmology; the fact that most theory is not-even-wrong is a big one. She is, however, often barking up the wrong tree.


It sounds like you are more familiar with her beliefs and this subject material than me. Would you mind elaborating on your disagreements?

Personally, I don't think any measurement is non-model-dependent. It's a question of which model: you need a model to turn a physical phenomenon into bits in memory, and yet another model to interpret those bits.

I am also not convinced tired light is any more falsifiable than any other cosmology. I'm sure all its problems could be rectified by adding enough terms that depend on variables we cannot ever measure. A version of it may be falsified, but there could always be another version with dark-whatever that explains any deviation. I feel this is just another way of expressing dismissal by non-adherence to consensus.


The only mention of tired light in this article was here:

>This story has elements of self-consistency and inconsistency; WIMPs + tired light and the Higgs + big bang are two ways to say the same thing, but why do physicists insist on speaking in five, different languages at the same time? It is as though the tower of Babel fell down when the first nuclear bomb went off.


I mean, I left physics after doing my PhD precisely because I felt it wasn't exciting, but I disagree with the criticism of "big data." I know analytic results are romanticized, and even now physicists are looking to express things in but a few equations, but it might also be that the world is complicated and that to learn the details you really need to crunch a lot of data. It's like avalanches: lots of things can be modeled by power laws, but the actual details are incredibly messy and non-integrable.
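To make the avalanche analogy concrete, here is a minimal Python sketch (toy numbers, nothing to do with any real avalanche data) of recovering a clean power-law exponent by maximum likelihood from a pile of noisy-looking event sizes:

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy "avalanche" sizes drawn from a pure power law p(s) ~ s^-alpha,
    # s >= s_min (all numbers hypothetical).
    alpha_true, s_min, n = 2.5, 1.0, 100_000
    sizes = s_min * (1.0 - rng.random(n)) ** (-1.0 / (alpha_true - 1.0))

    # Maximum-likelihood (Hill-style) estimate of the exponent:
    alpha_hat = 1.0 + n / np.sum(np.log(sizes / s_min))
    print(alpha_hat)  # ~2.5: the aggregate law is recoverable even though
                      # each individual event's dynamics are messy

The aggregate statistics crunch out cleanly even when the microscopic details are intractable, which is the point.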


Reading this felt like a breath of fresh air.

Her blog is really worth reading; see also this article about the first picture of a black hole:

https://kirstenhacker.wordpress.com/2019/08/24/signs-of-the-...

I'm not qualified to judge either way, but I like the style of her reasoning very much.


The whole thing is just a big straw-man argument. Well, actually the piece is pretty incoherent, and I found it hard to find a common thread through the musings on such diverse topics as WIMPs, pentaquarks, and the functioning of drift chambers. But the core of it seems to be that she thinks scientists are now sitting on massive piles of data, mining through them looking for any oddities, and then jumping at the chance to publish the discovery of a new physical phenomenon as soon as one is found.

I can't judge whether she is ignorant or deliberately misleading (she claims to have worked in physics), but this is just not how experiments are done. You almost always start with a theoretical prediction from a new theory, and there is no lack of theories out there, so it has to be a pretty well-justified one to merit the 10+ person-years of effort it typically takes to conduct a rigorous analysis of a chunk of data from a modern large-scale scientific instrument. You carefully demonstrate, in various side bands, your ability to correctly model every background process and instrumental effect that could influence the measurement. After that, if the data very clearly favors the new prediction over the null hypothesis, you may publish the discovery in order to invite other scientists to confirm or refute it. This does not mean that you or your peers accept the new theory as correct.
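As a rough illustration of that side-band workflow, here is a minimal Python sketch on toy data (the exponential background shape, window edges, and event counts are all hypothetical, not from any real analysis):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)

    # Toy bump hunt: predict the background under a possible peak from
    # the side bands, then ask how unusual the observed count is.
    edges = np.arange(20.0, 121.0, 5.0)
    bkg = rng.exponential(scale=50.0, size=200_000)   # background only
    counts, _ = np.histogram(bkg, bins=edges)
    centers = 0.5 * (edges[:-1] + edges[1:])
    signal_region = (centers > 60) & (centers < 70)

    # Model the background using the side bands only; never fit the
    # signal region while building the background estimate.
    side = ~signal_region
    slope, intercept = np.polyfit(centers[side], np.log(counts[side]), 1)
    expected = np.exp(intercept + slope * centers[signal_region]).sum()
    observed = counts[signal_region].sum()

    # One-sided Poisson p-value for an excess, expressed in sigma:
    p = stats.poisson.sf(observed - 1, expected)
    print(stats.norm.isf(p))  # ~0: no bump was injected, none is found

The background prediction never sees the signal region, so the excess (or lack of one) is not an artifact of the fit.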

And this is all supposedly in support of the thesis that large experimental facilities are a waste of money, which is patently nonsensical. They have been estimated to pay for themselves in direct returns alone (training of students and researchers, spin-off applications in silicon sensors, cryogenics, vacuum technology, lasers, accelerators, computing, etc.), and the long-term importance of investments in fundamental research is incalculable. For perspective, remember that the electron was considered a "useless" discovery back in 1897.


It's a bit of a ramble, but it's hardly incoherent. What seem like diverse topics to a physicist are not all that diverse when viewed from a holistic perspective. Her point isn't that scientists are sitting on huge piles of data and mining them for interesting results without testing hypotheses (though we are doing exactly that in many domains); it's that the fundamental approach to statistical analysis is flawed.

With enough data, good enough filters, and a wide selection of adjustments for background processes, any model can be made to work. Putting the blocks together is fundamentally an exercise in bias, and truly limiting this bias requires significant discipline that is highly disincentivized and therefore uncommon.
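The first claim is easy to demonstrate on toy data; here is a minimal sketch (pure noise, hypothetical setup) where every extra parameter makes the fit "work" a little better:

    import numpy as np

    rng = np.random.default_rng(2)

    # Pure noise, no signal: fit with ever more flexible Chebyshev
    # polynomials and watch the residual chi^2 fall anyway.
    x = np.linspace(-1.0, 1.0, 30)
    y = rng.normal(0.0, 1.0, size=x.size)

    for degree in (1, 5, 10, 20):
        coeffs = np.polynomial.chebyshev.chebfit(x, y, degree)
        resid = y - np.polynomial.chebyshev.chebval(x, coeffs)
        print(degree, round(float(np.sum(resid**2)), 1))
    # The raw chi^2 drops steadily with degree: the extra "adjustments"
    # absorb noise, which is exactly the bias being described above.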

Her writing invites dismissal from working scientists due to its imprecise, conversational style, but I think she makes many salient points about the flaws in the modern scientific institution.


> the fundamental approach to statistical analysis is flawed

This is a very strong assertion. What changes do you think are needed?

> With enough data, good enough filters, and a wide selection of adjustments for background processes, any model can be made to work.

Sure. Which is why no scientist will care that a particular signal model "can be made to work". You try everything you can to explain your data with only background processes, and only if this fails do you consider alternatives. The more adjustments you allow, the harder it becomes to favor signal over background.
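Concretely, this is what the look-elsewhere (trials-factor) correction handles; a toy sketch with hypothetical bin counts:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)

    # Scan 100 independent bins of pure background noise and quote the
    # most significant upward fluctuation found anywhere in the scan.
    n_bins, n_scans = 100, 10_000
    best_z = rng.normal(size=(n_scans, n_bins)).max(axis=1)

    local_p = stats.norm.sf(best_z)              # as if we had looked once
    global_p = 1.0 - (1.0 - local_p) ** n_bins   # accounts for 100 looks

    print(np.median(stats.norm.isf(local_p)))    # ~2.5 sigma "bumps"
    print(np.median(stats.norm.isf(global_p)))   # ~0 sigma once corrected

Every flexibility you grant yourself in the search is paid back as a penalty on the quoted significance.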

> Putting the blocks together is fundamentally an exercise in bias, and truly limiting this bias requires significant discipline that is highly disincentivized and therefore uncommon

Another wild accusation without evidence. Why do you believe this is true? And if it is, where are all the false discoveries? In a small team with less oversight, I'm sure cutting corners happens. In a large experiment like those discussed here? No way. The embarrassment of having to retract a false discovery is a pretty strong incentive to ensure the integrity of results, and they have enough internal controls to enforce it.


It's not a "wild accusation" or a personal attack; it's a fundamental truth about the nature of modeling. You can guard against bias better if you start by recognizing that it is there. Nothing is unbiased.

The embarrassment of retraction is also a strong incentive not to challenge the status quo. If everyone shares the same biases and assumptions, no one has to worry about retraction. The more small teams you have, the easier it is for different biases to survive; the more centralized the field, the easier it is to succumb to groupthink.


>Let’s put aside the fact that one cannot directly observe what is inside of a proton or a neutron and that it is quite a logical leap to assume that the extremely unstable particles that come out of a collision exist in a stable form within the non-collided particles.

This is nonsense; deep inelastic scattering experiments, for example, have directly shown that protons have the predicted internal structure [1].

[1] https://en.wikipedia.org/wiki/Deep_inelastic_scattering


This is the gist, I believe:

> In 2015, CERN’s LHCb found pentaquarks in their data with 9 sigma significance!!! This either proved that CERN can find whatever it wants in its data or it proved that the experiments which showed the pentaquark did not exist were wrong. I suspect that the former is true because I wonder what the expected probability of seeing the particle was in their application of Bayes’ theorem.

I'm not sure whether that's an indictment of the LHC itself or rather of the sloppy way in which science attributes importance to whatever is statistically significant. It's probably true that there is too much data and not enough good models.
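To illustrate the Bayes point with a back-of-the-envelope sketch (every number below is hypothetical, chosen only to show the mechanics):

    from scipy import stats

    # What Bayes' theorem does to even a 9-sigma result when the prior
    # is tiny and a mis-modeled background could mimic the signal.
    # Simplification: assume the signal is always seen if it is real.
    p_value = stats.norm.sf(9)   # ~1e-19 chance under background alone
    prior = 1e-6                 # hypothetical prior that the particle exists
    p_mismodel = 1e-4            # hypothetical chance of a fake bump from
                                 # a mis-modeled background

    posterior = prior / (prior + (1 - prior) * (p_value + p_mismodel))
    print(posterior)             # ~0.01: far from certainty despite 9 sigma

The sigma count is almost irrelevant once any plausible systematic dwarfs the statistical p-value, which is presumably her worry about the pentaquark.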


> the sloppy way in which science attributes importance to whatever is statistically significant

But we don't. Just because an effect is statistically significant doesn't mean it could not also be explained by some systematic effect, and physicists are very aware of that.

The famous five-sigma rule prevents a claim of discovery without statistically significant evidence. It does NOT mean that we automatically accept every five-sigma deviation from the background model as new physics.
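For reference, a quick way to see what five sigma means as a probability (one-sided Gaussian tail):

    from scipy import stats

    # The five-sigma discovery threshold as a p-value:
    print(stats.norm.sf(5))  # ~2.9e-7 chance that background alone
                             # fluctuates this far, per place you look

It is a necessary bar for claiming a discovery, not a sufficient one.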


“A question which does not get asked in physics circles is: ‘do impossible to measure things objectively exist?’”

The reason that question isn't asked in physics circles is that it is a philosophical question, not a physics question.


Her larger point is that physics is still building on assumed answers to those questions anyway.


I am not sure that physicists do. Many of the physicists I know seem to subscribe to the "shut up and calculate" school of physics, where all of these things are just a model that gives results corresponding to the experiments.


Yes, and they say this at the same time as they share plenty of stories about alternative formulations that work just as well but were rejected because they lack "elegance" or whatever, and somehow they see no conflict in this. It's a weird kind of denial of making first-principles assumptions.


True, but if you have two alternatives of equal predictive power, then the choice between them can be totally arbitrary.


With the contrast she drew between WIMPs + tired light and Higgs + big bang, I think she was suggesting that when two languages have equal predictive power, one should choose the language that is consistent with existing definitions. If a new language confuses definitions and creates the appearance of paradoxes, it is not a good language. This is drawn out in some of her other articles.


That is only really true in a proverbial vacuum: the choice will still affect the context within which these alternatives exist, and as such it will influence the interpretation of other measurements and the options explored to explain new data, so it is not that arbitrary at all.



