The complaints about what is essentially p-hacking might apply to CERN (but probably not), but they don't work for LIGO, where many events were detected by its model-free pipeline.
One can say "there is an alternative, not-yet-known explanation of gravity, and this explanation will not have singularities/mathematical artifacts." While GR is one of the most thoroughly verified theories in physics, I have some modicum of sympathy for that view.
What is disingenuous is her view that "black holes don't exist because GR measurements are bad," because the measurements really are good and statistically significant. For the LIGO results you can dispute the nature of the compact bodies (although the fits are very good), but you can't dispute the detections themselves.
Physics is now entering a new era of "exploration." This means lots of data and few results. The career of the average scientist will be a series of papers proving that they found nothing. I can see how this is frustrating or boring to many people, but it is the only way forward: the low-hanging fruit was harvested decades ago. The measurement methodologies have to adjust accordingly, and the purpose of experiments has to be advertised correctly.
Clue: start by taking that Big-Bang/inflation picture and overlaying it on the jets from every SMBH. Now force yourself to believe that as true, and it won't take long to figure out what is going on.
She claims to have studied physics for twenty years and to have a PhD in accelerator physics. If true (and she seems too well-informed to be lying), her crackpottery is at the very least well-informed enough to contain many useful insights and views worth considering, especially regarding physicist culture.
There is a page where she shares some of her novel, and speaking as someone who dropped out of physics myself (just a lot earlier than she did), I certainly recognize more than a few themes in the feedback contained therein.
There are plenty of things wrong with high energy physics and cosmology; the fact that most theory is not-even-wrong is a big one. She is, however, often barking up the wrong tree.
Personally, I don't think any measurement is model-independent; it's only a question of which model. You need one model to turn a physical phenomenon into bits in memory and yet another model to interpret those bits.
I am also not convinced tired light is any more falsifiable than any other cosmology. I'm sure all its problems could be rectified by adding enough terms that depend on variables we can never measure. One version of it may be falsified, but there could always be another version with dark-whatever that explains away any deviation. I feel this is just another way of expressing dismissal for non-adherence to consensus.
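For what it's worth, the "add enough free terms" worry is just overfitting, and it is easy to demonstrate. A minimal sketch using numpy (the data here is pure noise with no underlying law, so the "perfect" fit is meaningless by construction):

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 8)
y = rng.normal(size=8)  # pure noise, no underlying law

# A degree-7 polynomial has 8 free coefficients: one per data point,
# so it "explains" the noise exactly.
coeffs = np.polynomial.polynomial.polyfit(x, y, deg=7)
residuals = y - np.polynomial.polynomial.polyval(x, coeffs)
print(np.max(np.abs(residuals)))  # essentially zero: a perfect, meaningless fit
```

The model with the most adjustable terms always "works" on the data it was tuned to; the question is whether it predicts anything it wasn't tuned to.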
>This story has elements of self-consistency and inconsistency; WIMPs + tired light and the Higgs + big bang are two ways to say the same thing, but why do physicists insist on speaking in five, different languages at the same time? It is as though the tower of Babel fell down when the first nuclear bomb went off.
Her blog is really worth reading, as is this article about the first picture of a black hole:
I'm not qualified to judge either way, but I like the style of her reasoning very much.
I can't judge whether she is just ignorant or deliberately misleading (she claims to have worked in physics), but this is simply not how experiments are done. You almost always start with a theoretical prediction from a new theory, and since there is no lack of theories out there, it has to be a pretty good one, with plenty of justification, to merit the 10+ person-years of effort it typically takes to conduct a rigorous analysis of a chunk of data from a modern large-scale scientific instrument. You carefully demonstrate, in various side bands, your ability to correctly model every background process and instrumental effect that could influence the measurement. After that, if the data very clearly favors the new prediction over the null hypothesis, you may publish the discovery in order to invite other scientists to confirm or refute it. This does not mean that you or your peers accept the new theory as correct.
And this is all supposedly in support of the thesis that large experimental facilities are a waste of money, which is patently nonsensical. They have been estimated to pay for themselves in direct returns alone (training of students and researchers, spin-off applications in silicon sensors, cryogenic technology, vacuum technology, lasers, accelerators, computing, etc.). And the long-term importance of investments in fundamental research is incalculable. For perspective, remember that the electron was considered a "useless" discovery back in 1897.
With enough data, good enough filters, and a wide selection of adjustments for background processes, any model can be made to work. Putting the blocks together is fundamentally an exercise in bias, and truly limiting this bias requires significant discipline that is highly disincentivized and therefore uncommon.
The way she writes invites dismissal from working scientists due to its imprecise and conversational style, but I think she makes many salient points about the flaws in the modern scientific institution.
This is a very strong assertion. What changes do you think are needed?
> With enough data, good enough filters, and a wide selection of adjustments for background processes, any model can be made to work.
Sure. Which is why no scientist will care that a particular signal model "can be made to work". You try everything you can to explain your data with only background processes and only if this fails do you consider alternatives. The more adjustments you allow, the harder it becomes to favor signal over background.
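That last point can be made concrete with a toy counting experiment (purely illustrative numbers, not any real analysis's statistical model): letting the background level float with some uncertainty dilutes the significance of the very same excess.

```python
from math import sqrt

def approx_significance(n_excess: float, n_background: float,
                        background_uncertainty: float = 0.0) -> float:
    """Rough significance of an excess: s / sqrt(b + sigma_b^2).
    Allowing the background to float (an extra adjustment) makes
    the same excess *less* significant, not more."""
    return n_excess / sqrt(n_background + background_uncertainty ** 2)

print(approx_significance(50, 100))      # background fixed
print(approx_significance(50, 100, 10))  # background free to float by ~10 events
```

With the background pinned, 50 excess events over 100 expected is a five-sigma-ish excess; once the background itself is uncertain by ten events, the same excess drops to roughly three and a half sigma.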
> Putting the blocks together is fundamentally an exercise in bias, and truly limiting this bias requires significant discipline that is highly disincentivized and therefore uncommon
Another wild accusation without evidence. Why do you believe this is true? And if it is, where are all the false discoveries? In a small team with less oversight, I'm sure cutting corners happens. In a large experiment like those discussed here? No way. The embarrassment of having to retract a false discovery is a pretty strong incentive to ensure the integrity of results, and they have enough internal controls to enforce it.
The embarrassment of retraction is also a strong incentive not to challenge the status quo. If everyone shares the same biases and assumptions, no one has to worry about retraction. The more small teams you have, the easier it is for different biases to survive; the more centralized things are, the easier it is to succumb to groupthink.
This is nonsense; deep inelastic scattering experiments, for example, have directly shown that protons have the predicted internal structure.
> In 2015, CERN’s LHCb found pentaquarks in their data with 9 sigma significance!!! This either proved that CERN can find whatever it wants in its data or it proved that the experiments which showed the pentaquark did not exist were wrong. I suspect that the former is true because I wonder what the expected probability of seeing the particle was in their application of Bayes’ theorem.
I'm not sure that's an indictment of the LHC itself rather than of the sloppy way in which science attributes importance to whatever is statistically significant. It's probably true that there is too much data and not enough good models.
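The Bayes point in the quoted passage can be illustrated with a toy calculation (all numbers hypothetical, not LHCb's): how much you should believe a "significant" result really does depend on the prior probability you assign the hypothesis.

```python
def posterior_real(prior: float,
                   p_signif_if_real: float = 0.99,
                   p_signif_if_not: float = 2.9e-7) -> float:
    """P(effect is real | significant result) via Bayes' theorem.
    p_signif_if_not is taken to be roughly the one-sided
    five-sigma false-positive rate; both rates are illustrative."""
    evidence = prior * p_signif_if_real + (1 - prior) * p_signif_if_not
    return prior * p_signif_if_real / evidence

print(posterior_real(0.5))    # plausible hypothesis: posterior near certainty
print(posterior_real(1e-7))   # wildly implausible one: still well below 50%
```

This is why the prior plausibility of a pentaquark matters to how a nine-sigma fluctuation should be read, even though the frequentist significance itself involves no prior.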
But we don't. Just because an effect is statistically significant doesn't mean it could not also be explained by some systematic effect, and physicists are very aware of that.
The famous five sigma rule prevents a claim of discovery without statistically significant evidence. It does NOT mean that we automatically accept every five sigma deviation from the background model as new physics.
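For reference, the tail probability behind "five sigma" (assuming a Gaussian-distributed test statistic, as is conventional) works out to roughly one in 3.5 million:

```python
from math import erfc, sqrt

def one_sided_p(n_sigma: float) -> float:
    """One-sided tail probability of a standard normal beyond n_sigma."""
    return 0.5 * erfc(n_sigma / sqrt(2))

print(f"{one_sided_p(5.0):.2e}")  # ~2.87e-07, about 1 in 3.5 million
```

That threshold only bounds the chance of a pure statistical fluke under the background model; it says nothing about unmodeled systematics, which is exactly why a five-sigma deviation is a necessary but not sufficient condition for a discovery claim.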
The reason that the question isn’t asked in physics circles is because the question is a philosophical question not a physics question.