> There’s a more general point to be made here. The primary reason that we use dark matter and dark energy to explain cosmological observations is that they are simple.
> A creation term is basically a magic fix by which you can explain everything and anything.
Dark matter is an unknown 'substance' that interacts only through gravity (weakly) and must have a very specific and complex distribution in the universe to 'work'. That strikes me as neither 'simple' nor much different from tossing a constant into an equation to make it work. Similarly, Dark Energy is an unknown form of energy that is uniformly distributed through the universe, which to me is just a fancy way of saying "we added a constant to make it work". In both cases I don't see how either is inherently better than adding a 'creation term'.
Seemingly "crazy" ideas sometimes turn out to be correct! But in science, we demand radical ideas at least make testable hypotheses.
My favorite bit from the author on Twitter: https://threadreaderapp.com/thread/1070302359325151233.html
> The next-generation radio telescope - the Square Kilometre Array @SKA_telescope will be able to test this theory, and directly confirm or invalidate its predictions. 13/17 It is rather surprising that the model predicts the properties of a LambdaCDM universe.
As a casual observer, this is what gets me excited! We'll get our answers one way or another.
Basically, if you mess with the equation, you have to be very sure you aren't simulating something silly. Which is easy to do; unfortunately, I've done it often.
I still need to read the original paper in detail to confirm, but if the post is correct, the N-body simulation might have some issues.
Almost no computation in Quantum Field Theory can be done exactly. What physicists do is use perturbation theory, which is very similar to doing Taylor series in introductory calculus. Feynman, in a stroke of genius, found a way to represent the various mathematical terms that occur in these types of calculations as pictures which could be described in words. In this pictorial language, one would say things like "this term corresponds to the creation of a virtual pair of particles", etc. The perturbation expansion is a mathematical "trick" done so that we can obtain approximate results. Each individual term in that expansion has no physical meaning - in spite of the pictorial language used.
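The "Taylor series" analogy can be made concrete with a standard zero-dimensional toy model (a plain integral, not actual QFT): expand Z(λ) = ∫ exp(−x²/2 − λx⁴) dx in powers of λ and compare the partial sums with the exact value. The individual terms (Gaussian moments, the things the "pictures" would depict) have no meaning on their own; only the truncated sum approximates anything.

```python
import math

def double_factorial(n):
    # n!! = n * (n-2) * ... down to 1; by convention (-1)!! = 1
    return 1 if n <= 0 else n * double_factorial(n - 2)

def z_exact(lam, n_steps=200_000, cutoff=8.0):
    # Brute-force Riemann sum of the integral of exp(-x^2/2 - lam*x^4).
    dx = 2 * cutoff / n_steps
    return sum(math.exp(-x * x / 2 - lam * x**4) * dx
               for x in (-cutoff + i * dx for i in range(n_steps)))

def z_perturbative(lam, order):
    # Expand exp(-lam*x^4) term by term and use the Gaussian moments:
    # Z(lam) ~ sqrt(2*pi) * sum_n (-lam)^n / n! * (4n-1)!!
    # Each term is just a number in an approximation scheme.
    return math.sqrt(2 * math.pi) * sum(
        (-lam) ** n / math.factorial(n) * double_factorial(4 * n - 1)
        for n in range(order + 1))

lam = 0.01
print(z_exact(lam), z_perturbative(lam, 3))  # close agreement for small lam
```

(Names and the truncation order here are my own choices for illustration; for small λ the first few terms agree with the numerical integral to a few parts in a thousand, even though the full series is only asymptotic.)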
Dark energy is less well studied but the idea of vacuum energy is not a new one. We don't understand why vacuum energy is not zero but instead a very tiny number, but it's only a little surprising. It is, after all, fundamentally a theory that is a century old, dating back to Einstein's work with the cosmological constant.
Maybe it isn’t a ‘thing’ doing it, who knows, but there must be a reason.
> Einstein originally introduced the concept in 1917 to counterbalance the effects of gravity and achieve a static universe, a notion which was the accepted view at the time. Einstein abandoned the concept in 1931 after Hubble's discovery of the expanding universe.
> Einstein reportedly referred to his failure to accept the validation of his equations—when they had predicted the expansion of the universe in theory, before it was demonstrated in observation of the cosmological red shift—as his "biggest blunder".
Dark Energy being the CC is just _one possible explanation_, it hasn't been tested.
Well.. not really directly "derivable."
To be more specific, the "problem" is this: when we use the "Standard Model of particle physics" (which is confirmed time and again for anything we do with particles, and which, of course, we already know is inconsistent with the General Relativity model) to calculate the "renormalized value of the zero-point vacuum energy density" as a contribution to the cosmological constant, the number we get, at our present understanding of the factors involved, can't match our cosmological measurements. Note that "renormalization" is a method otherwise used to "extract" a finite answer from a divergent expression (i.e. one that would involve infinities). Applying that method in this derivation gets the "wrong" number.
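Stated as rough orders of magnitude (these are the standard textbook numbers, not something from this thread):

```latex
% Naive estimate: cut the zero-point energy off at the Planck scale
\rho_{\text{vac}}^{\text{QFT}} \sim M_{\text{Pl}}^4 \sim (10^{19}\,\text{GeV})^4 \sim 10^{76}\,\text{GeV}^4
% Observed dark-energy density from cosmology
\rho_{\Lambda}^{\text{obs}} \sim (10^{-3}\,\text{eV})^4 \sim 10^{-47}\,\text{GeV}^4
% Mismatch: the famous "~120 orders of magnitude"
\rho_{\text{vac}}^{\text{QFT}} \,/\, \rho_{\Lambda}^{\text{obs}} \sim 10^{120}
```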
It's much less surprising when it is stated precisely. Attempting specific derivations in which the "infinities" are "avoided" by an approach that works in other cases, we discover that in these derivations the approach "doesn't work"; that is, something is missing when we try to compare two theories that we already know aren't consistent when they have to be applied together.
Luckily, these inconsistencies aren't something that prevents us from using both General Relativity and the Standard Model independently to great success. Using them together is needed only to model very extreme conditions, like writing the equations for some point inside a black hole or something like that. And that doesn't disprove black holes in any way: we measured even their collisions(!) using the predictions of the models.
And we can claim already:
"The Laws Underlying The Physics of Everyday Life Are Completely Understood"
The author of the article actually did that: she has demonstrably written many, many papers filled with formulas attempting different approaches. It's her profession, she does it for a living, and her statements reflect something she really knows.
Your other (pseudo)arguments are also a consequence of not understanding the basics of the topic you are attempting to comment on.
>Trust me, I do not enjoy doing this, but I do not want false claims to spread in the popular science literature.
The only way to stop popular science lit from misinterpreting/aggrandizing scientific findings and publishing what eventually amounts to false claims would be to just never publish anything provocative at all...and then they'd still probably find some BS to publish. I don't really see why such reasoning would prompt this response for the paper.
This is her personality. I don't think it's productive to resolving scientific disagreements and don't endorse it; she looks obnoxious in contrast to the author's polite reply in the comments. But it's definitely her natural state, and it seems to produce a writing style that readers enjoy.
> The only way to stop popular science lit from misinterpreting/aggrandizing scientific findings and publishing what eventually amounts to false claims would be to just never publish anything provocative at all
Hossenfelder is claiming much more than that the work is provocative. She's claiming it's highly disfavored for widely known reasons that the authors do not sufficiently address. She's probably also implicitly suggesting the authors are allowing their work to be marketed directly to the lay public, who do not have the expertise to assess the work, for personal gain. Obviously you can always give alternative explanations (the establishment is close-minded, or whatever), but keep in mind that Hossenfelder is waaaay outside the establishment.
There was a bit of discussion about this on https://news.ycombinator.com/item?id=18609375 as well. Working through the exercise of how spin-2 mediated forces differ from spin-1 is a worthwhile exercise for those who are so inclined.
Indeed, a scientific revolution generally starts by treating as false an assumption previously held as true. Not to say that this is a revolution, but if you push back too hard it won't be, whether it's true or not.
Why did Dun invent it (probably)? Well, here's the thing: Christianity has a God with three faces, the Son, Father and Holy Ghost thing. Why? The answer is: don't multiply entities beyond necessity, so God has the number of faces necessary to do the job, no more, no less.
I'm wittering on because this is where that heuristic came from; literally it's angels-on-pins stuff. So don't invest in it. I'd bet a bit that if we got Dun and Bill together with a few pints of mead, they'd laugh themselves silly to hear that 21st-century physics puts any weight on their measure.
The Greeks wouldn't have, the Chinese didn't, why do we?
We should've listened to (and built on) Aristarchus of Samos instead of Ptolemy's deferent/epicycle contraption. All of this because everyone wanted to hang on to Plato's assertion that the heavens obeyed the mathematical beauty of circular forms.
My metaphysics teacher showed me a line in the Summa Contra Gentiles where St. Thomas Aquinas enunciated the same idea behind the Razor almost a hundred years before Ockham did. Sorry, I don't have the reference handy. Furthermore, the Wikipedia article on Ockham's Razor traces the basic idea all the way back to Aristotle's Posterior Analytics.
That doesn't mean much without a concrete probability model, though, and there probably isn't one in this case.
Sabine's point, if it stands up, is that the author used a different equation of motion in their simulation than they write up in the paper. If true that's sloppy and bad, but it's not a point against the actual theory underlying the simulation, no?
> This dream still drives research today. But we do not know whether more fundamental theories necessarily have to be simpler. The assumption that a more fundamental theory should also be simpler—at least perceptually simpler—is a hope and not something that we actually have reason to expect.
The author is Sabine Hossenfelder :)
Otherwise, why not just say that the missing mass is unicorns? And the energy driving the accelerated expansion of the universe is the energy of their love? There's no shortage of radical explanations for dark mass/energy. The scientific community takes them seriously to the extent that they are compatible with past observations and make testable hypotheses that distinguish them from other theories.
That's not uncommon. In their early development, most theories don't make testable predictions, especially when they attempt to expand the status quo, and not just add some incremental detail...
No. The way I read her article, her main argument is:
"Farnes in his paper instead wants negative gravitational masses to mutually repel each other. But general relativity won’t let you do this. "
The way I understand it, the author of the paper fails to be compatible with a theory that has been confirmed time and again over the last 100 years. She just avoided formulating it so bluntly.
Eventually, quantitative experiments revealed problems, including the fact that some metals gained mass when they burned, even though they were supposed to have lost phlogiston. Some phlogiston proponents explained this by concluding that phlogiston had negative weight.
Eventually, the mass paradox was resolved by the realization that combustion is really something else altogether: the combination with a then-unknown element, oxygen:
Phlogiston remained the dominant theory until the 1770s when Antoine-Laurent de Lavoisier showed that combustion requires a gas that has mass (specifically, oxygen) and could be measured by means of weighing closed vessels.
I don’t know if oxygen theory would have developed without phlogiston theory first linking these phenomena.
(FYI, I just happen to know all this because I'm in the middle of preparing a series of lectures on the history of science, and I just finished the segment on the atomic theory. It's a really fascinating story.)
Progress can be stopped if you interfere properly with disruptive advances in science. Much of the world is still trying to roll back the development and widespread dissemination of small arms technology because it interferes too much with the maintenance of social order. The last major physics discovery gave us atomic weapons. Who knows what deadly forces future advances in physics would unleash? Heaven forbid they were easy to engineer! Perhaps it's in the interest of national security to direct fundamental physics research into "how many angels can fit on the head of a pin" type discussions via directing grant money such as we see with string theory. These endless manipulations of already existing knowledge and an aversion to more daring experimentalism should keep scientists from developing the successors to atomic weapons.
There are several reasons for this: a blog's visibility is low compared to a preprint server (in the scientific community at least), the contents of the blog probably won't be as well-preserved, and there is a tendency to be more casual with the arguments.
Why not? It all depends on the qualification of the people writing the blog posts and comments. Remember that some 300 years ago a lot of scientific results existed in epistolary form only, and that the first journals resembling today's (like the Philosophical Transactions of the Royal Society) were basically compilations of letters sent by astronomers, naturalists, etc., plus the reactions of their peers sent as follow-up letters.
Naturally, this doesn't mean that anybody and their dog should have a voice in discussing matters like cosmology if they actually have no clue about it.
(I don't agree with your statement that it depends on qualifications -- I don't think qualification is a measure of the importance of a contribution.)
The author of the paper is now in a position where they feel like they have to defend themselves publicly, and I don't think this is the forum for a scientific defense (for the three reasons I listed above).
Due in large part to the fact that his paper was attached to a press release from Oxford--a point of contention for both Hossenfelder and Sean Carroll.
Likewise, most of the initial challenges to scientific papers/theories/experimental results happen informally, and only later will there be a peer-reviewed counter (if the original has not been withdrawn or modified before then). See the OPERA FTL neutrinos for a recent example.
I feel like I am misunderstanding what the author wants to say here. It seems to me that this would only be the case if you change the sign of the gravitational mass (F[-m_1,-m_2]=F[m_1,m_2]), but not of the inertial mass?
In the Newtonian case you get that the force is proportional to m_1m_2, so ++=+, +-=- and --=+, but then F=ma flips the direction of the acceleration, right?
++ gives F>0 and a>0, so attraction.
-- gives F>0, but negative inertial mass yield a<0 and hence repulsion.
+- gives F<0, so the positive inertial mass sees a<0 and is repelled, while the negative inertial mass sees a>0 and is attracted.
What am I missing here?
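The sign bookkeeping above can be sketched in a few lines (assuming, as the comment does, that gravitational mass equals inertial mass, so the same m appears in both F ∝ m₁m₂ and a = F/m; the function name and units are my own):

```python
def accelerations(m1, m2):
    """Return (body 1 moves toward body 2, body 2 moves toward body 1)."""
    G, r2 = 1.0, 1.0          # constants drop out of the sign analysis
    F = G * m1 * m2 / r2      # F > 0 means the force points toward the other body
    a1 = F / m1               # inertial response of body 1 (F = m*a)
    a2 = F / m2               # inertial response of body 2
    return (a1 > 0, a2 > 0)   # True = accelerates toward the other body

print(accelerations(+1.0, +1.0))  # (True, True): mutual attraction
print(accelerations(-1.0, -1.0))  # (False, False): mutual repulsion
print(accelerations(+1.0, -1.0))  # (False, True): the positive mass is
                                  # repelled, the negative mass chases it
```

The mixed case is the classic "runaway pair" of Newtonian negative mass, which matches the +- case as described in the comment.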
But her argument is basically "You didn't understand the math, and you misunderstood the work that you cited".
I feel like the burden is a little higher -- i.e. put in some effort to at least show the readers the math she's talking about and the counterfactual conclusions they arrive at when it's worked out.
I would be much more convinced if she took the original paper's claims at their strongest and most convincing and formulated a simple proof or mathematical argument for why the paper is wrong -- instead it feels like she's knocking down a straw man and saying "you're too stupid to be doing this kind of work"...
The math for this would not make any sense to a layperson, but is widely accepted. Proposing a new theory of negative mass means proposing much more significant alterations to the underlying theory of gravity; the theoretical machinery supporting this current understanding is huge.
I definitely trust the general relativity math -- gravitational lensing /GPS atomic clock corrections are perhaps the easiest bits to wrap my head around as evidence.
Anyways, all that is to ask the question -- Is this negative mass model in conflict with observations or is it in conflict with other models of those observations?
My understanding is that the spin of a field or particle is more of a result of the equation (specifically, the Lagrangian) which governs its dynamics. This is irrespective of whether you consider it as a field or as a quantized particle of that field; either way the Lagrangian has certain symmetries. The Fermion Lagrangian has symmetry on 4pi rotation, but not 2pi, which (confusingly) we call spin-1/2. I suppose that the GR Lagrangian has symmetry under pi rotations, which corresponds to spin-2.
(that would make sense if the stress-energy tensor is contracted with two vectors; it would essentially boil down to the fact that (-x^T) M (-x) = x^T M x if you wrote everything as matrices. But while I have studied GR I haven't studied it as a field theory so I'm not sure it's this simple.)
So the spin-2 thing is not too questionable. I don't know anything about how to turn that into a statement about gravitational charges, though.
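For context, the rotation-symmetry counting above can be written compactly (standard helicity phases, not something derived in this thread): a rotation by angle θ about a particle's direction of motion multiplies a helicity-λ state by a phase, so the state returns to itself after a fraction of a full turn set by the spin.

```latex
R(\theta)\,\lvert \lambda \rangle = e^{-i\lambda\theta}\,\lvert \lambda \rangle
% spin-1 (helicity \pm 1):   invariant under \theta = 2\pi
% spin-2 (helicity \pm 2):   invariant under \theta = \pi
% spin-1/2:                  needs \theta = 4\pi to return to itself
```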
I suspect she is insulted, but more so by the press release surrounding a new paper unconfirmed by experimentation. That is not how science works.
One could say it's like calling a buggy alpha version of a software project a stable finished product, if we pretended people actually did give a damn about that in software.
But to say that it's unconfirmed by experimentation and "not how science works" is a bit silly, don't you think? The paper proposes a model, shows outputs of the model, and demonstrates how it predicts things we have already observed. It's a good starting point for other experiments and observations, don't you think? Plenty of papers do purely theoretical work, and science is better for them, too.
No, because she is not saying that the paper is rubbish, she is addressing the hype among an uninformed public and telling them to chill.
Science news is not scarce. Click-bait titles aren't either. There's no shortage of catchy on-paper theories that 1) are not backed by evidence, and 2) make no predictions.
Cosmology has a big problem, and thrashing around in desperation is not becoming.
That's not a good way to talk about these things. Usually it's the really small stuff that is very strongly interacting - see quarks.
> and interact too weakly
When you read breathless science journalism about the latest revolutionary paper, remember how it usually works out. Enjoy it as entertainment if you wish, but don't get your hopes up.