No, negative masses have not revolutionized cosmology (backreaction.blogspot.com)
115 points by benwr | 82 comments



I'm not a physicist, just an interested bystander, but some of these arguments don't seem to hold water to me:

> There’s a more general point to be made here. The primary reason that we use dark matter and dark energy to explain cosmological observations is that they are simple.

> A creation term is basically a magic fix by which you can explain everything and anything.

Dark matter is an unknown 'substance' that interacts only through gravity (weakly) and must have a very specific and complex distribution in the universe to 'work'. That strikes me as neither 'simple' nor much different from tossing a constant into an equation to make it work. Similarly, Dark Energy is an unknown form of energy that is uniformly distributed through the universe, which to me is just a fancy way of saying "we added a constant to make it work". In both cases I don't see how either is implicitly better than adding a 'creation term'.


I believe the author's argument is: however simple or complicated dark matter and energy are, negative (inertial) mass just makes things even more complicated, without offering any new (testable) insights. The author herself points out that she (and the GR community) may be wrong about a lot of this stuff (e.g. that gravity is mediated by a spin-2 field), but absent a testable hypothesis, negative masses don't actually move the field forward at all.

Seemingly "crazy" ideas sometimes turn out to be correct! But in science, we demand radical ideas at least make testable hypotheses.


That's the thing, I think it's exciting because that researcher does believe it to be testable. My comment from the last negative mass thread:

My favorite bit from the author on Twitter: https://threadreaderapp.com/thread/1070302359325151233.html

> The next-generation radio telescope - the Square Kilometre Array @SKA_telescope will be able to test this theory, and directly confirm or invalidate its predictions. 13/17

It is rather surprising that the model predicts the properties of a LambdaCDM universe.

As a casual observer, this is what gets me excited! We'll get our answers one way or another.


> "it’s highly problematic to introduce negative inertial masses because this means the vacuum becomes unstable. If you do this, you can produce particle pairs from a net energy of zero in infinitely large amounts. This fits badly with our observations."

Basically, if you mess with the equation, you have to be very sure you aren't simulating something silly. Which is easy to do; unfortunately, I've done it often.

I still need to read the original paper in detail to confirm, but if the post is correct, the N-body simulation might have some issues.
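
To make the instability concrete, here's a minimal toy sketch (mine, not from the paper or the post) of the classic "runaway pair" in plain Newtonian mechanics: give one body negative inertial and gravitational mass and the pair chases itself to arbitrarily high speed while total momentum stays exactly zero.

    # Toy 1-D "runaway pair": one positive and one negative mass under
    # Newtonian gravity (illustration only, not the paper's model). The
    # pair self-accelerates without bound while total momentum stays zero.
    G = 1.0
    m1, m2 = +1.0, -1.0        # gravitational mass = inertial mass, signs included
    x1, x2 = 0.0, 1.0
    v1, v2 = 0.0, 0.0
    dt = 0.001

    for step in range(200_000):
        r = x2 - x1
        sign = 1.0 if r > 0 else -1.0
        F1 = G * m1 * m2 / r**2 * sign   # force on body 1; points toward body 2 only if m1*m2 > 0
        F2 = -F1                         # Newton's third law still holds
        v1 += F1 / m1 * dt               # a = F / (inertial mass); m2 < 0 flips its response
        v2 += F2 / m2 * dt
        x1 += v1 * dt
        x2 += v2 * dt

    print(f"v1 = {v1:.1f}, v2 = {v2:.1f}")          # both -200.0: a runaway
    print(f"total momentum = {m1*v1 + m2*v2:.2e}")  # ~0 the whole time

Nothing blows up numerically here because the separation stays constant, but the kinematics is absurd: unbounded acceleration from zero net energy, which is the vacuum-instability worry in miniature.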


Waaait a minute, isn't that describing something we've already observed - spontaneous creation of virtual particle pairs in a vacuum?


We have definitely not observed spontaneous creation of virtual pairs in a vacuum. I do not blame you for thinking so as it is often described this way.

Almost no computation in Quantum Field Theory can be done exactly. What physicists do is use perturbation theory, which is very similar to doing Taylor series in introductory calculus. Feynman, in a stroke of genius, found a way to represent the various mathematical terms that occur in these types of calculations as pictures which could be described in words. In this pictorial language, one would say things like "this term corresponds to the creation of a virtual pair of particles", etc. The perturbation expansion is a mathematical "trick" done so that we can obtain approximate results. Each individual term in that expansion has no physical meaning, in spite of the pictorial language used.
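
To see what "the expansion is a trick, the terms aren't physical" means by analogy (my analogy, echoing the Taylor-series comparison above, not actual QFT):

    import math

    # Analogy only: a perturbation series is like a Taylor series. The
    # partial sums approximate exp(1), but an individual term such as
    # x**3/3! is not a physical "piece" of exp(x), any more than a single
    # Feynman diagram depicts an event that actually happens.
    x = 1.0
    exact = math.exp(x)
    partial = 0.0
    for n in range(6):
        term = x**n / math.factorial(n)
        partial += term
        print(f"order {n}: term = {term:.5f}, partial sum = {partial:.5f}, exact = {exact:.5f}")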


In infinitely large amounts?


Dark matter is a weakly interacting massive particle which hasn't been detected directly yet. This is not a terribly controversial theory because we already know of other particles with very similar behavior (neutrinos) and we also know for a fact that our theories of particle physics are incomplete. Moreover, the most straightforward "improvements" of our current theories of particle physics (supersymmetry, for example) would all necessarily involve the existence of previously undiscovered particles. Additionally, the conditions by which such "new" particles would be created (high energy densities) are precisely the conditions which would occur during the early Big Bang, leading to a very natural process of formation of a "cosmic dark matter background" (just as there exists a cosmic neutrino background and a cosmic microwave background) as a relic of the Big Bang, which unlike the other background relics would evolve in structure significantly after the Big Bang, responding to the evolution of baryonic matter (into clumps which became galaxies and galaxy clusters).

In short, it's not much of a leap from everything we already know about physics and cosmological history, and it matches the observational evidence remarkably well, which is why it is the accepted theory in astronomy and cosmology.

Dark energy is less well studied but the idea of vacuum energy is not a new one. We don't understand why vacuum energy is not zero but instead a very tiny number, but it's only a little surprising. It is, after all, fundamentally a theory that is a century old, dating back to Einstein's work with the cosmological constant.


I don’t know about dark matter, but the case for dark energy or something like it is pretty compelling. The expansion of the universe is accelerating. Things don’t generally accelerate unless there is something pushing them. So what is it that’s pushing everything in the universe apart?

Maybe it isn’t a ‘thing’ doing it, who knows, but there must be a reason.


As an extension of this argument: what caused the initial acceleration of the Big Bang? Isn't it possible the current acceleration of the universe is an extension of this force?


Dark energy is the cosmological constant introduced by Einstein. So that criticism is rather ironic.


Maybe? We don't know, also Einstein wasn't a fan:

> Einstein originally introduced the concept in 1917 [2] to counterbalance the effects of gravity and achieve a static universe, a notion which was the accepted view at the time. Einstein abandoned the concept in 1931 after Hubble's discovery of the expanding universe.

> Einstein reportedly referred to his failure to accept the validation of his equations—when they had predicted the expansion of the universe in theory, before it was demonstrated in observation of the cosmological red shift—as his "biggest blunder".


It was actually a double-blunder. Einstein missed the opportunity to predict the expanding universe (and, as a corollary, the big bang). And then, when it turned out the universe was expanding, he retracted the CC despite the fact that it actually turned out that there is a (non-zero) CC, the expanding universe notwithstanding. He got the right answer, but he abandoned it because he got it for the wrong reason.


> he retracted the CC despite the fact that it actually turned out that there is a (non-zero) CC

Dark Energy being the CC is just _one possible explanation_; it hasn't been tested.


I think you have that backwards. Physics doesn’t arise from mathematics; reality just is and the math describes it. The universe is expanding, and the cosmological constant simply describes this relationship between space and the matter inhabiting it.


Exactly right. Einstein had no idea what the physical mechanism behind the CC might be (and we still don't). It was just something he threw into the equations because he believed the universe was static, and the only way to make GR support a static universe is with a non-zero CC.


It turns out that, when he thought he was wrong, he was mistaken.


Actually, a cosmological constant is derivable from the Standard Model. But it's off by a very remarkable amount. Dark energy is the observation that the Standard Model mispredicts the cosmological constant, so there must be something wrong with it.


Also known as the Vacuum Catastrophe, "the largest discrepancy between theory and experiment in all of science".

https://en.wikipedia.org/wiki/Vacuum_catastrophe
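
The headline number is easy to reproduce with round figures (a standard back-of-the-envelope comparison; the "observed" value here is just 70% of the critical density for H of roughly 70 km/s/Mpc):

    import math

    # Back-of-the-envelope "vacuum catastrophe" in SI units. Naive estimate:
    # cut the zero-point energy off at the Planck scale, giving an energy
    # density of order the Planck density c^7 / (hbar * G^2).
    hbar = 1.055e-34   # J s
    G    = 6.674e-11   # m^3 kg^-1 s^-2
    c    = 2.998e8     # m / s

    rho_naive = c**7 / (hbar * G**2)            # ~ 5e113 J/m^3

    # Observed dark-energy density: ~70% of critical density 3 H^2 / (8 pi G).
    H = 70 * 1000 / 3.086e22                    # ~70 km/s/Mpc, in 1/s
    rho_crit = 3 * H**2 / (8 * math.pi * G)     # kg/m^3
    rho_obs  = 0.7 * rho_crit * c**2            # ~ 6e-10 J/m^3

    print(f"naive / observed ~ 10^{math.log10(rho_naive / rho_obs):.0f}")

This prints roughly 10^123; quoted sizes of the discrepancy range from about 10^50 to 10^120 depending on the cutoff chosen, which is part of why the precise statement of the problem matters.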


> cosmological constant is derivable from the Standard Model.

Well... not really directly "derivable."

To be more specific, the "problem" is this: when we use the Standard Model of particle physics (which is confirmed time and again for anything we do with particles, and which, of course, we already know is inconsistent with the General Relativity model) to calculate the "renormalized value of the zero-point vacuum energy density" as a contribution to the cosmological constant, the number that comes out, to our present understanding of the factors involved, can't match our cosmological measurements. Note that "renormalization" is a method otherwise used to "extract" a finite answer from a divergent expression (i.e. one that would involve infinities). Applying that method in this derivation gets the "wrong" number.

It's much less surprising when stated precisely. Attempting specific derivations in which the "infinities" are "avoided" by a specific approach that works in other cases, we discover that in these derivations the approach "doesn't work", that is, that something is missing when we compare two theories which we already know aren't consistent where they have to be applied together.

https://arxiv.org/abs/1205.3365

Luckily, these inconsistencies aren't something that prevents us from using both General Relativity and the Standard Model independently, to great success. Using them together is needed only to model very extreme conditions, like writing the equations for some point inside a black hole. And that doesn't disprove black holes in any way: we've even measured their collisions(!) using the predictions of the models.

And we can claim already:

http://www.preposterousuniverse.com/blog/2010/09/29/seriousl...

"The Laws Underlying The Physics of Everyday Life Are Completely Understood"


It's not an "argument." The first sentence you quote is a statement of the fact. The theory containing dark matter and dark energy is the simplest that we have and to appreciate/understand that statement you would have to be able to actually write the formulas for different approaches and compare.

The author of the article actually did that: she has demonstrably written many, many papers filled with formulas attempting different approaches. It's her profession, she does it for a living, and her statements represent something she really knows.

Your other (pseudo)arguments are likewise a consequence of a lack of understanding of the basics of the topic on which you attempt to comment.


This is a critical response to https://news.ycombinator.com/item?id=18609375 ; in the comments is a response by Jamie Farnes, the author of the paper, and a rebuttal by the blog's author.


To clarify, the response and rebuttal benwr mentions can be found in the comments on the blog post, not in the HN comments from the previous thread. :)


I have absolutely no idea who is right, but I don't see why Dr. Hossenfelder is being so patronizing. In her rebuttal in the comments she says:

> Trust me, I do not enjoy doing this, but I do not want false claims to spread in the popular science literature.

The only way to stop popular science lit from misinterpreting/aggrandizing scientific findings and publishing what eventually amounts to false claims would be to just never publish anything provocative at all...and then they'd still probably find some BS to publish. I don't really see why such reasoning would prompt this response for the paper.


> I don't see why Dr. Hossenfelder is being so patronizing

This is her personality. I don't think it's productive to resolving scientific disagreements and don't endorse it; she looks obnoxious in contrast to the author's polite reply in the comments. But it's definitely her natural state, and it seems to produce a writing style that readers enjoy.

> The only way to stop popular science lit from misinterpreting/aggrandizing scientific findings and publishing what eventually amounts to false claims would be to just never publish anything provocative at all

Hossenfelder is claiming much more than that the work is provocative. She's claiming it's highly disfavored for widely known reasons that the authors do not sufficiently address. She's probably also implicitly suggesting the authors are allowing their work to be marketed directly to the lay public, who do not have the expertise to assess the work, for personal gain. Obviously you can always give alternative explanations (the establishment is close-minded, or whatever), but keep in mind that Hossenfelder is waaaay outside the establishment.


"[T]he gravitational interaction is exchanged by a spin-2 field, whereas the electromagnetic force is exchanged by a spin-1 field. Note that for this to be the case, you do not need to speak about the messenger particle that is associated with the force if you quantize it (gravitons or photons). It’s simply a statement about the type of interaction, not about the quantization. Again, you don’t get to choose this behavior. Once you work with General Relativity, you are stuck with the spin-2 field and you conclude: like charges attract and unlike charges repel."

There was a bit of discussion about this on https://news.ycombinator.com/item?id=18609375 as well. Working through how spin-2 mediated forces differ from spin-1 is a worthwhile exercise for those who are so inclined.


If negative mass experiences an attractive force (that is, one that adds momentum toward the source of the force), doesn't that push it away? After all, with negative mass, the momentum vector and the velocity vector point in opposite directions.


Occam's razor is a heuristic used in the investigative process, not a principle. While it's fair to point out that a theory makes everything more complicated and thus might make it less likely to be sound, it's not fair to push back on the first bit of research into a theory because they haven't had the time to fully develop it.

Indeed, a scientific revolution generally starts by treating as false an assumption previously held as true. Not to say that this is a revolution, but if you push back too hard it won't be, whether it's true or not.


History: Duns Scotus (really) probably invented Ockham's razor. William was his student and used it to avoid the thing that all medieval philosophers needed to avoid, namely being tied up and put on a bonfire.

Why did Duns invent it (probably)? Well, here's the thing: Christianity has a God with three faces, the son, father and holy ghost thing. Why? The answer is: don't multiply entities beyond necessity, so God has the number of faces necessary to do the job, no more, no less.

I'm wittering on because this is where that heuristic came from; literally, it's angels-on-pins stuff. So don't invest in it. I'd bet a bit that if we got Duns and Bill together with a few pints of mead they'd laugh themselves silly to hear that 21st-century physics puts any weight on their measure.

The Greeks wouldn't have, the Chinese didn't, why do we?


I mean, we still call it Occam's razor, but I don't think it's really even the same principle by this point; it just shares the name. In science today, we don't really care about the number of entities. We don't reject the idea that the stars are suns like our own just because that would imply that the number of atoms in the universe is drastically larger than the number of atoms in the solar system. Instead of hypotheses with the least number of entities, we favour hypotheses with the smallest Kolmogorov complexity. [1] As a slogan for the modern version, I like the phrasing by John von Neumann: "With four parameters I can fit an elephant, and with five I can make him wiggle his trunk." [2]

[1] https://en.wikipedia.org/wiki/Kolmogorov_complexity [2] https://www.johndcook.com/blog/2011/06/21/how-to-fit-an-elep...
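
The von Neumann quip is easy to make literal (a toy illustration of mine, not from the linked posts): with as many parameters as data points you can "fit" anything, including pure noise.

    import numpy as np

    # Von Neumann's point in miniature: 5 free parameters (a degree-4
    # polynomial) fit any 5 data points exactly, however arbitrary --
    # which says nothing about whether the model captures anything real.
    rng = np.random.default_rng(0)
    x = np.arange(5.0)
    y = rng.normal(size=5)             # pure noise, no underlying law at all

    coeffs = np.polyfit(x, y, deg=4)   # 5 parameters
    fit = np.polyval(coeffs, x)

    print(np.allclose(fit, y))         # True: a "perfect" fit to random noise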


Nitpick: The historical problem with stars being suns wasn't (just) the universe being larger than the solar system, but that the telescopic observations available at the time seemed to imply that every other visible star would have to be much larger than the Sun, in fact larger than the orbit of Saturn. This was because early astronomers didn't understand optical diffraction and thought the Airy disks visible around stars were the stars themselves, making their angular radius in Earth's sky seem vastly larger than the reality. [1] Both characterization of the Airy disk and observation of stellar parallax didn't occur until the 19th century, by which point religious objections wouldn't have had the same status as in Galileo's day anyway (for example, Darwin's work was published only a few decades later).

Source: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.612...


You ought to read a little bit into the history of astronomy, particularly planetary motion. If the ancient Greeks had applied Occam's razor then we would've abandoned geocentrism long before Copernicus.

We should've listened to (and built on) Aristarchus of Samos instead of Ptolemy's deferent/epicycle contraption. All of this because everyone wanted to hang on to Plato's assertion that the heavens obeyed the mathematical beauty of circular forms.


Although it's attributed to him, William of Ockham did not come up with Occam's Razor.

My metaphysics teacher showed me a line in the Summa Contra Gentiles where St. Thomas Aquinas enunciated the same idea behind the Razor almost a hundred years before Ockham did. Sorry, I don't have the reference handy. Furthermore, the Wikipedia article on Ockham's Razor traces the basic idea all the way back to Aristotle's Posterior Analytics.


Occam's razor has a practical interpretation in Bayesian statistics, in that a more complex hypothesis will tend to have exponentially lower prior probability, because it comes from a more heavily parametrized component of the hypothesis space.

That doesn't mean much without a concrete probability model, though, and there probably isn't one in this case.
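
Still, a toy discrete version shows the mechanism (my own minimal sketch): a flexible "theory" that can accommodate every outcome pays for it with a thin marginal likelihood on any particular outcome.

    # Toy Bayesian Occam factor. Two "theories" predict an outcome in 0..9:
    #   simple:   commits to outcomes {3, 4}         (sharp prediction)
    #   flexible: can accommodate any outcome 0..9   (many-parameter model)
    # Each spreads its total probability of 1 over what it can accommodate.
    simple   = {3: 0.5, 4: 0.5}
    flexible = {k: 0.1 for k in range(10)}

    observed = 4
    p_simple   = simple.get(observed, 0.0)     # 0.5
    p_flexible = flexible.get(observed, 0.0)   # 0.1

    # With equal priors, posterior odds equal the likelihood ratio:
    print(f"odds (simple : flexible) = {p_simple / p_flexible:.0f} : 1")   # 5 : 1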


Also that point seemed odd. Isn't unification of two previously separate phenomena (dark matter and dark energy) simpler in Occam terms? It's using one thing to explain two, even if the actual mathematics is more complex to state. There are also other troubling cosmological issues that this would appear to address, such as Hubble's constant having slight variation outside of measurement tolerance, which slightly differing fluid densities would explain (vs. the current assumption of unknown sources of error).

Sabine's point, if it stands up, is that the author used a different equation of motion in their simulation than the one they write up in the paper. If true, that's sloppy and bad, but it's not a point against the actual theory underlying the simulation, no?


Whether it is simpler depends on the details of the unification. Two comparatively simple theories are not made simpler with a very complex unification.


The electroweak force is most certainly not easier to describe than electromagnetism and the weak force separately. But we still talk about electroweak interaction as being a simpler theory than two separate fundamental forces, because it reduces the number of things that are needed to describe the universe, even with added computational difficulty.


One could also think of it in terms of the amount of "magic" required to make the simplification work.


As far as I can tell Occam terms seem to differ depending on the convenience of whoever happens to be wielding Occam's razor.


It's not totally a heuristic, though it's intractable to use it the real way. Which theory has a shorter program that generates it?


As I read that paragraph, I thought to myself that the author should read Sabine Hossenfelder's recent book Lost in Math: How Beauty Leads Physics Astray. In that book, Sabine argues quite convincingly that our aesthetic judgements of new theories can mislead us. She says of simplicity,

> This dream still drives research today. But we do not know whether more fundamental theories necessarily have to be simpler. The assumption that a more fundamental theory should also be simpler—at least perceptually simpler—is a hope and not something that we actually have reason to expect.


> As I read that paragraph, I thought to myself that the author should read Sabine Hossenfelder's recent book Lost in Math: How Beauty Leads Physics Astray.

The author is Sabine Hossenfelder :)


I realized when I reached the bottom of the post. It is odd to me to see her saying that the new theory is disqualified for not being as simple as alternatives after reading her book.


It's not disqualified for not being simple, it's disqualified for not making testable predictions. Given two theories, neither of which make testable predictions, the simpler one (and the one that has been studied more thoroughly) is the more reliable.

Otherwise, why not just say that the missing mass is unicorns? And the energy driving the accelerated expansion of the universe is the energy of their love? There's no shortage of radical explanations for dark mass/energy. The scientific community takes them seriously to the extent that they are compatible with past observations and make testable hypotheses that distinguish them from other theories.


The author of the negative mass theory said that it will be testable via the cubic kilometer ice array in Antarctica when it comes online (though I'm not sure how). He mentions it in his chain of 17 tweets that summarize the theory.


I thought he was planning to use the Square Kilometre Array radio telescope which is still in the works, and not to be confused with the Ice Cube neutrino detector which has been operational for several years.

https://cdr.skatelescope.org/

https://icecube.wisc.edu/science/icecube/detector


Yep, you are right - I was confusing the Square Kilometer Array with the Ice Cube neutrino detector. Here's a direct link to his tweet: https://twitter.com/Astro_Jamie/status/1070304980224172033


Then we'll find out when it comes online! If the new theory is compatible with past observations, and correctly predicts something that GR gets wrong, that will be a huge finding! But the press has gone wild prematurely, as they often do. As others have mentioned, that seems to be the real issue to which Dr. Hossenfelder objects.


> It's not disqualified for not being simple, it's disqualified for not making testable predictions

That's not uncommon. In their early development, most theories don't make testable predictions, especially when they attempt to expand the status quo, and not just add some incremental detail...


> odd to me to see her saying that the new theory is disqualified for not being as simple

No. Her main argument, the way I read her article, is:

"Farnes in his paper instead wants negative gravitational masses to mutually repel each other. But general relativity won’t let you do this. "

The way I understand it, the author of the paper fails to be compatible with a theory that has been confirmed time and again over the last 100 years. She just avoided formulating that so bluntly.


In which case she probably has read the book.


Can't help but think of phlogiston:

> Eventually, quantitative experiments revealed problems, including the fact that some metals gained mass when they burned, even though they were supposed to have lost phlogiston. Some phlogiston proponents explained this by concluding that phlogiston had negative weight.

https://en.wikipedia.org/wiki/Phlogiston_theory

Eventually, the mass paradox was resolved by the realization that combustion is really something else altogether: the combination with a then-unknown element, oxygen:

> Phlogiston remained the dominant theory until the 1770s when Antoine-Laurent de Lavoisier showed that combustion requires a gas that has mass (specifically, oxygen) and could be measured by means of weighing closed vessels.


Side note: it’s pretty incredible how phlogiston theory correctly linked burning and rusting, despite being unaware of oxidation. (Also, that plants have a part to play in the oxygen-carbon cycle.)

I don’t know if oxygen theory would have developed without phlogiston theory first linking these phenomena.


Almost certainly. Joseph Priestley, the discoverer of oxygen, was a confirmed phlogistonist. He even called oxygen "dephlogisticated air". The reason oxygen theory wasn't developed sooner was technological limitations, not intellectual ones: scientists simply didn't have any way of preparing pure oxygen to experiment on before Priestley.

(FYI, I just happen to know all this because I'm in the middle of preparing a series of lectures on the history of science, and I just finished the segment on the atomic theory. It's a really fascinating story.)


The Egyptians and Babylonians didn't really technologically advance much for 1000 years. They placed the most emphasis in their society on maintaining the social order above all else.

Progress can be stopped if you interfere properly with disruptive advances in science. Much of the world is still trying to roll back the development and widespread dissemination of small arms technology because it interferes too much with the maintenance of social order. The last major physics discovery gave us atomic weapons. Who knows what deadly forces future advances in physics would unleash? Heaven forbid they were easy to engineer! Perhaps it's in the interest of national security to direct fundamental physics research into "how many angels can fit on the head of a pin" type discussions via directing grant money such as we see with string theory. These endless manipulations of already existing knowledge and an aversion to more daring experimentalism should keep scientists from developing the successors to atomic weapons.


It’ll keep scientists in the countries that do this from discovering such weapons. Meanwhile elsewhere in the world...


What if they've already developed these weapons, but don't find it useful to let the rest of the world know about them? For example, there's the mystery of the very real, but unexplained brain injuries to diplomats at the U.S embassies in China and Cuba.


It is fun to read these things, and of course discussion is a good thing, but I do hope that blogs and comment sections do not end up serving any formal role in the scientific literature.

There are several reasons for this: a blog's visibility is low compared to a preprint server (in the scientific community at least), the contents of the blog probably won't be as well-preserved, and there is a tendency to be more casual with the arguments.


> I do hope that blogs and comment sections do not end up serving any formal role in the scientific literature.

Why not? It all depends on the qualifications of the people writing blog posts and comments. Remember that some 300 years ago a lot of scientific results existed in epistolary form only, and that the first journals that looked like today's (like the Philosophical Transactions of the Royal Society) were basically compilations of letters sent by astronomers, naturalists, etc., and the reactions of their peers sent as follow-up letters.

Naturally, this doesn't mean that anybody and their dog should have a voice in discussing matters like cosmology if they actually have no clue about it.


The continuation of the system you mention still exists today, and is the main channel of scientific discourse. I think it is a better channel to use when challenging someone's work.

(I don't agree with your statement that it depends on qualifications -- I don't think qualification is a measure of the importance of a contribution.)


Why is that even a concern for you here? This is clearly written for laypeople.


It may be just text, but the points being made are technical, and the blog's author is an expert.

The author of the paper is now in a position where they feel like they have to defend themselves publicly, and I don't think this is the forum for a scientific defense (for the three reasons I listed above).


> The author of the paper is now in a position where they feel they have to defend themselves publicly

Due in large part to the fact that his paper was attached to a press release from Oxford--a point of contention for both Hossenfelder and Sean Carroll.


If the paper in question becomes fodder for the mainstream media (or at least the mainstream scientific media), then there is no reason why they should not have to defend their theory/arguments/speculations publicly in a less-than-formal, non-peer-reviewed forum, especially one of fairly high regard in that subject area.


There is a good reason: we should not be passing judgement on the validity of scientific work based on informal discussions.


Disagree. As just one example, the Solvay conferences of the 1910s-20s. Much of QM was developed by informal discussions, both written and verbal.

Likewise, most of the initial challenges to scientific papers/theories/experimental results happen informally, and only later will there be a peer-reviewed counter (if the original has not been withdrawn or modified before then). Reference the OPERA FTL neutrinos for a recent example.


I think it's reasonable for experts to publicly explain their opinions on news in their field to a lay audience. Obviously this is not where rigorous scientific arguments between experts should be made.


Not every piece of discussion in science is formal and rigorous. Einstein didn't communicate his ideas exclusively through peer-reviewed papers. There is a time and place for informal talk, and this is it.


> These equations tell you that like masses attract and unlike masses repel. We don’t normally talk about this because for all we know there are no negative gravitational masses, but you can see what happens in the Newtonian limit.

I feel like I am misunderstanding what the author wants to say here. It seems to me that this would only be the case if you change the sign of the gravitational mass (F[-m_1,-m_2]=F[m_1,m_2]), but not of the inertial mass?

In the Newtonian case you get that the force is proportional to m_1 m_2, so ++ = +, +- = -, and -- = +, but then F = ma flips the direction of the acceleration, right?

++ gives F>0 and a>0, so attraction.

-- gives F>0, but the negative inertial mass yields a<0 and hence repulsion.

+- gives F<0, so the positive inertial mass sees a<0 and is repelled, while the negative inertial mass sees a>0 and is attracted.

What am I missing here?
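
Tabulating that bookkeeping directly (a sketch of the arithmetic above, taking gravitational mass equal to inertial mass, signs and all; it just reproduces the table and doesn't settle which sign choices GR actually permits):

    from itertools import product

    # 1-D Newtonian bookkeeping for the comment above: body 1 at x=0,
    # body 2 at x=1, so +x means "toward body 2" for body 1.
    # Gravitational mass = inertial mass, signs included.
    G, r = 1.0, 1.0
    for m1, m2 in product([+1.0, -1.0], repeat=2):
        F1 = G * m1 * m2 / r**2   # force on body 1; +x is "toward body 2"
        a1 = F1 / m1              # inertial response; negative m1 flips it
        verdict = "body 1 attracted" if a1 > 0 else "body 1 repelled"
        print(f"m1={m1:+.0f} m2={m2:+.0f}: F1={F1:+.0f} a1={a1:+.0f} -> {verdict}")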


The tone of this blog post is extremely pedantic and comes across as if the author were personally insulted by the paper.

But her argument is basically "You didn't understand the math, and you misunderstood the work that you cited".

I feel like the burden is a little higher -- i.e., put in some effort to at least show the readers the math she's talking about and the contradictory conclusions they arrive at if it's worked out.

I would be much more convinced if she took the original paper's claims at their strongest and most convincing and formulated a simple proof or mathematical argument for why the paper is wrong -- instead it feels like she's knocking down a straw man and saying "you're too stupid to be doing this kind of work"...


The argument presupposes that a person either believes (if they're not a physicist) or knows (if they are) that a spin-2 field must obey those rules, and that gravity is a spin-2 field, which makes the conclusion follow naturally.

The math for this would not make any sense to a layperson, but is widely accepted. Proposing a new theory of negative mass means proposing much more significant alterations to the underlying theory of gravity; the theoretical machinery supporting this current understanding is huge.


True. I'm not a physicist, so help me understand -- we've only ever observed fermions with spin 1/2 and bosons with spin 1 (and recently, the Higgs at 0). We've postulated that gravitons would have to be spin-2 because of how the math works out (I don't understand the math, but Wikipedia suggests it's because gravity is described by 2nd-order tensors), but we've not confirmed the existence of gravitons. Hopefully I'm not talking past you by speaking of particles; correct me if I'm wrong, but particles/fields are interchangeable as a matter of quantization, right?

I definitely trust the general relativity math -- gravitational lensing / GPS atomic clock corrections are perhaps the easiest bits to wrap my head around as evidence.

Anyways, all that is to ask the question -- Is this negative mass model in conflict with observations or is it in conflict with other models of those observations?


I am not an expert on this either, though I guess I hope to be someday. But when Hossenfelder says that Spin-2 necessitates like charges attracting, I believe her. I've also heard that result elsewhere.

My understanding is that the spin of a field or particle is more of a result of the equation (specifically, the Lagrangian) which governs its dynamics. This is irrespective of whether you consider it as a field or as a quantized particle of that field; either way the Lagrangian has certain symmetries. The fermion Lagrangian has symmetry under 4pi rotation, but not 2pi, which (confusingly) we call spin-1/2. I suppose that the GR Lagrangian has symmetry under pi rotations, which corresponds to spin-2.

(that would make sense if the stress-energy tensor is contracted with two vectors; it would essentially boil down to the fact that (-x^T) M (-x) = x^T M x if you wrote everything as matrices. But while I have studied GR I haven't studied it as a field theory so I'm not sure it's this simple.)

So the spin-2 thing is not too questionable. I don't know anything about how to turn that into a statement about gravitational charges, though.
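
The matrix identity behind that guess is at least easy to sanity-check numerically (this is just the linear-algebra fact, nothing GR-specific):

    import numpy as np

    # Check (-x)^T M (-x) = x^T M x: a scalar built by contracting a
    # rank-2 tensor with the same vector twice is blind to x -> -x,
    # i.e. to a rotation by pi.
    rng = np.random.default_rng(1)
    M = rng.normal(size=(4, 4))
    x = rng.normal(size=4)

    print(np.isclose((-x) @ M @ (-x), x @ M @ x))   # True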


> Tone of this blog post is extremely pedantic and comes across as if the author were personally insulted by the paper.

I suspect she is insulted, but more so by the press release surrounding a new paper unconfirmed by experimentation. That is not how science works.

One could say it's like calling a buggy alpha version of a software project a stable finished product, if we pretended people actually did give a damn about that in software.


Yeah the press release and subsequent science journalism is all annoying and breathless and overblown -- totally agreed.

But to say that it's unconfirmed by experimentation and "not how science works" is a bit silly, don't you think? The paper proposes a model, shows outputs of the model, and demonstrates how it predicts things we have already observed. It's a good starting point for other experiments and observations. Plenty of papers do purely theoretical work, and science is better for them, too.


> But to say that its unconfirmed by experimentation and "not how science works" is a bit silly, don't you think?

No, because she is not saying that the paper is rubbish, she is addressing the hype among an uninformed public and telling them to chill.


"highly problematic to introduce negative inertial masses .... This fits badly with our observations."

Science news is not scarce. Click-bait titles aren't either. There's no shortage of catchy on-paper theories that 1) are not backed by evidence and 2) make no predictions.

Cosmology has a big problem, and thrashing around in desperation is not becoming.


Negative matter is created spontaneously everywhere, according to the article in theconversation.com. Why then can't we collect it, isolate it, and poke and prod it? Because the particles are too small and interact too weakly?


> the particles are too small

That's not a good way to talk about these things. Usually it's the really small stuff that is very strongly interacting - see quarks.

> and interact too weakly

That's better.


Obligatory XKCD: https://xkcd.com/955/

When you read breathless science journalism about the latest revolutionary paper, remember how it usually works out. Enjoy it as entertainment if you wish, but don't get your hopes up.


Reminds me of the SUSY bets.


Well, the negative consensus is disappointing, but the criticisms are absolutely massive.



