Gravitational-lensing measurements push Hubble-constant discrepancy past 5σ (scitation.org)
155 points by digital55 on Feb 18, 2020 | 33 comments



Years ago, in an undergraduate astro class, there were jokes about the "Hubble not-so-constant", since it was so difficult to pin down at the time. I see that we're continuing the tradition.

(Astro was a bit of a rude awakening for an engineer -- anything that wasn't an order-of-magnitude effect could safely be shoveled into the "constant" part of the calculation.)


Famously, Hubble's 1931 paper that detected the expansion of the universe found H0 ~ 500 km/s/Mpc (fig 5 in [1]). The distances that he was using were way off...

Through most of the late 1900s, the value was only known to lie somewhere between 50 and 100 km/s/Mpc.

Now we know it at least as well as most other things, but this history of uncertainty means it is treated differently. Most annoyingly, simulations often work in units of distance/h (where h = H0 / (100 km/s/Mpc)). This causes incredible annoyance for anyone who uses them, since you need to get your factors of little h right. Someone even wrote a paper called "Damn You, Little h!" [2]. It's a total pain...

[1] http://articles.adsabs.harvard.edu/pdf/1931ApJ....74...43H

[2] https://arxiv.org/abs/1308.4150
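
To make the little-h bookkeeping concrete, here's a minimal sketch (my own illustration, not from either paper) of converting a simulation box size from Mpc/h to Mpc for two assumed values of H0:

    # Little-h bookkeeping sketch; the H0 values are just examples
    # (73.8 is from the article, 67.4 is roughly the Planck Lambda CDM value).
    def mpc_per_h_to_mpc(distance_mpc_h, H0):
        """Convert a comoving distance from Mpc/h to Mpc, with H0 in km/s/Mpc."""
        h = H0 / 100.0
        return distance_mpc_h / h

    box_size = 100.0  # simulation box size in Mpc/h
    for H0 in (67.4, 73.8):
        print(f"H0 = {H0}: {box_size} Mpc/h = {mpc_per_h_to_mpc(box_size, H0):.1f} Mpc")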


> Led by Sherry Suyu, the H0LiCOW (H_0 Lenses in COSMOGRAIL’s Wellspring) collaboration uses gravitationally lensed quasars to independently measure H_0.

OMG I love physicists so much


"Its earliest known appearance was in a tongue-in-cheek letter to the editor: "A lover of the cow writes to this column to protest against a certain variety of Hindu oath having to do with the vain use of the name of the milk producer. There is the profane exclamations, 'holy cow!' and, 'By the stomach of the eternal cow!'""


Another recent item that raises questions about dark energy: https://news.ycombinator.com/item?id=21974117


Also a MOND simulation which is interesting https://news.ycombinator.com/item?id=22278215


There's a comment at the end of the article that poses a good alternate explanation:

> Measurements that yield a value for H0 from the CMB amount to an average over the entire age of the universe, whereas the two other measurements described average over a relatively recent fraction of that age. Considering that high-z redshift surveys have given fairly solid indications that cosmic expansion is accelerating, and thus H0 should be increasing over time, the 5.3 sigma divergence between the two values could be a direct result of the different durations for each average.


The comment doesn't quite make sense, but I think what it's trying to get at is that in order to compare the different measurements of H0, you have to assume a physical model.

You can calculate the rate of expansion of the Universe in our vicinity using Type-Ia supernovae. One calls that a "local" measurement of H0. One can also do various measurements of the Cosmic Microwave Background (CMB), and using the standard model of cosmology (Lambda CDM, meaning a theory with a cosmological constant and cold dark matter), predict what H0 should be. In order to connect the local and CMB measurements, in other words, you need a theory of cosmology. If the local and CMB measurements don't match, then there are basically two possibilities:

1. There are systematic errors in the measurements. We just have to figure out who made a mistake in their measurements. (I think most people believe this to be what's happening).

2. We need to modify the theory, Lambda CDM, by adding in new types of matter (such as sterile neutrinos) or modifying the theory of gravity in some way (this is very difficult to do without violating other experimental results).
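
As a rough illustration of why a model is needed to connect the two kinds of measurement: in flat Lambda CDM the expansion rate at any redshift follows from just H0 and the matter density. A minimal sketch (the parameter values are illustrative assumptions, not the actual Planck or H0LiCOW fits):

    import math

    def hubble_rate(z, H0=70.0, omega_m=0.3):
        """H(z) in km/s/Mpc for flat Lambda CDM:
        H(z) = H0 * sqrt(Omega_m * (1+z)**3 + Omega_Lambda), Omega_Lambda = 1 - Omega_m."""
        omega_lambda = 1.0 - omega_m
        return H0 * math.sqrt(omega_m * (1.0 + z) ** 3 + omega_lambda)

    # Local supernovae probe z ~ 0; the CMB probes physics at z ~ 1100.
    # Turning CMB observables into a value of H0 runs through this (or a fancier)
    # model, which is where the model dependence enters.
    print(hubble_rate(0.0))      # this is H0 itself
    print(hubble_rate(1100.0))   # expansion rate around recombination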


Nope, that's total garbage. H0 is defined as the rate of expansion now.


>>> the researchers studied quasars whose light is so strongly deflected by foreground galaxies that they appear as multiple distinct images, as shown in the figure

So, I think I get the dummies' guide, except this bit -- I am surprised that a lens can produce multiple images from one light source. Is this something I don't experience much on human scales, or am I just not using the right lenses?


Remember that these "lenses" are very irregularly shaped, not at all like the smooth lens of a camera. A good real-world analogy is to look through a wine glass, for example:

https://laser.physics.sunysb.edu/_samantha/journal/ringpic.p...
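
If it helps to see where the multiple images come from: even the idealised point-mass lens always produces two images of an offset source, because the lens equation is a quadratic in the image position. A minimal sketch of that textbook case (not the actual H0LiCOW lens modelling):

    import math

    def point_lens_images(beta, theta_E=1.0):
        """Image positions for a point-mass lens, angles in units of the Einstein radius.
        Lens equation: beta = theta - theta_E**2 / theta, a quadratic with two roots."""
        disc = math.sqrt(beta**2 + 4.0 * theta_E**2)
        return (beta + disc) / 2.0, (beta - disc) / 2.0

    # A source slightly offset from perfect alignment yields one image on each side
    # of the lens; lumpy galaxy-scale lenses can produce four or more images.
    print(point_lens_images(0.3))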


OK, so TFA says:

> Combining the H0LiCOW and standard-candle measurements gives an H0 of 73.8 ± 1.1 km/s/Mpc, which differs from the ΛCDM value by 5.3 standard deviations.
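
For a rough sense of where the 5.3 comes from: treating the two numbers as independent Gaussians, the tension is the difference divided by the errors added in quadrature. (The article doesn't quote the CMB-side value here; I'm assuming the Planck Lambda CDM result of about 67.4 +/- 0.5 km/s/Mpc.)

    # Back-of-the-envelope check of the quoted 5.3 sigma tension.
    local, local_err = 73.8, 1.1   # combined H0LiCOW + standard-candle value (from TFA)
    cmb, cmb_err = 67.4, 0.5       # assumed Planck Lambda CDM value (not quoted in TFA)

    tension = (local - cmb) / (local_err**2 + cmb_err**2) ** 0.5
    print(f"{tension:.1f} sigma")  # prints 5.3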

So what is it about ΛCDM that could lead to the discrepancy?

Is it simplistic to conclude that it's this?

> Dark energy, the model presumes, takes the form of a cosmological constant Λ ...

That is, reducing Λ would mean no discrepancy?

But what other discrepancies would that create?


> Is it simplistic to conclude that it's this?

> Dark energy, the model presumes, takes the form of a cosmological constant Λ ...

Yup, quite possibly, and there are people investigating it! The extended model is "time-dependent dark energy"; see [1] or many papers [2].

> But what other discrepancies would that create?

This is kinda the crux - modifying something to fix the current problems causes other problems. An example of this is the proposal that DE is just a result of us having the wrong model for gravity (GR), and that gravity is different at cosmological distances (note that this is not MOND, which was proposed to do away with dark matter and is pretty universally disfavoured). However, gravity is really, really well measured at solar-system distances, so you somehow need a theory of gravity that looks a lot like GR at small ranges and quite different at long ones, and that is hard.

[1] https://www.forbes.com/sites/startswithabang/2017/05/30/is-t...

[2] https://ui.adsabs.harvard.edu/search/q=title%3A%22time%20dep...


Thanks.

That sounds a little like parameter fitting. But maybe that's ignorantly harsh. The fantasy of simple being beautiful and so more likely "true" (which itself is an iffy concept).

Anyway, isn't Λ basically a constant term in the gravity equation? So then you argue that Λ isn't constant. Maybe it depends on time. Or on distance, which I guess just makes it a polynomial. Something like that?


> That sounds a little like parameter fitting.

You're exactly right. We have a model (Lambda CDM + GR + a few other details). We have ways to generate descriptions of how the universe would look given certain parameters (H0=70, Omega_Lambda=0.73, etc, etc) and we basically just see what range of parameters gives a universe that looks like (quantified using some statistics) the one we see through our telescopes.

But this is just phenomenology. The next step is working out the physics. For example, let's say we know there is X amount of something that looks like a cosmological constant - but what is it? This is what, e.g., the search for the dark matter particle is about - we know there is something that is cold and collisionless, but what particle is it?
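
A cartoon of that fitting loop, with completely fake data (real analyses fit many parameters to CMB spectra, supernova samples, and so on, but the structure is the same):

    import numpy as np

    # Fake "observed" low-redshift Hubble diagram: v = H0_true * d plus noise.
    rng = np.random.default_rng(0)
    H0_true, sigma_v = 73.0, 200.0                      # km/s/Mpc, km/s
    distances = np.linspace(10.0, 100.0, 30)            # Mpc
    velocities = H0_true * distances + rng.normal(0.0, sigma_v, distances.size)

    # Try a grid of candidate universes (here just different H0 values) and
    # quantify how well each one matches the data with a chi-square statistic.
    trial_H0 = np.linspace(60.0, 85.0, 501)
    chi2 = np.array([np.sum(((velocities - H0 * distances) / sigma_v) ** 2)
                     for H0 in trial_H0])
    print(f"best-fit H0 ~ {trial_H0[np.argmin(chi2)]:.1f} km/s/Mpc")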

> So then you argue that Λ isn't constant. Maybe it depends on time. Or on distance, which I guess just makes it a polynomial. Something like that?

Yup, I'm pretty sure the only models we have tested are wCDM, which allows w (the equation of state of DE) to be something other than -1 (which is what the cosmological constant is). There is also w(a), which parameterizes the equation of state of dark energy as a linear function of the scale factor (just think of it as time: a=1 now, a=0 at the big bang). So linear rather than constant. We haven't gone to higher order than that.
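
For concreteness, the linear parameterization being described is usually written w(a) = w0 + wa*(1 - a), under which the dark energy density evolves analytically. A small sketch (the non-constant parameter values below are just examples, not fits):

    import math

    def rho_de_ratio(a, w0=-1.0, wa=0.0):
        """Dark energy density relative to today for w(a) = w0 + wa * (1 - a):
        rho_DE(a) / rho_DE(1) = a**(-3*(1 + w0 + wa)) * exp(-3 * wa * (1 - a)).
        A cosmological constant (w0 = -1, wa = 0) gives exactly 1 at all times."""
        return a ** (-3.0 * (1.0 + w0 + wa)) * math.exp(-3.0 * wa * (1.0 - a))

    print(rho_de_ratio(0.5))                   # 1.0: cosmological constant
    print(rho_de_ratio(0.5, w0=-0.9, wa=0.2))  # ~1.4: mildly evolving dark energy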

The downside to adding parameters though is that, while you can always fit your data better (or at least as well) with more parameters,

1: Your error bars often blow up

2: Getting from phenomenology to physics might become hard. There are some models people have proposed that might allow us to fit the data, but then you need to explain why w changed in a very particular way at a very particular time. Basically it starts to look a little like overfitting.


Thanks again.

Overfitting a model for global climate change, for example, isn't an issue, because you're not interested in something like physics. I mean, it's based on physics, but that's buried way down in the model.

But physics has different goals. Closer to math, I guess.


It's actually very easy to overfit climate models. They are fit to observed data with statistical inverse-problem techniques (the same as I imagine is done with astronomical data). Climate change models are just directly discretized physical equations. Just as in astronomy, decisions are made about which physics is represented explicitly in the model and which is parameterized.


> Climate change models are just directly discretized physical equations.

I'm no expert, but it's my understanding that they're hugely more complicated than that implies. Sure, there's lots of physics there. But also chemistry and biology. The best ones are general circulation models,[0] and the outcomes will never fit some pretty theoretical structure.

0) https://en.wikipedia.org/wiki/General_Circulation_Model


> Overfitting a model for global climate change, for example, isn't an issue, because you're not interested in something like physics. I mean, it's based on physics, but that's buried way down in the model.

It's not so much that as that the goals are different. We want to understand cosmology for its own sake. We want to understand climate change because that knowledge drives policy. For that purpose, it doesn't really matter that we're unable to predict the exact weather in Denver at 11:23 AM on October 27, 2091. What matters is that we are able to predict in broad brushstrokes that the consequences of business as usual will probably be bad, and so we ought to seriously consider doing something about it. There is no conceivable outcome of cosmological modeling that will drive policy changes like that.


I agree that models for the overall development of the universe are a lot like models for global climate change. The scale is vastly different, of course. But I bet that the relative cell sizes in our models are similar. Because they're running on similar machines.

But the goals for a theory of gravity, and its integration with QM, are totally different. Or at least, that's my perhaps naive opinion.

Edit: That is, relative cell sizes and total cell counts.


"There is no conceivable outcome of cosmological modeling that will drive policy changes like that."

We hope


> 5 sigma

Sounds like there's new physics to be gleaned from the phenomenon.


I don't know why this is downvoted (maybe I'm missing a bad joke?), but this is exactly why people are excited about this. Either:

a) we have a systematic in our measurement and our model is right and eventually we'll work this out when we fix up everything, or

b) our model isn't right and there is new physics (i.e. something not in Lambda CDM + GR) that explains this discrepancy.


Perhaps I should have used the conditional - "there might be new physics".

But yeah, that's the long and short of it.


It was obvious you were just guessing. No need to be politically correct about it.


There'll need to be a few more funerals first.


I thought the CMB value for H0 (measured by the Planck satellite) was already known to be incorrect as the measurements for it did not take into account that the earth is moving through the CMB and only measured in one direction.

This is the big discussion in the astrophysics field at the moment.


I'm 99% sure this isn't true. Can you point to a single paper that mentions it?

Edit: actually I'm 100% sure this isn't true. See for example the all-sky map from Planck (http://www.bbc.co.uk/news/special/2013/newsspec_5106/img/pla...). And a paper discussing how they will measure the CMB dipole using Planck: https://ui.adsabs.harvard.edu/abs/2002A%26A...393..359P/abst...


I stand corrected, the current Planck analysis seems to take into account that we are moving through the CMB rest frame, although I need to read a bit more. It's a long paper [1].

It was the earlier Planck measurements that did not account for this.

[1] https://arxiv.org/pdf/1907.12875.pdf


This effect seems to be treated and dismissed as insignificant w/r/t the cosmological parameters, in section 3.11 (page 78) of the paper you reference.

There were a lot of new systematic effects in the Planck data, and a lot of the data analysis work amounted to identifying the most significant ones and modeling them out.


Quasars are very far away. What if the Hubble constant was just that much higher back then? Many models of the early universe show a period of rapid expansion in the first billion years.


The Hubble parameter. It's not defined to be constant in time, only in space.


The Hubble parameter is a function of redshift (or time, equivalently): H(z). The Hubble constant is H_0 := H(z=0).

The discrepancy is in H_0. That implies a discrepancy at other redshifts as well, though how you connect H_0 to H(z) depends on your cosmological model. That's why the discrepancy in H_0 measurements is model-dependent. Different experiments are sensitive to the expansion rate during different epochs of cosmic history.
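
For reference, the standard relation behind that statement, in LaTeX (the density parameters are whatever your chosen model says they are; they sum to 1, so H_0 = H(z=0) drops out automatically):

    % General Friedmann expansion history
    H(z) = H_0 \sqrt{\Omega_r (1+z)^4 + \Omega_m (1+z)^3 + \Omega_k (1+z)^2 + \Omega_\Lambda},
    \qquad H_0 \equiv H(z=0).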



