
LIGO black hole echoes hint at general-relativity breakdown - privong
http://www.nature.com/news/ligo-black-hole-echoes-hint-at-general-relativity-breakdown-1.21135
======
vanderZwan
> _The echoes could be a statistical fluke, and if random noise is behind the
> patterns, says Afshordi, then the chance of seeing such echoes is about 1 in
> 270, or 2.9 sigma. To be sure that they are not noise, such echoes will have
> to be spotted in future black-hole mergers. “The good thing is that new LIGO
> data with improved sensitivity will be coming in, so we should be able to
> confirm this or rule it out within the next two years.”_

Can I take a moment to commend the level-headed non-hype of this paragraph? It
gives the impression that their first priority is finding the "truth" (I know,
I know; I'm using it as a shorthand), not whatever they _want_ to be
confirmed. Gives the research much more credibility.

I mean, I know it's Nature so we should expect it, but it's still nice to see.

~~~
wyager
It's actually a bit hyped for particle physics. It ignores the fact that
you're very likely to see a lot of "significant" effects in physics because
there are tens of thousands of data analyses performed every year, so we're
bound to find some chance occurrences at this significance level. See the
diphoton excess a few years ago.

This is why the particle physics community has a very strict unofficial
standard of 5 sigma for significance. People don't generally publish "serious"
papers at 3 sigma.
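For reference, the sigma figures quoted here convert to probabilities via the standard normal tail. A quick sketch (assuming the two-sided convention, which is what reproduces the article's 1-in-270 figure):

```python
import math

def two_sided_p(z):
    """Two-sided tail probability of a standard normal at z sigma."""
    return math.erfc(z / math.sqrt(2))

for z in (2.9, 5.0):
    p = two_sided_p(z)
    print(f"{z} sigma -> p = {p:.2e} (about 1 in {1 / p:,.0f})")
```

This gives roughly 1 in 270 for 2.9 sigma and roughly 1 in 1.7 million for the 5-sigma discovery threshold.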

~~~
roywiggins
We've only observed a small handful of black hole collisions though so seeing
something at 2.9 sigma would be somewhat more surprising, right? Definitely
not rising to the level of a discovery yet of course.

~~~
maxander
Only a small handful of observations (only one, last I heard, but I may be out
of date), but lots and lots of _analyses._ And for every scientist that has
their own model and does their own data processing, there's a chance that the
model lines up with some arbitrary noise in the data.

~~~
lamontcg
You also have to add in that once these experimental results were released, a
hundred or so theoretical physicists immediately started working on massaging
the data into supporting their pet theories. After every anomalous result
there are immediately hundreds of papers published by people simply trying to
be the first to publish, in case their idea happens to pan out. It is a kind
of shotgun approach to winding up as the next Dirac, discovering the positron
from math or whatever.

Since there are probably 99 other PhDs who looked at this data, found their
pet theories didn't match it, and haven't been able to publish, you have to
account for that filtering effect: this was the 1-in-100 paper that managed to
match the data. Adding that "Look Elsewhere Effect" to the 2.9
sigma would push the global significance (in the literal sense of global --
meaning all the research teams across the whole world) of this result down
into meaninglessness.

Of course it's likely that any discovery would start out looking like something
on the edge of significance exactly like this. The safe bet is that this
disappears, but all we can do is wait for more data to come in and see if the
significance improves or disappears.

And I do really hope that someone finds something like this via the LIGO data.
I'm convinced there's something very interesting out there to find, and sooner
or later it should pop up experimentally and shake up our model of the
universe.

~~~
jdmichal
Yes, this is basically a distributed version of the issues discussed in
psychology regarding researcher degrees of freedom with a single data set. If
you throw enough models at the data, one of them is bound to stick, whether it
is predictive or not.
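That filtering effect is easy to put a rough number on: if N independent analyses each have probability p of matching pure noise at this level, the chance that at least one of them does is 1 - (1 - p)^N. Using the article's 1-in-270 figure and a hypothetical 100 independent pet theories (both numbers are illustrative assumptions):

```python
p = 1 / 270   # chance a single analysis matches pure noise at 2.9 sigma
N = 100       # hypothetical number of independent analyses tried

p_any = 1 - (1 - p) ** N   # chance at least one analysis "hits" by luck
print(f"P(at least one chance hit in {N} tries) = {p_any:.2f}")  # ~0.31
```

So under these assumptions, roughly a one-in-three chance that somebody's model matches the noise.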

------
zeroer
This is super exciting stuff! We know that the two most accurate models of the
physical world, Quantum Mechanics and General Relativity, contradict each
other so at least one, and probably both, are approximations to the real laws
that govern our universe. Since QM and GR disagree about what happens for
small, massive objects, and in particular at black hole event horizons, this
is the place to look for divergence from existing theories. If these echoes
hold up under repeated measurements, it could be one of the most consequential
measurements of this century. This is another example of how taking
measurements to verify a theory you think you know can lead you in completely
unexpected directions.

Though, for now, the LIGO team is apparently saying that these results could
be due to noise, which would occur 1 out of 270 times. That's not strong
enough evidence (in my mind) to overcome the overwhelmingly likely prior that
General Relativity is correct. In time, we'll see.
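A toy illustration of why 1-in-270 evidence needn't overcome a strong prior: in odds form, posterior odds = prior odds × evidence strength. Treating the 1-in-270 figure loosely as the strength of the evidence (strictly, a p-value is not a likelihood ratio) and assuming hypothetical prior odds of 1 in 10,000 that GR breaks down here:

```python
prior_odds = 1 / 10_000   # hypothetical prior odds of a GR breakdown
evidence = 270            # generous reading of the 1-in-270 figure

posterior_odds = prior_odds * evidence
print(f"posterior odds = {posterior_odds:.3f}")  # ~0.027, still ~37:1 against
```

Even granting the evidence its full face value, the posterior still favors "it's noise" under that prior.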

Also, the article mentions that LIGO has witnessed 3 black hole mergers. Last
I heard LIGO had only witnessed 2.

~~~
nonbel
>"Quantum Mechanics and General Relativity, contradict each other so at least
one, and probably both, are approximations to the real laws that govern our
universe. [...] overwhelmingly likely prior that General Relativity is
correct"

If you think both QM and GR are likely incorrect, then why do you use "an
overwhelmingly likely prior" that GR is correct?

~~~
raattgift
Neither is "incorrect"; the Standard Model and General Relativity are two of
our best physical theories in that they both accord entirely with
observational and experimental evidence to date.

Either or both may be _incomplete_, however. Correctness and completeness of
any theory in mathematical physics are essentially orthogonal. You can have a
complete theory that is just wrong, for example.

As I wrote a bit earlier in this thread, the most straightforward approach to
quantizing General Relativity fails in strong gravity. Additionally, the
classical field theory that is General Relativity is defined on a smooth
manifold and yet so far we have been unable to escape the conclusion that some
systems of mass-energy inevitably produce a non-smooth discontinuity. A
completion of classical General Relativity requires the smoothing of these
regions. Sharpening this, the problem with GR is the prediction of a
gravitational singularity; if singularities are physical at all (even if they
are in a region of spacetime that is inaccessible outside event horizons),
then General Relativity is incomplete in its own terms.

The Standard Model as a paradigm of quantum field theory, on the other hand,
is defined against a flat spacetime and thus relies on the result from General
Relativity that the flat spacetime metric is induced on the tangent space of
every point in a smooth spacetime. So if GR is incomplete, so is the Standard
Model, in its own terms. (This is not just an academic point; any theory of
gravity that does not reproduce the Poincaré invariance of flat spacetime in
the energy scales of the Standard Model has a terrible _correctness_ problem.)
Additionally, the Standard Model is not especially well-defined at GUT energy
scales. Additionally, the Standard Model does not describe the whole of the
non-gravitational content of the universe; for example, it is silent on dark
matter.

The Standard Model is highly correct, however, in the limits where it is
effectively complete. It's a pity it has so many free parameters that have to
be determined by experiment.

Likewise, General Relativity is both highly correct in the limits of present
observability, and it is complete in its own terms _if_ one admits the
possibility that gravitational singularities only arise in our idealized
models and that, for example, there are no exactly Schwarzschild black holes
anywhere in the past, present or future of our universe. (One has to show
that, and also that there are no other physically realizable systems of matter
that can generate non-smoothness in our spacetime. That's not an easy ask.
Although General Relativity has only one of the free parameters complained
about in the previous paragraph, it doesn't offer much guidance about how to
show that you can't actually generate a low-Q Kerr-Newman metric in reality,
and worse, some of that guidance must come from the high-energy behaviour of
matter fields -- we can only be as complete as the Standard Model right now.)

~~~
lambdadmitry
Posts like this is why I read HN. Thanks a lot! :)

------
ThePhysicist
Personally, I would not be surprised if we discover that our understanding of
general relativity is wrong for extreme values (i.e. very high mass
densities). The whole problem of dark energy and dark matter (which have so
far failed to show up in any conceivable form) also gives reason to doubt the
validity of our current theory of gravitation.

I think we're in a situation similar to that at the end of the 19th century,
when many physicists thought that everything that could be discovered already
had been,
apart from some "edge phenomena" that would need to be resolved somehow using
the current theories. In the end, these edge cases turned out to be the first
hints of some completely new theories that dramatically improved our
understanding of nature. I think that gravity and quantum mechanics are due
for a similar change, and in the coming decades we might just get the data
that we need to make this change happen.

I also have difficulties "buying" the current theory of black hole physics,
especially the concepts of an event horizon and the infinite mass density, as
well as the problems which arise from them (e.g. black hole energy evaporation
through virtual particle generation at the horizon). And as previous theories
of gravitation have broken down at points of extreme value (high energy, high
speed), I think black holes are a hot candidate for breaking general
relativity.

~~~
raattgift
There's plenty of hot dark matter coursing right through you right now!

Fermi -> Wang Ganchang -> Harrison, Kruse & McGuire took a while. I wouldn't
expect a quick detection of something with the properties of a WIMP, as I
would expect any such particle to be even harder to detect than a neutrino,
especially if it doesn't feel the weak nuclear force.

I don't know why you think that dark energy has failed to show up in any
conceivable form -- how do you explain the accelerating cosmological expansion
without it?
Dark energy in its simplest form is just the cosmological constant, and can be
an inertial effect.

In any case, even if the concordance cosmology is simply _wrong_, that does
not mean that GR is incorrect as much as we are wrong about the mechanisms
that generate the metric (Afshordi, one of the authors of the paper at the
top, has proposed non-universal coupling to the single metric of GR), or
alternatively we are wrong about the way we choose and stitch together metrics
(i.e., we're misusing GR in a way that introduces serious errors at long
length scales).

Do you really think there are working scientists who think that there's
nothing more to discover? Conversely, are there many who deny that the huge
preponderance of evidence we have so far favours the Standard Model and
General Relativity? Even if we "demote" SM and GR to effective field theories,
the effective limit of each is very nearly everywhere readily accessible,
isn't it?

Buying the BH singularity would be, I think, a pretty extreme position. Every
viable post-GR effort I know about is to some extent focused on abolishing
singularities somehow. (You could alternatively keep them always hidden and
resolve things like BH thermodynamics; if you always keep information locked
up in another region of spacetime -- behind a horizon -- there's no
information loss problem to consider, and you can "cut" singularities out of
the manifold, recovering everywhere-smoothness. But black holes might
evaporate completely in the far de Sitter-like future.)

Not buying an event horizon in a system of local physics with a maximum
propagation of local state from one point to another seems even more extreme.
The existence of a maximum local speed -- whatever it is, it could be much
faster than light -- sets the slope of a nonempty open convex cone of tangent
vectors (a causal cone at each point for the field-values at that point) which
in turn lets us fix a first order quasilinear system of PDEs admitting a
hyperbolization, and in that you can always find an observer that sees an
event horizon.

The formation of the BH creates a dynamical spacetime with an acceleration
between observers before and after the collapse, and that alone is sufficient
to produce an event horizon.

Abolishing 'c' (as a general free parameter defined at every point; the
definition can even vary by location in spacetime) seems a lot harder to
swallow than abolishing event horizons. If you accept 'c', then while
abolishing Schwarzschild event horizons is pretty easy (nonzero J or
not-always-zero Q at all physical compact dense objects, for example),
abolishing _all_ event
horizons requires a lot of contortions to avoid immediate conflict with local
experiment, much less astrophysical observation.

~~~
ThePhysicist
> There's plenty of hot dark matter coursing right through you right now!

That statement is exactly the problem as I see it, as dark matter is an
attempt to save an existing paradigm using a trick that makes use of unknown
but conceptually understandable matter.

The same was and is true for Einstein's cosmological constant: it's a hack
that was necessary to make a theory match the observations.

Introducing hypothetical/invisible matter to make a theory fit observations
does not mean that this matter really exists.

I did not say that scientists think that there is nothing more to discover,
just that there is a tendency to try to fix up existing theories instead of
accepting that they might be wrong. I'm no expert in particle physics or
relativity (my field is quantum mechanics), so I'm not able to judge the merit
of different theories involving dark matter; I'm just not convinced that dark
matter / dark energy is real. If anyone shows me compelling experimental
evidence I'll be happy to change my mind.

So far we haven't seen any convincing arguments for the existence of dark
energy or dark matter though, and I think there's a chance that they end up as
the 21st century equivalent of the "ether".

~~~
ggreer
I'm pretty sure raattgift was referring to neutrinos when he said hot dark
matter was "coursing right through you". He wasn't assuming the existence of
any speculative form of dark matter.

~~~
raattgift
Yes. "Hot" because neutrinos move quickly compared to the speed of light,
"dark" because they do not feel electromagnetism, and "matter" because they
couple to the metric.

They explain the anomalous momentum in beta decays, among other things, and
are still difficult to detect.

To explain the anomalous momentum we infer around large scale structures at z
<< 1, it's pretty reasonable to consider neutrinos or neutrino-like particles
that are "cold" -- moving slowly compared to the speed of light, thus more
likely to "hang around" in a region of spacetime instead of quickly running
away to infinity. Although they interact very weakly with matter, they still
impart momentum, so hot dark matter would tend to smear apart gas clouds
rather than encouraging them to collapse into denser objects like stars.
Likewise, it is perfectly reasonable to search for them in ways analogous to
how the neutrino itself was searched for experimentally and observationally,
and like with the first detection of the neutrino, it is liable to take time
to detect or let various non-detections exclude all the regions of the
particle mass vs nucleon cross-section parameter space.

Moreover, the search for this sort of cold dark matter does not preclude
concurrent searches for other possibilities.

So I can't agree with ThePhysicist that there is a problem here, other than
that there is apparently a communications gap that affects even people with
backgrounds in quantum mechanics.

------
lamontcg
The breathlessness of this article is fairly annoying.

LIGO wasn't really set up to confirm GR around black holes. It was designed to
study highly energetic, high-curvature gravitational phenomena where it would
be expected that there might be deviations from GR. Measuring deviations from
GR predictions is exactly why you'd build the experiment, and it isn't
"ironic" at all (and not even in the Alanis Morissette sense, since finding
something new would be more like having a party on your wedding day than rain
on it).

GR is also fully expected to break down at the central singularity of a black
hole. The curvature of space-time, and with it the tidal forces, becomes
infinite there. At the very least it's expected that quantum gravity would
smear this out.

The problem of black hole entropy at the event horizon of the black hole has
also been known for decades and is one of the drivers behind doing research
like LIGO. The "firewall" problem is recently all the rage in the west coast
theoretical community, but it's been known for some time that we can't make
sense of black hole entropy entirely classically within GR. Finding non-GR
effects near the event horizon is therefore at least hoped for, if not
expected, and LIGO is precisely the kind of experiment that could shed light
on that.

It's legitimately very exciting, but it's the result of methodically grinding
away at a very hard problem for decades.

------
maverick_iceman
2.9 sigma is hardly evidence of anything in fundamental physics. There was
4-sigma evidence of a diphoton excess from ATLAS and CMS last year which went
away this year. 3-4 sigma discrepancies come and go. It's not for nothing that
physicists have the discovery criterion set at 5 sigma.

What's more, one should be extremely skeptical when observations seem to
violate long held physical theories. The superluminal neutrinos from OPERA
ostensibly had >5 sigma evidence but nobody (correctly) took it seriously as
it violated special relativity. Unsurprisingly, it was ultimately traced to a
loose GPS cable.

------
noobermin
Minor comment from another physicist here. I'm not in this field, but from
what my friends in it say, most people expected hints of quantum gravity to
come specifically from black holes; so if there is anything new to be learned
about GR's limits, black holes are the right place to look.

------
m_mueller
I've had this idea in my mind about how black holes could be connected to
universe generation. It came about when I learned that the known universe
would be a black hole if its mass were concentrated at the center, i.e. its
size is about the same as the event horizon for such a mass.

Thinking backwards, the universe at some point would obviously have been
described as a black hole by GR. Then of course spacetime expansion comes into
play, which somehow makes it not a black hole.

So here is the idea: What if a Big Bang is exactly what happens when matter
falls into itself until the original spacetime continuum breaks? I.e. the
energy of the original structure forms and gets linked into a new spacetime
continuum - part of it as dark energy that expands the new spacetime, part of
it as normal energy and matter.

Is there anything we know that makes my idea impossible? If it were true, would
there be a chance that we could combine our empirical knowledge of the Big
Bang with this new empirical knowledge of gravitational waves to come up with
a testable unified theory (i.e. Quantum Gravity)?

~~~
pdonis
_> the known universe would be a black hole if its mass was concentrated in
the center_

The universe doesn't have a "center". The universe did have a much higher
density right after the Big Bang, but it was expanding rapidly; that's why it
was (and is) not a black hole.

 _> Is there anything we know that makes my idea impossible?_

Yes, the fact that it's based on a misconception about the universe's
spacetime geometry. See above.

There are certainly "bounce" models being considered for what preceded the Big
Bang (although they're by no means the only models being considered). But they
don't work like what you are describing.

~~~
m_mueller
My point is that the way the universe works, i.e. spacetime expansion,
inflation and acceleration (dark energy) could all be governed by processes
inside a black hole's singularity - something we afaik don't yet have a good
model of. It's only a black hole from the reference point of the parent
universe. I don't mean the old bounce model; more like bubbles around a water
hose that get smaller the farther they are from the source - i.e. a stellar
black hole creates a mini universe through its own spacetime rip.

~~~
pdonis
_> My point is that the way the universe works, i.e. spacetime expansion,
inflation and acceleration (dark energy) could all be governed by processes
inside a black hole's singularity_

The singularity doesn't have an "inside". See below.

 _> something we afaik don't have a good model yet_

The models that are being looked at get rid of the singularity altogether.
They don't try to model it as being made up of internal parts.

 _> a stellar black hole creates a mini universe through its own spacetime
rip._

Some physicists have considered models in which black holes give birth to
"baby universes" (Hawking and Lee Smolin are two that come to mind). But these
models don't "rip" spacetime; they remove the singularity, which in the
standard classical GR model is just a spacelike surface--a moment of time--
that represents a boundary of spacetime, and instead just extend the spacetime
further on, into the spacetime of the new universe.

------
yk
Going quickly through the awesomely titled paper [1], they fit a template to
the data and obtain something like 2.9 sigma for their best value, without an
obvious way of dealing with the look-elsewhere effect. On the other hand, this
is probably the worst-understood window on nature. We understand gravity at
solar-system field strengths and distances very well, and we have great data
from particle physics, but until last year our only evidence for the
high-field regime of gravity came from pointing telescopes toward astronomical
objects - and note that telescopes work electromagnetically; they are using
the wrong force.

I think this is exciting, but only the first step. With this, one can deal
with the look elsewhere effect by pointing to this paper and using their
analysis, but I wouldn't think of this by itself as a hint towards deviations
from general relativity.

[1] Abedi &al. "Echoes from the Abyss: Evidence for Planck-scale structure at
black hole horizons"
[https://arxiv.org/abs/1612.00266](https://arxiv.org/abs/1612.00266)

------
nonbel
>"if random noise is behind the patterns, says Afshordi, then the chance of
seeing such echoes is about 1 in 270, or 2.9 sigma. To be sure that they are
not noise, such echoes will have to be spotted in future black-hole mergers."

We are getting closer and closer... So finally we see a correct interpretation
of a p-value in the media, but the connection to the following sentence is not
clear so I am not sure the meaning was really understood.

How does spotting more such echoes allow us to "be sure they are not noise",
and how does this relate to that 1/270 number?

If the probability of such an observation was 1/1.7 million assuming a random
noise model (rather than 1/270), would that mean we could "be sure it was not
noise"? Shouldn't that depend on how well the observations could be fit by
alternative models?

~~~
Natanael_L
You throw a die 3 times. It always shows 6. How do you know the die is loaded,
and that it wasn't a fluke? Repeat the experiment and see whether the results
start looking random (the pattern disappears) or whether the pattern is
strengthened (always 6).
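The arithmetic behind the repeat: independent throws multiply, so a pattern that survives replication becomes rapidly harder to attribute to chance (assuming a fair die and independent throws):

```python
p_once = (1 / 6) ** 3    # three sixes in a row from a fair die
p_twice = p_once ** 2    # the same run in an independent repeat

print(f"three sixes once:  1 in {1 / p_once:,.0f}")   # 1 in 216
print(f"three sixes twice: 1 in {1 / p_twice:,.0f}")  # 1 in 46,656
```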

~~~
nonbel
Also, I couldn't get the original documents at the time, but you reminded me
of this:

>'The simplest assumption about dice as random-number generators is that each
face is equally likely, and therefore the event “five or six” will occur with
probability 1/3 and the number of successes out of 12 will be distributed
according to the binomial distribution. When the data are compared to this
“fair binomial” hypothesis using Pearson’s χ² test without any binning,
Pearson found a p-value of 0.000016, or “the odds are 62,499 to 1 against such
a system of deviations on a random selection.”'
[https://galton.uchicago.edu/about/docs/2009/2009_dice_zac_la...](https://galton.uchicago.edu/about/docs/2009/2009_dice_zac_labby.pdf)

The point is you will always find deviations (with extremely low p-values) if
you look hard enough. It is about collecting data as carefully as possible,
and determining which model fits best, not which fits perfectly.
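The kind of comparison Pearson made can be sketched with an exact binomial test: under the fair-die hypothesis, the number of "five or six" results among 12 dice follows Binomial(12, 1/3), and a two-sided p-value sums the probabilities of every outcome no more likely than the one observed. A stdlib-only sketch with made-up counts (not Weldon's actual data):

```python
from math import comb

def binom_pmf(k, n, p):
    """Probability of exactly k successes in n trials with success rate p."""
    return comb(n, k) * p**k * (1 - p) ** (n - k)

def exact_two_sided_p(k_obs, n, p0):
    """Sum the probabilities of all outcomes no more likely than k_obs."""
    p_obs = binom_pmf(k_obs, n, p0)
    return sum(binom_pmf(k, n, p0) for k in range(n + 1)
               if binom_pmf(k, n, p0) <= p_obs + 1e-12)

# Toy observation: 8 of 12 dice showing "five or six" in a single throw,
# tested against the fair-die expectation of 1/3.
print(f"p = {exact_two_sided_p(8, 12, 1/3):.4f}")
```

With enough throws, even a tiny bias produces an arbitrarily small p-value, which is the point about always finding deviations if you look hard enough.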

~~~
GFK_of_xmaspast
> The point is you will always find deviations (with extremely low p-values)
> if you look hard enough

I don't know how you can get that point from the article instead of 'an
1894-era die is biased but you need a lot of statistical power in order to see
that'.

~~~
nonbel
I think the lesson is clear... the more messed up your methods, the easier it
is to see a deviation (i.e. the historical vs the modern experiment). Also,
where do you
get this:

>"you need a lot of statistical power in order to see that"

------
daxfohl
"The most exciting phrase to hear in science, the one that heralds new
discoveries, is not 'Eureka!' but 'That's funny...'" -- Isaac Asimov

------
wfunction
Does any layman expect GR _NOT_ to break down? To me, it intuitively only
describes emergent phenomena correctly in the limiting case of large scales;
it's bound to be less accurate than something that is correct on small scales.

------
hanso
They're hopelessly out of their depth. This kind of talk makes zero sense.

------
WhitneyLand
No aspersions here. I'm keeping the faith, but can anyone recall what these
allude to?

1) Utah has solved the energy crisis on a table top with deuterium

2) That bump in the collider data is looking pretty odd

3) Remind your child to chelate if the autism acts up

4) Wow those neutrinos are moving so fast

5) Bigotry can stop if we'd go door to door and talk about it

6) Arsenic can kill but it enables growth for at least one family

7) This theory will be perfect if we get rid of Λ

In all fairness, this topic is a little different because we know for sure
something big has to happen eventually to reconcile QM/Gravity.

------
guard-of-terra
It's weird that they only had the idea of a firewall in 2012. I had this exact
idea maybe ten years earlier, as a school student. It's a fairly obvious one
if you think about it - some photons will orbit the black hole.

~~~
TheOtherHobbes
It's potentially more complicated than that.

Someone on Reddit asked a brilliant question a while back - what happens to
quantum fields at the event horizon?

In QFT, fields are everywhere. But to support a field, you need a mechanism
that allows causal propagation - which is exactly what isn't allowed across an
event horizon.

So at the very least you have a discontinuity where three and possibly all
four fundamental forces stop working, and which is separate from any
hypothetical relativistic singularity.

Whatever is left is going to be some kind of unimaginably weird sub-quantum
soup.

I don't know if that's the same firewall that was invented in 2012. But the
takeaway is that relativity isn't complete enough to model black holes. You
absolutely need to include quantum effects - and when you do, things get very
strange indeed.

~~~
danbruc
_In QFT, fields are everywhere._

The fields in quantum field theory are mathematical tools; they are not
physical entities.

 _But to support a field, you need a mechanism that allows causal propagation
- which is exactly what isn't allowed across an event horizon._

But that is only a one-way thing - the future light cones of events inside the
event horizon are contained inside the event horizon but the future light
cones of events outside the event horizon certainly overlap the inside of the
black hole.

~~~
vanderZwan
> The fields in quantum field theory are mathematical tools, they are not
> physical entities.

Still, they only maintain physical relevance as long as they are continuous,
no? Otherwise you literally have a break in reality.

~~~
guitarbill
> a break in reality.

Such as a singularity (e.g. gravitational)? I think in Physics (just as when
analysing functions), the interesting things happen when you approach such
limits.

