
Stephen Hawking’s Final Paper: How to Escape from a Black Hole - mgalka
https://www.nytimes.com/2018/10/23/science/stephen-hawking-final-paper.html
======
jaimehrubiks
I wish I could understand the minds of these people. They lived among us and
yet they were so far ahead in their comprehension of the universe. I've always
loved science, though I chose to do computers in the end. I don't understand a
single word of the abstract.

I also wish I could just know what people in a thousand years learn about the
universe. It seems like the knowledge we gain (science and technology) is
exponential and just started a few years ago.

It also feels like this contribution is so cool, even more so seeing that
Hawking is somehow still sharing things with us after dying.

~~~
IAmGraydon
Stephen Hawking would not have understood a single line of your code. The
languages of his profession and yours are different and must be learned. Would
you go to Japan and feel that the people there must be eons beyond your
comprehension because you don’t speak Japanese? No! You would learn Japanese,
and you would find out that they’re just like you; they just speak a different
language, one they spent many years perfecting.

~~~
orbifold
To keep with your analogy: working in mathematics is more like working with a
codebase that is ~100 years old and that still contains all the dead ends,
unused code, and false starts, because no code ever gets deleted. It is
basically impossible to navigate modern research mathematics without a team of
professionals guiding you for quite some time. Some of the most successful
research mathematicians either find a largely unexplored niche
(Jacob Lurie with Higher Category Theory) or manage to work their way
backwards to a research-level understanding of a specific already-established
field (Peter Scholze claims that he basically worked his way backwards
from the Langlands Conjectures and never took a course in Linear Algebra, but
just picked it up on the way).

------
iMuzz
This was so difficult for me to understand because the author tried _so hard_
to make this sound poetic.

Here's what I got:

- For a long time we thought that any information (matter/light) that goes
into a black hole is lost forever and is "corrupted".

- Hawking believed this for a long time and said “God not only plays dice,
but he often throws them where they can’t be seen." No one really knew _how_
the black hole actually "corrupted" the information, but there were some nutty
theories.

- 30 years later (in 2004), Hawking changed his mind and said that information
can actually be retrieved from a black hole.

- A dude named Andrew Strominger recently discovered that black holes have
this "soft hair" property that can be "read" to theoretically "see" what is
inside the black hole.

- Hawking's last paper says that he thinks the information inside will be
re-emitted when the black hole evaporates.

TL;DR: Hawking for a long time thought matter/information that went into a
black hole was lost forever, and then changed his mind about it.

~~~
a1369209993
FWIW, my understanding is that black holes are the physics equivalent of a
cryptographic mixing function as used in e.g. chacha20: reversible in the
strict sense, but missing any single bit of output completely 'random'ises the
recovered input.
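To make the analogy concrete, here's a toy invertible ARX-style mixer in the spirit of (but much weaker than, and structurally different from) chacha20's actual rounds; the round structure below is my own illustration:

```python
# Toy invertible ARX mixer (NOT chacha20): every step (add mod 2^32,
# xor with a rotation) is individually reversible, so the exact input is
# recoverable from the exact output. But corrupt a single output bit and
# the "recovered" input is essentially random.
MASK = 0xFFFFFFFF

def rotl(x, r):
    return ((x << r) | (x >> (32 - r))) & MASK

def mix(a, b, rounds=8):
    for _ in range(rounds):
        a = (a + b) & MASK   # invertible: subtract b mod 2^32 to undo
        b ^= rotl(a, 7)      # invertible: xor is its own inverse
        a ^= rotl(b, 13)
    return a, b

def unmix(a, b, rounds=8):
    # Run the forward steps in reverse order with inverse operations.
    for _ in range(rounds):
        a ^= rotl(b, 13)
        b ^= rotl(a, 7)
        a = (a - b) & MASK
    return a, b

a, b = mix(0xDEADBEEF, 0x12345678)
assert unmix(a, b) == (0xDEADBEEF, 0x12345678)   # strictly reversible

# Flip one output bit, then invert: count how many of the 64 recovered
# input bits differ from the true input.
ra, rb = unmix(a ^ 1, b)
diff = bin(((ra << 32) | rb) ^ ((0xDEADBEEF << 32) | 0x12345678)).count("1")
print(diff)   # roughly half the bits differ (avalanche)
```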

Assuming Hawking radiation exists (which seems very likely based on what we
know about relativistic and quantum physics), it must carry quantum
information in the form of position/momentum, photon polarization, etc, and
it's not clear where else that information could possibly come from.
(Orthogonal-basis measurements can _sort of_ generate classical information
out of nothing, but not in a sense that's useful here.)
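That parenthetical can be sketched concretely; the |+>-state example below is my own generic-QM illustration, nothing black-hole-specific:

```python
# Prepare |+> = (|0> + |1>)/sqrt(2), then measure in the {|0>, |1>} basis.
# Each outcome is a fresh 50/50 classical bit that isn't "written" anywhere
# in the state description -- classical information "out of nothing",
# though no quantum information is created by the measurement.
import numpy as np

rng = np.random.default_rng(0)

plus = np.array([1.0, 1.0]) / np.sqrt(2)   # state prepared in the X basis
probs = np.abs(plus) ** 2                  # Born rule in the Z basis: [0.5, 0.5]

bits = rng.choice([0, 1], size=10_000, p=probs)
print(bits.mean())   # ~0.5: maximally random classical outcomes
```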

~~~
raattgift
> cryptographic mixing function

If only that were true; that'd be no problem at all.

The problem is procedural, and has to do with slicing up spacetime-filling
fields into field-values on spacelike hypersurfaces (values-surfaces). I'll
focus on _one_ procedure -- there are others that have their place as well.

In a spacetime without any black holes at all, we can take any such values-
surface whereupon all the values are specified, and from that we can recover
_all_ the values of the spacetime-filling fields everywhere in the spacetime.
This is the
[initial value formulation](https://en.wikipedia.org/wiki/Initial_value_formulation_(general_relativity)).

The important thing about the initial value formulation is that we can on our
chosen values-surface perturb a single field-value, and trace the consequences
to neighbouring values-surfaces, and their neighbouring values-surfaces, and
eventually recover the whole set of spacetime-filling fields everywhere in the
spacetime. Indeed, one family of slicings, the
[Hamiltonian formulation](https://en.wikipedia.org/wiki/Hamiltonian_constraint#Hamiltonian_of_classical_general_relativity),
lends itself to
[canonical quantum gravity](https://en.wikipedia.org/wiki/Canonical_quantum_gravity)
(CQG). CQG works everywhere in the absence of strong gravity, and even
provides a clear definition of strong gravity in terms of renormalization:
[http://www.preposterousuniverse.com/blog/2013/06/20/how-quantum-field-theory-becomes-effective/](http://www.preposterousuniverse.com/blog/2013/06/20/how-quantum-field-theory-becomes-effective/)
(Below I'll generalize this to the Effective Field Theory (EFT).)
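A toy numerical analogue of this initial-value determinism (nothing GR-specific; just the 1D wave equation with a standard leapfrog update, all parameters my own): the field values on one slice, plus the previous slice, determine every future slice, and running the same update backwards recovers the past ones.

```python
# 1D wave equation as an initial value problem: two adjacent slices
# determine all others, both forward and backward in "time".
import numpy as np

N, C = 64, 0.5                                  # grid points, Courant number
x = np.linspace(0, 2 * np.pi, N, endpoint=False)

def laplacian(u):
    return np.roll(u, -1) - 2 * u + np.roll(u, 1)   # periodic boundary

def step(u_prev, u):
    """One leapfrog step: u_next = 2u - u_prev + C^2 * Lap(u)."""
    return 2 * u - u_prev + C**2 * laplacian(u)

# Initial slice: a Gaussian bump, initially at rest (u_prev == u0).
u0 = np.exp(-10 * (x - np.pi) ** 2)
u_prev, u = u0.copy(), u0.copy()

for _ in range(200):                 # evolve 200 slices into the future
    u_prev, u = u, step(u_prev, u)

# Time-reverse: swap the last two slices and apply the SAME update rule.
u_prev, u = u, u_prev
for _ in range(200):
    u_prev, u = u, step(u_prev, u)

print(np.max(np.abs(u - u0)))        # tiny: initial slice recovered up to roundoff
```

The point of the backward pass is that nothing about the update rule picks out a direction of time; that is exactly the property the "cut out" regions destroy.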

If we have no black holes, and no early singularity, the effective theory is
almost certainly correct everywhere in the space-time. (Here I won't even
consider the early universe problem; there is a problematical ultradense phase
in the Hot Big Bang model that requires beyond-the-standard-model physics that
wrecks fields-of-the-standard-model values-surfaces _before we get to strong
gravity_.)

If we add black holes, but without Hawking Radiation (that is, they only ever
grow), then on each hypersurface we have to "cut out" the field-values at the
boundary of any region containing strong gravity. These regions are,
crucially, _well inside_ the event-horizons of massive black holes. That is,
the EFT does not _end_ at horizons; it ends near gravitational singularities.

While there are some annoyances, for most reasonable slicings, we can still
recover the full spacetime-filling fields everywhere in spacetime. The field-
values that enter the horizon are trapped within the horizon, and eventually
they are trapped within our "cut out" region. As our black holes never
evaporate, those field values have no impact on _future_ slices. We have,
however, found ourselves with a new constraint that picks out a direction of
time: the future is the direction in which the "cut out" has no impact, but
the past is one in which the "cut out" emits field-values. That's the main
source of annoyance, and stresses the "initial" part of "initial values".
Picking out just any surface will only guarantee you recovery of the _future_
successor surfaces; in most cases you cannot even in principle recover the
past values-surfaces, with the result that you also cannot recover the whole
set of spacetime-filling fields. _THIS_ is the incompatibility between quantum
mechanics and general relativity.

(In practice, researchers -- including Hawking in his original Hawking
Radiation paper -- choose to study "eternal" black holes that never grow or
shrink, so that the field-values are always recoverable everywhere outside the
horizon. However, because the black hole doesn't grow, you have to play some
tricks to deal with matter that crosses the horizon. Those tricks lead to the
negative-energy particles in Hawking's paper and in many popularizations of
Hawking Radiation. In a more realistic model, one would let the black hole
grow or shrink, and do away with the need for negative energy altogether,
although it would not have been tractable for Hawking to take that more
realistic approach in the fancifully named "Black hole explosions?" paper of
1974,
[https://www.nature.com/articles/248030a0](https://www.nature.com/articles/248030a0)
).

Let's condense the point made above: we cannot reconstruct the full past of a
black hole that forms by gravitational collapse of matter. (This gives rise to
the black hole uniqueness theorems and in particular the
[no-hair theorem](https://en.wikipedia.org/wiki/No-hair_theorem).) Without
black hole evaporation, we _can_ still predict the full future.

If we add black hole evaporation via thermal Hawking Radiation, we have a new
problem that breaks the future predictability as well. Black holes _at every
time in their history from initial collapse to final evaporation_ emit Hawking
quanta fully determined by their no-hair parameters[1]. In a typical black
hole, the mass parameter is the driving term. If one starts with an initial
values-surface just before strong gravity appears, then the very next (future)
values-surface _probably_ has Hawking quanta. The spectrum of the Hawking
quanta is statistical: it is, in quantum field theory terms, a _mixed state_.
But the spectrum of all the quanta in the fields just before strong gravity
arises is a _pure state_. In more relaxed terms, we have full knowledge of the
pure state, but we can only talk in terms of statistics for the mixed state.
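The pure-vs-mixed distinction above has a standard density-matrix statement; a minimal generic-QM sketch (nothing black-hole-specific), using the purity Tr(rho^2) as the diagnostic:

```python
# Purity Tr(rho^2) is 1 for a pure state and < 1 for a mixed one.
import numpy as np

def purity(rho):
    return np.trace(rho @ rho).real

# Pure state: the superposition |+> = (|0> + |1>)/sqrt(2).
plus = np.array([1.0, 1.0]) / np.sqrt(2)
rho_pure = np.outer(plus, plus.conj())

# Mixed state: a 50/50 *statistical* ensemble of |0> and |1> -- the kind
# of merely-statistical description a thermal spectrum gives you.
rho_0 = np.diag([1.0, 0.0])
rho_1 = np.diag([0.0, 1.0])
rho_mixed = 0.5 * rho_0 + 0.5 * rho_1

print(purity(rho_pure))    # 1.0
print(purity(rho_mixed))   # 0.5
```

Both density matrices give identical measurement statistics in the {|0>, |1>} basis; the off-diagonal terms of the pure state are what a pure-to-mixed transition destroys.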

The problem persists across the whole of the future spacetime: a Hawking
quantum can fly off to infinity, and for realistic fields (e.g. the standard
model), it may interact with other matter at arbitrarily large distances from
the black hole. (Hawking Radiation was initially modelled with all matter
represented as a non-interacting scalar field; the field-values of the Hawking
scalars propagate to infinity, but don't really matter all that much in the
model. But if a small-mass black hole emits an electron-positron pair, the
former could fly off and meet a proton some time in the future, and probably
we would want to know about a proton gas being neutralized with the result
that it may begin to collapse gravitationally, whereas in the absence of
Hawking electrons, it likely would not. Although the initial model was very
restricted, these sorts of implications were almost immediately clear: large
scale effects can be triggered by Hawking radiation, and as Hawking radiation
is inherently probabilistic, we have a _cosmic_ Schroedinger's Cat problem.)

So, back to your words:

> cryptographic mixing function

Hawking radiation converts a pure state into a mixed state. A cryptographic
mixing function converts a pure state into a pure state in a way which is hard
to trace.

Now, back to this article. Hawking et al. decided to break the no-hair
theorem, and to decorate black holes in such a way that you can still recover
the past of a (never-evaporating, always-growing, no Hawking radiation) whole
spacetime from a values-surface on which there is already strong gravity.
Additionally, the same mechanism allows one to recover the whole future of the
spacetime from a values-surface on which there is strong gravity (and thus
Hawking radiation). The downside is that one has to have the full set of
values on the fields with strong-gravity, and those will (under the idea in
the OP paper) include extremely low energy "soft hair" particles (the OP paper
does not decide whether "soft hair" is just photons, or may be the whole set
of standard model particles; as with the original 1976 paper, Hawking and his
coauthors consider a restricted representation of all the matter in the
spacetime).

So in a way, what they are doing is introducing a "cryptographic mixing
function" to avoid producing a mixed state. You get determinism everywhere
(instead of determinism before strong gravity, and probability after) in
initial-values formalisms, by doing away with the no-hair theorem (which
raises questions about the uniqueness of theoretical black hole models like
Schwarzschild and Kerr).

It is an interesting idea that deserves further study (and will get it), but
it is too early to make bets on whether it will be fully successful at
repairing the "damage" that strong gravity does to the EFT.

Moreover, it is _not_ an answer to the question, "what happens in strong
gravity", and in particular does not prevent the formation of a gravitational
singularity inside a black hole. It also has nothing to say about what happens
at extremely high energies (much higher than the electroweak scale) in the
early hot, dense universe.

However, just making the EFT work in a wider variety of spacetimes is a fine
goal!

--

[1] The Hawking radiation when a black hole initially forms by gravitational
collapse of matter is pretty extreme and is relevant to the early black hole
and its immediate environment. It's hard enough to take into account that the
difficulty gets its own name: the backreaction problem. The gravitational
backreaction (much less matter interactions) of hairs produced at young black
holes is not mentioned in the OP paper by Hawking et al. :/

~~~
a1369209993
This is a significantly more... more response than I was expecting, thank you.

I do have a couple of quibbles, though:

> Hawking radiation converts a pure state into a mixed state. A cryptographic
> mixing function converts a pure state into a pure state in a way which is
> hard to trace.

This is actually specifically what I meant by "the physics equivalent of";
that is, a quantum-computational mixing function that converts an arbitrary,
possibly mixed state to another, probably[0] mixed state, such that a
hypothetical extra-physical observer with full knowledge of both states would
see the result as uniformly pseudorandom over the range of all possible output
states.

Also, I take as a sort of provisional axiom[1] that physics is time-reversible
(not necessarily symmetric, although probably CPT symmetric) and therefore
cannot throw away or generate any new (quantum-mechanical/qubit-based, so
orthogonal-basis measurements don't count) information. (Given this and
something like the Bekenstein bound, it seems trivial that an evaporating
black hole must be leaking its information to somewhere, and that its
radiation must be _getting_ its information content from somewhere.)

0: In the thermodynamics/statistical mechanics "almost certainly" sense.

1: Similar to, and as serious as, Conservation of Energy or "Scientists are
made out of atoms and cannot cause magical
non-(unitary/linear/differentiable/local/CPT symmetric/Liouville
uniform/deterministic/etc) events by looking at things."

~~~
raattgift
> more response than I was expecting, thank you.

You're welcome.

> evaporating black hole must be leaking its information to somewhere

The information about the contents of the BH during its formation and growth
is in the region of strong gravity. Classically, it's squashed into the
gravitational singularity; fully classically the singularity is always hidden
behind an event horizon, so it does no harm to predicting events outside the
horizon.

However, we now add a quantum field theory to the picture.

The origin of Hawking radiation is the relative acceleration between observers
before the formation of the strong gravity and observers after it; the
accelerated (later) observers see particles where the non-accelerated (early)
observers see none. The particles appear in the dynamical spacetime around
(but outside) the horizon. The reason they are there is (roughly) that the
creation and annihilation operators that line up in "unstretched" vacuum
separate in "stretched" vacuum, and annihilation operators miss the created
particles (that is, the annihilation happens at the right spatial coordinate,
but too early or too late: the created particle is elsewhere). The analogy
with Unruh radiation, which appears for accelerated observers in flat
spacetime but not for unaccelerated observers in the same spacetime is not
accidental. In the Unruh case, the acceleration mechanism (say, a rocket
engine) is the reason the accelerated observer sees the extra particles. In
the Hawking case, the acceleration mechanism for later observers is the
dynamically collapsing spacetime.

If nothing exits the horizon of a black hole (at least until final
evaporation; and for that we are stuck with not knowing enough about the
behaviour of quantum fields in strong gravity) then the only parameters
available at any instant in the (QED-filled) dynamical spacetime that is the
origin of Hawking radiation are mass (1 component), charge (1 component),
angular momentum (3 components), linear momentum (3 components), and spatial
position (3 components). The last six components fall away for some families
of observers with a suitable choice of spatial coordinates. ("Instant" in this
context is a _coordinate_ time defining a spacelike hypersurface, and one has
lots of freedom there). You get a handful of extra components (individual
"charges") as you go from QED to the standard model.
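The "mass is the driving term" point can be made concrete with the standard textbook Schwarzschild result T = ħc³ / (8πGMk_B); the specific numbers below are my own illustration, not from the thread or the paper:

```python
# Hawking temperature of a Schwarzschild black hole depends only on the
# mass parameter:  T = hbar * c^3 / (8 * pi * G * M * k_B)
import math

hbar  = 1.054571817e-34   # J s
c     = 2.99792458e8      # m/s
G     = 6.67430e-11       # m^3 kg^-1 s^-2
k_B   = 1.380649e-23      # J/K
M_sun = 1.98892e30        # kg

def hawking_temperature(M):
    """Temperature (K) of the thermal Hawking spectrum for mass M (kg)."""
    return hbar * c**3 / (8 * math.pi * G * M * k_B)

print(hawking_temperature(M_sun))   # ~6.2e-8 K: far colder than the CMB
print(hawking_temperature(1e12))    # ~1.2e11 K: small holes are hot
```

Note the inverse scaling: the smaller the remaining mass, the hotter the spectrum, which is why evaporation runs away at the end.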

There have been attempts to break this picture by _inter alia_ having things
never enter the horizon in the first place, by implanting extra information in
the spacetime around the black hole ("hair"), and by locking up all the
infalling matter into a crystal that preserves details of the matter's
microscopic states either forever or until evaporation is almost entirely
complete. It is extremely hard to do this without introducing unlikely
observables.

> Conservation of Energy

... is not a global symmetry of a dynamically collapsing spacetime. You only
get conservation of energy locally within a suitably small region of spacetime
(which can be quite large far from the collapse, assuming asymptotic
flatness).

> time-reverible

Locally. This is most sharply obvious in strong curvature.

> CPT symmetric

This is a problem with unitary time evolution of _any_ quantum system in this
setting; CPT doesn't enter into it. There is neither antimatter nor chirality
in the model non-interacting scalar field that exposes the information loss
problem for a collapsing black hole. ("Negative energy" is only a trick used
when one wants to use a static background instead of a dynamical one; it does
not interact at all with its pair-partner or other "negative energy" quanta;
there is no local symmetry, it is the global symmetries of the Schwarzschild
solution that are being preserved through the trick. You entangle the real
Hawking quanta with false quanta instead of entangling the real Hawking quanta
with the spacetime (which would change the metric, which is exactly what one
is trying to avoid in some studies)).

Indeed, the problem is mostly centred on "time" in the first sentence of the
previous paragraph. There is no unique slicing of a general curved 4-spacetime
into 3-spaces, and if one does it wrong, one gets problems (see ref to
Giddings 2006 below). This is in some ways an argument that black hole
information loss is mainly about the
[https://en.wikipedia.org/wiki/Problem_of_time](https://en.wikipedia.org/wiki/Problem_of_time)
.

--

See also

[https://arxiv.org/abs/1511.08221](https://arxiv.org/abs/1511.08221) (Giddings
2015)

[http://inspirehep.net/record/775859](http://inspirehep.net/record/775859)
(Unruh 2009)

[http://inspirehep.net/record/775859](http://inspirehep.net/record/775859)
(Unruh 2007)

[https://arxiv.org/abs/hep-th/0606146](https://arxiv.org/abs/hep-th/0606146)
(Giddings 2006)

and refs therein (e.g. Unruh 1977).

or with a concise summary of the work above and related work

[http://backreaction.blogspot.com/2015/12/hawking-radiation-is-not-produced-at.html](http://backreaction.blogspot.com/2015/12/hawking-radiation-is-not-produced-at.html)

------
brian-armstrong
If you want to dive in, I believe this is the right arxiv link
[https://arxiv.org/abs/1810.01847](https://arxiv.org/abs/1810.01847)

------
notadog
Here's a link to the actual paper, if anyone's interested:

[https://arxiv.org/pdf/1810.01847.pdf](https://arxiv.org/pdf/1810.01847.pdf)

------
curiousgal
I get that Hawking's work is incredible and that he, himself, is an amazing
person. But is his work really worth putting him on the same pedestal as
Newton?

~~~
cyphar
I have the same feeling, and I think people often mix his personal achievement
(fighting a degenerative disease for so long while remaining a very large mind
in the world of cosmology) with his scientific achievements (helping refine
cosmology and our thinking about black holes and their interactions). I've
read several of his books and papers, and while they are definitely brilliant,
he never produced a Principia Mathematica (even Einstein didn't).

In many ways I would argue that Einstein would be a more apt comparison.
Einstein built most of his fundamental work on identifying issues that arise
when combining different fields and then using thought-experiments to think
those issues through. Hawking similarly operated through thought-experiments,
trying to piece together concepts from different fields (though he
focused far more on cosmology).

Einstein was a brilliant scientist (obviously), but people often over-
complicate his work. In many ways, the real beauty of Einstein's work was just
how simple and fundamental it was -- and how un-intuitive the final
conclusions are. Newton is a whole different ballgame (even though he was
wrong about absolute space and time -- but that conclusion took Maxwell's
equations to discover).

~~~
Maro
Are you a physicist? Not trying to be snarky.

I think GR is at Newton's level. They say most of physics is very iterative:
if X didn't discover Z, then probably another person Y would have 5-10 years
later. But this is not true for GR. GR came out of the blue; it wasn't
strictly required to explain anything important back then. It was just
Einstein sitting down, doing thought experiments about elevators in space,
then a huge, incomprehensible (to me) mental leap to manifolds and tensors,
and the Einstein equation. You can try this yourself: read his popular GR book
(it's excellent), then pick up a GR textbook, read the first chapter, and
see if you could get from the thought experiments to constructing the math.

It's hard to compare it to Newton, because Newton also had to invent calculus,
but it's up there.

~~~
cyphar
> Are you a physicist? Not trying to be snarky.

Yes, I've studied and done physics research (I have yet to finish my degree,
but I've completed all of the physics topics). (My research topics were
asteroseismology and non-linear optics -- not related to SR or GR, but we went
through the derivations and reasoning of SR in class.)

> I think GR is at Newton's level.

I think you've misunderstood what I meant by "people over-complicate
Einstein's work". My point wasn't that Einstein is somehow inferior to Newton,
it was that a large part of Einstein's work is actually incredibly simple
(compared to how it is pitched) which to me makes it all the more genius.

> GR came out of the blue, it wasn't strictly required to explain anything
> important back then.

The core idea behind GR came from thought experiments trying to understand how
SR might be extended to non-inertial frames. In many ways this is the most
obvious thing to consider after you've come up with SR: "what if we start
accelerating?"

Also, SR similarly wasn't required to explain anything important. Einstein was
thinking very deeply about what it means to "measure" something, and the
first section of his paper discusses synchronized clocks and how they
relate to measurements (possibly the furthest thing from a "real" problem you
can have).

> then a huge, incomprehensible (to me) mental leap to manifolds and tensors,
> and the Einstein equation.

In Einstein's case, he had quite a bit of help with the mathematics (again,
not to detract at all -- but we should separate the mathematical derivation
from intuition). Intuition is the driving force in physics (with mathematics
fleshing out what the logical conclusion of an intuition must be), and so I
find discussing the intuition to be far more critical when talking about
physical theories.

The core genius was the realisation that acceleration changes how light beams
look to observers -- and the intuition that acceleration must have an
equivalence to gravity. Neither of these things is complicated, and you could
explain them to anyone who has seen a projectile or stood in an elevator. But
the conclusion you come to is far from obvious.

And that, to me, is the beauty of physics. Obviously the intuition is just the
first step, and there is plenty of brilliance in all of the manifold and
tensor equations (it's definitely above my pay-grade), but I think that over-
hyping the mathematics isn't quite right either.

> It's hard to compare it to Newton, bc Newton also had to invent Calculus,
> but it's up there.

That is effectively what I was saying. Newton had the Principia Mathematica
and in many ways pioneered the mathematical viewpoint that we use in physics
today.
~~~
meekaaku
"SR similarly wasn't required to explain anything important"

Wasn't it the case that no one was able to explain the Michelson-Morley
experiment? That the speed of light was measured to be the same irrespective
of observer speed? People came up with all sorts of explanations (ether,
Lorentz transformations, etc.), but it was Einstein who had the creativity to
suggest that maybe _time_ itself was slowing down. With that insight, the rest
was beautifully derivable.

And this is also why Hilbert was able to (and others were racing to) derive
the field equations before Einstein, since Einstein needed help from
mathematicians (Grossmann?), and Hilbert was already an excellent
mathematician. SR and GR are derivable from beautifully simple insights (time
slowdown, gravity/acceleration equivalence).

One could say Einstein came up with this insight out of the blue, but Poincare
was also investigating time dilation, and stopped because it was too counter-
intuitive.

Edit: typos

