Stephen Hawking’s Final Paper: How to Escape from a Black Hole (nytimes.com)



I wish I could understand the minds of these people. They lived with us and yet they are so much ahead in their comprehension of the universe. I've always loved science, though I chose to do computers in the end. I don't understand a single word of the abstract.

I also wish I could just know what people a thousand years from now will learn about the universe. It seems like the knowledge we gain (science and technology) is growing exponentially, and that growth only started recently.

It also feels like this contribution is so cool, even more so seeing that Hawking is somehow still sharing things with us after dying.


Stephen Hawking would not have understood a single line of your code. The languages of his profession and yours are different and must be learned. Would you go to Japan and feel that the people there must be eons beyond your comprehension because you don’t speak Japanese? No! You would learn Japanese, and would find out that they’re just like you; they just speak a different language which they spent many years perfecting.


To keep with your analogy: working in mathematics is more like working with a codebase that is ~100 years old and that still contains all the dead ends, unused code, and false starts, because no code ever gets deleted. It is basically impossible to navigate modern research mathematics without a team of professionals guiding you for quite some time. Some of the most successful research mathematicians either find a largely unexplored niche (Jacob Lurie with Higher Category Theory) or manage to work their way backwards to a research-level understanding of a specific already established field (Peter Scholze claims that he basically worked his way backwards from the Langlands Conjectures and never took a course in Linear Algebra, but just picked it up on the way).


I don't think you're giving yourself enough credit. Sure, being a programmer and being Stephen Hawking are two very different things, but we still take an occasional dip in the ocean of abstract thinking when working deeply on complex, generalized problems in programming.

One person's mundane work is another person's magical wizardry.

Also, unfortunately, there's no evidence that I know of that our knowledge is going to continue growing at an exponential pace. It seems an equally plausible hypothesis that we're reaching a plateau that may last for quite some time.

(But... I personally have my hopes up that we'll be doing Star Trek style space exploration within the next millennium!)


> One person's mundane work is another person's magical wizardry.

Absolutely. Sometimes I get excited about a topic and try to learn as much as possible, but once you peek behind the curtain it's just as mundane as anything else.

Most of the time an abstract idea is hard to understand because the concepts have been abstracted. If you just break it down and pick it apart, it's not hard to understand, though it can be time-consuming depending on what you already know.


> once you peek behind the curtain it's just as mundane as anything else

So much of interesting computation is just matrix manipulation/linear algebra under the covers. Which is pretty cool, actually: it means the skills are very transferable, which in turn means it is worthwhile going deep.

This is a breath of fresh air compared to most sorts of computing, where the skill really is "memorize all the workarounds to the bugs", and all the while you know that as soon as the next version comes out, all of that will be obsolete, so you try to avoid wasting too much time learning it.


I'm tempted to believe that cosmology and astrophysics are the exception to that rule.


Astrophysics and cosmology are both relatively straightforward. There is one set of partial differential equations that is relevant, plus a few more when you also want to understand things like the magneto-hydrodynamics of stars or charged gas clouds. In the end it is nothing you can't manage to understand with a bit of study. What is truly hard is to prove rigorous mathematical statements about any of those equations and the resulting geometry. But the number of people who attempt this is dwarfed by the total number of people working in the field, maybe 1:1000.
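(In case it helps to see it: the core set here is, I take it, Einstein's field equations, which relate spacetime curvature to the stress-energy of the matter content. In LaTeX notation:

  G_{\mu\nu} + \Lambda g_{\mu\nu} = \frac{8 \pi G}{c^4} T_{\mu\nu}

Everything else -- fluid equations, Maxwell, MHD -- sits on the matter side.)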


I mean, what's behind the curtain of most things is "a bunch of differential equations".

With things like astrophysics you also have "and computer simulations".


It's not magic, man. You haven't studied and pondered these concepts for decades, as Hawking and his peers have. I love science, but let's tone down the intelligence and hero worship a bit.


> They lived with us and yet they are so much ahead in the comprehension of the universe.

Aren't these people just theorists? A black hole is a theory, not a proven reality that can be directly observed.


This was so difficult for me to understand because the author tried _so hard_ to make this sound poetic.

Here's what I got:

- For a long time we thought that any information (matter/light) that goes into a black hole is lost forever and is "corrupted".

- Hawking believed this for a long time and said “God not only plays dice, but he often throws them where they can’t be seen.” No one really knew _how_ the black hole actually "corrupted" the information, but there were some nutty theories.

- 30 years later (in 2004), Hawking changed his mind and said that information can actually be retrieved from a black hole.

- A dude named Andrew Strominger recently discovered that black holes have this "soft hair" property that can be "read" to theoretically "see" what is inside the black hole.

- Hawking's last paper says that he thinks the information inside will be re-emitted when the black hole evaporates.

TL;DR: Hawking for a long time thought matter/information that went into a black hole was lost forever - and then changed his mind about it.


FWIW, my understanding is that black holes are the physics equivalent of a cryptographic mixing function as used in e.g. chacha20: reversible in the strict sense, but missing any single bit of the output makes the recovered input completely random.

Assuming Hawking radiation exists (which seems very likely based on what we know about relativistic and quantum physics), it must carry quantum information in the form of position/momentum, photon polarization, etc, and it's not clear where else that information could possibly come from. (Orthogonal-basis measurements can sort of generate classical information out of nothing, but not in a sense that's useful here.)
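To illustrate just that analogy, here's a toy Python sketch of the mixing property (a made-up, insecure Feistel mixer, nothing physical): it is exactly invertible, but invert an output with a single bit flipped and the "recovered" input is scrambled.

  # Toy 64-bit Feistel-style mixer -- illustrative only, not a real cipher.
  def f(x, k):
      # Cheap nonlinear round function on a 32-bit half.
      return ((x * 2654435761 + k) ^ (x >> 13)) & 0xFFFFFFFF

  def mix(block, rounds=8):
      left, right = block >> 32, block & 0xFFFFFFFF
      for k in range(rounds):
          left, right = right, left ^ f(right, k)
      return (left << 32) | right

  def unmix(block, rounds=8):
      left, right = block >> 32, block & 0xFFFFFFFF
      for k in reversed(range(rounds)):
          left, right = right ^ f(left, k), left
      return (left << 32) | right

  x = 0x0123456789ABCDEF
  assert unmix(mix(x)) == x        # reversible in the strict sense...
  print(hex(unmix(mix(x) ^ 1)))    # ...but flip one output bit and the
                                   # "recovered" input looks random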


> black holes are the physics equivalent of a cryptographic mixing function

chuckle

  OpenSSL Changelog.txt:
  v5.1.0 released on 2172-03-21:
  - ...
  - add support for Sagittarius A* message digest
  - ...


> cryptographic mixing function

If only that were true; that'd be no problem at all.

The problem is procedural, and has to do with slicing up spacetime-filling fields into field-values on spacelike hypersurfaces (values-surfaces). I'll focus on one procedure -- there are others that have their place as well.

In a spacetime without any black holes at all, we can take any such values-surface whereupon all the values are specified, and from that we can recover all the values of the spacetime-filling fields everywhere in the spacetime. This is the https://en.wikipedia.org/wiki/Initial_value_formulation_(gen...

The important thing about the initial value formulation is that we can on our chosen values-surface perturb a single field-value, and trace the consequences to neighbouring values-surfaces, and their neighbouring values-surfaces, and eventually recover the whole set of spacetime-filling fields everywhere in the spacetime. Indeed, one family of slicing, https://en.wikipedia.org/wiki/Hamiltonian_constraint#Hamilto... lends itself to https://en.wikipedia.org/wiki/Canonical_quantum_gravity (CQG). CQG works everywhere in the absence of strong gravity, and even provides a clear definition of strong gravity in terms of renormalization: http://www.preposterousuniverse.com/blog/2013/06/20/how-quan... (Below I'll generalize this to the Effective Field Theory (EFT)).

If we have no black holes, and no early singularity, the effective theory is almost certainly correct everywhere in the space-time. (Here I won't even consider the early universe problem; there is a problematical ultradense phase in the Hot Big Bang model that requires beyond-the-standard-model physics that wrecks fields-of-the-standard-model values-surfaces before we get to strong gravity.)

If we add black holes, but without Hawking Radiation (that is, they only ever grow), then on each hypersurface we have to "cut out" the field-values at the boundary of any region containing strong gravity. These regions are, crucially, well inside the event horizons of massive black holes. That is, the EFT does not end at horizons; it ends near gravitational singularities.

While there are some annoyances, for most reasonable slicings, we can still recover the full spacetime-filling fields everywhere in spacetime. The field-values that enter the horizon are trapped within the horizon, and eventually they are trapped within our "cut out" region. As our black holes never evaporate, those field values have no impact on future slices. We have, however, found ourselves with a new constraint that picks out a direction of time: the future is the direction in which the "cut out" has no impact, but the past is one in which the "cut out" emits field-values. That's the main source of annoyance, and stresses the "initial" part of "initial values". Picking out just any surface will only guarantee you recovery of the future successor surfaces; in most cases you cannot even in principle recover the past values-surfaces, with the result that you also cannot recover the whole set of spacetime-filling fields. THIS is the incompatibility between quantum mechanics and general relativity.

(In practice, researchers -- including Hawking in his original Hawking Radiation paper -- choose to study "eternal" black holes that never grow or shrink, so that the field-values are always recoverable everywhere outside the horizon. However, because the black hole doesn't grow, you have to play some tricks to deal with matter that crosses the horizon. Those tricks lead to the negative-energy particles in Hawking's paper and in many popularizations of Hawking Radiation. In a more realistic model, one would let the black hole grow or shrink, and do away with the need for negative energy altogether, although it would not have been tractable for Hawking to take that more realistic approach in the fancifully named "Black hole explosions" paper of 1974, https://www.nature.com/articles/248030a0 ).

Let's condense the point made above: we cannot reconstruct the full past of a black hole that forms by gravitational collapse of matter. (This gives rise to the black hole uniqueness theorems and in particular https://en.wikipedia.org/wiki/No-hair_theorem ). Without black hole evaporation, we can still predict the full future.

If we add black hole evaporation via thermal Hawking Radiation, we have a new problem that breaks the future predictability as well. Black holes at every time in their history from initial collapse to final evaporation emit Hawking quanta fully determined by their no-hair parameters[1]. In a typical black hole, the mass parameter is the driving term. If one starts with an initial values-surface just before strong gravity appears, then the very next (future) values-surface probably has Hawking quanta. The spectrum of the Hawking quanta is statistical: it is, in quantum field theory terms, a mixed state. But the spectrum of all the quanta in the fields just before strong gravity arises is a pure state. In more relaxed terms, we have full knowledge of the pure state, but we can only talk in terms of statistics for the mixed state.
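(To make "mass is the driving term" concrete: for a Schwarzschild black hole, the thermal spectrum of the Hawking quanta is set by a single parameter, the Hawking temperature

  T_H = \frac{\hbar c^3}{8 \pi G M k_B}

so the mass M is the only knob.)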

The problem persists across the whole of the future spacetime: a Hawking quantum can fly off to infinity, and for realistic fields (e.g. the standard model), it may interact with other matter at arbitrarily large distances from the black hole. (Hawking Radiation was initially modelled with all matter represented as a non-interacting scalar field; the field-values of the Hawking scalars propagate to infinity, but don't really matter all that much in the model. But if a small-mass black hole emits an electron-positron pair, the former could fly off and meet a proton some time in the future, and probably we would want to know about a proton gas being neutralized with the result that it may begin to collapse gravitationally, whereas in the absence of Hawking electrons, it likely would not. Although the initial model was very restricted, these sorts of implications were almost immediately clear: large scale effects can be triggered by Hawking radiation, and as Hawking radiation is inherently probabilistic, we have a cosmic Schroedinger's Cat problem.)

So, back to your words:

> cryptographic mixing function

Hawking radiation converts a pure state into a mixed state. A cryptographic mixing function converts a pure state into a pure state in a way which is hard to trace.

Now, back to this article. Hawking et al. decided to break the no-hair theorem, and to decorate black holes in such a way that you can still recover the past of a (never-evaporating, always-growing, no Hawking radiation) whole spacetime from a values-surface on which there is already strong gravity. Additionally, the same mechanism allows one to recover the whole future of the spacetime from a values-surface on which there is strong gravity (and thus Hawking radiation). The downside is that one has to have the full set of values on the fields with strong gravity, and those will (under the idea in the OP paper) include extremely low energy "soft hair" particles (the OP paper does not decide whether "soft hair" is just photons, or may be the whole set of standard model particles; as with the original 1976 paper, Hawking and his coauthors consider a restricted representation of all matter in the spacetime).

So in a way, what they are doing is introducing a "cryptographic mixing function" to avoid producing a mixed state. You get determinism everywhere (instead of determinism before strong gravity, and probability after) in initial-values formalisms, by doing away with the no-hair theorem (which raises questions about the uniqueness of theoretical black hole models like Schwarzschild and Kerr).

It is an interesting idea that deserves further study (and will get it), but it is too early to make bets on whether it will be fully successful at repairing the "damage" that strong gravity does to the EFT.

Moreover, it is not an answer to the question, "what happens in strong gravity", and in particular does not prevent the formation of a gravitational singularity inside a black hole. It also has nothing to say about what happens at extremely high energies (much higher than the electroweak scale) in the early hot, dense universe.

However, just making the EFT work in a wider variety of spacetimes is a fine goal!

- --

[1] The Hawking radiation when a black hole initially forms by gravitational collapse of matter is pretty extreme and is relevant to the early black hole and its immediate environment. It's hard enough to take into account that the difficulty gets its own name: the backreaction problem. The gravitational backreaction (much less matter interactions) of hairs produced at young black holes is not mentioned in the OP paper by Hawking et al. :/


This is a significantly more... more response than I was expecting, thank you.

I do have a couple of quibbles, though:

> Hawking radiation converts a pure state into a mixed state. A cryptographic mixing function converts a pure state into a pure state in a way which is hard to trace.

This is actually specifically what I meant by "the physics equivalent of"; that is, a quantum-computational mixing function that converts an arbitrary, possibly mixed state to another, probably[0] mixed state, such that a hypothetical extra-physical observer with full knowledge of both states would see the result as uniformly pseudorandom from the range of all possible output states.

Also, I take as a sort of provisional axiom[1] that physics is time-reversible (not necessarily symmetric, although probably CPT symmetric) and therefore cannot throw away or generate any new (quantum mechanical/qubit-based, so orthogonal-basis measurements aren't) information. (Given this and something like the Bekenstein bound, that an evaporating black hole must be leaking its information to somewhere, and its radiation must be getting its information content from somewhere, are seemingly trivial.)

0: In the thermodynamics/statistical mechanics "almost certainly" sense.

1: Similar to and as serious as Conservation of Energy or "Scientists are made out of atoms and cannot cause magical non-(unitary/linear/differentiable/local/CPT symmetric/Liouville uniform/deterministic/etc) events by looking at things."
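(For reference, the Bekenstein bound mentioned above caps the entropy of a region of radius R and energy E, and a black hole saturates it with the Bekenstein-Hawking entropy:

  S \le \frac{2 \pi k_B R E}{\hbar c}, \qquad S_{BH} = \frac{k_B c^3 A}{4 G \hbar}

where A is the horizon area.)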


> more response than I was expecting, thank you.

You're welcome.

> evaporating black hole must be leaking its information to somewhere

The information about the contents of the BH during its formation and growth is in the region of strong gravity. Classically, it's squashed into the gravitational singularity; fully classically the singularity is always hidden behind an event horizon, so it does no harm to predicting events outside the horizon.

However, we now add a quantum field theory to the picture.

The origin of Hawking radiation is the acceleration between observers before the formation of the strong gravity and the observers after it; the accelerated (later) observers see particles where the non-accelerated (early) observers see none. The particles appear in the dynamical spacetime around (but outside) the horizon. The reason they are there is (roughly) that the creation and annihilation operators that line up in "unstretched" vacuum separate in "stretched" vacuum, and annihilation operators miss the created particles (that is, the annihilation happens at the right spatial coordinate, but too early or too late: the created particle is elsewhere). The analogy with Unruh radiation, which appears for accelerated observers in flat spacetime but not for unaccelerated observers in the same spacetime, is not accidental. In the Unruh case, the acceleration mechanism (say, a rocket engine) is the reason the accelerated observer sees the extra particles. In the Hawking case, the acceleration mechanism for later observers is the dynamically collapsing spacetime.

If nothing exits the horizon of a black hole (at least until final evaporation; and for that we are stuck with not knowing enough about the behaviour of quantum fields in strong gravity) then the only parameters available at any instant in the (QED-filled) dynamical spacetime that is the origin of Hawking radiation are mass (1 component), charge (1 component), angular momentum (3 components), linear momentum (3 components), and spatial position (3 components). The last six components fall away for some families of observers with a suitable choice of spatial coordinates. ("Instant" in this context is a coordinate time defining a spacelike hypersurface, and one has lots of freedom there). You get a handful of extra components (individual "charges") as you go from QED to the standard model.

There have been attempts to break this picture by inter alia having things never enter the horizon in the first place, by implanting extra information in the spacetime around the black hole ("hair"), and by locking up all the infalling matter into a crystal that preserves details of the matter's microscopic states either forever or until evaporation is almost entirely complete. It is extremely hard to do this without introducing unlikely observables.

> Conservation of Energy

... is not a global symmetry of a dynamically collapsing spacetime. You only get conservation of energy locally within a suitably small region of spacetime (which can be quite large far from the collapse, assuming asymptotic flatness).

> time-reversible

Locally. This is most sharply obvious in strong curvature.

> CPT symmetric

This is a problem with unitary time evolution of any quantum system in this setting; CPT doesn't enter into it. There is neither antimatter nor chirality in the model non-interacting scalar field that exposes the information loss problem for a collapsing black hole. ("Negative energy" is only a trick used when one wants to use a static background instead of a dynamical one; it does not interact at all with its pair-partner or other "negative energy" quanta; there is no local symmetry, it is the global symmetries of the Schwarzschild solution that are being preserved through the trick. You entangle the real Hawking quanta with false quanta instead of entangling the real Hawking quanta with the spacetime (which would change the metric, which is exactly what one is trying to avoid in some studies)).

Indeed, the problem is mostly centred on "time" in the first sentence of the previous paragraph. There is no unique slicing of a general curved 4-spacetime into 3-spaces, and if one does it wrong, one gets problems (see ref to Giddings 2006 below). This is in some ways an argument that black hole information loss is mainly about the https://en.wikipedia.org/wiki/Problem_of_time .

- --

See also

https://arxiv.org/abs/1511.08221 (Giddings 2015)

http://inspirehep.net/record/775859 (Unruh 2009)

http://inspirehep.net/record/775859 (Unruh 2007)

https://arxiv.org/abs/hep-th/0606146 (Giddings 2006)

and refs therein (e.g. Unruh 1977).

or with a concise summary of the work above and related work

http://backreaction.blogspot.com/2015/12/hawking-radiation-i...


Isn’t that what he called Hawking radiation? Didn’t think it was news at this point.

It’s funny how we talk of information “not being lost” since it’s emitted as radiation... would we be able to decipher anything? If not, it’s still lost.


Hawking Radiation, in its original form, did not allow anyone to learn the quantum information about what went into the black hole — if it was made entirely of photons; or entirely of antimatter; or made of any mix of photons, normal matter, and antimatter… all would be indistinguishable.

What’s news is that it isn’t all lost, that the starting state has any influence at all on what comes out. I am only an enthusiastic amateur at this, but I get the impression this is as surprising as the discovery of Hawking Radiation in the first place.


> but I get the impression this is as surprising as the discovery of Hawking Radiation in the first place

It should be noted that we have yet to experimentally observe Hawking radiation (it is so low-power we don't have the means to observe it). People talk about Hawking radiation as though it is a discovered phenomenon -- it isn't. It's a prediction based on our understanding of quantum mechanics and some thinking about how it interacts with event horizons -- and observing it would help reinforce our understanding of the underlying theories.


Indeed, no existing stellar-mass-or-above black hole is a net emitter of radiation in the current cosmological era. They swallow cosmic microwave background photons at a much higher rate than they emit Hawking radiation and thus grow slowly even if infalling matter is not present. In fact, they're the best heat sinks in existence. The universe will be billions of times its current age before the CMB has redshifted enough to become cooler than black holes.
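To put rough numbers on that, here's a back-of-the-envelope sketch in Python (SI constants; one solar mass as the example):

  # Hawking temperature of a Schwarzschild black hole:
  #   T_H = hbar * c^3 / (8 * pi * G * M * k_B)
  from math import pi

  hbar = 1.055e-34    # J*s
  c    = 2.998e8      # m/s
  G    = 6.674e-11    # m^3/(kg*s^2)
  k_B  = 1.381e-23    # J/K
  M    = 1.989e30     # kg, one solar mass

  T_H = hbar * c**3 / (8 * pi * G * M * k_B)
  print(T_H)   # ~6e-8 K: about eight orders of magnitude colder than
               # the 2.7 K CMB, so the hole absorbs far more than it emits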

Hypothetical primordial black holes [1] could be much smaller and hotter, and indeed Hawking radiation could provide one way to detect those that have survived to the present day, but as of now none have been found.

[1] https://en.wikipedia.org/wiki/Primordial_black_hole


ETA: I don't think we disagree at all. I slightly misread your second sentence before hitting "reply". I'll keep this reply in place because I think it amplifies and adds to your point.

Hawking radiation is always present around a dynamical black hole -- it is produced by the dynamical spacetime itself[1]. (All black holes that aren't eternal -- that includes any that form by gravitational collapse of matter -- are dynamical, and thus have Hawking radiation, even while they're growing.)

In principle we should be able to detect Hawking radiation as black holes first form, since it will backreact with the black hole, and probably even interact with the collapsing matter. Studying BH-forming supernovae and the like will lead to discoveries in this difficult area of https://en.wikipedia.org/wiki/Semiclassical_gravity as hot Hawking radiation is in principle directly observable, and there will be indirect traces.

The problem is that BH formation typically happens in a bright environment. The candidate black holes we know about aren't that young and as a result the Hawking gas will be cold enough to have negligible impact: basically no interaction with nearby matter, basically no backreaction on the black hole itself, and much colder than the surrounding environment (infalling matter including the CMB gas) and thus in practice impossible to detect directly with telescopes.

- --

[1] Well, more precisely, given Einstein-Maxwell electrovacuum and general relativity with a black hole metric, Hawking radiation is inevitable. Hawking's original work dealt with a static spacetime (i.e., an eternal, unchanging black hole) and used negative energy quanta as a trick to proxy for a dynamic spacetime. Using a dynamical black hole (i.e., one that grows and shrinks), one does not need negative energy quanta at all, much less a mechanism which tosses only those halves of pairs into the BH (in order to keep the metric unchanged from pure static Schwarzschild).


> The problem is that BH formation typically happens in a bright environment.

That must be the understatement of the week. Love it! I guess observations of failed supernovae and possible direct-collapse black holes could shed some light (hah!) on the matter.


I really want to shout out “infinity billion” right now, but I’m not going to do that because it would be childish and silly. (strains)


> Isn’t that what he called Hawking radiation? Didn’t think it was news at this point.

No, that's emitting random particles (or antiparticles) from pairs created near the horizon, where one falls in and the other escapes.

That's not getting the information that falls in out.


No. As I understand it, it is exactly that - the Hawking radiation is how the information escapes.

EDIT - See https://en.wikipedia.org/wiki/Black_hole_information_paradox for more details, and note that I may well be wrong...!


> I may well be wrong

Unfortunately, you are. The information that is lost is what went into the black hole in the first place.

It's not strictly a quantum problem: if black holes have no hair, then we cannot tell by looking at a spherically symmetric non-rotating black hole if it was formed by one spherical shell of infalling matter of mass M, or two concentric spherical shells of infalling matter of mass M/2, or three of M/3, etc. When we add electromagnetism to the picture, we get Hawking radiation with a temperature inversely proportional to the black hole mass; but that mass does not encode the number of shells or their composition, just their total mass.

When we add in quantum electrodynamics, we find that Hawking radiation has a thermal spectrum (so, cold photons for a stellar-mass black hole, but when the black hole is very small you'll get electrons and positrons too; and potentially the whole zoo of particles if we use the full standard model as the quantum field theory). But we could start with a black hole formed by squashing together neutral composites (positronium, atoms) and with some probability get out nothing but photons: no massive particles at all. With some smaller probability we get mostly photons but also electrons and positrons. The main problem is that we are stuck talking probabilistically about the spectrum of Hawking quanta even if we know every single detail of what we threw into the black hole; there is no unitary evolution from known-in-every-detail state to known-in-every-detail state. The "every detail" part is the information that is lost.

There are a variety of ways one can try to deal with the conversion of "we know every detail" (a pure state, quantum mechanically) to "we can only talk probabilistically" (a mixed state, quantum mechanically), and some are listed in the wikipedia page you link to. Hawking's final paper is yet another approach, and throws away the idea that black holes have no hair; that is, a black hole cannot be described with a small number of parameters (dominated by mass and angular momentum) but rather develops an enormous number of parameters encoded as perturbations of the vacuum. Those perturbations in turn influence the spectrum of the Hawking quanta in such a way that it is fully predictable -- even though it looks like a thermal bath, the vacuum perturbations ("soft hairs") fully determine it. It is an idea worth further investigation, but it is not much more compelling than several alternatives.

One problem is that when we take an exact analytical black hole solution to the Einstein Field Equations of general relativity, we have "no hair" as a mathematical theorem. If we perturb around such a solution we generate observables that closely match what we see of candidate astrophysical black holes in the sky. Hawking wants to treat astrophysical black holes as even more different from the theoretical models, and while that's not a crazy idea, it's also not very parsimonious, as many, many more perturbations ("hairs") are necessary than the minimum required to match the observed systems, and it's not clear that a "no hair" black hole must be measurably different from a "soft hair" black hole.

(More detail here https://news.ycombinator.com/item?id=18327614 )


If you want to dive in, I believe this is the right arxiv link https://arxiv.org/abs/1810.01847


Here's a link to the actual paper, if anyone's interested:

https://arxiv.org/pdf/1810.01847.pdf


I get that Hawking's work is incredible and that he, himself, is an amazing person. But is his work really worth putting him on the same pedestal as Newton?


I have the same feeling, and I think people often mix his personal achievement (fighting a degenerative disease for so long and being a very large mind in the world of cosmology) with his scientific achievements (helping refine cosmology and think about black holes and their interactions). I've read several of his books and papers, and while they are definitely very brilliant, he never produced a Principia Mathematica (even Einstein didn't).

In many ways I would argue that Einstein would be a more apt comparison. Einstein grounded most of his fundamental musings in identifying issues that arise when combining different fields, and then used thought-experiments to think about those issues. Hawking similarly operated through thought-experiments and trying to piece together different concepts from different fields (though he focused far more on cosmology).

Einstein was a brilliant scientist (obviously), but people often over-complicate his work. In many ways, the real beauty of Einstein's work was just how simple and fundamental it was -- and how un-intuitive the final conclusions are. Newton is a whole different ballgame (even though he was wrong about absolute space and time -- but that conclusion took Maxwell's equations to discover).


Are you a physicist? Not trying to be snarky.

I think GR is at Newton's level. They say most of Physics is very iterative, and if X didn't discover Z, then probably another person Y would have 5-10 yrs later. But this is not true for GR. GR came out of the blue; it wasn't strictly required to explain anything important back then. It was just Einstein sitting down, doing thought experiments about elevators in space, then a huge, incomprehensible (to me) mental leap to manifolds and tensors, and the Einstein equation. You can try this yourself: read his popular GR book (it's excellent), then pick up a GR textbook and read the first chapter, and see if you could get from the thought experiments to constructing the math.

It's hard to compare it to Newton, because Newton also had to invent Calculus, but it's up there.


This is actually the subject of a long-running dispute. Fun fact: Hilbert submitted a paper with the field equations of GR before Einstein - but it was published later.

https://en.wikipedia.org/wiki/Relativity_priority_dispute#Ge...


> Are you a physicist? Not trying to be snarky.

Yes, I've studied and done physics research (I have yet to finish my degree, but I've completed all of the physics topics). (My research topics were on asteroseismology and non-linear optics -- not related to SR or GR, but we went through the derivations and reasoning of SR in class.)

> I think GR is at Newton's level.

I think you've misunderstood what I meant by "people over-complicate Einstein's work". My point wasn't that Einstein is somehow inferior to Newton, it was that a large part of Einstein's work is actually incredibly simple (compared to how it is pitched) which to me makes it all the more genius.

> GR came out of the blue, it wasn't strictly required to explain anything important back then.

The core idea behind GR came from thought experiments trying to understand how SR might be extended to non-inertial frames. In many ways this is the most obvious thing to consider after you've come up with SR: "what if we start accelerating?"

Also, SR similarly wasn't required to explain anything important. Einstein was thinking very deeply about what it means to "measure" something, and the first section of his paper was a discussion of synchronized clocks and how they relate to measurements (possibly the furthest thing from a "real" problem you can have).

> then a huge, incomprehensible (to me) mental leap to manifolds and tensors, and the Einstein equation.

In Einstein's case, he had quite a bit of help with the mathematics (again, not to detract at all -- but we should separate the mathematical derivation from intuition). Intuition is the driving force in physics (with mathematics fleshing out what the logical conclusion of an intuition must be), and so I find discussing the intuition to be far more critical when talking about physical theories.

The core genius was the realisation that acceleration changes how light beams look to observers -- and the intuition that acceleration must have an equivalence to gravity. Neither of these things are complicated, and you could explain them to anyone who has seen a projectile or stood in an elevator. But the conclusion you come to is far from obvious.

And that, to me, is the beauty of physics. Obviously the intuition is just the first step, and there is plenty of brilliance in all of the manifold and tensor equations (it's definitely above my pay-grade), but I think that over-hyping the mathematics isn't quite right either.

> It's hard to compare it to Newton, because Newton also had to invent Calculus, but it's up there.

That is effectively what I was saying. Newton had the Principia Mathematica and in many ways pioneered the mathematical viewpoint that we use in physics today.


"SR similarly wasn't required to explain anything important"

Wasn't it the case that no one was able to explain the Michelson-Morley experiment? That the speed of light was measured to be the same irrespective of observer speed? People came up with all sorts of explanations (ether, Lorentz transformations, etc.), but it was Einstein who had the creativity to suggest that maybe time itself was slowing down. With that insight, the rest was beautifully derivable.

And this is also why Hilbert was able to (and others were racing to) derive the field equations before Einstein, since Einstein needed help from mathematicians (Grossmann?), and Hilbert was an already excellent mathematician. SR and GR are derivable from beautifully simple insights (time slowdown, gravity/acceleration equivalence).

One could say Einstein came up with this insight out of the blue, but Poincare was also investigating time dilation, and stopped because it was too counter-intuitive.

Edit: typos


"SR similarly wasn't required to explain anything important"

I don't agree. The Maxwell equations (1865-1875) already implicitly encode the Lorentz transformation as the symmetry group; that's one of those "sooner or later somebody would have come along and put a spacetime theory behind it" things, and in this sense it's not up there with Newton. As you probably know, there were a bunch of physicists who were looking at this (e.g. Lorentz, Poincare, Larmor, etc.). Then it was Einstein who tied it all together, and it was called the Special Theory of Relativity. (But this is one of those cases where this was going to happen anyway.)

I agree that Einstein was able to borrow manifolds and tensor calculus from the mathematicians, unlike Newton, who had to invent everything.

Btw. this is a great discussion.


> read his popular GR book (it's excellent)

No it's not, "and that, my friend, you will surely believe me".


Not sure what you mean here.

I mean this book: https://www.amazon.com/Relativity-Special-General-Albert-Ein...

It's excellent.


Indeed. He is a media darling, was a great science popularizer, and decidedly influential in his own sub-subfield of black hole cosmology, but in the greater context of physics "just another" accomplished researcher. On a purely personal level, of course, his career was quite remarkable given his condition.


And he did a fantastic job popularising science. He was even in a Star Trek episode as himself!

It brings a stupid happy smile to my face every time I think about it.


My favorite quote is that he sold more books on physics than Madonna did on sex.


> But is his work really worth putting him on the same pedestal as Newton?

Sure. Hawking's work greatly extended our understanding of black holes, and it is fascinating and important to our understanding of both the origin of the universe and its ultimate demise. For the question "What was the Universe prior to the Big Bang?", the Hartle-Hawking state is currently one of our best answers: likely a singularity of both space and time, meaning that a boundary for the beginning of time isn't something that actually exists.

Newtonian physics are important in that, at the energy and mass levels at which we experience life, they work out to be close enough to how things actually work as to not be meaningfully distinct.

But they're not actually (the most) correct. Special relativity, general relativity, and quantum mechanics show that we have more correct understandings of physics than Newton's, and certain ideas of Newton's are incompatible with our current understanding. Gravity is an interesting and simple one -- with Newtonian physics, the apple falls to earth. However, Galileo would have disagreed with Newton, had they been able to discuss the topic. Leibniz did disagree. Newton was a believer in an absolute frame of reference in the universe, whereas the underpinning of general relativity is that all frames of reference are relative -- that if you use the apple as your frame of reference, it is the earth falling towards it. And this wasn't something new that came from Einstein and Hilbert -- Galileo recognized this, Leibniz recognized this, etc. Even when Newton was using Galileo's principle of relativity to develop Newtonian physics, he diverged on this fairly central part.

In fact, the work of Einstein and others has shown us that gravity is almost certainly not a force at all, that mass is not attracting mass over a distance.

TLDR: Newtonian physics aren't actually "correct", yet we venerate him because of how important a body of work they are. Hawking's work on black holes and singularities is important to our understanding of where the universe came from and how it will end.


Thank you. In the same vein, it bothers me that Hawking weighs in on AI/ML when he clearly hasn't written a line of ML code in his life. The popular press picks up on his drivel as if he were some authority in the field, leading to unsubstantiated fears and misinformation.


you know he's dead, right? don't think he'll be weighing in on much any more.





