The Quantum Thermodynamics Revolution (2017) (quantamagazine.org)
105 points by uberdru on March 8, 2018 | 49 comments



The article states:

"A central pillar of quantum theory is that the information — the probabilistic 1s and 0s representing particles’ states — is never lost. (The present state of the universe preserves all information about the past.)"

What if the current state of the universe could have been arrived at by two different historical paths? That would imply that neither one could be a preferred history - that we would have to consider the present state to have been arrived at through both histories?

Do photons have their probability distributions with interference patterns on screens to resolve some sort of preferential-history information storage? Like there are not enough bits available to say which slit the photons went through, so they must say both?


Just to make sure I'm not completely confused here: "Information can't be lost" and "you can't arrive at the same state of the universe through two different paths" are two ways to state exactly the same thing, right? Regardless of the details of the rest of your physics (though you do need various notions to build up that far - time, with at least a past and a present, the ability to call states the "same" or not, etc.)

For instance, in cellular automata, the way that you would state that same concept is "the update function is injective". In our universe's physics, AIUI injectivity is involved somewhere in the definition of unitarity.
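To make that concrete, here is a minimal Python sketch (a brute-force check on a toy elementary CA; the rule numbers and lattice size are arbitrary illustrative choices) of what "the update function is injective" means:

    # Brute-force check of whether an elementary CA update rule is
    # injective (i.e. reversible) on a small periodic lattice.
    from itertools import product

    def step(state, rule):
        # Apply an elementary CA rule to a tuple of 0/1 cells (periodic).
        n = len(state)
        return tuple(
            (rule >> (4 * state[(i - 1) % n] + 2 * state[i] + state[(i + 1) % n])) & 1
            for i in range(n)
        )

    def is_injective(rule, n=6):
        images = {step(s, rule) for s in product((0, 1), repeat=n)}
        return len(images) == 2 ** n  # injective iff no two states collide

    print(is_injective(204))  # rule 204 is the identity map -> True
    print(is_injective(110))  # rule 110 merges distinct states -> False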


It depends how you define information and state. If the state of the universe includes all history of previous states, then you cannot arrive at the same state by two different paths, because that would imply a different history.

If information cannot be lost, it must mean that past states are included in the current state, and then the two statements are equivalent.


Strictly speaking there is no "single" historical path in the quantum world. Looking at Feynman's path-integral method, you come to realize that the probability amplitude for a system to transition from state A to state B is the (weighted) sum of the amplitudes of all possible trajectories the system could have taken to get from A to B.
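As a toy illustration only (the units, lattice, and action below are made up for the example, not real physics), here is the "add a phase for every path, then square the total" recipe in Python:

    # Toy sum-over-paths: amplitude to go from x=0 at t=0 back to x=0,
    # summing exp(i*S) over every discrete path on a small 1D lattice.
    from itertools import product
    import cmath

    def amplitude(x_start=0, x_end=0, steps=4, span=2):
        total = 0j
        sites = range(-span, span + 1)
        for mids in product(sites, repeat=steps - 1):
            path = (x_start, *mids, x_end)
            # discretized free action: sum of (dx)^2 / 2 per step
            action = sum(0.5 * (b - a) ** 2 for a, b in zip(path, path[1:]))
            total += cmath.exp(1j * action)  # each path contributes a phase
        return total

    print(abs(amplitude()) ** 2)  # probability = |sum of amplitudes|^2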


One property of the laws of physics is that paths through history do not cross. If it is possible for two situations to time-evolve into one, then we've got the wrong laws of physics.


For the sake of completeness, another way to say the same thing is: The laws of nature are the same if we reverse the direction of time.

Until you start talking about entropy and chaos and statistics (which is a fascinating can of worms), the laws that you use to predict the next state of a system are the same that you would use to calculate the previous state of a system.

To restate it in another way once again: The function mapping "state of the universe at time t" to "state of the universe at time t+dt" is a one-to-one map, hence two possible pasts could not have created the same present.
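A quick numerical sketch of that last restatement (a random 4x4 unitary stands in for one step of time evolution; the size is arbitrary): because the map preserves norms, it cannot send two distinct states to the same state.

    # A unitary (norm-preserving) map cannot merge two states.
    import numpy as np

    rng = np.random.default_rng(0)
    # build a random unitary via QR decomposition
    U, _ = np.linalg.qr(rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4)))

    psi1 = rng.normal(size=4) + 1j * rng.normal(size=4)
    psi2 = rng.normal(size=4) + 1j * rng.normal(size=4)

    # ||U (psi1 - psi2)|| == ||psi1 - psi2||, so U psi1 == U psi2
    # would force psi1 == psi2: the map is one-to-one.
    print(np.linalg.norm(U @ (psi1 - psi2)))
    print(np.linalg.norm(psi1 - psi2))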


This

> The laws of nature are the same if we reverse the direction of time.

and this

> To restate it in another way once again: The function mapping "state of the universe at time t" to "state of the universe at time t+dt" is a one-to-one map

are not the same. Even for deterministic theories, time-reversal invariance implies one-to-one mappings, but the reverse is not true. And for a stochastic theory it breaks down completely; you can have time-reversal symmetry but still lose information about the past (e.g., each time step the system changes into a completely random state from the state space).
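A sketch of that counterexample (the number of states is arbitrary): the uniform transition matrix is symmetric, so the statistics look the same with time reversed, yet a single step erases all memory of the initial state.

    # Each step jumps to a uniformly random state: time-symmetric
    # statistics, but all information about the past is lost.
    import numpy as np

    n = 4
    T = np.full((n, n), 1.0 / n)        # uniform transition matrix

    print(np.allclose(T, T.T))          # same forwards and backwards: True

    past_a = np.array([1.0, 0, 0, 0])   # two very different pasts...
    past_b = np.array([0, 0, 0, 1.0])
    print(T @ past_a)                   # ...one step later, identical
    print(T @ past_b)                   # uniform distributions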

Whether quantum mechanics is considered fundamentally stochastic or deterministic depends on interpretation. (Roughly, Copenhagen is stochastic and Many Worlds is deterministic.)


OK, imagine you are looking at a screen, and a photon is coming toward you from behind. For a single photon, how do you know whether there was a double slit behind you or not?

Those are two situations that will lead to the same outcome and you don't know which happened.

The reverse is also possible: you have the same situation evolving into two different situations, with a double slit and interference.


You've just described the motivation for multiverse: Copenhagen has to stick in a concept of "measurement" that projects wavefunctions onto specific states. As a projection operator it's idempotent and many-to-one.

However, this mechanism is not necessarily a law of physics: we can explain all of the same results without it. So, deciding whether or not wavefunctions "actually" collapse is 100% philosophy.
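For concreteness, a tiny numpy sketch of what makes that projection different from unitary evolution (the basis choice is arbitrary): it is idempotent and many-to-one.

    # A measurement projector: idempotent (P P = P) and many-to-one,
    # unlike the one-to-one unitary time evolution.
    import numpy as np

    P = np.diag([1.0, 0.0, 0.0])       # project onto the first basis state

    print(np.allclose(P @ P, P))       # idempotent: True

    psi1 = np.array([1.0, 2.0, 3.0])
    psi2 = np.array([1.0, -5.0, 7.0])  # a different state...
    print(P @ psi1, P @ psi2)          # ...projects to the same result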


While you no longer need wave functions to collapse, you need universes to branch out. The latter interpretation doesn't seem "better" than the former as far as being a reasonable description of the real world goes. And of course for all practical purposes they are equivalent.


The universes don't need an additional mechanism in order to branch out, wavefunctions will already do this if they're left alone (to be clear about what I'm saying, I mean that if you isolated some particles in a box they would start doing multiverse). The natural behavior of wavefunctions is to do multiverse, and if you want something else you have to introduce an additional mechanism that collapses them.


And what does it mean for a few-particle isolated system to start doing multiverse? Can't you just describe it with a wave function without any branching out? The standard interpretation of quantum mechanics doesn't have any issues with the evolution of an isolated system; it doesn't require continuous wave function collapse.


The isolated particles will, according to accepted physics, do everything that MV says we are doing. The discrete branches are a textbook illustration; it's really more of a continuous thing. There's still only one wavefunction, it just behaves in a way that can be compared to branching.

In a nutshell the idea of multiverse is that the entire universe evolves as an isolated system, without any wavefunction collapse.


I know, but you need to reconcile that with the universe we observe where looking at the system will find it in a definite state and not in a superposition. What is the multiverse response precisely? How is "branching magic" an improvement over "collapsing magic"?


"Collapsing magic" consists of a projection that somehow happens during the time-evolution of the wavefunction, changing it from a superposition of eigenfunctions into one eigenfunction.

"Branching," to the extent that branching is a good word for what happens, already is known to be a behavior of wavefunctions: as a Gaussian pulse moves, it spreads out (due to dispersion inherent to the Schrodinger equation), and we as humans can arbitrarily call that branching. (But, like I said, it's continuous instead of discrete like the word branching would imply.)

So, what remains is to explain why we find the universe in a definite state, if it time-evolves into something other than specific eigenvalues. But, first, I'll ask you: what happens if you put a manned capsule inside of the isolated particle box, so that the person inside the capsule starts dispersing too?


In summary, the situation seems to be:

0) quantum mechanics is great, but it can lead to quantum systems described as a superposition of states and then we need to explain why we find the universe in a definite state.

1a) one option is to say that the wave function collapses to a definite state.

1b) another option is to say that there is a multiverse... and what remains is to explain why we find the universe in a definite state.

Solution (a) may be ugly, but “solution” (b) sets you back to the starting point!


I fail to see the relation between "branching" (or whatever you want to call the feature that separates the standard universe from the multiverse; I'll let you pick the name for it) and the unitary evolution of the wave function.

As for your question: I don't know. Let's say the Schroedinger equation is actually not perfectly linear and there is spontaneous collapse which happens quite rapidly for a system with N~10^30 particles. Now what?


Which existing mechanism gets you from that metaphysical multiverse to the physical universe?


The "multiverse" would be one big wavefunction (quite physical), and each "universe" would be a partition of the wavefunction in phase space. In this sense "multiverse" and "universe" would just be names for different parts of the wavefunction, like "The Rocky Mountains," and "Kona," would be names for features on a topo map.


Which basis is used to partition the wavefunction?

If you are going to tell me that it is according to the eigenvalues of the observable operator it's not that different from saying that there is a collapse on one of the eigenvalues of the measurement.

And the question remains for your "isolated particles in a box doing multiverse". How is the partition of the wavefunction done if there is no preferred basis?

Edit: Maybe in your interpretation the only "physical" thing is the universe described by its wave function and those infinite multiverses are just mathematical "projections" of that wavefunction. But then how can a mathematical operation without any physical substrate explain anything about the physical world?


Not every "fundamental" statement in physics is a natural law, for example the idea of grouping together like microstates into macrostates. We group together microstates into macrostates based on their macrovariables, which are selected to line up with emergent behavior that we observe on the macroscale. Temperature is well-motivated, but it's motivated in a different way than position.

In Multiverse, the projection-onto-eigenbases statistical rule (which guides the partition of the wavefunction into conceptual universes) is seen as being like thermodynamics: statistical, and motivated to compress vast microscopic information into variables that are nice for humans. Someone who thought MV was the right idea would say that projections were a way to calculate the fraction of universes in which something was true, and thereby your probability of ending up in one where it was. In that view, it's emergent, instead of fundamental - like temperature. This reduces the number of fundamental ideas necessary by allowing projection to emerge instead of being asserted.
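For what it's worth, the arithmetic both camps share is just this (a two-level toy state chosen arbitrarily); the disagreement is over what the resulting numbers mean:

    # Expand a state in an eigenbasis and read off the Born weights
    # |<e_i|psi>|^2. Call them collapse probabilities or branch
    # fractions - the numbers are the same either way.
    import numpy as np

    psi = np.array([3.0, 4.0j]) / 5.0  # normalized two-level state

    weights = np.abs(psi) ** 2         # Born weight per basis state
    print(weights, weights.sum())      # [0.36 0.64] 1.0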


It's interesting that you mention thermodynamics. Macrostates are not a real thing. They are a reflection of our lack of knowledge about the precise state of the physical universe. We use a statistical ensemble of microstates, all of which could be the actual one as far as we can tell. In the best case, the true microstate will be included. But thermodynamic properties are not "real", they are a construct.

I don't see the point of the analogy. In statistical mechanics we have to consider all the possible microstates because we don't know which one is real. In the multiverse approach we know what universe is real, so what is the point in keeping all the universes that "could have been but are not" around? We know they are not real! Deriving thermodynamics from statistical mechanics we get a useful theory. What does the multiverse bring us?

The wave function can be interpreted epistemologically, as the expression of our lack of knowledge of the precise state of the universe (e.g. pilot-wave theory). But there is no need for parallel universes that we know are not real; if you want to have virtual parallel universes, they will be just those that we could be in as far as we know (and one of them will be the true one). In that interpretation, wave function collapse is just the narrowing of the set of potential universes compatible with the actual one, as we learn more about the universe we live in.


You don't, but the photon does ;-)

Seriously, what do you mean by "will lead to the same outcome"?


In order for us to meaningfully assert that two things are different, there must be some information difference between them. If the same universe results from two paths, then they are the same path. Ultimately because your brain is in that universe, and its current state is the thing responsible for distinguishing histories.

What you suppose is logically inconsistent with that fact, and the whole idea is going to be unfalsifiable. The human brain's ability to distinguish possibilities is contingent upon the presence of information.

>Do photons have their probability distributions with interference patterns on screens to resolve a some sort of preferential history information storage? Like there is not enough bits available to say which screen the photons went through so they must say both?

Yes, photons are said to go through both slits because there does not exist any information which would distinguish the paths. As soon as you arrange an experiment which provides such information, the chosen path becomes clear.


> Yes, photons are said to go through both slits because there does not exist any information which would distinguish the paths. As soon as you arrange an experiment which provides such information, the chosen path becomes clear.

This isn't correct. Photons are said to go through both slits because they travel like waves and actually go through both slits. How would your interpretation account for the fact that a single photon at a time fired through a double slit still produces an interference pattern?


>Photons are said to go through both slits because they travel like waves and actually go through both slits.

Not really. See the Hitachi electron double-slit experiment mentioned below.

> How would your interpretation account for the fact that a single photon at a time fired through a double slit still produces an interference pattern?

Because position is quantized, i.e. the position probability has that wave form, and thus the position probability of hitting the screen has that interference-looking pattern. There is no real physical interference between any real waves here, though. It is just a position probability pattern formed as the direct sum of 2 other patterns - the 2 cutouts made by the slits from the original wave pattern of quantized position probability.

The Hitachi electron double-slit experiment (https://www.youtube.com/watch?v=PanqoHa_B6c) is much more illustrative, because electron position quantization is different from the electron's de Broglie wavelength - unlike for photons. In the Hitachi experiment you can see that each electron hits the screen as a singular point, and only the statistical aggregation of these hits - such aggregation naturally visualizes the position probability density - forms the "interference" pattern.
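A minimal sketch of that aggregation point (the intensity profile below is a stylized fringes-times-envelope stand-in, not fitted to the actual experiment): sample single hits one at a time, and only the histogram shows the fringes.

    # Draw independent single hits from a stylized two-slit intensity;
    # each draw is one dot on the screen, the pattern only appears in
    # the aggregate histogram.
    import numpy as np

    rng = np.random.default_rng(0)
    x = np.linspace(-10, 10, 1001)
    intensity = np.cos(2 * x) ** 2 * np.exp(-(x / 5) ** 2)  # fringes * envelope
    p = intensity / intensity.sum()

    hits = rng.choice(x, size=20000, p=p)  # 20k single-particle hits
    counts, _ = np.histogram(hits, bins=80)
    print(counts)                          # fringe pattern in the counts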


What's not correct? Photons are said to go through both slits because they go through both slits? It is true that they go through both slits. That's why we say that they go through both slits. Because there's no information available to contradict that fact.

I really don't know what you're disagreeing with.


There is de Broglie-Bohm, which says the photon only goes through one slit; the waveform goes through both. The single photon interacts with the waveform, which influences its valid locations on the detector/screen, producing the interference.


> How would your interpretation account for the fact...

Thing is, an interpretation cannot account for anything, it is a theory that does...


The parent comment that said it's all subjective applies here, too. You still have to interpret the theory.



> because your brain

Can't we just agree that "brain" has no place in objective science (unless it's brain science, of course).


"Brain" is just a stand-in for any system which can correlate its state with another system.


Whomst do you think performs the experiments?


Just for your information: Contrary to how it may appear in internet memes, "whomst" isn't really a standard English word.

https://en.m.wiktionary.org/wiki/whomst


Give it some time. A few years ago you wouldn't have been able to legally google it in English.


I'm aware.


Well, like I said - it doesn't matter.


> If the same universe results from two paths, then they are the same path.

No, one could easily imagine fundamental laws of physics which map multiple states at one time step to the same state at the next. Indeed, the emergent laws of thermodynamics, which govern the accessible physical quantities of macroscopic systems, have this feature. It turns out in our universe that this apparent irreversibility is due to a special initial state and information being dumped into inaccessible microscopic degrees of freedom, so that statistical mechanics and atomic theory can be used to derive the appearance of thermodynamics. But there's nothing internally inconsistent about a fundamentally irreversible theory.


I'm not talking about the reversibility of a theory. You can conjecture that our current reality could have been arrived at from two different paths. What you cannot do is assert the truth of a theory which selects one path in favor of the other, because the existence of any information which would make that determination would contradict that both of these histories led to the same present. Thus there is no physical way to distinguish these histories.


> I'm not talking about the reversibility of a theory.

You asserted that there was something wrong in principle with many-to-one dynamics. I pointed out that we have consistent irreversible theories that are considered in-principle acceptable theories, and in particular are strictly preferred over one-to-one theories in their respective domains.

> You can conjecture that our current reality could have been arrived at from two different paths. What you cannot do is assert the truth of a theory which selects one path in favor of the other, because the existence of any information which would make that determination would contradict that these both histories led to the same present.

Just because two different pasts would have led to identical presents does not mean we don't have criteria (such as simplicity) to favor one past over another. Indeed, all of science is based on this. Ultimately all we can use to check our theories are our observations, and the actual microscopic state of the universe is vastly under-determined from these observations alone. Assumptions about simplicity and regularity must be deployed.

As an extreme example: it's logically possible a cheesecake materialized in the center of the sun 1 second ago. (You can either think of this as a thermodynamic fluke, or as a new proposed fundamental theory where the standard model of physics is temporarily suspended 13.8 billion years after the big bang and exactly 1 cheesecake appears in the center of each star.) The cheesecake would be instantly destroyed, and it would not influence my observations, and thus there are two possible pasts (cheesecake vs. no cheesecake) which give the same present state and observations and we cannot categorically rule one out. Nevertheless, we assign astronomically low probability to the cheesecake past.


I thought that two paths leading to a single path was equivalent to wave function collapse. When we perceive things as acting classically, what has happened is that, from our point of view, all those universes that are compatible with our current observations have collapsed into a single thing.


> If the same universe results from two paths, then they are the same path.

And if two paths may result from the current universe, then they are the same path. So there is a single path?


I have a loose rule to read parentheses as brain-farts (I think this applies here, too).

The quoted remark assumes the universe is a state machine. Which is a tall order. On the other hand, it's just a reformulation of the "energy is never lost" hypothesis (axiom), or "entropy always increases". But that's really just pop-sci. As was pointed out up-thread, it's rather describing the human way of thinking. Which to me also implies that we like to exaggerate a little: "Of course I could figure out where you were last month, in principle, it's just ... I don't have time for that right now!"

Edited: I always mix up increase and decrease of Entropy.


The quote is actually a great way to phrase in simple terms an important property of quantum mechanics. It is called "unitarity" or equivalently "time evolution is described by a unitary linear operator". Obviously, the technical terms would have meant nothing to the general reader, so they were simplified, but they are most definitely not (to quote you) "brain-farts".

PS It is more general than a classical state machine.

PPS https://en.wikipedia.org/wiki/Unitarity_(physics)


OK but is there a reason entropy is said to increase? I mean linguistically, if you will. What with "heat death" it doesn't sound too positive.


As cool as it sounds, this is way beyond intelligibility for non-physics people. I don't understand the need for trying to vulgarize anything quantum-related.

I understand Maxwell's demon, and Bennett's argument, but everything after that is simply too hard.


Both papers published simultaneously... How does that happen? But the probability of that happening seems to have grown over the last 100 years ;)

And no, I don't have any research papers or statistics to back up my statements. Just saying from observing this pattern when reading many of the stories about inventions/discoveries etc.

BTW, what is the probability of others feeling the same about this "observation"?


It's because new ideas are built from old ideas. And old ideas travel around the world from mind to mind. When it is time for a new idea to be born, the ideas from which it is constructed are readily available. That is to say: when the last piece of a new idea is distributed rapidly, lots of minds can simultaneously arrive at the same insight. The same new idea. That is why in the modern age, as information travels faster and further, people are having the same idea at the same time more often.



