It is important to note that there is a hidden agenda in this essay: Sabine is a superdeterminist, and so her presentations have to be viewed through that lens. As a superdeterminist, she is committed to the idea that classical reality is "actually real" (for some value of "actually real" -- see below). And so Hyperion's chaotic behavior is actually real, and QM cannot account for that. All of which is true.
But there is an alternative view of classical reality which is that it is not "actually real" (the scare quotes are important here), that it is all just a sort of shared illusion among sentient beings. And QM can account for that, i.e. the math of QM can (and does) describe a set of mutually entangled observers all of which agree on an observation, notwithstanding that the observation has no actual referent in the model. And that is actually what we observe. We do not directly observe Hyperion. What we observe is our own subjective perception of seeing Hyperion, and having everyone around us report their own subjective perception of seeing Hyperion. The "actual existence" of a classical entity, i.e. an actual physical referent for "Hyperion", seems like a plausible explanation for this, but what QM tells us is that this explanation is wrong. But Sabine, as a committed superdeterminist, rejects this idea. And she's not being up-front about that.
The problem with QM is not chaos, it is the fact that it is incompatible with classical intuition. The chaos thing just brings this incompatibility into very sharp focus.
I must be a superdeterminist as well, because the alternative you describe sounds … unuseful to me. If we can have illusions like that, then all the conclusions you extract from them are also subject to the same effect: it's not necessarily true that QM can account for it; we could also be "hallucinating" collectively that it can account for it. Or we might not exist at all and all perceptions are fake.
Once you introduce a malicious devil like that, it always goes "full universe". Say this religious group knocks on your door and tells you that Earth is 2022 years old. You point to dinosaur fossils. They say the Devil put them there to trick us. But then, you follow, our whole existence might as well be fake. The Devil could have created both of us 5 minutes ago, with fake memories of a life that never existed, just so we could have this conversation. And we will both vanish when I close this door (blam!). The usual counter is that the Devil has a very specific amount of power: a triceratops head is OK, humans are too much. But even accepting that, how can you know you are a human? It might be the Devil tricking you again; maybe you are a teacup that thinks it is human.
> we might not exist at all and all perceptions are fake
Yep. That's pretty much the size of it, except that "fake" puts too negative a spin on it. It's more like virtual reality than "fake". What we perceive as classical reality is a very good approximation to the truth for large thermalized systems like us. But it is not the truth. Just like Newtonian gravity is a very good approximation to the truth in the weak-field case, but it is not the truth.
I find the superdeterministic solution more problematic since on its own it just says that the reason the universe seems to not have local reality is just that it has been predetermined that it should seem that way to us.
This is worse to me than our experience being a natural consequence of entangling ourselves with the rest of the universe. It's not an illusion, it's just an admission that our experience is also ruled by quantum mechanics and we cannot observe something without interacting with it.
It's the state of the observer that is changed by observation; thus the observer becomes entangled with the observed object, and their states become correlated. An uncorrelated state would look blurry, but a correlated state looks classical.
Many worlds is not positing any old Cartesian demon, though. It's just a trivial combination of things that are considered obvious.
1. Wavefunctions and superpositions objectively exist.
2. Wavefunctions can be entangled with each other, in ways that make parts of them not interact with each other.
3. When interactions have differing outcomes given different parts of a superposition, you find the outcomes depend on a single part of said superposition.
If you posit the exact same rules for the lab as for the experiment, 'wavefunction collapse' and 'communication' aren't even concepts that apply, and 3 becomes tautological. The only questions are: 1. why do I find myself observing the world as the experimenter who saw A and not the experimenter who saw B? But the other you is asking the exact same question, and you'll never encounter them. And 2. what is entropy/measurement, and why do I observe the world as one experimenter and not both?
Superdeterminism goes the other way and throws out free will, information, and randomness. If you do that, you're left with equally large questions about why entropy increases, what information is, and why we think we have free will. It's a fairly large philosophical step backwards: low-information explanations for many things have to be replaced with 'just because the universe is arranged precisely in a way that makes it seem that way, even though there's no reason it couldn't be arranged differently'.
Both are far more coherent and less egotistical than Copenhagen or Bohmian mechanics.
Agreed. I have to believe that there is magic, because my brain cannot handle never existing. These people are hopeless and we should probably label their beliefs as needing mental health rather than "sincerely held beliefs".
I guess the bottom line of it all for me is: can it build me an interstellar space ship (FTL would be nice!) or not? I'm OK if we're all merely sharing the illusion of colonizing nearby stars.
> can it build me an interstellar space ship (FTL would be nice!) or not?
Only the "illusions" that obey the laws of physics are allowed. The laws of physics are the only "real" thing; positions and speeds of specific objects are not.
So if the laws of physics happen to forbid FTL travel, we will not get such spaceship.
A shared illusion among mutually entangled observers with different subjective perceptions of the world, agreeing on a mutual lowest common denominator (the senses).
The superdeterminism is the thing bugging me most about Sabine's content, that even though it's usually good, it's all from a hard deterministic perspective, which in the end is like any narrow worldview - limited in its usefulness. And superdeterminism at this point is just a philosophical version of saying "no you are" in an argument or "works on my machine" to the QA.
Actually, mainstream superdeterminism would allow us to radically change society and how we view norms. It would lead to radical changes in the criminal justice system, for example. It just needs acceptance from people, and there are a lot of people who stand not to benefit from these changes in society and are trying to make them harder to achieve.
I'm intrigued. Can you give an example of how mainstream superdeterminism would allow us to radically change society and change how we view norms? From the perspective of current society, superdeterminism and the resulting alternate society?
I assume it's by distancing side effects from personal responsibility. It's one of the many positions favoured by highly intellectual people who lack empathy - it is based on perfect rationality and the assumption that everyone is rational like them.
>Well for one, we would attribute behavior we don't want in society to genes and/or the expression of those genes in a particular environment. Future treatment would be to note this, remove it from our genome, and see if said behavior is gone.
How does that show empathy? It means not giving people a chance to live, or murdering them in cold blood because they have genes that aren't deemed perfect. We already had this in our civilisation, and we remember it as one of the most terrifying events in our past.
Well for one, we would attribute behavior we don't want in society to genes and/or the expression of those genes in a particular environment. Future treatment would be to note this, remove it from our genome, and see if said behavior is gone.
Question: How often have you seen Quanta Magazine write a disclaimer that their latest story is from a school of superstring theorists? It’s PR, this is just how it is.
I do wish Sabine would state the axiom “reality is real” up front though, because I think people should be trusted to consider that one on their own.
You seem to claim that quantum mechanics can/does in fact model chaotic behavior, only that such chaotic behavior is "not actually real" in some sense. That claim would seem to contradict Hossenfelder's message; she seems to claim quantum mechanics can't model chaotic systems, however "actually extant" they may or may not be.
I just don't know, but I prefer to consider claims of hidden agendas after theoretic details are resolved, if at all.
The issue is quite simple at its root: the Schroedinger equation is linear, and linear systems can't produce chaotic behavior, so the existence of chaotic behavior implies that there is something going on besides the SE. But this is not news. The existence of classical reality in the first place implies the same thing. This is the measurement problem: if you start with a superposition, and you transform it according to the SE (or any other linear dynamic), then you have to end in a superposition, but in "reality" you don't. So you have only three choices: introduce some non-linear postulate, deny classical reality, or deny free choice. That is the long and the short of it. There's nothing special about chaos in this regard; it's just more evidence that QM is weird, but we knew that already.
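The linearity point can be made concrete in a few lines. A minimal numpy sketch (the dimension and the random unitary are arbitrary illustrative choices, nothing specific to any real system): unitary evolution preserves the distance between any two state vectors, so psi itself can never show sensitive dependence on initial conditions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Any fixed-time Schroedinger evolution is a unitary matrix; here we just
# take a random one (Q from the QR decomposition of a complex matrix).
n = 64
a = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
U, _ = np.linalg.qr(a)

# Two nearby initial state vectors.
psi1 = rng.normal(size=n) + 1j * rng.normal(size=n)
psi1 /= np.linalg.norm(psi1)
psi2 = psi1 + 1e-8 * (rng.normal(size=n) + 1j * rng.normal(size=n))
psi2 /= np.linalg.norm(psi2)

d0 = np.linalg.norm(psi1 - psi2)
for _ in range(1000):  # many evolution steps
    psi1 = U @ psi1
    psi2 = U @ psi2
d1 = np.linalg.norm(psi1 - psi2)

# Unitarity preserves inner products, so the separation never grows:
# no exponential divergence of nearby states at the level of psi.
print(d0, d1)
```

The separation after 1000 steps matches the initial one up to rounding, which is exactly why chaos has to enter somewhere other than the linear evolution of psi.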
You may be missing something here: superdeterminism is a term of art [1], a respectable interpretation of QM, and it's the one to which Sabine professes to subscribe.
So there might be a connection between superdeterminism and fascism? Could you elaborate?
It’s a genuine question. Something feels off with Sabine and I can’t exactly figure out what it is.
There are lots of references there that you can follow, including a debate on whether decoherence theory is really necessary to explain what's going on.
But as far as I can tell, no party in this debate ever claims the Hyperion example implies that "the chaotic motion of Hyperion tells us that we need the measurement collapse to actually be a physical process." The decoherence arguments about Hyperion do not assume this (as far as I know).
Can someone please explain? Am I misunderstanding?
There is at this point like 40 years of quantum chaos research. I dipped my toes into it a bit in grad school. In the eighties it was believed that chaos might be the key to understanding the quantum-classical transition. It's not a very cool area of research anymore, mainly because it hasn't really led anywhere.
We know that when applying decoherence to quantum mechanical systems we recover the original classical (and potentially chaotic) equations of motion. This basically "solves" the chaos problem but doesn't actually help with measurement, because it doesn't cause collapse of the classical probabilities. This is what is referred to as "superselection".
In terms of cat analogies, the quantum wavefunction is a superposition of amplitudes a|dead> + b|alive> and these amplitudes cannot be chaotic because the Schrödinger equation is linear. When the system becomes decoherent, it becomes a classical probability distribution p(alive) that evolves under potentially chaotic classical motion. However it's still probabilistic, the cat remains neither alive nor dead. We haven't "measured" anything.
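The superposition-to-mixture step can be sketched with a toy density matrix (the amplitudes 0.6 and 0.8 are arbitrary): decoherence suppresses the off-diagonal coherences, leaving a diagonal classical probability distribution, but nothing in the formalism selects a single outcome.

```python
import numpy as np

a, b = 0.6, 0.8            # amplitudes, |a|^2 + |b|^2 = 1
psi = np.array([a, b])     # a|dead> + b|alive>

# Pure-state density matrix: the off-diagonal terms are the coherences.
rho = np.outer(psi, psi.conj())
print(rho)       # [[0.36 0.48]
                 #  [0.48 0.64]]

# Decoherence (tracing out the environment) suppresses the off-diagonals,
# leaving a diagonal classical probability distribution.
rho_dec = np.diag(np.diag(rho))
print(rho_dec)   # [[0.36 0.  ]
                 #  [0.   0.64]]

# p(dead) = 0.36, p(alive) = 0.64: still a mixture, not a single outcome.
# Nothing here picks "alive" or "dead" -- that is the remaining
# measurement problem.
```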
The confusion (and this is what Schlosshauer talks about) is that for some people (famously Ballentine) who follow the ensemble interpretation of quantum mechanics, there is no measurement problem at all once we have a classical probability distribution. This view is not very popular, however.
So anyway the point is that the existence of classical chaos can be straightforwardly explained by decoherence without any collapse at all.
The catch is that you can know certain and impossible outcomes without measurement simply by showing their amplitudes to be 1 and 0 respectively. In the case of the cat, you have the state after observation |dead>|see dead> + |alive>|see alive>, then project it onto the basis <see definite cat|, <see indefinite cat| :
Haven't read the paper, but I think when she objects to 'averaging' she means tracing out the environment, which I think is OK. Especially since all local measurements give the same result on the reduced density matrix.
> But the vast majority of physicists today think the collapse of the wave-function isn’t a physical process. Because if it was, then it would have to happen instantaneously everywhere.
Why can't WFC be a physical process? Yes, when a photon on the Moon is measured, it immediately changes the "wave function" of a photon on Earth from 50/50 to 100/0. But the wave function was always a probability, and probabilities always allow for hidden information. To the observer, entangled or not, the calculation is still 50/50. And you can't e.g. "force" 1,000 entangled photons all to be positive to create an unlikely, unusual scenario.
WFC or no, both observers experience their own independent 50/50 photon collapse. It doesn't matter which observer measures first, or whether there is no "first" at all. The WFC may or may not be instantaneous, because it doesn't matter. The fact that the particles are entangled doesn't affect observer A until he's able to observe observer B, and vice versa.
For all we know, all of our particles could be entangled with other particles halfway across the universe; it can't affect us until a light-speed signal reaches us, so that we can observe for ourselves that the other particles are entangled.
Also, the reason Hyperion isn't blurred, despite the fact that all of the particles which affect it are too small to be measured, is that Hyperion isn't too small to be measured. The particles aren't too small to be "measured" by Hyperion when they affect it, and just because Hyperion is farther away from us doesn't mean it behaves any differently. And by seeing Hyperion's chaotic orbit we still can't predict those "unmeasurable" particles.
Well, to be fair, Sabine does in fact conclude at the end that it must be a physical process, and the fundamental problem [to be solved] in QM is understanding that physical process.
John Stewart Bell in 1964 proved that broad classes of local hidden-variable theories cannot reproduce the correlations between measurement outcomes that quantum mechanics predicts. The most notable exception is superdeterminism. Superdeterministic hidden-variable theories can be local and yet be compatible with observations.
There are quite a few exceptions that Bell's theorem doesn't cover. In addition to superdeterminism, it doesn't cover the case where the measurement equipment becomes entangled with the system being observed (in other ways than making a measurement).
In addition, if the universe is not strictly quantized, and the appearance of quantized measurements comes about through certain "preferred" resonant modes determined by the structure of the universe, we would see the measurements predicted by quantum mechanics despite having only purely local state (seemingly violating Bell's theorem; but since Bell's theorem assumes a quantum universe, it doesn't actually apply).
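For concreteness, the size of the gap Bell's theorem establishes is easy to compute. A sketch of the standard CHSH numbers for a singlet pair, using the textbook correlation E = -cos(angle difference) and the usual measurement-angle choices:

```python
import numpy as np

# Textbook singlet correlation for detectors separated by angle t.
E = lambda t: -np.cos(t)

# CHSH combination with the standard angle choices.
a, ap = 0.0, np.pi / 2
b, bp = np.pi / 4, 3 * np.pi / 4
S = E(a - b) - E(a - bp) + E(ap - b) + E(ap - bp)

# |S| = 2*sqrt(2), about 2.828: above the bound of 2 that any local
# hidden-variable model must satisfy.
print(abs(S))
```

Any loophole (superdeterminism included) has to explain how that 2.828 shows up experimentally while the model stays local.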
Whether communication is “instant” doesn’t matter, because observation still takes light-travel time.
Say A and B are entangled, both observers of A and B know they are entangled, A and B don’t have a value yet, there’s only A != B.
When A gets measured, it takes light-travel time for A’s measurement to reach B. If B gets measured before then, it’s still a 50/50 (and then when A is revealed it will 100% be the opposite of B).
It’s true that if B gets measured, B’s observer will instantly know A. But this doesn’t break the speed of light, because B’s observer already knew that A != B, so they don’t need any more information from A to determine it when they measure B. Like how, if B and A were predetermined (not quantum) and then separated, and B was revealed, B’s observer would instantly know A.
I think the point being made is that a non-local variable is just "instant communication" in a different guise. It's a variable that everyone can see, and once its value is observed, it is instantly set for all observers no matter how far apart.
Well, the idea of hidden variables is that they are totally inaccessible to current technology, so they aren't really observable. In the case of Bohmian mechanics the "hidden variable" is the configuration of all particles in the universe, which is clearly not something that can be observed by any observer. It's also not a quantity that is changed by observation, which your line "once its value is observed, it is instantly set for all observers no matter how far apart" implies. It isn't a quantum observable: it is a normal classical kind of state. It's just that the physics for one particle depends on the configuration of all other particles, which is completely non-local.
To me, all these references to Bohmian mechanics, hidden variables, and faster-than-light communication hide more than they reveal.
These connections always arise because of some invariant our universe preserves, e.g. charge is conserved, or spin is conserved, or whatever.
The universe happens to let us create particles - an electron say, from pure energy. But the conservation of charge means you can't just create an electron, there has to be a positron too otherwise charge isn't conserved. They would instantly annihilate each other if they hung around together, so in order for us to see them they must be moving away from each other (with equal but opposite momentum, because that's conserved too).
Until we measure it, we don't know what the charge of either of particles is. Quantum superposition means that until we measure it, the charge of both particles is effectively in a 3rd state: unknown (as opposed to positive or negative).
Putting this in terms of hidden variables, there is a variable that is in one of three states: "particle-A: unknown charge, particle-B: unknown charge", "A: positive, B: negative", "A: negative, B: positive". It doesn't look to me like this variable exists everywhere. It only describes the state of these two particles, which by definition occupy tiny positions in space-time. And no, it isn't unobservable either: you can observe the variable by just measuring one of the particles.
It does raise all sorts of interesting questions about how the universe operates. How does it preserve these invariants across space, and why are they preserved? Does this quantum superposition look anything like it's being described here? I've seen other descriptions that make it arise quite naturally from the math of 2-norm probabilities, and thus not require multiverses or some mysterious "collapse" (which look to me to be about as useful at explaining what's going on as "Bohmian mechanics").
So these hidden variables are a hand-wavy way of describing how conservation laws interact with QM superposition. It's not really that complex - is it? Was there any need to introduce "hidden variables" at all?
It seems to me a lot of the difficulty people have in describing QM is not in QM itself, but from the knots people get themselves into in their personal struggle to understand it. They write their knots down, we gumbies read it and say "bugger me that looks hard".
QM nonlocality isn't useful for instant communication though.
If it is a physical process, you still can't influence it on one end in order to communicate faster than the speed of light.
And nothing is "observed instantly" since that phrase doesn't have meaning in SR.
Two space-like separated observations, which cannot communicate with each other in SR, will have correlated measurements. Which measurement comes first or if they happen at the same time, will depend on your reference frame and motion.
So once a single measurement has been made somewhere in the universe, all observers (light-like or space-like separated) will measure entangled values that agree with that measurement, but the entangled state was created indeterminately. If the observations are space-like separated you cannot say which one "caused" the collapse of the wave function.
And like I said you can't use it for communication. The fact that you can't assign which observation was cause and which one was effect is probably tightly tied to the fact that you can't use it to communicate -- which side is the sender and which side is the receiver? That depends on the reference frame, which produces nonsense, so to avoid a paradox it is banned.
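The no-communication point can be checked directly from the singlet statistics. A small sketch (using the textbook joint probabilities for measurement angles a and b): B's marginal outcome distribution is 50/50 regardless of the angle A chooses, so no choice of setting on A's side is visible on B's side.

```python
import numpy as np

def joint_probs(a, b):
    # Textbook singlet joint outcome probabilities for measurement angles
    # a and b: P(same outcome) = sin^2((a-b)/2), P(opposite) = cos^2((a-b)/2),
    # each split evenly between the two ways it can happen.
    same = np.sin((a - b) / 2) ** 2
    opp = np.cos((a - b) / 2) ** 2
    return {(+1, +1): same / 2, (-1, -1): same / 2,
            (+1, -1): opp / 2, (-1, +1): opp / 2}

# B's marginal is 50/50 no matter what angle A chooses, so A's choice of
# setting carries no signal to B.
for a in [0.0, 0.3, 1.2, np.pi / 2]:
    p = joint_probs(a, b=0.7)
    p_b_plus = p[(+1, +1)] + p[(-1, +1)]
    print(a, p_b_plus)   # always 0.5 (up to rounding)
```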
That’s superdeterminism (the version of it Sabine likes), but most scientists won’t accept it because they hear you as saying they can’t do reproducible experiments.
No it's not, it's many worlds. It's completely local and says nothing about free will or causality.
Step 1) Entangle pair. Parts of wavefunction with up-down and down-up exist.
Step 2) Lab A interacts with pair, they either get entangled with the up-down pair or the down-up pair, or any other subset of the wavefunction.
Step 3) Lab B interacts with any part of the Lab A+pair wavefunction. When they do, they find that, astonishingly, the part of the wavefunction they find themselves entangled with when they speak to Lab A is the same part of the wave function they find themselves entangled with based on their measurements.
No new entities are posited, no new mechanism is posited, no assertions are made about wavefunctions vanishing upon interaction. It's the simplest possible claim. The only effect it has on whether or not you can do reproducible experiments is the entropy in the subset of the wavefunction you can potentially interact with went up.
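The three steps above can be sketched as plain unitary linear algebra. A toy four-qubit model (the CNOT-style "copy" standing in for measurement is the usual toy model of a lab record, nothing more):

```python
import numpy as np

# Toy model: four qubits ordered |pairA pairB labA labB>, 0 = up, 1 = down.
def ket(bits):
    v = np.zeros(16)
    v[int("".join(map(str, bits)), 2)] = 1.0
    return v

# Step 1: entangled pair (|01> + |10>)/sqrt(2); labs start in |0> ("ready").
state = (ket([0, 1, 0, 0]) + ket([1, 0, 0, 0])) / np.sqrt(2)

def copy(state, src, dst):
    # "Measurement" modeled as a CNOT: unitarily copy qubit src into qubit
    # dst, so the lab becomes entangled with the particle it looks at.
    out = np.zeros_like(state)
    for i, amp in enumerate(state):
        bits = [(i >> (3 - k)) & 1 for k in range(4)]
        bits[dst] ^= bits[src]
        out[int("".join(map(str, bits)), 2)] += amp
    return out

state = copy(state, src=0, dst=2)  # Step 2: lab A interacts with particle A
state = copy(state, src=1, dst=3)  # Step 3: lab B interacts with particle B

for i, amp in enumerate(state):
    if abs(amp) > 1e-12:
        print(f"|{i:04b}>: {amp:.3f}")
# Only two branches survive, |0101> and |1010>: in each, lab A's record,
# lab B's record, and the pair are mutually consistent, with no collapse
# posited anywhere.
```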
1) Entangled particles are collapsed “instantaneously”. But it’s not really instantaneous, because the other particle’s result still takes light-travel time to reach the initial observer/measurement.
2) Superpositions are collapsed which apparently causes different behavior than if they’re collapsed later. e.g. the double slit experiment, where measuring the electrons before the slit causes the interference pattern to disappear. This and Bell’s theorem are very unintuitive but they don’t necessarily break the speed of light.
What if electrons are shot through a very long double slit, where they may or may not be measured early on but the observers at the end don’t know this? This still doesn’t break the speed of light because once the electrons create the interference pattern which the observers can measure to determine if they are entangled, the electrons are already local to the observers.
Bell’s theorem just shows there is no one “hidden state” in entangled electrons before they are measured. The entanglement seems to create the relation “chargeA = !chargeB” without setting “A” or “B”, but when A is finally resolved, it still takes the speed of light for this information to reach B.
The only thing we know is happening for sure is interaction, which when studied using the so-called density matrices of the interacting subsystems gives insight into how any subsystem "sees" another one. You get all the "parallel universes" as separate terms this way, and see ways in which these terms evolve independently of each other and ways in which they can linearly interfere. This is all continuous evolution under the Schroedinger equation by the way.
You unfortunately won't get any satisfactory answer to this but you can definitely scratch the itch if you take the time to study some QM and decoherence theory.
AFAIUI any measurement necessarily interacts with the system, and that interaction changes the wave function. Measurements are not different from other interactions.
IMO the word "measurement" is over-utilized in this space. My layman interpretation is A interacts with B, collapsing each other's waveform from the perspective of the other. At some point in the causality chain my eyeballs might be B, but there's nothing special about my eyeballs vs. a sensor vs. a piece of dust.
Someone more knowledgeable might be able to say what interpretation this would be considered, and how it differs from competing ones.
Personally, I am firmly in the "superdeterminism" camp. To me it's the most elegant interpretation of QM without any mystical measurements magically dependent on "consciousness" and whole universes created on each interaction. If you want to learn basics about it, I recommend watching this video: https://www.youtube.com/watch?v=dEaecUuEqfc Do not mind the (joking) clickbait title, the talk itself is quite good even if you only vaguely remember QM.
In the Many Worlds view, does this mean Hyperion’s motion is nonchaotic overall (i.e. considering the ensemble of Hyperions in all branches) but chaotic in (most of) the individual branches? Not sure that makes sense.
It’s not. You end up in all of them. You branch. There is no hand-wave.
It’s like if you have a VM with a program running in it, and you clone the VM. Which VM does the running program end up in? In all of them, of course. And the fact that the program branches/forks in that way is not observable to the program. All clones of the running program believe they are the same running program as their earlier self in the original VM. If the program was running a conscious AI, the consciousness would split (multiply).
Honestly, I don’t quite understand what problem you see. The obvious answer is that each you has been experiencing the branch they are in, and will continue to split into more yous that will each again experience the branch they are in.
It seems that you’re assuming that there is only a single linear you with a single subjective future, and can’t conceive of your subjective experience forking. But the point is that the perceived linearity of the self is only an illusion, stemming from the vantage point of any given you at any given point in time looking at his/her respective past at that point in time (which for each of them indeed looks linear).
Your future branches are all equally real futures of your current self. Your current self will end up in all of them, but of course each of those future yous will only remember their shared past (your current present) and not have any perception of the other branches.
The fact that we never subjectively experience the branching (because we are merely a content of the branch, just like the running program is merely a content of the forking VM and does not notice that it is being forked) may be why some people have difficulties with the idea of their subjective experience splitting into separate futures.
Again, take the program/VM analogy. What would you expect the program to experience? The answer is that the experience will differ depending on which clone you consider (assuming that the VMs diverge like the MW branches diverge), yet all of the clones used to be the same running program. All the clones are equally real continuations of the original running program, just like in MW the different yous are equally real continuations of their former single consciousness (but now are separate consciousnesses).
You are right that we can’t reason about which future we will end up in, because we will end up in all physically possible futures. That can indeed be a philosophical conundrum. Decisions we make are only effective in the branch they are made, and there will presumably always be (equally real and existing) branches where they are not made. That shouldn’t detract from making desirable decisions though, because once we make them, we are necessarily in the branch where we made them, and it’s only our unlucky former future selves who didn’t make them that are now in the less desirable branches.
> That shouldn’t detract from making desirable decisions though, because once we make them, we are necessarily in the branch where we made them,
In a branch, surely? While you're thinking for a second or so, there will be numerous nuclear decays in the relevant areas of your brain. Potassium-40 specific activity in the human brain is on the order of daBq to hBq kg^-1; there are other radioactive isotopes to be found in the brain, and there are plenty of other branching events. So "there will always be (equally real and existing) branches in which they are made", right?
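Back-of-envelope, with round hypothetical numbers (about 55 Bq/kg for K-40 specific activity, consistent with the daBq-hBq range above, and a 1.4 kg brain), that gives on the order of tens of branching events per second from potassium alone:

```python
# Hypothetical round numbers: ~55 Bq/kg K-40 specific activity, 1.4 kg brain.
activity_per_kg = 55        # decays per second per kilogram
brain_mass_kg = 1.4
decays_per_second = activity_per_kg * brain_mass_kg
print(decays_per_second)    # about 77 decays/s, each a branching event
```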
When thinking of how many worlds there are in the MWI formalism, always think much larger than your intuition tells you, since every quantum outcome is realized. One needs only a small finite fraction of the universe to reach uncountably infinite branches. That should be taken into account when evaluating aesthetic concerns like conceptual clarity, particularly for something that requires a weighty set of quantum numbers, like consciousness.
> assuming VMs diverge like the MW branches diverge
You could try to apply this to your VM-forking analogy, by generating a different VM image per bit of VM memory (including registers and memory-mapped I/O devices and the like) per clock cycle, taking the simplifying assumptions that clock-time is uniform for all bits and the clock cycle is the minimum possible time. Then as they say, shrink the lattice, taking clock cycle -> Planck time, bit-layout -> Planck volumes, binary content of each bit -> field content of (continually interacting and self-interacting) 24+ Standard Model fields. And we're still not even fully relativistic.
If I understand correctly, you are arguing that MW is unattractive as an interpretation due to the number of branches. I don’t feel that way. I find MW attractive because it requires fewer assumptions. It feels natural to me that the physical laws describe a relation (in the mathematical sense) between past and future, rather than a function. I don’t have a problem with uncountable infinity. I mean, the number of possible functions on the natural numbers, a fairly simple thing, is already uncountable. It shouldn’t be a surprise. We also don’t know whether the universe is actually non-discrete at the base level. It may still turn out that there is really just a countable infinity of possible states of the universe, and thus also just a countable infinity of branches. That’s still humongous — it can accommodate anything you could possibly imagine — but I rather see this as a pro of MW.
What I don't understand about this example with Hyperion is
a) Wouldn't Hyperion's orientation be influenced much more by Saturn and its many moons on their many orbits than by dust and photons?
b) Wouldn't it then boil down to "just" an n-body problem (for which we know no closed form predictions exist -- yet is still perfectly "classical" in nature)?
In other words, why involve dust and photons and wave functions and entanglement when the answer could simply be "it's an n body problem so we simply can't predict its orientation"?
QM is far worse than an N-body problem. You can trivially model an interaction of 2 or 3 particles as bodies, at a given precision. But when you consider them as waves, you have to compute an insane amount of information.
And precision is another undefined thing. Is a particle's speed/position continuous or quantized? Or, in the case of QM, is there a resolution to the waves? Or is it all infinitely precise?
My guess is that this is an example of the struggle between proponents of different interpretations of Quantum Mechanics. There are rather a lot of them, even though they all try to explain the same theory. The whole issue eerily reminds me of the debates between proponents of different theories of the relationship between God and Jesus (trinitarians, nontrinitarians and so on). Unless they yield testable predictions, there will not be an end to them.
The main claim that the Schroedinger equation isn't consistent with chaotic behaviour of observables because it is linear in psi seems poorly thought out.
As one commenter on youtube pointed out, the Schroedinger equation is linear in psi, but the Liouville equation from statistical physics is also linear in rho and nobody suggests this somehow makes it inconsistent with classical systems exhibiting chaos.
The mistake is that mathematical linearity is in equation for psi/rho, and this is then incorrectly changed into assuming linearity of equations for observables. The observed chaos is not in psi/rho, but in observables, say, positions and momenta.
The actual problem here is the age-old measurement problem, i.e. the psi function does not tell us result of experiment, only probabilities of possible results, so psi function can't be complete description of state of things in reality.
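The Liouville point can be made concrete with a toy numerical sketch (my own illustration, substituting the chaotic logistic map for a Hamiltonian flow): the induced evolution of probability densities is exactly linear, even though the underlying trajectories are chaotic.

```python
import numpy as np

f = lambda x: 4.0 * x * (1.0 - x)  # chaotic, nonlinear map on [0, 1]

def push(samples, steps=8):
    """Evolve an ensemble of points; the induced map on densities
    (the Perron-Frobenius analogue of Liouville evolution) is linear."""
    for _ in range(steps):
        samples = f(samples)
    return samples

rng = np.random.default_rng(42)
a = rng.uniform(0.0, 0.5, 100_000)  # ensemble drawn from density rho_a
b = rng.uniform(0.5, 1.0, 100_000)  # ensemble drawn from density rho_b

opts = dict(bins=25, range=(0.0, 1.0), density=True)

# Evolving the 50/50 mixture of rho_a and rho_b ...
h_mix, _ = np.histogram(push(np.concatenate([a, b])), **opts)

# ... gives exactly the 50/50 mixture of the separately evolved densities:
h_a, _ = np.histogram(push(a), **opts)
h_b, _ = np.histogram(push(b), **opts)

print(np.max(np.abs(h_mix - 0.5 * (h_a + h_b))))  # ~0: density evolution is linear
```

Linearity lives at the level of rho, exactly as in the Liouville equation; the chaos lives in the individual orbits. The two coexist with no contradiction.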
Before discussing chaos in QM, I'd like someone more knowledgeable than me to explain a few things to me, if you don't mind, of course:
1) how do we know particles act as/are waves? The double-slit experiment does not seem to be proof of that:
a) if we rotate the slits 90 degrees, then the interference pattern also rotates 90 degrees in the same direction... does that mean the wave also rotates 90 degrees? Does this mean the wave is 3D? And if so, why is the interference pattern not a checkerboard?
b) if we enlarge the slits, the interference pattern goes away; the more we enlarge the slits, the more the interference pattern disappears. This suggests that in the original experiment the slits were too small for the particle, so the particle hits both slits simultaneously, hence the interference pattern. Why should the particle then be a wave? Putting a detector at one slit effectively turns the two-slit apparatus into a one-slit one, making the interference pattern disappear. How can one then conclude that particles are waves?
2) how do we know that entanglement is the result of 'spooky action at a distance' and not the result of creating two entangled particles? couldn't it be that the particles are created with correlated spins and no communication ever occurs between them?
Why am I asking these simple questions instead of contributing to the discussion meaningfully, you might ask? Well, QM seems to have all the markings of a strange theory: it works for some cases, does not work for others, and requires magic for yet others. Those things make me think that the basics of QM may be invalid, so I am really questioning those basics.
I am sure I am 100% wrong, so please do illuminate me on the questions above; I've tried asking before, on different sites, the worst responses being 'go away, lame person! how dare you question our beautiful intellect!!!', and the best responses being lots and lots of math that doesn't really answer my basic questions. I am just looking for simple, straightforward answers.
Regarding waves and interference, have a look at this video: https://www.youtube.com/watch?v=TshYfYIxR9E. It shows how the interference patterns actually look. Yes, the wave is 3D and the interference patterns can be kind of like a checkerboard. We usually consider a 2D example because it is simpler to visualize and analyze.
> how do we know that entanglement is the result of 'spooky action at a distance' and not the result of creating two entangled particles?
A very high-level explanation is that we know this because quantum computers work (even though right now we can't build them big enough to be useful) and it can't be explained by any classical effects.
There's pilot wave theory, where a particle behaves like a particle and is accompanied by a pilot wave, which behaves like a wave; the particle is also nonlocally affected by its pilot wave. This theory is nonlocal, which is problematic for different reasons.
>how do we know that entanglement is the result of 'spooky action at a distance'
The claim is the opposite: spooky action at a distance is a result of entanglement.
> couldn't it be that the particles are created with correlated spins and no communication ever occurs between them?
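That "correlated at creation" idea is exactly what Bell's theorem rules out. As a toy sketch (my own; the particular deterministic hidden-variable model below is just one illustrative choice, though the qualitative result holds for any local model), one can simulate a "correlations fixed at creation" model and compare it with the quantum singlet prediction using the CHSH quantity:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 200_000
lam = rng.uniform(0, 2 * np.pi, N)  # shared hidden variable, fixed at creation

def E_lhv(a, b):
    # Deterministic local outcomes: each particle's result depends only on
    # its own detector setting and the hidden variable it carried from birth.
    A = np.sign(np.cos(a - lam))
    B = -np.sign(np.cos(b - lam))
    return np.mean(A * B)

def E_qm(a, b):
    return -np.cos(a - b)  # quantum prediction for a spin singlet

def chsh(E):
    # CHSH combination of correlations at four detector-setting pairs
    a, ap, b, bp = 0.0, np.pi / 2, np.pi / 4, 3 * np.pi / 4
    return abs(E(a, b) - E(a, bp) + E(ap, b) + E(ap, bp))

print(chsh(E_lhv))  # ~2.0 : any local hidden-variable model obeys |S| <= 2
print(chsh(E_qm))   # ~2.83: QM predicts 2*sqrt(2), and experiments agree with QM
```

The measured violation of the |S| <= 2 bound is what tells us "set at creation, no communication" cannot be the whole story (short of loopholes like superdeterminism).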
> each time a grain of dust bumps into the moon, this very slightly changes some part of the moon’s wave-function, and afterwards they are both correlated
Help me out here, though. If this is the explanation, then wouldn’t every moon, planet, etc be chaotic, since presumably they all interact with dust and light?
So to be clear, the article discusses the "wave-function", the "superposition of states", not of an electron moving through slits, or even of a cat in a box, but of Hyperion, a moon of Saturn, diameter 200 km, and what this means for predicting its motion over time? Because after a while the superposed states produce an outcome that can only be explained via quantum mechanics, like with the experiments of electrons through slits?
I suppose it must be valid to model this large system with Quantum mechanics. And that's partly the point of the article?
> But the vast majority of physicists today think the collapse of the wave-function isn’t a physical process. Because if it was, then it would have to happen instantaneously everywhere.
I'm a physicist and this does not pass the sniff-test. Part of the problem is we don't know what a "physical process" is. And the second sentence just does not follow from the first (if we even knew what the first sentence meant).
1. The universe started with a first phase transition involving gravity.
2. Gravity might have another phase transition at the micro and macro quantum boundaries.
3. We are already used to having math systems change at boundaries and phase transitions, so it's not really radically new to our mathematical tools.
Yeah, I do have in my unfinished bucket re-doing my math knowledge and tools as either a MS or PhD.
That said, I might be wrong about either of these things, or about seeing the micro and macro boundaries of quantum and classical as a phase transition in gravity.
But, my limited math sense tells me I might have the right idea.
The key would be to start with reworking Maxwell's equations to account for gravity and to describe the phase transition between micro and macro environments in terms of going from quantum to classical physics.
It's a bit hard to picture at first, as the two mathematical systems used are incompatible with each other: one lives in the world of matrices and one does not.
But we have similar math mismatches in the real world. One part of economics uses matrices and one does not, and yet the micro and macro economic models fit together somehow. So I think understanding the math is eventually reachable for some of us.
You have two sensors, P and Q, used to measure some property of two entangled particles. When P and Q have the same setting, they always disagree, because the entangled particle that Q measures has a state opposite to the particle measured by P. Similarly, when Q has a setting opposite to that of P, P and Q always agree. For any other setting, P and Q will sometimes agree and sometimes disagree. The extent to which they agree is what quantum physicists refer to as the correlation. If they always agree, the correlation is 1; if they always disagree, the correlation is -1; when they sometimes agree you'll get a value between -1 and 1. It's really just the percentage agreement between the two detectors.
Correlation between two quantum observables indicates how, on average, measurements of them are related. Positive correlation means that generally you measure both variables to have the same sign, more strongly correlated being closer to a straight line, i.e. for perfect correlation a measurement of one variable totally determines the other. Negative correlation means that if you measure one to be positive you will probably measure the other one to be negative and vice versa.
For two quantum operators a and b, for simplicity on the same Hilbert space H for which one state is |psi>, which is also chosen to be the current state of the system, the correlation is given by,
<(a - <a>) (b - <b>)> / (sigma_a sigma_b)
where <o> = <psi|o|psi> for any operator o, and sigma_o = sqrt(<o^2> - <o>^2) is the standard deviation of the operator o.
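As a sanity check, that formula can be evaluated numerically (a small sketch of my own, not from the thread). With a = sigma_z on the first qubit, b = sigma_z on the second, and |psi> the singlet state (|01> - |10>)/sqrt(2), the correlation comes out to -1:

```python
import numpy as np

Z = np.diag([1.0, -1.0]).astype(complex)
I2 = np.eye(2, dtype=complex)
a = np.kron(Z, I2)   # sigma_z on qubit 1
b = np.kron(I2, Z)   # sigma_z on qubit 2
psi = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)  # singlet state

def ev(o):
    # <psi|o|psi>; real here since a and b are Hermitian and commute
    return np.real(psi.conj() @ o @ psi)

I4 = np.eye(4)
cov = ev((a - ev(a) * I4) @ (b - ev(b) * I4))
sigma_a = np.sqrt(ev(a @ a) - ev(a) ** 2)
sigma_b = np.sqrt(ev(b @ b) - ev(b) ** 2)
corr = cov / (sigma_a * sigma_b)
print(corr)  # ~ -1.0: perfectly anti-correlated, as expected for the singlet
```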
Okay, good to see that quantum mechanics defines its use of correlation as

<(a - <a>) (b - <b>)> / (sigma_a sigma_b)

Glad to see the actual definition. That is, in the usual quantum mechanics notation, this is the usual Pearson correlation coefficient and, thus, appropriate to call correlation. Fine. Good to see that this is just what physics is talking about.

My concern has been: does physics say "correlation" when it really means some case of what we might call the usually much stronger functional dependence? That is, complex-valued random variables X and Y are functionally dependent provided there exist functions f and g so that Y = f(X) and X = g(Y). In this functionally dependent situation, the correlation could be anything from -1 to 1, but usually we won't care, because having either of X or Y implies we also have the other exactly.

So, as I was reading quantum mechanics, I was wondering if there could be two random variables that are functionally dependent but without correlation 1 or -1, so that, really, talking about mere correlation would be too weak and, thus, not really appropriate. Well, in the case of correlation 1 or -1, there exist a non-zero complex a and a constant b so that Y = aX + b, in which case, instead of mere correlation, it would be better to use the stronger functional dependence.

As I understand the situation, quantum mechanics usually uses correlation for examples of entanglement where it appears that, yes, the stronger functional dependence would hold and, thus, be more appropriate. That is, the usual uses of correlation outside of quantum mechanics, say, in the statistics of medical research, economics, finance, or social science, mean "two random variables not independent and also not functionally dependent" -- having the two random variables be independent (which implies correlation 0) or functionally dependent would be unusual.
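To make that concrete, here is a quick numerical sketch (my own illustration, not from the thread) of exactly such a pair: Y = X^3 is an invertible function of X, so the two are functionally dependent in both directions, yet the Pearson correlation is strictly between -1 and 1.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 1_000_000)
y = x ** 3  # Y = f(X) with f invertible, so also X = g(Y) = cbrt(Y)

r = np.corrcoef(x, y)[0, 1]
print(r)  # ~0.917 (exactly sqrt(21)/5), despite exact two-way functional dependence
```

So correlation strictly inside (-1, 1) does not rule out one variable completely determining the other; only |r| = 1 pins down an affine relation.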
> As I understand the situation, usually quantum mechanics uses correlation for examples of entanglement where it appears that, yes, the stronger functionally dependent would hold, and, thus, be more appropriate.
Not really, since things can be weakly entangled, and thus measuring one thing doesn't totally determine the other. The kind of entanglement where you have perfect correlation is usually only achievable in certain strange situations, like EPR entanglement. In that case it can be seen as perfect correlation though, which I don't see as being any different from the functional dependence you speak of.
In most cases in quantum systems, we have some correlation but not perfect correlation (e.g. squeezed states of light), and since we have that whole spectrum of behaviour, with maximum entanglement (e.g. some kind of EPR) being the extreme, we choose to use correlation to quantify the entire spectrum of possibilities.
Also from a more philosophical point of view, we expect to treat different degrees of freedom separately. If you have two particles but measuring the momentum of one gives you perfect knowledge of the other (i.e. EPR), it still doesn't really make sense to say p' = -p, or whatever, since conceptually p' and p are momenta belonging to two different subsystems. The most neutral way of stating it is that the momenta are perfectly correlated.
We have two math systems that do not mesh together, one applies to macro and one applies to micro. We have no mathematical construct that tells us when micro ends and when macro begins.
My biased guess is that it might be a transformation process, with math based on gravity itself, i.e. a reformulation of Maxwell's equations including gravity.
Probably a phase transition in the gravity space that gives the real-world transition from micro to macro, the switch from quantum to classical physics.
This is known as gravitationally induced decoherence. We don't know if there's any truth to this, but people are seriously working on measuring this stuff in the lab. You might like to read what Roger Penrose has to say about it. (Btw, we already know how to combine Maxwell's equations and gravity. This is all classical physics.)
What is not linear about QFT is the perturbation theory that is used to predict interactions. The underlying Hamiltonians with interactions included are themselves perfectly linear (but very complicated).
Not really. He covers all the important information without bogging down the reader in details, but does it in such a way that the interested reader can use the post as a map if he wants to find out those details.