Chaotic gravitational systems and their irreversibility to the Planck length (oup.com)
77 points by seventhtiger 7 months ago | 75 comments



It's also possible that inertia is quantized[1], which further complicates things. There is an experiment launching soon which may offer some evidence either way.

[1] https://physicsfromtheedge.blogspot.com



From your link:

>I've suggested (& published in 21 journal papers) a new theory called quantised inertia (or MiHsC) that assumes that inertia is caused by horizons damping quantum fields. It predicts galaxy rotation & lab thrusts without any dark stuff or adjustment.

In the physics community, theories that reject dark matter are broadly considered pseudoscience, and theories that reject WIMPs are broadly considered unlikely to be true.


I'm well aware that it's a fringe theory, which is why I'm looking forward to the test of a thruster based on this theory in space. It'll give some hard evidence either way.



The idea may be compelling, but the math doesn’t add up: https://arxiv.org/abs/1908.01589


Maybe the universe is digital after all.


Quantization does not imply discreteness: https://physics.stackexchange.com/questions/206790/differenc...


And neither alone implies computability (or being 'digital').

You'd need determinism.

Some reply was (improperly?) flagged, but computability requires determinism.

All computable functions are functions from the integers to the integers.


One of the most famous problems in computer science, P vs NP, is about non-deterministic computation. So no, computation does not require determinism - there are in fact plenty of models of non-deterministic computation. There are even models of computation where the halting problem is solvable (typically called hyper-computation).

Now, it is true that computation does require some amount of determinism - if the universe were entirely non-deterministic, i.e. if there were no kind of causality and events were completely unrelated to each other, there could be no notion of computation. But no one believes in that type of universe. Adding some source of rare non-deterministic events to an otherwise deterministic universe does not hurt computation.


I think you picked a bad example, because the "computation" in NP is not really a computation to most of us.

Most programmers think that computation means "something that can be done on a Turing machine or equivalent". It isn't hard to extend this idea to things like a true random number generator, or a quantum computer. This shows your basic point that computation need not be deterministic.

But the "nondeterministic" in NP doesn't speak to an actual computation that programmers think can be done. It speaks to a computation that we'd like to be able to do, but most of us think can't be done. (There is a prize for proving that impossibility.) And while there might be models of computation where said computation can be done, few programmers would think of them as modeling an actual computation.

Here alert readers might jump up and say, "Quantum computers might be able to solve NP complete problems!" True, we don't have a proof that it is impossible. But at the present time, there is no reason to believe that it is possible either. See, for instance, https://www.scottaaronson.com/papers/npcomplete.pdf. And so it appears that for actual computers that can be built, there is no computation matching how we'd like to solve NP complete problems.


I was mainly trying to point out that the mathematical term "computation" is not limited to deterministic computers. Since the GP was mentioning computable functions as something requiring determinism, I believe they were also talking more about the mathematical notion of computation rather than physical computers.

I should also note that our computers can very much solve NP-complete problems. They can't implement NP-complete algorithms to solve them, but all NP problems can be solved by a deterministic computer or a quantum computer - it just takes [much] more compute time (assuming P != NP, otherwise it may even take the same time).

This is very relevant to this discussion, because in fact it is well known and proven that the non-determinism in the NP model does not give any amount of extra computational power beyond a Turing machine. That is, a non-deterministic Turing machine can solve exactly the same set of problems as a deterministic Turing machine (but, as far as it is known today, faster). The same is true of quantum computers.
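To make that concrete, here's a toy sketch (mine, nothing rigorous): SUBSET-SUM is NP-complete, yet a perfectly ordinary deterministic program solves it - it just takes exponential time in the worst case.

    from itertools import combinations

    def subset_sum(nums, target):
        # Deterministic brute force: try every subset, O(2^n) in the worst case.
        for r in range(len(nums) + 1):
            for combo in combinations(nums, r):
                if sum(combo) == target:
                    return combo
        return None

    print(subset_sum([3, 34, 4, 12, 5, 2], 9))   # -> (4, 5)

A nondeterministic machine could "guess" the right subset and merely verify it in polynomial time; the deterministic one has to search, but it solves exactly the same problem.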

Hyper-computation refers to even more fanciful mathematical models which are actually able to solve problems that a Turing machine can't solve, even with infinite time. They involve things like performing an infinite amount of Turing machine steps in one Hyper-Turing machine step, or having access to an oracle which tells you if a computation halts etc.


Yes, you can have fanciful "models of computation" with oracles of various kinds.

Few people really think of that as computation.

But I agree that there are real examples of physically possible computation which are not deterministic. The most widely used being true random number generators. And so theoretical computability is not necessarily the right model for the real world.


Note that analyzing the complexity of problems in relation to various oracles is probably half of computer science. Algorithms like those in NP are not exotic concepts, they are the bread and butter of many computer science courses and careers.

Hyper-computation is much more exotic, though, I'll give you that.


Random number generation, if physically non-deterministic, isn't computation.

https://en.wikipedia.org/wiki/Computable_function


The "nondeterministic" in NP ("Nondeterministic polynomial") means that problems in NP can be solved in polynomial time by a nondeterministic Turing machine. Such a machine would be like what people incorrectly think quantum computers would be, in the sense that it would explore all the possible paths at once. It's the same meaning as in a nondeterministic finite automaton (NFA) vs a deterministic finite automaton (DFA).

The question of P vs NP is not whether we haven't determined if we can do a particular computation (either at all or in polynomial time). It's whether a deterministic Turing machine could solve in polynomial time the same class of problems that we _know_ a nondeterministic Turing machine could solve in polynomial time.

Of course, the nondeterministic Turing machine is not a physically realistic model of computation.
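For a toy illustration of that NFA/DFA point (my sketch, with a made-up automaton that accepts binary strings whose second-to-last symbol is 1): the nondeterminism can always be simulated deterministically by tracking the set of states the NFA could be in.

    # Transition function of the NFA: state -> symbol -> set of possible next states
    delta = {
        'q0': {'0': {'q0'}, '1': {'q0', 'q1'}},
        'q1': {'0': {'q2'}, '1': {'q2'}},
        'q2': {'0': set(), '1': set()},
    }
    accepting = {'q2'}

    def nfa_accepts(word, start='q0'):
        current = {start}                  # every state the NFA might currently be in
        for ch in word:
            current = set().union(*(delta[s][ch] for s in current))
        return bool(current & accepting)

    print(nfa_accepts('0110'))   # True  (second-to-last symbol is 1)
    print(nfa_accepts('0101'))   # False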


I'm trying to figure out whether you're just trying to echo what I said in different words, or whether you thought that you were correcting what I said.

If the latter, please be specific about what misunderstanding you think I might have.


> There are even models of computation where the halting problem is solvable (typically called hyper-computation).

BTW, it's not only hyper-computation that can solve the halting problem.

The halting problem is decidable in models of computation that have finite state (although the decider machine does need more state than the machine being analyzed).


The term "the halting problem" typically refers to the problem "given a Turing machine and a starting state of the tape, determine if the Turing machine will halt".

There are other versions of the halting problem for systems that are more limited than a Turing machine, such as finite automata ("given a finite automaton and an initial state, determine if the automaton will halt"). Some of these other versions are indeed solvable, such as the one you mention. But these are different problems, not the same problem as THE halting problem. As far as it is known today, all such systems are strictly less powerful than Turing machines (that is, for any system where it is provable whether a computation in that system halts, there are problems that it can't solve that a Turing machine can) - this is known as the Church-Turing thesis.

Hyper-computation refers to models of computation where THE halting problem (does an arbitrary Turing machine halt) is solvable.


I meant "the halting problem" as in the informal description of the problem, e.g. as in how it is described in Wikipedia: "the halting problem is the problem of determining, from a description of an arbitrary computer program and an input, whether the program will finish running, or continue to run forever".

If you take that description at face value and consider a model of computation with finite state (like real-world computers have), then it is decidable.

If you take the usual formal description of the halting problem, which like you said, is specifically defined over Turing machines (i.e. a theoretical model which assumes you can have a machine with literally infinite state, which is impossible to construct in our universe), then yes, you'd need hyper-computation to solve that.


I initially thought you were referring to things like deterministic finite automata (DFAs) or total languages (Idris) when you talked about finite state.

If instead you are simply referring to the observation that physical computers have a finite amount of memory, and thus we can solve the halting problem in finite time by simply iterating over all possible configurations, that is a somewhat uninteresting observation - since if we are already talking about real physical constraints, that algorithm is entirely useless for even the simplest computers from the 50s and 60s. It's basically equivalent to saying "any program will halt, because the sun will destroy all computers on Earth when it eventually burns out".

More interestingly, it turns out that there are some finitist versions of the halting problem for finite-tape Turing machines, and they act as a similar kind of limit. That is, the only way to verify whether an arbitrary finite-state Turing machine will halt on a specific input is to check all possible states (and this also requires a finite-state Turing machine with a larger tape than the one under analysis).

This result can actually be used in a very similar way to the infinite-tape halting problem: it guarantees that, if your system is equivalent to a Turing machine with tape length N and M possible symbols, checking whether an arbitrary program halts can require on the order of M^N computational steps (the number of possible tape configurations) in the worst case. This can be used to argue that it is effectively impossible to check if a program halts, much the same as the "true" halting problem is used to prove that it is actually impossible to check.

As an example, an arbitrary program for a computer with as much memory as the infamous "640KB is enough for anyone" quote could require on the order of 256^640,000 (~10^1,540,000) computational steps to check whether it halts. So, we can just as easily say it is impossible to check and we wouldn't be far off.

This is very different from something like a total language (e.g. Idris) or a DFA, where it is actually possible to relatively quickly verify whether a program halts.
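(As a quick sanity check on that number, just counting memory configurations with a few lines of Python:)

    import math

    cells, values = 640_000, 256          # ~640 KB of memory, 256 possible values per byte
    digits = cells * math.log10(values)   # log10 of the number of memory configurations
    print(f"~10^{digits:,.0f} possible memory states")   # ~10^1,541,274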


> thus we can solve the halting problem in finite time by simply iterating over all possible configurations, that is a somewhat uninteresting observation

It is, but that's not the observation I was making. You only mentioned one way of solving the problem, but that's not the only way.

> That is, it turns out that the only way to verify whether an arbitrary finite-state Turing machine will halt on a specific input is to check all possible states

What do you mean by all possible states? If you mean literally all possible states, that's not true. I mean, yes, you could iterate over all possible states to solve that problem, but that's probably the least efficient way to solve it.

There are already-known algorithms which always solve the Halting problem for machines with finite state, and they don't need to iterate over all possible states. They do, however, need to iterate over all state transitions that the machine actually goes through (multiple times, even). However, these algorithms that I'm mentioning (i.e. cycle detection algorithms) are also quite dumb. They don't exploit any knowledge about the state transitions in order to analyze whether the machines halt or not, they just simulate the machine step by step (this is due to the definition of the cycle detection problem itself, which does not allow inspecting the program).
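Here's roughly the kind of thing I mean, as a minimal sketch (illustrative only, not production code): Floyd's cycle detection deciding halting for any deterministic machine with finite state, given a step function and a halting predicate.

    def halts(step, start, is_halted):
        # step(s) -> next state (deterministic); is_halted(s) -> True once the machine stops.
        slow = fast = start
        while True:
            for _ in range(2):             # fast pointer advances two steps per iteration
                if is_halted(fast):
                    return True
                fast = step(fast)
            if is_halted(slow):
                return True
            slow = step(slow)              # slow pointer advances one step
            if slow == fast:               # a full configuration repeated -> the machine loops forever
                return False

    # Toy 8-bit "machines": halt when the register reaches 0.
    print(halts(lambda x: (x * 2) % 256, 3, lambda x: x == 0))   # True  (3, 6, 12, ..., 128, 0)
    print(halts(lambda x: (x * 3) % 256, 1, lambda x: x == 0))   # False (powers of 3 mod 256 never hit 0)

Note that it only inspects the machine by running it, exactly as described above: no cleverness about the program's structure.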

In principle, and even in practice, it's possible to make those algorithms significantly more efficient, at least for many of the machines (i.e. programs) that we care about.

I suspect it is not possible to make such a (fully automatic) algorithm significantly more efficient for all possible programs (even the nonsensical ones), although I don't think such a proof exists (if it does, I would like to see it). The closest I've been pointed to is a paper possibly implying that such an algorithm would have to be EXPTIME-complete, although even the person that pointed me to that paper had some difficulty interpreting it -- and even if that were true, that says nothing about its real-world efficiency.

> That is, it turns out that the only way to verify whether an arbitrary finite-state Turing machine will halt on a specific input is to check all possible states

Can you point me to a source that proves this claim? Not only am I doubting it, but even if you are right, I'd be really interested in reading such a proof.

> For an example, an arbitrary program for a computer with as much memory as the infamous "640KB is enough for anyone" quote would require at least 640,000^255 (~10^1480) computational steps to check if it halts.

Again, I'm wondering why you are claiming that the only possible algorithms which can check whether arbitrary finite-state programs halt have to iterate over all possible states (or even all the actual state transitions).


By the standards of our time, our computers are required to halt, so they're arguably deterministic as implemented today.


Halting has little to do with determinism. Our physical computers are also decidedly non-deterministic, at least for practical purposes: TLS itself relies heavily on an RNG, for example.


To give a concrete example, a free particle can have any energy it likes - it's only bound states that have discrete spectra.

Mathematically, this corresponds to solutions to a particular differential equation existing for particular values of energy (which appears as a constant in the equation). To use a simpler DE for an example: dx/dt = kx has solutions Ce^kt for all k, but a more complicated DE might only have solutions for some k.
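A standard worked example of the "only some k" situation (the infinite square well): the time-independent Schrödinger equation with hard-wall boundary conditions,

    -\frac{\hbar^2}{2m}\,\psi''(x) = E\,\psi(x), \qquad \psi(0) = \psi(L) = 0,

has nonzero solutions only at the discrete energies

    E_n = \frac{n^2 \pi^2 \hbar^2}{2 m L^2}, \qquad n = 1, 2, 3, \ldots

whereas dropping the boundary conditions (a free particle) allows every E > 0.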


The current mainstream says that it is really not.

However, they might be wrong.

Hilbert space in quantum field theory is infinite-dimensional (otherwise the theory just doesn't work). Some physicists argue that Hilbert space is not infinite-dimensional: the holographic principle might imply a finite amount of information in a given region of space, potentially implying a finite-dimensional Hilbert space.

And that might be some kind of pixelation.


> we conclude that up to 5 per cent of such triples would require an accuracy of smaller than the Planck length in order to produce a time-reversible solution, thus rendering them fundamentally unpredictable.

Hence, incidentally, classical physics is itself non-deterministic and non-computable.


The definition of a Planck length implies Quantum Mechanics. Pure classical physics doesn't have a smallest length scale. Classical mechanics is deterministic.


A priori, the Planck length is a length like any other. It's perfectly fine to talk about that scale in classical physics, without involving QM.

> Pure classical physics doesn't have a smallest length scale.

Quantum mechanics doesn't have a smallest length scale, either, as far as we know.


No, the Planck length is not a length like any other. Any other length gets Lorentz contracted, but the Planck length does not. It is a constant in any reference frame. And if you assume hbar -> 0, as for classical mechanics, the Planck length will be zero. So classical mechanics doesn't have a finite Planck length.

Quantum mechanics doesn't have a smallest length scale, but Quantum Electrodynamics does, namely the Compton wavelength. And this is what goes into the derivation of the Planck length. So with only classical theory you simply can't derive the Planck length. Classical theory doesn't have a smallest length scale.
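For reference, the standard definition is

    l_P = \sqrt{\frac{\hbar G}{c^3}} \approx 1.6 \times 10^{-35}\ \mathrm{m},

which indeed goes to zero in the classical limit hbar -> 0.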


> Any other length gets Lorentz contracted, but the Planck length does not.

Are you claiming that the value of the constant l_P is the same in every reference frame or that objects of length equal to l_P will have the same length (= l_P) in every reference frame? The former is trivially true, the latter is definitely false unless you assume new physics. See also the discussion in https://physics.stackexchange.com/questions/4094/is-the-plan...

> but Quantum Electrodynamics has, namely the Compton wavelength

Citation needed. (Besides, the Compton wavelength of what particle exactly?)

> So with only classical theory you simply can't derive the Planck length. So classical mechanics doesn't have a finite Planck length.

I don't need to derive anything. In classical mechanics I am free to define a constant with value equal to 1.616255(18)×10^−35 m and call it the Planck length. Whether or not it has any physical significance is irrelevant. (I mean, questions of physical significance so far have never stopped proponents of the Planck length in QM, either.)


Being unpredictable and being non-deterministic are two completely different things. Classical physics might be the former but it's definitely not the latter.


Feels like it's just another way quantum mechanics' non-determinism bubbles up into the larger world. So not all that insightful, right?


I don't think it requires any QM to see. I used to have an argument that you could derive non-determinism from the resolution to Zeno's paradox, but I've forgotten the steps.

Roughly, you need to measure infinitely precisely to give an infinitely precise value to a variable. Via Zeno, continuous time precludes infinite measurement precision (there are no 'moments'), and hence infinitely precise values.

All chaos is doing here is giving us a bound on the actual lack of precision reality has -- I take it, via the Zeno-ish argument above, that reality has to have such a bound.

Relevantly, none of this requires any reasoning from QM premises or observations.

This paper adds to the pile of cases where (classical) chaotic systems require measurement beyond possible spatio-temporal resolution to be deterministic.
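A toy numerical illustration of the kind of sensitivity involved (the logistic map rather than gravity, but the same flavour; numbers are illustrative):

    # Two initial conditions differing by 1e-15 in the chaotic logistic map x -> 4x(1-x).
    x, y = 0.3, 0.3 + 1e-15
    for step in range(100):
        x, y = 4 * x * (1 - x), 4 * y * (1 - y)
        if abs(x - y) > 0.1:
            print(f"trajectories diverged after {step + 1} steps")   # roughly 50 steps
            break

The error roughly doubles each step, so every extra digit of initial precision buys only a few more steps of predictability.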


Classical mechanics does have nondeterministic cases even without appeals to mathematical paradoxes. See https://en.m.wikipedia.org/wiki/Norton%27s_dome as the most well-known example.


Replied elsewhere, but Norton's dome has no real implications for classical mechanics as a full physical theory. It goes away as soon as you add the idea that matter is composed of fixed shape atoms, without even making any assumptions about the shape or size of said atoms - and this was a well-known, if often unstated, assumption even at the time of Newton.


And as soon as you have a rigid 3-body collision, indeterminism comes back. If A hits B before C, you generally get a different result than if A hits C before B. If A hits B and C at the same time, a range of solutions are possible, including both limiting solutions of the previous two variations.

The extreme example of this is the indeterminism of a break shot in billiards.


Sure, but perfectly rigid bodies are also not compatible with an atomic model of matter, they are just an abstraction used in the maths.

If you wanted to model an actual physical experiment, you would model any physical object as two or more rigid bodies connected by springs of an arbitrarily small length*. So, anyone using classical mechanics for experimental purposes rather than explorations of the maths would model the collision of three balls in a way where simultaneity is impossible.

* This is just like when modeling actual circuits: you model a physical wire as an ideal wire plus a resistor. But we don't say that "classical wire diagrams predict that the current through a wire can be infinite" just because you can draw a circuit with 0 resistance - that circuit model is simply non-physical.
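A minimal sketch of that spring idea (my toy numbers, symplectic Euler, nothing rigorous): three "balls" in 1D modelled as point masses with a very stiff short-range spring repulsion. However close the contacts are to simultaneous, the equations of motion have a single well-defined solution.

    import numpy as np

    k, radius, dt = 1e5, 0.5, 1e-5             # spring stiffness, ball radius, time step (illustrative)
    x = np.array([-2.0, 0.0, 1.0])             # ball A approaches balls B and C, which start touching
    v = np.array([5.0, 0.0, 0.0])
    m = np.array([1.0, 1.0, 1.0])

    for _ in range(200_000):                   # simulate 2 seconds
        f = np.zeros(3)
        for i in range(3):
            for j in range(i + 1, 3):
                overlap = 2 * radius - abs(x[j] - x[i])
                if overlap > 0:                # balls overlap: stiff spring pushes them apart
                    direction = np.sign(x[j] - x[i])
                    f[i] -= k * overlap * direction
                    f[j] += k * overlap * direction
        v += f / m * dt                        # symplectic Euler: update velocities, then positions
        x += v * dt

    print(np.round(v, 2))                      # one definite outcome, no ambiguity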


But perfectly rigid atoms are compatible with an atomic model of matter. And in such a model, the indeterminism comes right back.

Yes, the idea of perfectly rigid atoms is incompatible with experiment. But so is classical mechanics on the scale where that fact becomes obvious. And so it isn't classical mechanics that is deterministic.

You are right that n-body classical mechanics combined with the assumption of a continuous response does become deterministic by the standard existence and uniqueness theorems of ordinary differential equations. But the point still remains, classical mechanics winds up not being deterministic without that extra assumption. And that extra assumption is often omitted in real models of things like elastic collisions.


It's unclear whether that's a consequence of poorly specified premises of the classical mathematical framework.

Here, I mean that 'classical reality' as specified in a framework of basic applied mathematics, with no QM premises, produces non-determinism -- merely by showing that (classical) chaotic systems exist.


Does having inherently unpredictable phenomena have philosophical implications?


Certainly. One of the most fundamental philosophical questions is about the nature of time, precisely because all of our "laws" of physics are time-reversible, so from the perspective of physics time doesn't really exist or is just a space-like dimension (no arrow). And so the Universe should be perfectly deterministic, and with enough computing power and precise enough knowledge of the initial conditions everything should be perfectly predictable and the future should be "fixed". In some way or another, probably most of philosophy is arguing about whether or not this is true and what it means.

This paper claims that up to 5% of 3-body systems in the Universe can't be predicted even in principle, because you would need to measure initial conditions to greater precision than the Planck length, which is impossible. And of course N-body systems where N > 3 are even more unpredictable, and the whole of the Universe is an N-body system, so if correct it would mean the end of determinism.

For a good treatment of this topic for a lay audience see Lee Smolin's recent book "Time Reborn".


> And of course N-body systems are even more unpredictable, and the whole of the Universe is an N-body system, so if correct it would mean the end of determinism.

I read this as the systems are, for all intents and purposes, practically unpredictable, not fundamentally unpredictable.

Just because we can't measure beyond the Planck length doesn't mean there aren't deterministic rules down there.

So my take is that the universe could still be deterministic, but we can't know that, since we'd have to be able to peer below the Planck length.

Maybe I'm reading this wrong.


Just to go back to the philosophy thread, this is one of the things that made Kripke famous (relatively speaking, philosophically speaking): a priori "knowability" or predictability is not the same as determinism.

You can have a system that is perfectly deterministic, but with outcomes that are a priori unknowable, in the sense of being unpredictable. Kripke didn't use this language, but if the information required to compute the prediction becomes unattainable, either because of capacity or input requirements, you can't make the prediction.

I think there's some more recent, very abstract computability theorem in computer science that came to a similar conclusion. I think it came at it more from the angle that the amount of change in the inputs during the time it takes to simulate a system makes it impossible to perfectly simulate things past some point.

These aren't examples in physics, but they speak to how determinism per se isn't necessarily the same thing as predictability.


Indeed. The evolution of the universal wave function as described by the Schrödinger equation is perfectly deterministic. If you believe in that as the fundamental description of reality (which also means not believing in wave-function collapse), the result is the many-worlds interpretation of QM. Similarly, hidden-variable theories are deterministic.


If hidden variable theories exist, they're non-local. Which is a whole other philosophical problem.

As for many worlds - when everything is possible, nothing is explained.


Many-worlds doesn’t mean that everything is possible, it rather means that everything that is possible becomes actual.

It is arguably strongest in explanation, in the sense that it relies on the least amount of assumptions (just the Schrödinger equation).


> It is arguably strongest in explanation, in the sense that it relies on the least amount of assumptions (just the Schrödinger equation).

It really seems like physicists say this because they want it to be true, not because it's actually true.

I completely understand they don't like the idea of non-local hidden variables (I mean, programmers don't love them much either) - but the idea that an infinitely dividing universe, breaking into infinite copies and exploring all possible paths at all possible times, somehow relies on fewer assumptions or is simpler than a non-local variable is just laughable to me. Maybe I'm just not getting it, but it really seems like a way of redefining the rules of a game until the preferred party wins.

"Hey we have this compression scheme that's incredibly fast. It's only a constant time lookup." "Really? How do you do that?" "Well we store and index all possible inputs." Won't some outputs be longer due to pigeonhole principle?" "Well actually in one single encoding it would, but we store an infinite number of different encodings of all possible inputs, meaning in at least one of the encodings it's smaller and just use that one." "So you store infinite variations of infinitely sized data and as a result claim your compression scheme is simpler?" "Yes because when we go to decompress we spawn an infinite number of threads and each thread decompresses by following one of the encodings, and in that thread's view it's just a constant time lookup to store the index which is clearly smaller (so compressed) and constant time to reverse and decompress." "And what about the complexity of the infinite threads with infinite copies of infinite storage?" "Ah we don't count that, we only count the world line of the successful thread."

The incredible "simplicity" of Many-Worlds.


I think you're confusing simplicity with computational cost.

Bubble sort is simpler than quicksort. It is also more computationally expensive.

Universal wave function theory is simple in this sense. (IMHO the term many-worlds is doing the theory a disservice because it's fundamentally misleading. There is only one world, we just can't perceive most of it. Which is as it has been for all of humanity's existence.)


The many-worlds interpretation isn't deducible from just the Schrödinger equation.

The Schrödinger equation roughly predicts that if you put two detectors at the two possible positions where a light beam can go after a beam splitter, and fire a single photon at the beam splitter, both detectors will detect some "amount" of the photon*. However, what you actually see in experiments is that one of the detectors detects a single photon, and the other detects 0. But you also notice that if you repeat the experiment many times, the probability of detection is exactly equal to the square of the modulus of the amplitude of the Schrödinger function for that state.

To explain this observation, the MWI uses an extra assertion: that each "world" contains a single result, but that the number of "worlds" where the state is X corresponds to the square of the modulus of the amplitude of the Schrödinger function for that state. So, if doing simple frequentist probabilities over these "worlds", your chance as an observer to be in a "world" where the state is X is equal to that value, as the experiments observe.

Note that this assumption is exactly the same assumption as the wave function collapse, known as the Born rule.
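In symbols (my shorthand for the two detector outcomes): the Schrödinger equation gives the superposition on the left, and the probability assignment on the right is the extra postulate,

    |\psi\rangle = a\,|D_1\rangle + b\,|D_2\rangle, \qquad
    P(D_1) = |a|^2, \quad P(D_2) = |b|^2, \quad |a|^2 + |b|^2 = 1.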

* the Schrödinger equation actually predicts something more esoteric than even that: for any two complementary solutions X and Y, there is an infinity of additional solutions of the form aX + bY, with a and b real numbers with certain properties. So, in fact, it is actually impossible to use the Schrödinger equation to predict any particular state. You have to use the Schrödinger equation and a chosen basis of measurement, and only look at the solutions in that basis. The simple idea of "counting worlds" from above mostly breaks down at this stage - you need an additional third assumption of assuming a pre-existing "background" and using Decoherence to explain why only certain solutions normally manifest.


No, it really is.

Consider a sealed room with an experimenter looking at a box with Schrödinger's famous cat in it. The question we usually ask is whether the cat is alive, dead, or in a superposition before the box is opened.

Schrödinger's famous equation predicts that if the cat itself can be described by quantum mechanics, then it must be in a superposition. If the experimenter can be described by quantum mechanics as well, then the experimenter must also go into a superposition upon opening the cat's box. And, thanks to thermodynamics, there is no experiment that is doable by the experimenter from which the existence of collapse can be demonstrated.

Therefore the claim that there is a collapse at all is an entirely unnecessary hypothesis. All other interpretations of QM have to invent explanations for an event (collapse) that no experimental evidence exists for.


That's all well and good, but it then predicts that all possible outcomes have equal probability, and this is measurably false.

Say you design the experiment such that the observer will see the cat is alive if 2 particles both have spin up, and dead if any particle has spin down. Say the Schrodinger equation assigns equal amplitudes to the 4 possible states (up-up, up-down, down-down, down-up), and let's ignore the composite states (e.g. 1/sqrt(2)up-up + 1/sqrt(2)up-down). So, the total probability of the cat-dead outcome (up-down, down-up, or down-down) is three times that of the cat-alive (up-up) outcome.
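Written out (my rendering of the example):

    |\psi\rangle = \tfrac{1}{2}\bigl(|\uparrow\uparrow\rangle + |\uparrow\downarrow\rangle + |\downarrow\uparrow\rangle + |\downarrow\downarrow\rangle\bigr),
    \qquad P(\text{alive}) = \bigl|\tfrac{1}{2}\bigr|^2 = \tfrac{1}{4}, \quad
    P(\text{dead}) = 3 \times \bigl|\tfrac{1}{2}\bigr|^2 = \tfrac{3}{4}.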

In a naive interpretation of MWI that only used the Schrodinger equation, there are two versions of the observer, so the probability that the observer sees one outcome versus the other is obviously 1/2: you either happen to be the version that sees the cat alive, or you happen to be the one that sees it dead. This reasoning will work if we repeat the experiment many times: since the repetitions are independent, if I repeat it 10 times, I expect that I will happen to be one of the observers who sees the cat alive about 5 times, and dead about 5 times as well. In your interpretation, the amplitude of the Schrodinger equation is irrelevant, as long as it is greater than 0: all possible events happen.

If I actually do the experiment though, I will see the cat alive only about 2.5/10 times, since the total amplitude of the wavefunction for all states where the cat is alive is much lower than the total amplitude of all states where the cat is dead.

So, the actual MWI says that, while there are two kinds of worlds, they are not equally likely. In the multitude of all worlds, the prevalence of worlds where the cat is alive is proportional to the squared wavefunction amplitude of the cat-alive state (about 1/4) and the ones where the cat is dead follow the same logic (about 3/4). So, given that I am one observer in one of the many worlds, the chance I am the observer in a world where the cat is alive is only 1/4.

But this connection between the number of worlds and the amplitude of the wavefunction is an additional assumption atop the Schrodinger equation. Sure, the wavefunction doesn't collapse, but it splits according to the exact same formula as the collapse in the CI (Born's rule).

And I again want to mention that even this is not enough. If |cat-alive> and |cat-dead> are solutions to the Schrodinger equation, then so is x|cat-alive>+y|cat-dead>, for an infinite number of x and y real numbers. The MWI has to explain why no observer ever actually perceives such a state (what this state would look like to an observer is not even definable). Decoherence solves this, and it was an extremely important contribution, but it also adds an additional assumption (that CI also needs): some pre-existing classical-like background.


Yes, you can construct a naive version of the MWI that produces answers in disagreement with experiment. But that naive version of the MWI also doesn't match the predictions of attempting to model both cat and experimenter with QM.

This is the essence of a straw man argument. OK, your ridiculous version of the theory doesn't work. Now what happens if you look at the actual theory under discussion?


My point is that the actual version of the MWI requires the Born rule (which can't be derived from the Schrodinger equation, and which is also known as the measurement postulate) just as much as any other interpretation.

I wasn't building a strawman, I was trying to explain why the simple explanation you had given in the previous post (which is the commonly presented explanation of the MWI in many popular channels) doesn't actually work. You were the one who was claiming that the MWI simply says that the observer is in a superposition itself, which is indeed what the Schrodinger equation predicts. But this entirely leaves out the other half (why does the observer in fact observe a single outcome, with some probability X) and I was explaining how the same maths as the oh-so-hated collapse sneaks back in through there.

Basically, the MWI and the CI agree that, from the point of view of the observer, something happens when the observer opens the box which they can't predict deterministically. The collapse versions of the CI say that this event actually changes the wavefunction, it collapses it, and all other possible results don't happen. The MWI says that this is just how it looks to one observer, and all other results happen to other observers, in a very precise proportion. The "shut up and calculate" version of CI says that it's unscientific to even discuss this distinction: since a single observer anyway observes a single thing, talking about observations that didn't happen and how real or false they are is unscientific speculation.


You've said nothing suggesting that the explanation doesn't work.

Whether you believe in collapse now, collapse later, or collapse never but the Born rule works, you get the same exact predictions. Therefore no experiment done to date represents evidence that there actually is a collapse. And evidence that the experimenter is modeled by QM is evidence against a collapse. This is all true, and is all verifiable from QM.

And whether collapse happens later or never, the math behind the MWI explains why we'd think we'd observed what we observed in the absence of a collapse.


Fundamental randomness also explains nothing, but worse.


> Maybe I'm reading this wrong.

No, you're reading this correctly. GP is misinterpreting the results.


"Fundamentally"as in "QM forbids this". If our universe were different, a simpler kind which Laplace dealt with, it could be completely predicable, as theories of 18th century stated.


I think an appeal to quantum mechanics is a different argument from the one under discussion, which is based on discrete physics.


I'm not sure it is a different argument, given its dependence on the Planck length.


It's a different view of the same problem.


>'One of the most fundamental philosophical questions is about the nature of time, precisely because all of our "laws" of physics are time reversible'

Many of our theories are, but the thing is we have several direct observations of time-reversal symmetry violation (below), independent of the experimental demonstrations of CP violation, which also imply T-symmetry violation.

https://arxiv.org/abs/1409.5998

https://pubs.aip.org/physicstoday/article-abstract/52/2/19/4...


Yes! This is important. It's a finding from 1964, nearly 60 years ago, and it is included in the Standard Model of physics.

One interesting thing is that, unlike the other discrete symmetries, we haven't found a system that has a large time-reversal symmetry violation. Not having a large enough source of T-violation is actually one of the major problems with the SM!

Related supplemental reading on the Strong CP problem: https://www.forbes.com/sites/startswithabang/2019/11/19/the-...


The universe is not an N-body problem. Because we have things behaving like waves. Which are much harder to simulate. We can't even simulate a 2-electron collision precisely.


So this would be a totally separate source of indeterminism from quantum mechanics. So you could have a Schrödinger's rocket fired at an indeterministic 3-body system, which means multiple sources of indeterminism interact.

Entropy could be viewed as the simple addition of information due to indeterminism.

How can causality survive in a universe like that?


Why does causality require determinism? Surely effect can still follow cause, even if it is not predictable in advance?


The scientific method tests its understanding of causation using prediction.

If something is not predictable in principle, then it is impossible to show that it has causation.


That only matters if you assume infinite measurement accuracy, and the scientific method has never assumed that.

Even if we assume the world is 100% fully deterministic, as long as our measurements are not 100% accurate we will have some amount of measurement noise which is completely indistinguishable, even in principle, from true fundamental randomness.

In fact, fundamental randomness is much easier to work around than measurement noise, since there is no risk of it being correlated with your experimental design. In contrast, measurement errors are often correlated to the measurement method, which makes them much harder to eliminate statistically.
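A toy numerical illustration of that last point (made-up numbers): unbiased noise, like fundamental randomness, averages away; a bias correlated with the measurement method does not.

    import random

    true_value, n = 1.0, 100_000
    unbiased = sum(true_value + random.gauss(0, 0.1)    for _ in range(n)) / n
    biased   = sum(true_value + random.gauss(0.05, 0.1) for _ in range(n)) / n  # systematic offset

    print(round(unbiased, 3))   # ~1.0   -- indistinguishable from "true" randomness after averaging
    print(round(biased, 3))     # ~1.05  -- no amount of averaging removes the correlated error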


Doesn't the Turing halting problem also imply that (some) things are not predictable (the halting of certain algorithms)? But I don't think that interferes with causality.


Causality in physics survived the discovery of quantum uncertainty, though it was changed by it. What parts of current physics would have to be abandoned if our current inability to predict outcomes with complete precision turned out to be fundamental?


A non-deterministic universe can have two kinds of events: caused events, and random events. That is, an event can simply happen, but it can also be caused by another event. For example, a ball could start moving on a pool table all on its own (random), but still any ball that is hit by another ball would start moving because of the impact (caused).

If the fully random events are rare enough, you can even still determine causality using statistical tests, just like we do today. Of course, you can never be 100% certain, but that is to be expected. This is anyway how experimental science worked even when the world was assumed to be 100% deterministic: to experimental science, true randomness is not really different from measurement noise.


The variation of philosophical implications are possibly unpredictable by themselves...


Ha! Gödel would be so proud of this conjecture


See Norton's Dome as an example of nondeterministic behavior in classical mechanics. No quantum or chaos etc required. https://en.m.wikipedia.org/wiki/Norton%27s_dome


Note that this is not an actual physical experiment - it only works with an infinitely accurate and smooth shape of the dome, which contradicts all of the models of how matter exists at least since the ancient Greeks. Any dome made up of atoms, even if arranged perfectly accurately up to the position of each individual atom, does not exhibit any kind of non-deterministic behavior in classical mechanics.



