This text mainly shows that the author either misunderstands chess, probability, and science, likely all at the same time, or just wants to critique for critique's sake.
Of course, while it's good to search for alternative models/theories/explanations, unless you can provide something with more predictive power than the existing/widely accepted ones, it's a good idea to hold the critique.
EDIT: To clarify: by less predictive power I mean that it neither explains new effects nor predicts unknown ones, nor explains known phenomena or recovers existing theories as special cases. I didn't mean theories such as, for example, string theory, which has little predictive power at the moment but contains current theories as special cases and holds the promise of explaining things that current theories cannot. /EDIT
Physicists are “stuck” with existing theories not because they like them, but because they work so well it’s hard to invent something that even works equally well (not to mention something that works better). There are a lot of smart people who are brave in their thinking and propose wild explanations. Yet, in most cases those don't stand up to the test of time.
Einstein couldn't deal with the randomness of Quantum Mechanics and put forward a hidden-variable theory; it was (and still is) seriously considered, but he (and many others) weren't able to put forward a better-working theory. We stick to QM despite its weirdness/randomness because it works extremely well, not because we like it or think things must be this way and require no further study/“it is the most efficient and parsimonious possible model”.
I don't think it's entirely right to discard comparatively less developed theories based on their relatively weaker predictive power. That does sound like how one reaches a local maximum, to use the term from the article.
Think of it in a different way: most of the truly revolutionary theories, those that changed how we see the world, were relatively simple. They were generally simple enough that a fringe group could develop them to a point where they shone so brightly as to be next to irrefutable. Things like how the earth might be round, and circle the sun.
We can't expect the same to be true for more advanced fields; we can't say "yeah, this "new/underdeveloped" idea does seem reasonable, but it does not solve everything as well as our existing theory that we've been iterating on for decades, so let's not waste time on that".
> Think of it in a different way: most of the truly revolutionary theories, those that changed how we see the world, were relatively simple. They were generally simple enough that a fringe group could develop them to a point where they shone so brightly as to be next to irrefutable. Things like how the earth might be round, and circle the sun.
Interesting trivia: heliocentrism wasn't "shining brightly as to be next to irrefutable" - it was a fringe idea that could not be confirmed through observation at the time, required some pretty wild (for the time) assumptions - such as stars being very, very far away, to explain why there's no visible parallax from Earth's movements - and went against the existing understanding of physics in general (such as: Earth is very big and heavy and bulky, so it's not obvious how it could be moving in circles very fast). Also, IIRC, the predictions made by heliocentric model were less accurate than geocentric ones.
It took astronomical observations with early telescopes to provide data points favoring a mixed geo/heliocentric model, and then further observations, the work of Kepler, and Newton's theory of gravity for the heliocentric model to finally start making sense.
This does serve as an example backing TFA's thesis: some accepted theories, like the (then accepted) geocentric model, may be just local maxima - theoretical dead ends. A potential better theory will initially look bad in comparison; it needs work to develop past the accepted one.
> Also, IIRC, the predictions made by heliocentric model were less accurate than geocentric ones
> work of Kepler
Yes. One problem of the early Copernican heliocentric model was that it stated that the orbits of planets around the sun were perfect circles. It wasn't until Kepler showed that a) the orbits were actually elliptical and b) the planets sped up when they approached the sun and slowed down as they moved away that the actual movements of the planets could be more accurately predicted. Until then, the older earth-centric models with all of the epicycles were 'better', even though totally unrelated to reality.
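To put rough numbers on that speeding up and slowing down -- a minimal sketch, assuming the standard vis-viva relation and Mercury's published orbital elements (my own illustration, not the commenter's):

    import math

    GM_sun = 1.327e20           # gravitational parameter of the sun, m^3/s^2
    a, e = 5.79e10, 0.2056      # Mercury: semi-major axis (m), eccentricity

    def speed(r):
        # vis-viva: v^2 = GM * (2/r - 1/a)
        return math.sqrt(GM_sun * (2 / r - 1 / a))

    r_peri, r_aph = a * (1 - e), a * (1 + e)
    print(f"perihelion: {speed(r_peri) / 1000:.1f} km/s")   # ~59 km/s
    print(f"aphelion:   {speed(r_aph) / 1000:.1f} km/s")    # ~39 km/s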
One important point is that ground observations include Mars moving backward fairly often. It took a while to formulate a model of motion that satisfied that constraint better than epicycles did.
Yes. This is one example of the fact that the Earth 'laps' the outer planets because it has a faster orbit. So the outer planets sometimes appear to go backward with respect to a fixed point such as a star as the Earth overtakes them. They can also move up and down with respect to that fixed point because their orbits are tilted somewhat compared with the Earth's orbital plane around the sun. The overall effect is that the outer planets sometimes trace out a little spiral, and the further they are from the sun, the more these spirals dominate their overall motion (because we are lapping them more often).
Obviously it's different for Mercury and Venus, whose orbits are inside ours. They instead switch between being visible in the morning or the evening.
All very complicated for those ancient astronomers!
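A toy sketch of that lapping effect, assuming circular, coplanar orbits with textbook radii and periods -- not a real ephemeris, just enough for the retrograde intervals to show up:

    import numpy as np

    # Circular, coplanar toy orbits (radii in AU, periods in years).
    a_earth, T_earth = 1.000, 1.000
    a_mars,  T_mars  = 1.524, 1.881

    t = np.linspace(0, 3, 2000)                          # three years of samples
    earth = a_earth * np.exp(2j * np.pi * t / T_earth)   # positions in the ecliptic plane
    mars  = a_mars  * np.exp(2j * np.pi * t / T_mars)

    # Geocentric longitude of Mars = direction of Mars as seen from Earth.
    lon = np.unwrap(np.angle(mars - earth))

    retrograde = np.diff(lon) < 0     # longitude decreasing = apparent backward motion
    print(f"Mars appears retrograde ~{retrograde.mean():.0%} of the time in this toy model")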
Right on, thanks for making this point - "Also, IIRC, the predictions made by heliocentric model were less accurate than geocentric ones." - that was my recollection also: the epicycles had greater predictive power and accuracy.
"Physicists are “stuck” with existing theories not because they like them, but because they work so well it’s hard to invent something that even works equally well"
Yes, but a hundred years ago we were "stuck" with another worldview that seemed to explain everything fine, and I assume there was big resistance from the establishment to adopting new ideas. But then the old scientists died, and the resistance got weaker. So we might look back at today after 100 years and see a similar situation. No one says that the ideas of 100 years ago were all wrong, just not as true as the current ones.
One hundred years ago there were many things that did not have any real explanation -- classical physics was not "stuck" and was not "explaining everything fine". Our understanding of something as common as metals and insulators depends heavily on quantum mechanics. That's not even counting things like spectral lines, the stability of atoms, etc, etc that were unexplained in 19th century physics, and explicitly known at the time to be at odds with classical physics.
While worldviews can be overturned and paradigms can be shifted, it is (significantly) harder than it was before. We simply know more now, and have a better understanding of our limits. So whatever new framework has to supersede our current framework has more ground to cover than it did even in the recent past. This is exacerbated by the fact that, in physics, almost everything outside the early universe and black holes (which aren't easily accessible experimentally) seems to conform to our current framework -- there is both more to fit in and less data to work with.
No, a touch over 100 years ago Einstein came up with a new set of equations that more precisely describe the movement of objects. Newtonian physics is still valid and still taught at schools since it adequately explains the movement of objects at an approximate level that is good enough for most people most of the time. But when dealing with objects at astronomical scales, Newtonian physics starts to break down.
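A classic worked number for that breakdown (my illustration, using the leading-order general-relativistic perihelion-advance formula and Mercury's published orbital elements):

    import math

    GM_sun = 1.327e20            # m^3/s^2
    c      = 2.998e8             # speed of light, m/s
    a, e   = 5.79e10, 0.2056     # Mercury: semi-major axis (m), eccentricity
    T_days = 87.97               # orbital period, days

    dphi = 6 * math.pi * GM_sun / (c**2 * a * (1 - e**2))   # radians per orbit
    orbits_per_century = 365.25 * 100 / T_days
    arcsec = dphi * orbits_per_century * (180 / math.pi) * 3600
    print(f"{arcsec:.0f} arcsec/century")   # ~43", the anomalous perihelion advance of
                                            # Mercury that Newtonian gravity leaves
                                            # unexplained and general relativity predicts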
Likewise quantum mechanics doesn’t replace General Relativity. It compliments it.
I think you misunderstand. The work that Einstein did on quantum mechanics was indeed "complimentary", i.e. provided for free as a courtesy. (Although it could be argued he actually got paid for it later with Nobel prize.)
> But then the old scientists died, and the resistance got weaker.
Hum... The old scientists who (very vocally, in that case) resisted change were the same ones who uncovered the problems with the old models and laid out the first theories on how to fix those problems. Those things are way more complex than simple quotes and labels can communicate.
New models got adopted once, after a lot of work, people created ones that worked better. Not a moment before. Those better models didn't meet resistance from the established physicists.
No, this is historically incorrect. Quantum Foundations was strongly disfavoured as a research pastime for decades - not because Shut Up and Calculate gets the right answer (it doesn't for many problems, including those that involve gravity) but because the risks of failure and obscurity were too high, there were few academic champions, it was seen as academically fringe, and the potential rewards for bolting something new onto the Standard Model without fundamentally changing its assumptions were much higher.
There's also been the - likely incorrect - belief that different models are too hard to distinguish experimentally.
So there's been a process of continuous refinement of existing theories which are known to be incomplete, and no concerted and sustained attempt to solve foundational philosophical problems - which is the level that Einstein, Newton, and other pioneers operated at.
I am wondering why you think the belief that different models are too hard to distinguish experimentally is incorrect - after all, the achievement of such a distinction would seem to be highly motivating. To take a historical example, the publication of Bell's inequality motivated a successful program leading to its experimental verification; do you have in mind some potential experiment to distinguish between models that is being wrongly ignored on the grounds that it is too hard to pursue?
An alternative explanation for quantum foundations being in limbo is that it is extremely difficult to come up with alternatives that offer a possibility of verification.
Update: writing this reminded me of [1], in which a simple experiment by Shahriar Afshar, that arguably challenged one tenet of the Copenhagen interpretation, provoked a disturbingly over-the-top response, which supports your position on how work on quantum fundamentals is opposed (though, personally, I doubt it succeeds in challenging the Copenhagen interpretation. Interestingly, the opponents of Afshar's interpretation do not all agree on why they think it is wrong.)
> unless you can provide something with more predictive power than the existing/widely accepted ones, it's a good idea to hold the critique.
Part of the author's premise is that a more correct theory could have less predictive power out of the gate and might not be pursued as a result.
Why wouldn't it be pursued? Is string theory not pursued by "mainstream science" because (at least for now) it has less predictive power than the Standard Model?
I think the criticism of Occam's razor is valid. Occam's razor comes at a cost, and denying that would be wrong. It brings many advantages, though, which outweigh the disadvantages.
Sure it is valid, but in reality most scientists don't religiously stick to Ockham's razor or oppose alternative theories that give correct predictions.
If scientists really did stick hard to Ockham's razor, Loop Quantum Gravity, String Theory, and many other theories wouldn't have been intensively studied for the past 50 years. Neither would the development and study of interpretations of quantum mechanics (which, btw, has yielded results in Quantum Information Theory).
It’s just that constructing something correct and new IS really hard.
I still haven't understood why people keep saying that there is randomness in QM.
The so-called "wave function collapse" isn't really part of QM; it's duct tape that we have applied to stick QM together with "Classical Physics" and our pre-existing assumptions about human consciousness. I don't think we should consider it "real" or "true".
Without the wave function collapse, there is no randomness in QM.
You have it wrong: QM is the only physical theory that has randomness as an inherent part, as compared to e.g. thermodynamics, where randomness is due to lack of information. It is proven (see the Bell inequalities) that the randomness in QM isn't due to lack of information.
You can't just wave away the collapse mechanism; what do you make of the double slit experiment? Isn't the target "real" enough?
What you are saying is that whenever we, the observer, leave the QM model we use the collapse as a computational trapdoor function? Sounds like an interesting point of view.
But would that not also imply that we should be able to measure the quantum world with quantum devices? Say we have a quantum property that is extremely close to p=0.5. If we could invent a device to replicate that property perfectly and measure it repeatedly, we could then estimate ever more accurate bounds for the "true" value of p, no?
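A minimal sketch of that idea, treating each measurement as an independent Bernoulli trial with a hypothetical p just above 0.5 (whether a real device could be prepared and measured this way is exactly the open question):

    import numpy as np

    rng = np.random.default_rng(0)
    p_true = 0.5003              # hypothetical "true" value, just above 0.5

    for n in (10**3, 10**5, 10**7):
        hits = rng.binomial(n, p_true)
        p_hat = hits / n
        half = 1.96 * np.sqrt(p_hat * (1 - p_hat) / n)   # ~95% normal interval
        print(f"n={n:>8}: p_hat = {p_hat:.5f} +/- {half:.5f}")
    # The interval shrinks like 1/sqrt(n), so resolving a deviation of size eps
    # from 0.5 takes on the order of 1/eps^2 repetitions.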
> QM is the only physical theory that has randomness as an inherent part
It's either inherent randomness or just a deep hole in the whole thing (similar to the alien chess thought experiment problem). Personally I choose to believe that the theory is just incomplete because nobody can even define what a "measurement" really is, meaning in which cases what we do is a "measurement" and in which cases it is not a "measurement". I also think that this is what people like Feynman refer to when they say things like "nobody understands QM", it's actually "nobody understands the wave function collapse", the rest is just maths.
My layman feeling wrt. QM, and the Copenhagen school in particular, is that we're searching for too computationally simple mental models. Most other areas of physics - like GR, SR, thermodynamics - can get away with aggregating matter into points, perfect spheres, etc. because they're working at macro scale, but QM is trying to deal with the smallest bits of our reality. Now the boundary between QM and "classical physics" is one where your quantum system will interact with 10^{double digit} other quantum-relevant bits. I have a feeling that searching for what constitutes "a measurement" in such a scenario is missing the point, and even talking about the macro system being entangled with the test system is pretty much skipping over all the interesting bits.
It is, but that's in interpretations that also don't have the concept of "wave function collapse" in them. WFC is a feature of interpretations that consider measurement to be something ontologically special.
If you don't consider measurement ontologically special, then you need to somehow derive a physically meaningful Born rule without reference to measurement, which so far is something that AFAIK has only been accomplished in theories with large amounts of nonlocality and extra assumptions. The idea that people cling to the obviously false projection postulate out of obstinance is really strange to me, there just aren't very good alternatives available (at least not with the math fully worked out).
I would love to consider measurement something ontologically special, but it's not possible because there is no well-defined definition for what a measurement is.
The definitions I have found always invoke the presence of a "classical system"/"observer".
But that just kicks the can down the road, because there is no well-defined definition of a "classical system" either.
Sure. Everyone agrees Copenhagen is just kicking the can down the road. I'm just saying, let's not act like we have a ton of viable theories to fill out the rest of the road; in the meanwhile, we still have to perform measurements and make predictions, and the projection postulate is handy for that.
(It would help tremendously if we ever measured quantum states that weren't "collapsed", but as we've never done this so far it makes most of the stochastic collapse stuff hard to justify, even if it seems intuitively like the right approach).
The Turing machine as an abstract concept is known not to admit a solution to the halting problem. There's no question about that; the only question is how the abstract concept's assumptions pertain to the real world (infinite tape? maybe a problem, maybe not; is the execution speed bounded? or maybe we can somehow count to infinity by exponentially increasing the speed? things like that).
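(For reference, the "no solution" part is the standard diagonal argument; here is a sketch, where the hypothetical halts() is the decider assumed to exist for the sake of contradiction:)

    def halts(f, x):
        # Hypothetical decider: returns True iff f(x) halts.
        raise NotImplementedError   # assumed to exist, for the sake of argument

    def paradox(f):
        if halts(f, f):      # if f(f) would halt...
            while True:      # ...loop forever
                pass
        return None          # ...otherwise halt immediately

    # paradox(paradox) halts iff it doesn't halt, so no such halts() can exist.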
On the other hand nobody has any clue what the quantum measurement / wave function collapse actually is. There are theories/interpretations but no truly satisfying answers in the same way as for example Newton's equations were a satisfying answer to the elliptical movement of planets, even though we later found out in the 20th century that F ~ 1/r^2 was actually an approximation.
We simply don't know, and we have no idea when we shall know.
In the Everett/Many Worlds interpretation the appearance of randomness can be explained as an emergent phenomenon resulting from not being able to predict which part of the wave function we will end up in before running an experiment.
The Everett/Many Worlds interpretation cannot reproduce the predictions of quantum mechanics without extra assumptions (e.g. the Born rule) that don't have any physical basis within the context of MWI.
Yes, you are right to point this out. There are some important details that are still being debated. Personally my impression is that the debate has advanced enough to the point where MWI can’t be outright dismissed based on this argument. There are multiple plausible explanations and the remaining difficulties have more to do with philosophy than physics.
Edit: To give one example of an approach that I think is promising: We start by describing the observer and environment through a density matrix (a probability distribution over possible wave functions) and introduce an interaction with a quantum system (e.g. a spin). Given a reasonable interaction, you can show that the entanglement in the combined state (observer, environment and spin) leads to the system approaching a state that is a probability distribution of entangled states where each probability corresponds to the Born rule. Interestingly in this case the probabilities emerge from our lack of knowledge about the microstate of the observer/environment, so it’s actually thermodynamic uncertainty.
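Not the full argument above (which involves a thermal observer/environment), but a toy numpy sketch of the decoherence step it relies on: entangling a qubit with even a modest environment drives the system's reduced density matrix toward a diagonal matrix whose entries are the Born weights. All numbers here are made up for illustration.

    import numpy as np

    # System qubit: alpha|0> + beta|1>, Born weights |alpha|^2 = 0.3, |beta|^2 = 0.7
    alpha, beta = np.sqrt(0.3), np.sqrt(0.7)

    n_env, theta = 12, 1.0       # environment qubits and per-qubit "kick" angle

    # Environment states: |E0> = |0...0>; |E1> = (cos(theta)|0> + sin(theta)|1>) on each qubit
    e0 = np.zeros(2**n_env); e0[0] = 1.0
    single = np.array([np.cos(theta), np.sin(theta)])
    e1 = single
    for _ in range(n_env - 1):
        e1 = np.kron(e1, single)

    # Combined state after the interaction: alpha|0>|E0> + beta|1>|E1>
    psi = np.concatenate([alpha * e0, beta * e1]).reshape(2, 2**n_env)

    # Partial trace over the environment -> reduced density matrix of the system
    rho_sys = psi @ psi.conj().T
    print(np.round(rho_sys, 4))
    # Diagonal ~ (0.3, 0.7); off-diagonals ~ cos(theta)**n_env, i.e. vanishingly
    # small for a large environment: decoherence in the measurement basis.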
I'm not dismissing MWI, I'm just saying that current formulations either don't reproduce quantum mechanics or don't really address the existence of randomness within quantum mechanics.
While I intuitively like the statistical approach you mention, under it the Born rule holds only approximately, so it should be theoretically possible to observe entangled states, which we have never done -- i.e. it produces different predictions from the Copenhagen interpretation of quantum mechanics, which means it's not strictly a different interpretation but its own falsifiable theory. Like I said in another comment, if we ever do observe entangled states directly, people will jump on board one of these alternate explanations like lightning. But until we do, the question of why we never ever observe anything that doesn't look like collapse still needs mathematical justification.
I am not aware of any interpretation that has the Born rule without assuming something equivalent to it. For example de Broglie–Bohm theory requires you to assume the original distribution of the particles follows the Born rule. QBists just postulate it, consistent histories just postulates it etc.
I am not particularly a many-world proponent, but I do not think it is fair to level this accusation as an issue for many worlds without bringing up that every other interpretation has the same "flaw".
AFAIK it can in fact be derived through Gleason's Theorem under the assumption of noncontextuality, so I don't think it's fair to say that nobody can derive the Born rule without assuming it (many people have issues with noncontextuality but that's very much a philosophical thing). The thing you have to demonstrate is that a probability measure actually connects to physical observables in some way, and this is the part that is difficult (and as far as I can tell MWI does nothing to resolve this conundrum).
Gleason's theorem also makes a big assumption when you require that the measurement outcomes are associated with POVM elements (or projection operators if you don't like POVMs). I lumped that in with "assuming something equivalent to it" since Gleason's theorem (at least by my understanding) is exactly the statement that assuming non-contextuality+POVMs/POMs is equivalent to assuming Born's rule.
Although it's really cool, I don't think Gleason helps you tie any particular interpretation to the Born rule, since you still have to make a jump to tie your measurement outcome to a POM/POVM element.
As far as your last sentence goes, this is sort of what I was trying to argue in my comment above. The "part that is difficult" that you identify as being unresolved by MWI is also completely unresolved by pilot wave theory, or qbism or consistent histories or any other interpretation (as far as I am aware).
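For readers following along, a minimal sketch of the Born-rule form being discussed, p(i) = Tr(rho E_i) for POVM elements E_i, with a made-up qubit state and a two-outcome POVM:

    import numpy as np

    rho = np.array([[0.7, 0.2],
                    [0.2, 0.3]])     # made-up density matrix: Hermitian, positive, trace 1

    E0 = np.array([[0.8, 0.1],
                   [0.1, 0.2]])      # a positive operator
    E1 = np.eye(2) - E0              # completeness: E0 + E1 = I

    p0 = np.trace(rho @ E0).real     # Born rule in the Tr(rho E) form
    p1 = np.trace(rho @ E1).real
    print(p0, p1, p0 + p1)           # non-negative probabilities summing to 1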
I think we're in agreement there (though IMO it's highly nonobvious that noncontextuality+POVMs automatically get you the Born rule, so I don't think it's "cheating" to assume that--obviously any set of axioms that let you derive Born will have such a property!). I was mostly saying, I don't think MWI helps us understand where the probabilities come from any more than any other interpretation--you need something more. And if you can't identify where the probabilities come from, then saying your theory is "deterministic" rings fairly hollow to me.
What is usually misunderstood about QM is that QM isn't a model of the world.
QM is a model of "what we can observe from the world" based on "what we can observe from the world".
QM is a model of "accessible" information.
What laymen usually want is to understand how the world evolves.
What QM physicists tell them is that there exists some inaccessible information, but using the accessible information we have, we know how to predict all the accessible information (albeit stochastically).
The typical example to help computer scientists understand is the seed of a pseudo-random generator in an online casino. The players will never be able to access the seed, therefore the best they can do is make decisions based on the values of the generated random numbers they observe and their probabilities.
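A tiny version of that analogy in Python (the seed value is of course arbitrary):

    import random

    SEED = 12345                 # the casino's hidden "microstate": players never see it

    house = random.Random(SEED)
    rolls = [house.randint(1, 6) for _ in range(10)]
    print(rolls)   # fully determined by SEED, yet to the players it is
                   # indistinguishable from genuine randomness; the best they can
                   # do is reason about the frequencies of the rolls they observe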
Bell inequalities are a consequence of this modelling. They are a refurbishing of Boole's inequalities, a theorem about probability which only binds those who use probability.
The usual fallacy going forward is to say QM is a non-local theory: classical local theories can't violate Bell inequalities, Bell inequality violations are observed in the real world, therefore the world is not local...
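For concreteness, a small sketch of the CHSH form of those inequalities: every local deterministic strategy obeys the Boole-type bound |S| <= 2, while the textbook singlet correlations reach 2*sqrt(2). The sketch only reproduces the numbers; how to interpret the violation is exactly what is being debated here.

    import itertools, math

    # CHSH combination: S = E(a,b) - E(a,b') + E(a',b) + E(a',b')

    # 1) Local deterministic strategies: each side pre-assigns +/-1 outcomes.
    best = max(abs(Aa*Bb - Aa*Bb2 + Aa2*Bb + Aa2*Bb2)
               for Aa, Aa2, Bb, Bb2 in itertools.product([-1, 1], repeat=4))
    print("local deterministic bound:", best)    # -> 2 (the Boole/Bell bound)

    # 2) Quantum singlet correlations: E(a,b) = -cos(a-b), standard angle choice.
    E = lambda x, y: -math.cos(x - y)
    a, a2, b, b2 = 0.0, math.pi/2, math.pi/4, 3*math.pi/4
    S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
    print("singlet value:", abs(S))              # -> 2*sqrt(2) ~ 2.83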
This is all hand-waving. The reality is no one knows how any of this works, and there are more than twenty different interpretations.
If you can prove conclusively there's no randomness and no collapse and no need for either, a Nobel Prize awaits. If you can't - most likely - one opinion is as (in)valid as any other for now.