Nobel Prize in Physics Awarded to Alain Aspect, John Clauser and Anton Zeilinger (nobelprize.org)
263 points by solarist on Oct 4, 2022 | 126 comments



Huh. I thought Zeilinger already had a Nobel.

He certainly deserves it. He is an incredible experimentalist:

- Macroscopic quantum interference with molecules like C70.

- Two-photon orbital angular momentum entanglement with quantum numbers differing by 600.

- Multi-particle entanglement.

- Quantum teleportation.


... and his work testing quantum mechanics ranges from the far-out to the downright weird:

Cosmic Bell Test using Random Measurement Settings from High-Redshift Quasars (2018): https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.12...

>This experiment pushes back to at least ∼7.8 Gyr ago the most recent time by which any local-realist influences could have exploited the “freedom-of-choice” loophole to engineer the observed Bell violation, excluding any such mechanism from 96% of the space-time volume of the past light cone of our experiment, extending from the big bang to today.

Challenging local realism with human choices https://www.nature.com/articles/s41586-018-0085-3

>Bell himself noted this weakness in using physical setting choices and argued that human ‘free will’ could be used rigorously to ensure unpredictability in Bell tests. Here we report a set of local-realism tests using human choices, which avoids assumptions about predictability in physics. We recruited about 100,000 human participants to play an online video game that incentivizes fast, sustained input of unpredictable selections and illustrates Bell-test methodology ...

>... Project outcomes include closing the ‘freedom-of-choice loophole’ (the possibility that the setting choices are influenced by ‘hidden variables’ to correlate with the particle properties).


>Bell himself noted this weakness in using physical setting choices and argued that human ‘free will’ could be used rigorously to ensure unpredictability in Bell tests

Please don't flame me if this is an outrageous question, because I readily admit I know nothing about quantum physics, but isn't it quite a large limitation to assume human free will as a given? Or is "free will" meant as a proxy for "pseudo-random", and that's enough for this experiment?


See here: https://www.pnas.org/doi/10.1073/pnas.1002780107

It has to do with true randomness in the measurement aspect, not with human free will.


How is “free will” different from the “freedom of choice” extensively discussed in that link - “as crucial as realism and locality in the derivation of Bell’s theorem”?


There is a great NOVA documentary about the High-Redshift Quasars experiment - Einstein's Quantum Riddle (2019).

https://www.youtube.com/watch?v=Mn4AwineA5o


Well, I have some reading to do, because that meant very little to me, yet I am very intrigued.


They probably also got scared that he would die before he could ever get it, same as with Penrose two years ago. If Zeilinger had gone without a Nobel, that would almost be a disgrace for the prize.


Also Bell (of Bell's theorem) was apparently on the short list for the Nobel prize the year he died.


What a pity. Bell's theorem impressed me so much that I consider it one of the most ingenious (thought) experiments/proofs ever, and it has huge significance for the interpretation of quantum physics (to me it resonates with Richard Feynman's saying "I think I can safely say that nobody understands quantum mechanics").


Zeilinger got the Wolf Prize in Physics in 2010, which is often considered the second most prestigious award after the Nobel prize.


The three of them did. (I was unaware of that prize until now - or maybe I had forgotten.)


I thought all three of them had!


Some fun background on the history and politics involved: https://arxiv.org/ftp/physics/papers/0508/0508180.pdf . Clauser is apparently the guy who took most of the arrows in the back for advocating experimental tests of quantum mechanics. It seems that he and Feynman weren't big mutual admirers: https://www.aip.org/history-programs/niels-bohr-library/oral... .


John Clauser is a raging asshole, and isn't much of an admirer of anybody, let alone "a ******* moron like Feynman." I say this as a friend of his, and I'm nearly certain those are the exact words he'd choose to describe both himself and Feynman. Interestingly, I feel there's a certain good-natured lightheartedness to Clauser's 'arrogant asshole genius' persona, as if he is consciously acting that way as a comedic performance. This Nobel Prize will feed perfectly into it; he's going to love it.

Edit: I'm normally not one to complain about well deserved downvotes, but I think people may be missing my joking tone, and context here. Once a few people that also know Clauser give thoughtful replies in this discussion, it will make more sense.


Never heard of any of them until now; it looks like phenomenal work that I don't even understand, lol. I just had a morbid thought related to this. It's interesting how even the best scientists of our times will just be forgotten names in long-term history. Even among the greatest scientists, only a few names like Einstein, Tesla or Newton remain in public memory; the rest are never even known. If you want history to remember you, being even the best scientist in the world isn't enough. It shows how inconsequential the average man is in the big picture.


Oliver Burkeman coined the term "Cosmic Insignificance Therapy", which is related to this realization.

"…it comes as a relief to be reminded of your insignificance: it’s the feeling of realizing that you’d been holding yourself, all this time, to standards you couldn’t reasonably be expected to meet. And this realization isn’t merely calming but liberating, because once you’re no longer burdened by such an unrealistic definition of a “life well spent,” you’re freed to consider the possibility that a far wider variety of things might qualify as meaningful ways to use your finite time. You’re freed, too, to consider the possibility that many of the things you’re already doing with it are more meaningful than you’d supposed—and that until now, you’d subconsciously been devaluing them, on the grounds that they weren’t “significant” enough. (Burkeman 2021, p 212)"


Interesting take, and somewhat at odds with Douglas Adams in Hitchhiker's Guide:

> The Total Perspective Vortex derives its picture of the whole Universe on the principle of extrapolated matter analyses.

> To explain — since every piece of matter in the Universe is in some way affected by every other piece of matter in the Universe, it is in theory possible to extrapolate the whole of creation — every sun, every planet, their orbits, their composition and their economic and social history from, say, one small piece of fairy cake.

> The man who invented the Total Perspective Vortex did so basically in order to annoy his wife.

> Trin Tragula — for that was his name — was a dreamer, a thinker, a speculative philosopher or, as his wife would have it, an idiot.

> And she would nag him incessantly about the utterly inordinate amount of time he spent staring out into space, or mulling over the mechanics of safety pins, or doing spectrographic analyses of pieces of fairy cake.

> “Have some sense of proportion!” she would say, sometimes as often as thirty-eight times in a single day.

> And so he built the Total Perspective Vortex — just to show her.

> And into one end he plugged the whole of reality as extrapolated from a piece of fairy cake, and into the other end he plugged his wife: so that when he turned it on she saw in one instant the whole infinity of creation and herself in relation to it.

> To Trin Tragula’s horror, the shock completely annihilated her brain; but to his satisfaction he realized that he had proved conclusively that if life is going to exist in a Universe of this size, then the one thing it cannot afford to have is a sense of proportion.

- Douglas Adams, Hitchhiker's Guide to the Galaxy


God damn I miss Douglas Adams. Taken from us way too soon. Way, way, way too soon.


Relatability. It's pretty difficult to be memorialized if the average person cannot understand or identify with your work. Perhaps we can be more forgiving of the burdens and limitations the average person carries, so far removed from quantum effects and scientific prizes.


Aspect is semi-famous in France; he gives a lot of talks on his early work, introducing quirky experiments that show unintuitive behaviours of quantum physics (present interactions can influence the past "path" taken by a wave-particle: there is no actual path until the final collapse, but the choice could only have been made before the observation, yet the observation was random, so how did the particle "know"?). I wish there were more English translations of his talks, because they're mind-bending.


For those interested in following a Quantum Optics course of his: https://www.coursera.org/instructor/~6471336


If you want to be remembered in history, it’s much easier to be a mediocre sports athlete than to be the best scientist who actually helps humanity. We are a ridiculously stupid species


Who you are remembered by is a matter of context. A lot of people like keeping up with sports, so that's why they know this trivia. OTOH, scientists in field X will know all the seminal work in their field. You can make a big impact in a field and be totally unknown outside of it; most scientists probably prefer it that way.


Or maybe what's stupid is to set a goal to be remembered in history in the first place :)


Agreed. I'm not too concerned with being remembered by history or not, I am content in having chosen to do good with my time here.


Not true really; name an athlete from the 1400s.


I can name a pro athlete from the 1100s and 1200s, at least: https://en.wikipedia.org/wiki/William_Marshal,_1st_Earl_of_P... ! Not disputing your point though.


It's even easier to do something atrocious, like being a serial killer.


>We are a ridiculously stupid species

Can you explain what you mean by this?


Not the OP of the comment, but it seems to be a common point of view that the most noble (and in some cases only valid?) pursuit of the human species is science/technology. I do find the comparison between "mediocre sports athlete" and "best scientist" interesting through the lens of aid to humanity. One could make an equally valid argument that a mediocre sports athlete brings more immediate utility to people in terms of entertainment, inspired hope, and economic spend. In short, value is relative. Being a mediocre professional athlete is still a tremendous achievement. If your goal is to be remembered, you must appeal to the masses. That doesn't mean our species is stupid; it just means we are social.


No, the point of my comment was that most humans appreciate and remember the guy who doesn't give two shits about them and would rather spend his money on bling and mansions, while the scientist who actually helps humanity tremendously is not even noticed.


Do all scientists create with the goal of helping humanity, though? I get the impression that knowledge for the sake of knowledge can be common too, and that the practical applications can be a convenient but unintended side effect.


This is something I think about a lot since reading the book Sapiens. Homo sapiens has been around for 300,000 years, and as of today we remember only a handful of names, and only from the very recent past. Who knows how long our species will exist, but one thing is for sure: no one will remember most of us normies.


Well, you're remembered for having an immense impact for generations to come, not necessarily for winning an award. Einstein is remembered for transforming physics, not winning the Nobel, just like MLK is remembered for the civil rights movement, not winning the Nobel Peace Prize.


Some folks are remembered just for the Nobel. Elias Canetti comes to mind - his only meaningful work was non-fiction, and therefore didn’t qualify.

Winston Churchill is similar: remembered as a political leader, but won the Nobel in Literature, not Peace.

Einstein’s Nobel was for Brownian motion, which isn’t what pops to mind when you think “Einstein.”


> Einstein’s Nobel was for Brownian motion

"for his services to Theoretical Physics, and especially for his discovery of the law of the photoelectric effect"


The guy presenting the topic called Erwin Schrödinger "Edwin", both in writing on his slides and verbally.

I was bewildered to see a blunder of that magnitude in the official Nobel presentation...


There is an even bigger mistake:

>The Nobel Committee makes a common mistake in the press release, implying that Bell rules out hidden variables. It's only local hidden variables that are ruled out (absent superdeterminism). Bell was a big supporter of Bohmian non-local hidden variables!

from Sean Carroll https://twitter.com/seanmcarroll/status/1577254806208798722


From another tweet of his:

Bell on Bohm - Sheldon Goldstein - https://sites.math.rutgers.edu/~oldstein/papers/bb.pdf


Why is that a blunder of any important magnitude? As a physicist I do not think so. It's actually pretty common to make mistakes like that in physics. And physicists are not particularly careful about their presentations, even very important ones. We don't have the sense to care much about this things. "Official Nobel presentation" is still seen as "just a talk"


Not the reaction I got from my colleagues, they were all just like "WTF, how can this happen?"


Were they all German Physicists? Leave Hans alone, he's chill.


Did you add a typo as a joke?


Someone’s job in PR is to get the facts right in press releases. It's like referring to the first American President as “Gerald Washington”.


Was there a PR person to vet this presentation?


Clauser and Aspect did their experiments in the early 1970s and 1980s. That's a crazy long wait!


There's a quote, I believe by an economics Nobel (memorial) prize winner, that goes like "It's a strange feeling to be honored in your 80s for work that you did in your 20s" - I couldn't find who said it after a bit of searching.


In uni I remember one of my professors telling us that the time gap between the awarding of the Nobel prize and the discovery for which it is awarded keeps getting longer (he was talking specifically about physics). I remember he also lamented George Gamow never getting a Nobel prize. (edit: grammar)


That's definitely the main cause, but in the case of Clauser at least there's the added wrinkle that for the first few years his work wasn't very popular with other physicists. His PhD advisor was writing recommendation letters specifically telling people not to hire Clauser for work on QM experiments: https://arxiv.org/ftp/physics/papers/0508/0508180.pdf .


This is sad. IIRC the goal (and the reason it only goes to living people) is to throw good money after good, in the hope of more to come, not simply to reward past advances and fund retirements.


The goal was to scrub Nobel's reputation clean with the reputations of great living scientists.


I created an interactive simulation of the Bell inequality violation (Clauser-Horne-Shimony-Holt form): https://lab.quantumflytrap.com/lab/bell-inequality

Enjoy!
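For anyone who wants the numbers behind the simulation, here is a minimal numpy sketch (my own illustration, not the simulator's code) of the textbook quantum prediction for the CHSH quantity, using the singlet-state correlation E(a, b) = -cos(a - b):

  import numpy as np

  # Singlet-state correlation between analyzers at angles a and b.
  def E(a, b):
      return -np.cos(a - b)

  # Standard angle choices that maximize the CHSH violation.
  a1, a2 = 0.0, np.pi / 2
  b1, b2 = np.pi / 4, 3 * np.pi / 4

  S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
  print(abs(S))  # ~2.828 = 2*sqrt(2), above the local-realist bound of 2

Any local hidden variable model obeying Bell's assumptions is capped at |S| <= 2; quantum mechanics reaches 2√2 (Tsirelson's bound).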


The Nobel citation reads as if John S. Bell should’ve gotten the prize


According to Wikipedia, Bell may well have done if he'd lived a little longer: https://en.wikipedia.org/wiki/John_Stewart_Bell#Death


Can anyone ELI5 their achievements?


The press release has a fair summary: https://www.nobelprize.org/prizes/physics/2022/press-release...

Also here for a bit of history: https://en.wikipedia.org/wiki/Bell_test


Note there are two PDFs with more detail linked at the bottom under "Read more about this year’s prize":

Popular science background: How entanglement has become a powerful tool (pdf)

Scientific Background: “For experiments with entangled photons, establishing the violation of Bell inequalities and pioneering quantum information science” (pdf)


Popular Science Background: "Quantum mechanics’ most important resource" (Entangled quantum states)

From a quantum computing perspective, entanglement is kind of overrated.

Scott Aaronson: How Much Structure Is Needed for Huge Quantum Speedups?

https://arxiv.org/pdf/2209.06930.pdf

Entanglement is actually not mentioned in it at all.


Except for error correction, sensing beyond the classical limit, teleportation, key distribution, most forms of fault tolerance, cluster state computation, blind computation, one way computation, and more.


Sure, you are right.

This is just a computational perspective.

Gottesman-Knill Theorem:

"The theorem proves that, for all quantum algorithms with a speed up that relies on entanglement which can be achieved with a CNOT and a Hadamard gate to produce entangled states, this kind of entanglement alone does not give any computing advantage."

Entanglement alone is not sufficient for algorithmic speedup.

Again Wikipedia:

"The reason for the speed up of quantum computers is not yet fully understood"


Intuitively or naively there's some connection:

(1) the dimension of the tensor product space of n entangled qubits is 2^n.

(2) exponential speedup

but which?
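Point (1) is easy to see numerically; a toy sketch of how the state-vector dimension grows as qubits are adjoined (the catch, per Gottesman-Knill above, is that this dimension alone does not translate into a speedup):

  import numpy as np

  qubit = np.array([1.0, 0.0])       # single-qubit state |0>, dimension 2
  state = qubit
  for n in range(2, 8):
      state = np.kron(state, qubit)  # tensor on one more qubit
      print(n, state.size)           # dimension is 2^n: 4, 8, 16, ..., 128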


Aspect's experiment dates from the early 1980s, that's a long time coming!


The best way to understand what they did is to watch the so-called award lectures. Each laureate gives an hour-long lecture, which is videotaped, where they explain what they did. Google for "Nobel Prize award lecture".


That's a pretty huge experimental success: quantum mechanics is truly probabilistic, not deterministic but based on some hidden variable we haven't found (or cannot find).

Have I understood it correctly?


I like the essence of Quantum Strangeness here:

"To be more precise, what we shall show is that the particles’ response∗ to a certain type of experiment is not determined by the entire previous history of that part of the universe accessible to them."

John Conway, Simon Kochen: Free Will Theorem https://arxiv.org/abs/quant-ph/0604079


That is a lovely way of putting it. (I wish they'd never mentioned 'Free Will' since people get hung up on that)

Always keep in mind that Bell himself was quite a fan of Bohm's theory, which is the canonical hidden variable theory really. The question becomes one of characterising that 'strangeness': is it the violation of locality ('that part of the universe accessible to them'), is violating outcome independence palatable, but not parameter independence (this was Shimony's idea originally)? Lots of curious stuff.


> Why do we call this result the Free Will theorem? It is usually tacitly assumed that experimenters have sufficient free will to choose the settings of their apparatus in a way that is not determined by past history. […]

> We consider experimenters A and B performing the pair of experiments described in the TWIN axiom on separated twinned particles a and b, and assert that the responses of a and b cannot be functions of all the information available to them.

I don’t get it — this sounds like they assumed their conclusion.

There’s also a couple places they make strong assumptions about information geometry I’m not sure I agree with — namely, if you want to refute modern Bohm-inspired models, you need to account for radically non-Euclidean spacetime. (Where you have a much harder time with “space like separated”.)

> FIN is not experimentally verifiable directly, even in principle (unlike SPIN and TWIN). Its real justification is that it follows from relativity and what we call “effective causality,” that effects cannot precede their causes. […]

> Not all information in the universe is accessible to a particle a. In the light of FIN, information that is space–like separated from a is not accessible to a. The information that is accessible to a is the information in the past light cone of a.


Not exactly - they expressed an assumption that is too strong in this explanation. The assumption that is needed is only that measurement choice is not correlated with the generation of the particles themselves - not that it is not determined by past history.

That is why there was a lot of interest in doing cosmic Bell-like experiments: it is very hard to define a mechanism whereby the hidden property of the particle that left the quasar many billions of years ago also determined that Alice and Bob would set their apparatuses to measure these specific angles today. Theoretically this doesn't rule out the possibility, but it does mean at least that the theory would have to be significantly non-intuitive in its own right (whereas if Bell-inequality violations only happened in small-scale experiments, the theory that explained them could have been very simple indeed, such as some new wave that affected the measurement apparatus or some aspect of how we choose measurement angles).


But isn’t the Bohm model one where all particles are weakly correlated? E.g., there’s some kind of “whole universe” quasi-particle from the inflationary period?

The exact alternative to non-determinism is super-macro quasiparticles/correlations. Which this seems to assume away.


There is a difference between proving that all events share a causal relationship (which is an assumption of every mechanistic model of the universe), and proving that the causal relationship between distantly related events determines very precise restrictions that happen to match the predictions of QM.

To take it to a more human realm - we can all agree that if Caesar hadn't been killed by Brutus, the world would be so different that it's very unlikely both Biden and Putin would be presidents of their respective countries today - this is an almost trivially true statement, I would argue. However, if we observe that at every public appearance Biden and Putin wear costumes of the same color, it's very hard to come up with a theory that explains that their choice of costume is caused by Brutus killing Caesar.

And this is essentially what the super-determinism argument gets at: the same thing that caused this photon emitted by a quark to be polarized up 1 billion years after the Big Bang also caused Alice to measure the polarization of that particle along 30 degrees 13 billion years after the Big Bang.


But they’re carrying shared quantum state — why would it be surprising their actions were correlated?

E.g., that particle being in a particular state is correlated with Alice picking a particular measurement because the total system must maintain that quantum number. The particle which emitted the photon and the particles which give rise to Alice have carried that information since the inflationary period, and so the photon and Alice share that correlation now.

The conclusion seems to be “the correlation must be at least this old!” — but that’s exactly the claim being made.

So I’m not sure I understand the problem for Bohm-derived models.


Look at the difficulty of creating a quantum computer if you really think it's plausible that shared quantum state would survive for a few billion years and interactions with a whole galaxy.

Also, not sure what particular quantum number you think would have to be conserved and would influence Alice's decision of which way to configure her measurement apparatus - this would definitely require some new quantum property.


As I understand it, Bell showed that, if quantum mechanics is correct, then any deterministic hidden variable theory would either have to be non-local or non-causal.

Aspect et al. demonstrated experimentally that quantum mechanics is correct in the Bell sense.
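For reference, the core algebra behind that statement (my summary of the standard CHSH argument, not the commenter's): if local determinism assigns pre-existing values a, a', b, b' in {-1, +1} to the four possible measurement outcomes, then

  ab + ab' + a'b - a'b' = a(b + b') + a'(b - b') = +/-2

since one of (b + b') and (b - b') is 0 and the other is +/-2. Averaging over any distribution of hidden variables then gives |S| <= 2, which is exactly the bound the experiments violate.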


You understood correctly that quantum mechanics cannot be based on some hidden variable we haven't or cannot find. (Edit: oops, I re-read your comment and it seems you say the exact opposite thing? The point is that Bell's theorem rules hidden variables OUT. Here's an easily digestible video about this: https://www.youtube.com/watch?v=ZuvK-od647c )

About probability: no one knows. It might be that there is a "wave function collapse" that has a probabilistic outcome. But it might be that there is no such thing. Quantum systems that remain in the "quantum" realm are not probabilistic; probability enters only when you cross over to the classical world.

So it might be equally well, that the Everett interpretation (so-called "many worlds interpretation") is true, and the probability is something physicist Sean Carroll calls "self-locating uncertainty": https://www.preposterousuniverse.com/blog/2014/07/24/why-pro... That is, there are multiple "yous" that experience different outcomes, and you don't know which branch of the wavefunction you find "yourself" in, which "you" you are.


QM can't be based on LOCAL hidden variables. Hidden variables theories that are explicitly nonlocal are not ruled out. Separately, Superdeterminism eliminates both the requirement for nonlocality and the result that QM is inherently probabilistic.


> QM can't be based on LOCAL hidden variables.

... as long as we assume that the system being measured is uncorrelated with the choices of which measurements to make on it. :-)


Superdeterminism eliminates any requirement(s).


Oh, you are definitely correct. Pardon the inaccuracy.


> You understood correctly that quantum mechanics cannot be based on some hidden variable we haven't or cannot find.

That's not true. The only thing we know is that the above appears to be true, but since we do not actually know the mechanism behind quantum mechanics we will be in this limbo until (if ever) we find the actual rules.


Actually, the power of Bell's theorem is that it applies to any physical theory which attempts to explain observed correlations in experiments on entangled particles. It really does say "No matter what's behind quantum theory, we know it isn't like this".

I do have a small gripe with the above comment, in that Bell's theorem only rules out local hidden variables. Some take the view that Bohmian mechanics is a formulation of quantum theory which employs hidden variables, just explicitly non-local ones.


> I do have a small gripe with the above comment, in that Bell's theorem only rules out local hidden variables. Some take the view that Bohmian mechanics is a formulation of quantum theory which employs hidden variables, just explicitly non-local ones.

Superdeterminism also allows a local hidden variable theory without violating Bell's theorem:

> https://en.wikipedia.org/wiki/Superdeterminism

A 2013 interview with the Nobel Laureate Gerard 't Hooft on this topic:

> https://spookyactionbook.com/2013/10/07/does-some-deeper-lev...

A paper by Sabine Hossenfelder and Tim Palmer:

Sabine Hossenfelder and Tim Palmer; Rethinking Superdeterminism

> https://www.frontiersin.org/articles/10.3389/fphy.2020.00139...

In 2016, 't Hooft published a textbook on some specific points of superdeterminism:

Gerard 't Hooft; The Cellular Automaton Interpretation of Quantum Mechanics

> https://link.springer.com/book/10.1007/978-3-319-41285-6


It's true in a sense that you can get around Bell's theorem by supposing that Alice and Bob's measurement choices are strongly pre-correlated in a specific way. But then I feel you end up with this sort of bizarre universal conspiracy set up just to make humans believe in quantum theory.


If you have such an objection, you might be interested in section 4 (in particular sections 4.1 and 4.2) of the linked paper:

Sabine Hossenfelder and Tim Palmer; Rethinking Superdeterminism

> https://www.frontiersin.org/articles/10.3389/fphy.2020.00139...


> Bell's and similar examples that rest on arguments from fine-tuning (or sensitivity, or conspiracy) all implicitly assume that there is no simple way to mathematically express the allowed (or likely) initial states that give rise to the predictions of quantum mechanics.

I mean, this is the crux of it. You'd have to have some convincing way of explaining why states "give rise to the predictions of quantum mechanics" for all possible measurement choices yet to be made by anyone. It still feels conspiratorial to me, having read that whole section! If someone were to show me a simple mathematical expression as defined above, I would be open to it. As far as I can tell, all this article is saying is "maybe it's possible".


> If someone were to show me a simple mathematical expression as defined above, I would be open to it. As far as I can tell, all this article is saying is "maybe it's possible".

Finding such a simple mathematical expression is exactly what research in this direction is for. But obtaining one is rather the end result that one hopes for.


Sure. It's a fine and noble goal, but at this point it just looks... well, as the Wiki article you linked says, "as plausible, and appealing, as belief in ubiquitous alien mind-control".


But there are constraints on those hidden mechanics as well. I'm not sure hidden variable theories survive these constraints. Can someone more knowledgeable speak to this?


Congratulations to Alain Aspect. Long time coming, well deserved.


It's about time they recognized Zeilinger's work - he's been doing astonishing work clarifying and articulating the foundations of quantum mechanics for decades.


Have I seen any of the fruits of this labour in my daily life, or is this for stuff like string theory?


Their work is based on experimental verification of some very counter-intuitive aspects (!) of quantum theory. Just this puts them in a completely different world from string theorists, who still don’t know how to test their theory.

As mentioned elsewhere, their work is the foundation of quantum computing and quantum cryptography, which probably hasn’t directly affected your life yet but could very well do so in the not-too-distant future. Again, entirely unlike string theory.


Isn't Quantum Computing bullshit?


No, it isn’t. Whilst a lot of work (probably a couple of decades' worth) remains to be done for really commercially successful quantum computers, we’ve had experimental devices that can be used for research and development since the late 1990s.

We are currently where computing was in the early 1900s: critical innovations like transistors and ICs have not happened yet, but the concept of computers is much older than that, with examples such as Babbage’s analytical engine or Pascal’s calculator.


That is a strongly deserving selection.


Sabine Hossenfelder had predicted the winners just last year: https://www.youtube.com/watch?v=jNT9Hn-96rM&t=619s


Even in THE press conference of the Nobel Physics prize, one cannot expect to get the facts fully communicated clearly. Superdeterminism is a local hidden-variable theory that reproduces the predictions of quantum mechanics.

Edit: Sabine Hossenfelder has explained this much better than I ever can: See for example http://backreaction.blogspot.com/2021/12/does-superdetermini...


Superdeterminism isn't a theory, it's more of a choice of assumptions that could lead to a local hidden variable theory. No convincing model has been proposed thus far.

Other such assumptions that would allow local hidden variable theories are "a wizard did it" and the rejection of the principle of relativity. All three share that they have no convincing reasoning behind them and that they are largely incompatible with our model of falsifiable science.


Have people actually constructed a concrete example of a theory that's superdeterministic in a way that violates the Bell inequalities but is complex enough to contain Turing machines driving the tests?

The requirement that you can encode Turing machines might sound silly, but I really think it's the core of the issue. The ability to make computers implies a certain level of hard-to-control behaviour, due to the existence of things like the halting problem, pseudo-random number generators and cryptographic hash functions.

My understanding is that superdeterminism is one of those loopholes where it's easy to say it's a problem, but actually providing a plausible concrete model where it's a problem is very hard.


> Have people actually constructed a concrete example of a theory that's superdeterministic in a way that violates the Bell inequalities but is complex enough to contain Turing machines driving the tests?

I am not aware that such a theory exists, but the Nobel laureate Gerard 't Hooft made some possible first baby steps in this direction:

Gerard 't Hooft; The Cellular Automaton Interpretation of Quantum Mechanics

> https://link.springer.com/book/10.1007/978-3-319-41285-6


Turing machines will exist in even very simple universes, because they are very simple. Instead, physicists usually use complexity theory to analyze any proposed new universe or theory for our universe. If, for example, the new theory ensures P=NP, then it is suspect. There is a lot of work done on this, but I don't know a good reference off the top of my head.

However, sometimes, people do think about Turing Machines. Here is one paper by Scott Aaronson http://www.scottaaronson.com/papers/ctchalt.pdf


Superdeterminism is unscientific as it's utterly untestable. It's a cop-out.


It's untestable by the definition of the term "superdeterminism" and the meaning of what it implies -- just as “1 + 1 = 3” is something that cannot make sense, by the very definition and meaning of its symbols.

But this doesn't make it wrong. Superdeterminism is a very neat way of resolving physical paradoxes, such as the Wigner's Friend paradox: Any experimenter’s decision to take a measurement of a certain physical phenomenon is predetermined by events, so the experimenter can attain no outcome but the one he does eventually attain, and nothing is in superposition. Neither Heisenberg's uncertainty principle nor Bell's inequality render this condition impossible, as Bell himself was well aware.


> Superdeterminism is unscientific as it's utterly untestable. It's a cop-out.

Every interpretation of quantum mechanics is utterly untestable, or more precisely, each one is exactly as testable as every other interpretation because they all explain the data equally. That's why they're "interpretations".


Weird thing to comment on the Nobel prize announcement for QM experimentalists.


People have a weird understanding of quantum mechanics and its interpretations.


> Superdeterminism is unscientific as it's utterly untestable.

The same objection can be stated for the many-worlds interpretation of quantum mechanics:

> https://en.wikipedia.org/wiki/Many-worlds_interpretation


Not sure if you’re defending the former or attacking the latter...


I cannot understand why superdeterminism is so much frowned upon while the many-worlds interpretation is considered to be perfectly fine.

I rather think the same standard should be applied to both.


One key difference is that many-worlds is an interpretation of quantum theory, whereas super-determinism refers to a class of physical theories with no known plausible examples.


MWI is more suggestive to pop-sci readers and ill-defined enough to mean different things to different people, I guess.


I won't defend popsci presentations, but formally MWI is pretty precisely defined. It's defined by assuming the postulates of quantum mechanics without including the Born rule. States are unit vectors, operations are unitary matrices, systems combine using the tensor product. That's many worlds.

It's sort of like constructivist mathematics in that it involves dropping an extremely useful axiom, and then rederives basically all the same results in a more roundabout way.


If the "precise definition" of MWI doesn't go beyond "QM mathematical formalism when you get rid of its link with the physical world" what's the use of it?

https://www.quantamagazine.org/why-the-many-worlds-interpret...

"Attempts to explain the appearance of probability within the MWI come down to saying that quantum probabilities are just what quantum mechanics looks like when consciousness is restricted to only one world. [...] What the MWI really denies is the existence of facts at all. It replaces them with an experience of pseudo-facts (we think that this happened, even though that happened too). In so doing, it eliminates any coherent notion of what we can experience, or have experienced, or are experiencing right now. We might reasonably wonder if there is any value — any meaning — in what remains, and whether the sacrifice has been worth it."


Why are you assuming that measurement is the link between quantum mechanics and the real world? Quantum mechanics doesn't need to be linked to the real world; it is the real world.

That quote is simply wrong when it says that MWI "restricts consciousness to one world". As a simple matter of calculation, you can work out what MWI predicts a complex agent will report as its experience. Here for complex agent I want you to picture a computer program recording events and accumulating statistics, not a person, just because it makes the thinking clearer. Anyways, the prediction is that the recordings will be consistent with a classical type view, for lack of a better word, as opposed to a superposition of experiences. Even if the computer program was in fact run in superposition and interacted with superposed objects.

But this is not surprising, because we already know that all the interpretations of quantum mechanics give the same predictions. So of course many worlds is not going to disagree with collapse type theories on what people say they experience. Because it is mathematically equivalent to the others. Observationally indistinguishable.


> you can work out what MWI predicts a complex agent will report as its experience

There's kind of an ontological trick at this point. Using decoherence, self-locating uncertainty, and one or more assumptions about rational decision-making, you can derive what rational credence you might apportion to each of the agent's possible experiences. I feel like this is different from actually predicting the agent's experience.


What’s an agent? What’s an experience?

Those questions do not seem easier to answer than “what’s a measurement”.


Fair point. All you can really do, without making some quite strong ontological assumptions, is derive which rational credences to assign to the outcomes of one's own future interactions with the world.

Which suddenly sounds a lot like QBism.


I didn't even say "real world", I said "physical world". Physics, unlike mathematics, is an experimental science.

Edit: where does the MWI define “complex agent” and “experience”, by the way? I thought that MWI was precisely defined by assuming the postulates of quantum mechanics without including the Born rule, but it seems that there may be more to it!


> Because it is mathematically equivalent to the others. Observationally indistinguishable.

By Occam's Razor, if there exist multiple theories that are indistinguishable, we should take the one that has minimal assumptions (i.e. no assumption of the existence of some "multiverse" like in the many worlds interpretation).


Not necessarily. In principle, there could exist a super-deterministic theory that predicts phenomena that QM does not. None has been invented so far, but in general the idea of super-determinism has received little attention, so that is not surprising.

This may (or may not) change with work by Gerard 't Hooft and others.


I think people are downvoting you because they missed the mistake in the press release, so it sounds like you were claiming superdeterminism is the definitive theory.


I'm not convinced. (But also I'm not a physicist).

Don't get me wrong, they are all great experimentalists and their experimental work helped move the science forward in the understanding and control of the atomic scale.

But their "Bell's experiment" variations have been used to "disprove" and cast aside wide branches of theoretical research, and this will probably remain as one of the biggest failure of modern science.

If you model the experimental settings in your theory, it's indeed quite simple to reproduce the QM probabilities (and therefore violations of CHSH inequalities) with a local hidden variable theory.

In fact, here are 60 lines of numpy code to set up virtual experiments to convince yourself (try various settings for the polariser angles alpha and beta): https://gist.github.com/unrealwill/2a48ea0926deac4011d268426... (a straight implementation of Marian Kupczynski, "Closing the Door on Quantum Nonlocality", https://www.mdpi.com/1099-4300/20/11/877, around eq. 7-8, if you need more explanation).


> it is indeed quite simple to reproduce ... with a local hidden variable theory

This is not the whole truth. Bell's inequality rests on a few reasonable assumptions for hidden variable theories besides locality (https://plato.stanford.edu/entries/bell-theorem/), and it is true that sacrificing these assumptions can yield a local hidden variable theory consistent with Bell / CHSH inequality violations.

But the whole point of these assumptions is that we want them. Bell's inequality is the definitive experimental test that a standard classical theory is not waiting in the wings for us, and those assumptions are central to classical mechanics.

The issue with a theory that sacrifices things like parameter independence or similar is that it will remain a mathematical curiosity, much like interpretations of quantum mechanics, because there is no way to test it. And it would be stranger than quantum mechanics, so you don't get any relief either.

So, yes, you can do this. But it has no practical value. This is not the experiment's fault in any way.

However, if you are arguing that you can have a local hidden variable theory that satisfies all of Bell's assumptions and violates the inequality, then you have some severe issues in your argument somewhere, because it is categorically impossible.

> 60 lines of numpy code

In the event you are arguing for a local hidden variable theory that satisfies all of Bell's other criteria: firstly, there is no guarantee the simulation has any fidelity to the real world at all.

Thankfully, you don't have to settle for simulation. I was fortunate to take 180Q at UCLA, an undergraduate class where we were required to derive the CHSH inequality and perform simple experimental violations by ourselves. The tools needed to run a simple test are commercially available and affordable - you can do it yourself at home.

I was able to achieve experimental violation when I did so back in 2015/2016, and documented the results here:

https://github.com/AkshatM/Physics180Q/blob/master/Final%20P...

Of course, if you are arguing we can sacrifice an assumption of Bell's besides locality, then the above is moot. But it would still be rather pointless.


I am also not an expert on the topic, but citing an MDPI paper to argue against a Nobel prize also seems unconvincing.


Your code is not properly enforcing the communication constraint between the two players. Try to enter code that passes the bell tester widget included in this blog post and you'll find the problem:

https://algassert.com/quantum/2015/10/11/Bell-Tests-vs-No-Co...
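For intuition about why that communication constraint matters, here is a quick brute-force sketch (mine, not code from the linked post) of the CHSH game: enumerating every deterministic local strategy shows that no pair of non-communicating players can win more than 75% of the time, whereas sharing an entangled state allows cos^2(pi/8) ≈ 85.3%:

  from itertools import product

  # CHSH game: referee sends bit x to Alice and bit y to Bob; without
  # communicating, they answer bits a and b, and win iff a XOR b == x AND y.
  best = 0.0
  for fa in product([0, 1], repeat=2):      # Alice's answer for x = 0, 1
      for fb in product([0, 1], repeat=2):  # Bob's answer for y = 0, 1
          wins = sum((fa[x] ^ fb[y]) == (x & y)
                     for x in (0, 1) for y in (0, 1))
          best = max(best, wins / 4)

  print(best)  # 0.75: the classical ceiling any valid entry must respect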


Of course it can't be written into your bell tester.

It's not hiding anything. It works because of the post-selection. But this selection is done in a locally compatible way.

It works because you as the (virtual) experimenter can specify/choose explicitly a precise definition of what a measurement is, (which you can't do in your widget).

My code is showing an example of the general structure of how you could/should define what a measurement is, in such a way that you preserve both the locality of the world, and the observed violation of the Bell inequalities (according to your definition of what a measurement is).

I'd argue that it's in essence doing what, one way or the other, the real world measurement apparatuses of the experimenters are doing.





