Breaking Bell's Inequality with Monte Carlo Simulations in Python (bytepawn.com)
122 points by Maro 6 days ago | hide | past | favorite | 58 comments





If you want to try your hand at violating Bell inequalities, there are widgets in [1] that allow you to input strategies (as JavaScript) for Alice and Bob. It continuously performs Monte Carlo sampling of the strategies and presents their success rate.

There's a classical-only widget, which goes through quite some contortions behind the scenes to prevent cheating via writing to global variables, and a quantum-allowed widget where that kind of cheating is possible, because the underlying implementation itself cheats in precisely that using-globals way in order to correctly simulate the quantum mechanics.

Anyways, I've had a few people tell me playing around with the widgets helped them understand the inequality.

[1]: https://algassert.com/quantum/2015/10/11/Bell-Tests-vs-No-Co...


This is closely related to my PhD. It was many years ago but if I remember rightly there is no need for the assumption of determinism - Bell Inequalities hold just as well for random local hidden variables.

Simulating the correlations with computer programs is an interesting idea, partly because it challenges those who still believe in a "local" reality to demonstrate Bell Inequality violations in distributed classical computer systems. Back in the day there was a crackpot researcher named Joy Christian who kept publishing repetitive papers in the belief that geometric algebras provided a counterexample (it looks like he's still going strong! [0]). Of course, there's nothing about geometric algebras that cannot be modelled in a computer program, so in principle Christian should have been able to demonstrate Bell violations in a distributed scenario. Needless to say, this hasn't happened, even though it would be a momentous breakthrough in the foundations of physics.

[0] https://ieeexplore.ieee.org/document/9693502


You know something has gone horrifically badly when a paper begins with

> This reply paper should be read as a continuation of my previous reply paper [1], which is a reply published in this journal to a previous critique of one of my papers

We're way too deep in replies now, and anyone who values their time should get out while they can.


On the other hand, academic slap fights are magnificently petty to behold.

  In any dispute the intensity of feeling is inversely proportional to the value of the issues at stake. That is why academic politics are so bitter.

The conclusion reads like someone who can't admit they're wrong on Reddit:

> The common defect in the critiques [2], [6], and [14] is that, instead of engaging with the original quaternionic 3-sphere model presented in my papers [1], [7]– [11] using Geometric Algebra, they insist on criticizing entirely unrelated flat space models based on matrices and vector “algebra.” This logical fallacy by itself renders the critiques invalid. Nevertheless, in this paper I have addressed every claim made in the critique [6] and the critiques it relies on, and demonstrated, point by point, that none of the claims made in the critiques are correct. I have demonstrated that the claims made in the critique [6] are neither proven nor justified. In particular, I have demonstrated that, contrary to its claims, critique [6] has not found any mistakes in my paper [7], or in my other related papers, either in the analytical model for the singlet correlations or in its event-by-event numerical simulations. Moreover, I have brought out a large number of mistakes and incorrect statements from the critique [6] and the critiques it relies on. Some of these mistakes are surprisingly elementary.


That looks like a person attacked by a troll in a position of power.

But then, I don't want to read the actual claims.


I thought that was satire. I can't believe someone actually wrote that.

Hi, author of the article here.

Regarding determinism, I think the reason the assertion is "no deterministic local hidden.." is that you need to break both the determinism and locality assumptions. However, there is a nuance, which is: do you need to break both properties to..

(a) break the Bell inequalities, or, to

(b) reproduce quantum mechanics..

which is not exactly the same thing.

For example, in my toy simulation framework, this [1] simple setup --- where Alice's two measurement devices always return +1, and Bob's two measurement devices are conditioned on Alice's returned value, without any randomness --- breaks the Bell inequalities at S=4, but:

(1) it's not physical, because it also breaks the Tsirelson bound (4 > 2.82), ie. you can't actually achieve this with any known real-world physical system

(2) it's deterministic in the sense that the code does not call `random()`

(3) but from the perspective of Bob, who "calls" the measurement function, it would still appear random, since it depends on whether Alice measures H or T, which was the outcome of a random coin flip; so whether we consider this random is quite nuanced..

So the above is an interesting thought/Python experiment for what it takes to break the Bell inequalities. Then, if we modify the code to reproduce quantum mechanics (for which the 2 qubits stand in), which is the code shown in the original post, then we cannot even avoid calling `random()`, because whoever is "first" to measure their qubit must also get +1 and -1 with equal probability, so the theory cannot be deterministic.

[1] https://gist.github.com/mtrencseni/de13f766911aaaf5bfd5d4636...
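The setup described above can be sketched in a few lines (a hypothetical reconstruction, not the author's actual gist): Alice's devices always return +1, Bob's devices peek at Alice's side, `random()` is never called, and yet S = 4:

```python
# Hypothetical sketch of a deterministic but non-local strategy reaching S = 4.
# Bob "peeking" at Alice's setting and result is the non-local ingredient.

def alice(setting):
    # Alice's two measurement devices always return +1
    return +1

def bob(setting, alice_setting, alice_result):
    # Bob peeks at Alice's side (non-local!): flip sign only for settings (1, 1)
    if setting == 1 and alice_setting == 1:
        return -alice_result
    return +alice_result

def correlation(a_setting, b_setting):
    # Fully deterministic: every trial is identical, so one trial suffices
    a = alice(a_setting)
    b = bob(b_setting, a_setting, a)
    return a * b

# CHSH combination S = E(0,0) + E(0,1) + E(1,0) - E(1,1)
S = (correlation(0, 0) + correlation(0, 1)
     + correlation(1, 0) - correlation(1, 1))
print(S)  # -> 4, beyond the Tsirelson bound of 2*sqrt(2) ~ 2.82
```

Note that S = 4 exceeds what any physical system can do, which is exactly the author's point (1) above.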


Yes I could have worded that better!

So... what you have here is a deterministic non-local hidden variable model which violates Bell Inequalities. The reduced probabilities at Bob's end might look random to him, but fundamentally the measurement outcomes are determined by Alice and Bob's measurement choices. All good.

You also know that any deterministic local hidden variable model must obey Bell Inequalities.

What I'm saying is that any local hidden variable model must obey Bell Inequalities. You cannot increase the value of S by relaxing determinism.

So actually it's kind of a distraction to bring in determinism. Either you have local hidden variables - which obey Bell Inequalities - or you allow non-local hidden variables - in which case Bell Inequalities can be violated. Locality is the key assumption.
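The claim that relaxing determinism cannot increase S is easy to check by brute force (a sketch, not code from the article): enumerate all 16 deterministic local strategies, where Alice's output depends only on her setting and Bob's only on his, and confirm none exceeds |S| = 2:

```python
# Brute-force every deterministic local hidden variable strategy for CHSH.
import itertools

best = 0
for a0, a1, b0, b1 in itertools.product([+1, -1], repeat=4):
    # For a deterministic strategy, E(x, y) is just the product of outputs
    S = a0*b0 + a0*b1 + a1*b0 - a1*b1
    best = max(best, abs(S))
print(best)  # -> 2

# Randomness can't help: a stochastic local model is a probabilistic mixture
# of these 16 deterministic strategies, and S is linear in the mixture, so
# the maximum over mixtures is attained at a deterministic corner: still 2.
```

The algebraic reason is visible in the expression: S = a0(b0 + b1) + a1(b0 - b1), and one of the two brackets is always 0 while the other is ±2.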


A follow-up point: it sounds like you're also wondering whether it's possible to simulate quantum mechanics exactly with a deterministic non-local hidden variable model?

Arguably, this is exactly what the Bohmian "Pilot-wave" interpretation of quantum mechanics is - see e.g. https://plato.stanford.edu/entries/qm-bohm/


Thanks for the pointer.

To be honest, I wasn't wondering that :)

From what I understand, historically nothing ever came of these different interpretations of QM. I subscribe to the Feynman motto of "shut up and calculate", with the modern modification of ".. or simulate".


Incidentally, there is a variant of the canonical Bell experiment called the Greenberger-Horne-Zeilinger (GHZ) experiment that doesn't require multiple trials to collect statistics. The GHZ experiment uses three photons in an entangled state rather than two and can produce a result that is incompatible with classical mechanics in just a single observation.

https://en.wikipedia.org/wiki/GHZ_experiment


It's impossible to produce a result incompatible with classical mechanics in a single constant-sized observation, because the classical players can get any result by just playing randomly.

The advantage that GHZ has, similar to the Mermin-Peres magic square game, is that the quantum players should win 100% of the time while classical players win less than 90% of the time. This gives much faster Bayesian updates away from classical mechanics towards quantum mechanics as you collect samples (compared to CHSH). But you do still need multiple samples.

On the other hand, seeing the GHZ game fail would be instant total loss for quantum mechanics. If the win rate is supposed to be 100%, and you see a loss (that you can't attribute to noise or something), then in that case a single test would have caused you to totally discount quantum mechanics.
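For concreteness, here is a sketch of the classical bound in the standard GHZ game formulation (an assumption on my part, since the comment doesn't pin down the exact game: inputs r, s, t with r XOR s XOR t = 0, win iff a XOR b XOR c = r OR s OR t), brute-forced over every deterministic classical strategy:

```python
# Brute-force the classical winning probability of the (standard) GHZ game.
import itertools

# The four allowed input triples satisfy r XOR s XOR t == 0
inputs = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]

best = 0.0
# Each player's answer depends only on their own bit: 2 bits of strategy each
for a0, a1, b0, b1, c0, c1 in itertools.product([0, 1], repeat=6):
    wins = 0
    for r, s, t in inputs:
        a = a1 if r else a0
        b = b1 if s else b0
        c = c1 if t else c0
        if (a ^ b ^ c) == (r | s | t):
            wins += 1
    best = max(best, wins / 4)
print(best)  # -> 0.75
```

Quantum players sharing a GHZ state win this game with probability 1, so every observed loss (absent noise) is evidence against quantum mechanics, consistent with the point above.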


"a talented college physics student can do it"

I'm afraid I don't qualify for being able to do that, but I feel like I'm tantalizingly close to understanding this overall - but I'm finding it hard to understand why the lower-right "TT" quadrant is transposed in the S=2.828 example (the red box in the diagram). Maybe it's obvious if one understands it better?


Hi, I'm the guy who wrote the article.

In the article, I first show how to "break" the Bell inequality without reference to any complicated math or Physics; this is the section "Breaking the Bell inequality with non-local information", which uses the dice roll example. This is on purpose, for pedagogical reasons, and this is why the Python approach imo is so useful to demonstrate this whole thing: the key idea is that, to break the inequality, you need to "peek" at the other side.

Then, the next mental step is simply the statement that, in "real life", you can prepare a composite system (eg. 2 photons modeled as 2 qubits) that you can separate (modeled as the split() function in Python), you can send the 2 parts to two different observers, they use a certain measurement setup, the whole game is played, statistics computed, etc., and then you get this value 2.82 (which breaks the Bell inequality)! So somehow, the 2 qubits are doing something that we can only model [in Python] as peeking!

The actual derivation of how to get that 2.82 is, in some sense, almost like a detail. I think with this approach, even a non-physicist can understand what this whole argument is (=Bell's genius).

"a talented college physics student can do it" - I'm a Physicist, but I'm not working as a Physicist, and I was able to derive all the numbers in that table by hand with pen & paper directly. I figured if I can do it 15 years out of school, so can a talented college physics student!

The next article will be that derivation [of the raw probabilities], I just need to transcribe it from my notebook to Latex and clean it up. If you want to see the original notes:

https://photos.app.goo.gl/sqxLnEhyeZTDD7oA6


It requires a chunk of linear algebra to understand, but the Wikipedia page has a slightly more detailed explanation: https://en.wikipedia.org/wiki/Bell%27s_theorem#Theorem

It's related to the fact that the expected value of A_1 tensor B_1 is negative 1/sqrt(2), whilst the expected value of all other tensor products are positive 1/sqrt(2).
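A quick numerical check (assuming the common photon-polarization convention E(a, b) = cos(2(a - b)) and the standard CHSH angles; the Wikipedia page uses an equivalent spin formulation): three correlations come out +1/sqrt(2), one comes out -1/sqrt(2), and the CHSH combination reaches 2*sqrt(2):

```python
# Compute the quantum CHSH value S = 2*sqrt(2) from the cosine correlation.
import math

def E(a, b):
    # Polarization correlation for the entangled pair (an assumed convention)
    return math.cos(2 * (a - b))

a0, a1 = 0.0, math.pi / 4            # Alice: 0 and 45 degrees
b0, b1 = math.pi / 8, 3 * math.pi / 8  # Bob: 22.5 and 67.5 degrees

# E(a0, b1) is the lone -1/sqrt(2) term; the others are +1/sqrt(2)
S = E(a0, b0) - E(a0, b1) + E(a1, b0) + E(a1, b1)
print(round(S, 3))  # -> 2.828
```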


The explanation and table in the Simple English page for this helped me grasp it better. (Although the diagram using green dots only confuses :) )

Greene's book is a fantastic read too!

https://simple.wikipedia.org/wiki/Bell%27s_theorem


> Although the diagram using green dots only confuses :)

It's very confusing. In particular, it does not say that the boxes have 3 doors until the middle of the explanation. Also, I don't find the example very similar to Bell's Inequality.

Moreover, in a quantum system I expect that when both open the same door they get the same result (or the opposite), i.e. 100% (or 0%) agreement, so instead of 50% I expect 1/3 * 100% + 2/3 * 50% = 66% (or 1/3 * 0% + 2/3 * 50% = 33%).

Anyway, in some versions of Bell's Inequality the doors of the boxes are "misaligned", so one box has white-gray-black doors and the other has another set of colors, let's say cream-pink-brown doors. You never get 100% or 0% coincidence of the results.


This looks like some kind of variant of Bell's Theorem. I've not seen it before, but it reminds me of the GHZ inequality [0]

[[EDIT - actually I take that back. The GHZ inequality refers to three systems whereas your link refers to three measurement choices]].

I don't think your link gives a derivation of the quantum correlations beyond "Quantum physics says that half the time they should get a match".

[0] https://en.wikipedia.org/wiki/Bell%27s_theorem#GHZ%E2%80%93M...


This was really excellent - for many of us, code helps to make mechanisms concrete, and it forces every single thing to be pinned down, and not hand-waved away.

(Like another commenter, I was also hoping for a direct/standalone explanation for why the red matrix is transposed.)


> Victor can prepare a pair of quantum particles in a special state known as an entangled state. In this state, the outcomes of Alice's and Bob's measurements are not just random but are correlated in a way that defies any classical explanation based on local hidden variables.

What if there are no hidden properties per particle, but the combination of specific property values of particles allow for breaking Bell's Inequality?

I.e. what we call 'entanglement', it might not be 'action-at-a-distance', but the simple effect of the interaction of the properties of the two particles as they are generated.

For example, if we have two billiard balls which are really close together, and we hit them with a third ball simultaneously, their spin will be correlated when we measure it for both balls (without taking into account other factors, e.g. friction, tilting of the table, etc). Wouldn't that break Bell's inequality as well? The spins of the two balls will be correlated.


"their spin will be correlated" - in this case the billiard's spin is a per-ball property that is set before they are sent to Alice and Bob, and happens to be correlated. You can simulate this in the Python code, but you will not be able to break the Bell inequality like that. This is similar to the dice example I give, where the objects sent to Alice and Bob are random from their perspective (since the dice roll happens with Victor), and correlated.

In general, classical correlation cannot break the Bell inequalities [assuming no peeking, ie. no action-at-a-distance in the measurement devices]. To be clear, I didn't prove this in the article, the approach the article takes is "here is some code, play around with it to get a feeling for why".

Hope this helps.
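The billiard-ball scenario can be sketched directly (hypothetical code, loosely echoing the article's bell_experiment(), not the article's actual implementation): Victor stamps both balls with the same random spin, each side just reads it off, and despite perfect correlation S never exceeds 2:

```python
# Shared classical hidden variable: perfectly correlated, yet no Bell violation.
import random

def bell_experiment(trials=100_000):
    E = {(x, y): 0.0 for x in (0, 1) for y in (0, 1)}
    for _ in range(trials):
        spin = random.choice([+1, -1])  # per-ball property fixed at the source
        a = spin                        # Alice's device reads it, any setting
        b = spin                        # Bob's device reads the same spin
        for xy in E:
            E[xy] += a * b / trials
    # CHSH combination
    return E[(0, 0)] + E[(0, 1)] + E[(1, 0)] - E[(1, 1)]

S = bell_experiment()
print(round(S, 3))  # -> 2.0: perfect correlation saturates, never breaks, CHSH
```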


> In general, classical correlation cannot break the Bell inequalities [assuming no peeking, ie. no action-at-a-distance in the measurement devices].

What if the particles have properties that mutate their state after they are sent to Alice and Bob?

Suppose, in the billiards example, that I put a small device into the balls that changes the spin of the ball to some predefined value.

Wouldn't that break the Bell inequalities without action at a distance?

The reason for the breaking would be that the state of the balls would be modified after they are sent to Alice and Bob. It would look like action at a distance without being 'action at a distance'.


It doesn't matter when the state of the ball changes (when Victor sends them, on the way, when it's measured). You can play around with this in the Python code, where it shows up as "it doesn't matter which function you put that line of code in"; the functions are called one after the other. The functions in question are generate_composite(), split(), and the 4 measure_X_Y(), called from bell_experiment().

Bell's Inequalities are a test of the pupil's capacity for inductive reasoning. If the pupil succeeds, he is not to be admitted to the ranks of quantum physicists.

You usually show a pupil the problem with classical probabilities, and show that you can't violate Bell's Inequalities; then you show that Quantum Mechanics manages to replicate the observed probabilities in a non-local way, and therefore you conclude that the world is non-local.

But this logic doesn't stand. You need to use inductive reasoning to see it through. Ask yourself the question, what change would it take to your theory to make it local and still replicate the observed probabilities (and still look reasonable).

Solve the riddle (it's quite beautiful once you see it :) ) and you will be rewarded with the awesome title of crackpot physicist, pitted against other dubious crackpot physicists, each convinced their loopholes are the one and only.


> You usually show a pupil the problem with classical probabilities, and show that you can't violate Bell's Inequalities; then you show that Quantum Mechanics manages to replicate the observed probabilities in a non-local way, and therefore you conclude that the world is non-local.

If you do this you're doing a bad job at being a teacher.

The way the argument should go is you start with a list of assumptions (of which locality is one), derive Bell's inequality from them, and determine that as Bell's inequality seems to be false in real experiments at least one of your assumptions was wrong. Then you can talk about quantum mechanics and explain which of these assumption are broken in quantum mechanics. If you have time you can have fun talking about different interpretations of quantum mechanics because (e.g.) Everettian Many Worlds is completely local, but still produces predictions matching quantum mechanics (and therefore breaks Bell's inequality).


>If you do this you're doing a bad job at being a teacher.

If this wasn't sufficiently clear, I am not a teacher, I am a crackpot physicist.

Hint: Listing the assumptions doesn't work. This is what I call the three-card monte argument. The ball is not under one of the three goblets; the ball is in the sleeve of the magician.


> Hint : Listing the assumptions doesn't work

Why not?

I could see that it might not if you are not clear about your assumptions


It's circular reasoning, hidden in the definition of your assumptions: by not clearly defining what a measurement is and what observations are.

You must let the cat step out of the box your definitions put you in.

You have infinite freedom in your choice of definitions; listing assumptions creates a false dichotomy, especially when doing so ends up excluding the most probable assumption: locality.

Preserve locality, and find another self-consistent theory which properly defines what a measurement is according to it, and does not take measurements and observations as axioms.


Will you grant me that it is at least possible to derive Bell's inequality by listing out a complete set of assumptions (including assumptions that define what a measurement is and what observations are)?

Of course you personally may disagree with some of these axioms (indeed, if you take Bell's theorem seriously you must), but certainly it is possible to list them, and thereby derive Bell's inequality?


Bell's theorem is a theorem: if the hypotheses apply, the conclusions must follow. That's math. Everything is fine with it. (The inequalities are a reformulation of the Bonferroni inequalities, or Boole's inequality, by the way.)

You've got to reframe the problem so that Bell's theorem doesn't apply. When you build your theory, if you manage to define what a measurement is such that you don't satisfy the hypotheses of Bell's theorem, you get to avoid its conclusions.

One of Bell's theorem's implicit hypotheses is that measurements/observations are probabilities, so by defining measurement instead as a conditional probability, you get to avoid being subject to Bell's inequalities.

It's inductive reasoning: you don't get truth, you only get self-consistency, and a model that looks much nicer than QM.


> You've got to reframe the problem so that Bell's theorem doesn't apply. When you build your theory, if you manage to define what a measurement is, so that you don't satisfy the hypothesis of the Bell's theorem, you get to avoid having to have its conclusions.

This is (in my opinion) a bad way of explaining how the standard reasoning goes. We start with a list of assumptions, we prove this inequality, which it turns out is not satisfied, so we reject (at least) one of our assumptions. There is no crackpottery here; this is the norm.

> by defining measurement instead as a conditional probability

This sounds like it probably doesn't get you anywhere, but I'll bite: what are we conditioning on? In the standard formulation of Bell's theorem they are conditional on the "hidden variable" we are assuming exists, as well as any relevant measurement settings, but it sounds like you're imagining something wilder than that.


>what are we conditioning on?

The local hidden state, but you don't get to set it from inside the universe when you do an experiment (this local hidden state is unobservable).

From inside the universe, everything behaves classically, pseudo-randomly based on the local hidden state.

But because you don't get to set the local hidden state during your experiment, if you want to calculate the probabilities you have to integrate over the possible values of the unknown hidden state, and this allows you to recover the strange-looking quantum correlations.

Doing a repeated experiment inside a universe means picking a different initial local hidden state each time (because it's unobservable).

[Spoiler ahead] The original idea is not from me; if you want the nitty-gritty details, look at the work of Marian Kupczynski (Closing the Door on Quantum Nonlocality https://philarchive.org/archive/KUPCTDv1 ), or his more recent works.

I made a straightforward implementation of it (3 years ago) to convince myself, with a Monte Carlo simulation: https://gist.github.com/unrealwill/2a48ea0926deac4011d268426... [End Spoiler]


I commented awhile back on another thread that:

I think, ultimately, there are only 3 possible explanations for the paradoxes of the quantum world. 1) superdeterminism (everything including our choices in quantum experiments today were fully determined at the instant of the Big Bang), 2) something "outside" our observable reality acting as a global hidden variable (whether something like the bulk in brane cosmology or whatever is running the simulation in simulation theory) or 3) emergent spacetime (if space and time are emergent phenomena then locality and causation are not fundamental).

You seem to be suggesting something similar to option 2. Or am I misunderstanding?


The solution I'm suggesting is that nature does it in the really boring way : classically. It's almost like option 2, but the state is local.

This state is local and "inside" our universe, but we can't observe it. (A good analog for things that are unobservable from inside the universe is the seed of a pseudo-random generator.)

The beauty of it is just realising that Nature's simulator can be purely local and yet not be subject to Bell Inequalities, but still reproduce the spurious quantum correlations when you calculate the probabilities.

Violating Bell Inequalities is totally normal when you construct your theory such that Bell Inequalities don't apply.


I guarantee you can't break (for example) the CHSH inequality [1] with such a set-up (assuming I've understood your description of what you're proposing), and encourage you to try (with a similar python script).

An easy formulation of the inequality is in the CHSH game section of the same article [2].

[1] https://en.wikipedia.org/wiki/CHSH_inequality

[2] https://en.wikipedia.org/wiki/CHSH_inequality#CHSH_game


The script I already gave you shows an even stronger result than breaking the CHSH inequality: convergence (in law) towards the QM probabilities. It can replicate all the probabilities given by QM for any alpha, beta polarizer settings, up to epsilon, with epsilon that can be made vanishingly small.

QM breaks the CHSH inequality; this replicates the probabilities of QM, therefore it also breaks CHSH.

Of course I'm not banging against a math-theorem wall, I just made some leeway to go around it, based on the fact that conditional probabilities are not probabilities. Setting up the problem such that measurements/observations are defined as conditional probabilities (against an unobservable variable) suffices to make Bell's theorem not applicable. It offers a whole class of solutions to the seemingly paradoxical Bell Inequalities.


If I understand correctly what your script is doing, it emphatically does not meet the challenge I gave above (specifically it fails the "but the state is local" part of your comment).

This is because of the post-selection on line 44. This post selection involves information about the measurement settings of both party A and party B, and is therefore a (very strongly) non-local thing.

To give a more explicit example: imagine I am trying to break the CHSH inequality I linked above. My response functions are set up so Alice and Bob return completely random answers (0 or 1), independent of what they get sent, and I add a line to the code much like your line 44, except it keeps only the trials where xy = a+b (mod 2), i.e. we filter so that we keep only the trials where we won the CHSH game.

Then we have completely trivially "won" the CHSH game with probability greater than 75%, entirely due to this magic non-local filtering.
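The trivial filtering described above can be sketched as follows (a hypothetical illustration, not unrealwill's actual script): Alice and Bob answer at random, and the post-selection keeps exactly the winning trials, so the post-selected win rate is 100% by construction:

```python
# Random play plus non-local post-selection trivially "wins" the CHSH game.
import random

random.seed(0)
played = kept = 0
for _ in range(100_000):
    x, y = random.randint(0, 1), random.randint(0, 1)  # referee's questions
    a, b = random.randint(0, 1), random.randint(0, 1)  # purely random answers
    played += 1
    if (a ^ b) == (x & y):
        # the filter needs BOTH x and y: strongly non-local bookkeeping
        kept += 1

raw_win_rate = kept / played   # ~0.5: random play wins about half the time
post_selected_win_rate = 1.0   # trivially, every kept trial is a win
print(round(raw_win_rate, 2), post_selected_win_rate)
```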


That's the subtlety of this post-selection scheme: the state is completely local. By construction (L37) sela only depends on particle a, and (L38) selb only depends on particle b.

The measurement of a only depends on sela (and not selb), and the measurement of b only depends on selb (and not sela). There is no exchange of information.

The universe has already given you the observations it needed to give you by line 38. The simulator only used local information to simulate the universe up to this point.

Like in QM, once you have written down your measurements, you compare them to count coincidences. Sela just means you registered a click on detector a; selb just means you registered a click on detector b. The logical_and is just you counting the observations as a coincidence or not, aka whether you got a click on both detectors simultaneously. You are free to be as non-local as you want here; it is of no importance with regard to the state of the universe, as the clicks already happened or didn't happen.


Ok, I think I understand your intention with the code now. Sorry I was wrong before. I think what you're talking about here is what gets called the "detection loophole" in most of the literature: the idea that if we detect only a small enough fraction of the events, they can be a sufficiently unrepresentative sample that we think we violated a Bell inequality even though the full statistics don't.

This has (in my opinion) been comprehensively addressed already. You can check out the references in the section of the wiki article here

https://en.wikipedia.org/wiki/Bell_test#Detection_loophole

but basically, if you detect enough of the possible events in the experiment, there is no way for nature to "trick" you in this way; "enough" is 83% for the standard CHSH inequality, or 66% if you use a slightly modified one. Recent experiments (in the last decade or so) are substantially over the threshold for the detection loophole to be a problem. This is one of the earliest papers where this loophole was closed with space-like separated detectors, from 2015. In this paper they used entangled NV centers in diamond as their qubits of choice, and so essentially had zero events lost.

https://arxiv.org/abs/1508.05949

This is a second one from the same time. This one uses a more standard setup with photons and worked with about 75% detector efficiency for each party (well above the 66% required)

https://arxiv.org/abs/1511.03189

And this is a third with an efficiency of 78% for Alice and 76% for Bob

https://arxiv.org/abs/1511.03190

I therefore have a new challenge: break the CHSH inequality while rejecting fewer than 17% of the events, or break the (easier) modified Bell inequality used in papers 2 & 3 while rejecting fewer than a third.

Edit: This is another, more recent paper where they use superconducting qubits and again lose no events

https://www.nature.com/articles/s41586-023-05885-0


It is not about missing detections.

In your first paper, fig 1 (a), the "ready" box plays the role of the "selected".

The universe tells you whether to select or not (it's not you missing events). It just tells you, without giving any info on the underlying state. You can build a ready box without problem, and experimenters did, and that's all that is needed to break CHSH.

You've got to see it in an abstract way. Nature is fuzzy, and experimenters will always have to define box boundaries (spatial, temporal, and entanglement-pair selection boxes). This defining creates conditioning, which makes breaking Bell inequalities something totally normal, expected, and meaningless.

A related concept that may help you see it is https://en.wikipedia.org/wiki/Correlated_equilibrium :

In a game of Chicken, you can get better correlations between your actions than would seemingly be possible, by using a random-variable oracle to coordinate. No information exchange needed. Measurement devices are kind of playing a continuous version of this game.


It's not remotely the same as the ready box, because the ready box sends its signal before the measurement directions have been chosen.

It would be equivalent to the ready box if your filtering happened without any reference to the measurement choices or outcomes.

If you're still unhappy with role of the ready box we can instead talk about either of the two purely photonic experiments which didn't use anything similar.

> The universe tell you whether to select or not (it's not you missing events).

In your numerics it is exactly missing events: there are a bunch of events and you post-select to keep only some of them. If you mean a different model, you're going to need a python script which does something else.

>Nature is fuzzy and experimenters will always have to define box boundaries (spatial, temporal, and entanglement-pair selection boxes)

Sure, but in each of the experiments I linked, the selection loses a small enough fraction of the events that the detection loophole is closed.


I agree with the sibling comment by eigenket:

> Everything up to the [spoiler ahead] in this comment is (as far as I can tell) exactly how things work in standard formulations of Bell's inequality. There's nothing weird or crackpot there.

Moreover, to clarify, it's not necessary that the hidden variables be measurable or that you can set them. So a system like the one you described must obey Bell's Inequality if all the other hypotheses are true.

I read the code and it looks like an accurate implementation of the model proposed in the paper.

From the paper you linked:

> However, the expectation values E(X1X2), displayed in (13) contain a factor 1/2, meaning that they do not violate CHSH inequality.

I agree with that part. The model should not violate the Bell's inequality or the equivalent version.

> The agreement with quantum predictions is obtained only after the “photon identification procedure”, which selects, from the raw data, final data samples.

The selection rule is the weird part. It's described in equations 7 and 8.

  x := sign(1 + cos[2(a − φ)] − 2 · r1)
where r1 is a uniform random value between 0 and 1.

a is the angle of the polarizer

φ is the secret variable that is the angle of the photon. (QM says that this type of entangled photon has no secret angle; this model assumes that each photon has a hidden variable, the secret angle φ.)

So far so good, this calculation gives the expected result if you assume that φ is chosen from a uniform distribution between 0° and 360°.

  v := r2 |sin[2(a − φ)]|^d (Vmax − Vmin) − Vmax
  selected := (v ≤ V)
where r2 is a uniform random value between 0 and 1.

With the numbers in your program

  v := r2 |sin[2(a − φ)]|^2 (10 − 0) − 10
  selected := (v ≤ -9.99)
that is equivalent to

  selected := r2 |sin[2(a − φ)]|^2 ≤ -0.001
I don't remember ever seeing anything similar, and I can't imagine what it means experimentally.

Most of the time r2 is not tiny, so most of the time the sine must be tiny, which means the secret angle of the photon is almost aligned or almost orthogonal to the polarizer.

So this is a device that can measure the secret angle of the photon. This is not a real device, so it can't be proposed as an alternative explanation of the violation of Bell's inequality.

You may be wondering why I claim it's not a real device.

If you have a detector of polarization, once you fix the angle 'a', you can't distinguish:

1) Unpolarized light, which is in particular the type of light used in a Bell's inequality test where the state is (|00> + |11>)/sqrt(2), or in other versions (|01> + |10>)/sqrt(2), where 0 is horizontal and 1 is vertical; or uniform random values of φ in the model of the paper

2) Light polarized in 45° to the detector's angle, that is like a constant φ in both models.

In both cases, you detect 50% of the photons.

If you use the selection device of this paper,

1) with unpolarized light, you will get selections when r2 is very small or when φ is almost parallel or orthogonal to the angle a.

2) with light polarized at 45°, you will get selections only when r2 is very small.

So with light polarized at 45°, the number of events will be much smaller than with unpolarized light.

In particular, if you have the source of unpolarized light and the detector, adding a polarizer at 45° in the middle will reduce the number of events to 1/4 in the first case and to almost 0 in the other.
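To make this concrete, here is a minimal Monte Carlo sketch of the selection rule in equation 8 as I read it, using the parameter values quoted above (the function names `selection_rate` etc. are my own, not from the paper), comparing how often events survive selection for unpolarized versus 45°-polarized light:

```python
import math
import random

def selection_rate(phis, a=0.0, d=2.0, V=-9.99, Vmin=0.0, Vmax=10.0):
    """Fraction of photons passing the selection rule of eq. 8 (sketch)."""
    kept = 0
    for phi in phis:
        r2 = random.random()
        # eq. 8: v := r2 * |sin(2(a - phi))|^d * (Vmax - Vmin) - Vmax
        v = r2 * abs(math.sin(2 * (a - phi))) ** d * (Vmax - Vmin) - Vmax
        kept += v <= V
    return kept / len(phis)

random.seed(0)
n = 200_000
# Unpolarized light: hidden angle phi uniform over the full circle.
unpolarized = selection_rate([random.uniform(0, 2 * math.pi) for _ in range(n)])
# Light polarized at 45 degrees to the detector: |sin(2(a - phi))| = 1,
# so only events with r2 <= (Vmax + V) / (Vmax - Vmin) = 0.001 survive.
polarized_45 = selection_rate([math.pi / 4] * n)
print(unpolarized, polarized_45)  # the 45-degree rate is far smaller
```

With these numbers the unpolarized rate comes out at a few percent while the 45° rate is about 0.1%, which matches the claim that inserting a 45° polarizer suppresses almost all selected events.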


> -0.001

Should be -0.01. You added an extra 0, but that's not the point. You can pick any V, but the bigger it is, the more quantum-like the correlations are.

>secret angle

Also called the "phase", this is the thing there is to "see": it has a definite value for a single experiment, but every time you do the experiment it has a different value. It behaves like a random variable, and that's what lets you replicate the behavior of QM by generating random numbers. That's the subtlety that makes Bell's theorem not apply.

>So this is a device that can measure the secret angle of the photon.

It uses the secret angle of the photon to give you something observable, but doesn't leak info about the state. ("It mixes trajectory space" so that each trajectory behaves the same, but trajectories are independent; each trajectory just cycles through all the possible hidden states, like the seeds of a linear congruential generator.)

For a definite (Monte Carlo) trajectory, the photon will be definitely absorbed or not absorbed (or maybe absorbed later), but the simulator has a state and knows unambiguously how to evolve it. You, as an observer, will have to define measurements more ambiguously (due to the Heisenberg uncertainty principle, but that's not the point here).

One other way to see what we are trying to do is factorizing the QM integral.

In QM you have proba = integral( wavefunction ).

You introduce a random variable and condition on it, writing proba = integral( integral( wavefunction | hidden_state ) dhidden_state ).

The point being that you can be smart in the choice of the hidden_state such that the inner integral behaves classically : You push the quantum correlation to the outside integral.

If you want to calculate the probability, you use monte-carlo for the outside integral. And classical simulation for the inner one.

But once it's written in such a way, you realise that if you want to simulate a universe (like Nature does), you don't have to simulate all trajectories: any one will do, as they are all independent of each other.

From inside the universe, because you don't have the initial phase, you have to do a Monte Carlo if you want to calculate the probability.
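The factorization described above can be sketched numerically. This is a toy illustration of my own (the Malus-law inner model is an assumption for the example, not something from the comment): the outer integral over the hidden state is done by Monte Carlo, and the inner part is evaluated classically.

```python
import math
import random

def inner_classical(a, hidden_phi):
    """Inner, classical part: detection probability given the hidden phase.

    A Malus-law toy model, assumed purely for illustration.
    """
    return math.cos(a - hidden_phi) ** 2

def proba(a, samples=200_000):
    """Outer integral over the hidden state, estimated by Monte Carlo."""
    total = 0.0
    for _ in range(samples):
        hidden_phi = random.uniform(0, 2 * math.pi)  # sample the hidden state
        total += inner_classical(a, hidden_phi)
    return total / samples

random.seed(1)
p = proba(0.0)
print(p)  # close to 0.5 for a uniformly distributed hidden phase
```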


>> secret angle

> Also called "phase" this is the thing there is to "see" : It has a definite value for a single experiment, but every time you do the experiment it has a different value. It behaves like a random variable and that's what allows you to replicate the behavior of what QM does by generating random numbers. That's the subtlety that makes it so that Bell's theorem don't apply.

That's standard local hidden variable theory. Bell's theorem applies.

The problem is that the device used in the article gives the wrong prediction for a beam with 50% vertically (φ=0) polarized light and 50% horizontally polarized light (φ=90°). What is the ratio of selected photons as a function of the angle a?


Everything up to the [spoiler ahead] in this comment is (as far as I can tell) exactly how things work in standard formulations of Bell's inequality. There's nothing weird or crackpot there.

Your numerical code is impossible for me to read without some basic idea of what you're trying to show, but I'd like to point out that numpy has functions like np.radians and np.deg2rad to convert from degrees to radians; you don't have to roll your own.
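For reference, the two numpy helpers mentioned are equivalent conversions:

```python
import numpy as np

angles_deg = np.array([0.0, 45.0, 90.0, 180.0])
rad1 = np.radians(angles_deg)
rad2 = np.deg2rad(angles_deg)  # same conversion under another name
assert np.allclose(rad1, rad2)
print(rad1)  # [0, pi/4, pi/2, pi]
```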


MWI is not local according to all the big names I've read: Lev Vaidman, Tim Maudlin, and I'm pretty sure David Wallace too.

You can probably define locality in a way that MWI is nonlocal, but you can also definitely define it in a way such that MWI is local.

For me the most important criterion for locality is the lack of any "action at a distance"; MWI satisfies this, but if you make more stringent demands it might not satisfy those.


> and therefore you conclude that the world is non-local.

No, Bell's inequality has a few sensible assumptions, like locality. The conclusion is that at least one of them is wrong, and the real world isn't a sensible one :(. By the way, there is this crazy thing called QM that nobody likes but gives accurate results.


Just because there is a way, doesn't make it the only way.

>but gives accurate results.

Giving accurate results is missing the point.

Hint: The point is understanding how nature does it.

Here is the Chesterton's fence implied by Bell's inequality:

Lemma: there exists a local classical simulator that can simulate a universe behaving according to the probabilities of QM.

Corollary: we can simulate "fast" a universe which behaves (in law) exactly like our universe.

Nota bene: this doesn't mean we can compute QM probabilities fast (we can't), although one way of computing them would be Monte Carlo estimation over various instances of universe simulations.

The question is not whether to lift the fence, or how to lift the fence, the question is how are numerical biological instabilities handled.


> Giving accurate results is missing the point.

No, that's the central point of modern (starting with Newton) physical science. In fact, I'd argue that's the main reason for the astonishing advances of the physics in mere 300 years: people stopped bothering too much about philosophical underpinning of reality and started to fucking measure the reality instead, as precisely and accurately as they could and then some. Fresnel's optics won over Newton's not because of its superior philosophical merits (it needs luminiferous aether to be a perfectly rigid incompressible solid, after all), but simply because it very accurately described light's interference, diffraction, all kinds of refraction and the accompanying polarization, and also dispersion, all in one nice, self-contained package. That's what mattered, not the ridiculousness or reasonableness of proposition that light corpuscles have poles and can experience fits of easy transmission/reflection.

> Hint: The point is understanding how nature does it.

By being itself, how else? /s


This 'crackpot physicist' is still alive and kicking, and indeed, per the (analytical) induction required to make the case, his work deserves a careful reading to assess the geometric-algebraic understanding of QM (for the 'crackpot's' latest, see: https://www.linkedin.com/posts/joy-christian-oxford_comment-...)

Hidden variable theory: https://en.wikipedia.org/wiki/Hidden-variable_theory

Bell test: https://en.wikipedia.org/wiki/Bell_test :

> To do away with this assumption it is necessary to detect a sufficiently large fraction of the photons. This is usually characterized in terms of the detection efficiency η, defined as the probability that a photodetector detects a photon that arrives at it. Anupam Garg and N. David Mermin showed that when using a maximally entangled state and the CHSH inequality an efficiency of η > 2(sqrt(2) − 1) ≈ 0.83 is required for a loophole-free violation.[51] Later Philippe H. Eberhard showed that when using a partially entangled state a loophole-free violation is possible for η > 2/3 ≈ 0.67, which is the optimal bound for the CHSH inequality.[53] Other Bell inequalities allow for even lower bounds. For example, there exists a four-setting inequality which is violated for η > (sqrt(5) − 1)/2 ≈ 0.62 [54]
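As a quick numeric check, the Garg–Mermin bound is η > 2(sqrt(2) − 1), and the three thresholds evaluate to the values quoted:

```python
import math

garg_mermin = 2 * (math.sqrt(2) - 1)   # CHSH, maximally entangled state
eberhard = 2 / 3                        # CHSH, partially entangled state
four_setting = (math.sqrt(5) - 1) / 2   # a four-setting inequality
print(round(garg_mermin, 2), round(eberhard, 2), round(four_setting, 2))
# -> 0.83 0.67 0.62
```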

CHSH inequality: https://en.wikipedia.org/wiki/CHSH_inequality

/sbin/chsh

Isn't it possible to measure the wake of a photon instead of measuring the photon itself; to measure the wake without affecting the boat that has already passed? And shouldn't a simple beam splitter be enough to demonstrate entanglement if there is an instrument with sufficient sensitivity to infer the phase of a passed photon?

This says that intensity is sufficient to read phase: https://news.ycombinator.com/item?id=40492160 :

> "Bridging coherence optics and classical mechanics: A generic light polarization-entanglement complementary relation" (2023) https://journals.aps.org/prresearch/abstract/10.1103/PhysRev... :

>> This means that hard-to-measure optical properties such as amplitudes, phases and correlations—perhaps even these of quantum wave systems—can be deduced from something a lot easier to measure: light intensity

And all it takes to win the game is to transmit classical bits with digital error correction using hidden variables?


From "Violation of Bell inequality by photon scattering on a two-level emitter" https://news.ycombinator.com/item?id=40917761 ... From "Scientists show that there is indeed an 'entropy' of quantum entanglement" (2024) https://news.ycombinator.com/item?id=40396001#40396211 :

> IIRC I read on Wikipedia one day that Bell's actually says there's like a 60% error rate?(!)

That was probably the "Bell test" article, which - IIUC - does indeed indicate that if you can read 62% of the photons you are likely to find a loophole-free violation.

What is the photon detection rate in this and other simulators?


> This is known as a Bell inequality. It captures the essential limitation imposed by any theory based on local hidden variables — theories that adhere to classical notions of determinism (no random chance in the measurement apparatus), locality (no faster-than-light influences) and realism (pre-existing properties).

Obligatory reminder that there is an extra assumption here: the assumption that the result of the coin flip is not correlated to the hidden state of the particle. If when receiving a particle in stage a_H your coin flip always leads to, say, HH, then you will break Bell's inequality even if all the other assumptions hold. Theories that have this property are called "superdeterministic".
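As a toy illustration of this loophole (entirely my own sketch, not from the article): if the hidden state is also allowed to choose the measurement settings, a purely local, deterministic model reaches the algebraic maximum of the CHSH expression.

```python
import random

def chsh_superdeterministic(n=100_000):
    """Superdeterministic toy: the hidden state picks the setting pair,
    and local outcomes are arranged to make E(a,b') = -1 and the other
    three correlators +1, so S = E00 - E01 + E10 + E11 = 4."""
    counts, sums = {}, {}
    for _ in range(n):
        # The hidden state decides which setting pair occurs...
        setting = random.choice([(0, 0), (0, 1), (1, 0), (1, 1)])
        # ...and the product of the two local outcomes for that pair.
        product = -1 if setting == (0, 1) else +1
        sums[setting] = sums.get(setting, 0) + product
        counts[setting] = counts.get(setting, 0) + 1
    E = {s: sums[s] / counts[s] for s in sums}
    return E[(0, 0)] - E[(0, 1)] + E[(1, 0)] + E[(1, 1)]

random.seed(3)
S = chsh_superdeterministic()
print(S)  # 4.0 -- the algebraic maximum, well above the classical bound of 2
```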


[flagged]


In which way is python's random module broken, specifically?

I would assume the creation of the secrets standard library might be related to this concern.

https://docs.python.org/3/library/secrets.html


Why not use numpy's rng?


