Imaging Bell-type nonlocal behavior (sciencemag.org)
160 points by karxxm 10 months ago | 23 comments

I know this must be nuts, could someone ELI20?

In this image: https://advances.sciencemag.org/content/advances/5/7/eaaw256... they describe their apparatus.

The light comes from the bottom left, then hits a beam splitter (BS) and splits in two. The top-left arm has a pre-selected filter and, after a delay line, a detector. As far as I understand, this detector gives a binary output: either light is received or not. The top-right detector captures the whole image of the beam, but only records a frame when the left detector has seen light.

The result is that the image from the top-right detector is highly correlated with the pre-selected filter on the top-left detector, even though they're placed in different arms of the beam splitter! https://advances.sciencemag.org/content/advances/5/7/eaaw256...
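A crude classical caricature of that coincidence gating (a toy model with made-up names, not the paper's actual quantum optics; it deliberately reproduces only the correlation, not a Bell violation, which requires entanglement): the camera keeps a frame only when the heralding detector clicks, so the accumulated image inherits whatever the left arm's filter selected.

```python
import random

random.seed(0)
N_MODES = 4            # toy "spatial modes" a photon pair can share
HERALD_MODE = 2        # the pre-selected filter on the left (heralding) arm
accumulated = [0] * N_MODES  # stand-in for the camera's accumulated image

for _ in range(10_000):
    mode = random.randrange(N_MODES)       # both arms share the same mode
    herald_clicks = (mode == HERALD_MODE)  # left detector behind the filter
    if herald_clicks:
        accumulated[mode] += 1  # camera frame kept only on coincidence

# Only the heralded mode accumulates counts, so the accumulated "image"
# ends up strongly correlated with the left arm's filter choice.
print(accumulated)
```

The point of the toy is just the gating logic; the actual experiment's correlations are quantum and violate a Bell inequality, which no shared-classical-mode model like this one can do.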

Thank you for the setup explanation. I understood what is happening and it is totally fascinating, even if I do not - and probably never can - understand why or how this is happening.

A Bell inequality is a feature of quantum systems by which they break our understanding of classical probability. A good reference is here [1]. The basic idea is to imagine a Venn diagram where the sum of the parts adds up to more than 100%. Bell inequalities are nifty because they heavily imply that our classical notion of "local realism" must be incomplete: either quantum objects experience instantaneous action at a distance (giving up locality), or they do not have definite measurement values when they aren't being actively measured (giving up realism).
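To make the "adds up to more than 100%" idea concrete, here's a back-of-the-envelope CHSH calculation (a generic sketch, not tied to this paper's setup): any local-realist model bounds S = |E(a,b) - E(a,b') + E(a',b) + E(a',b')| by 2, while the quantum singlet-state prediction E(a,b) = -cos(a - b) exceeds that bound at the standard analyzer angles.

```python
import math

def E(a, b):
    # Quantum prediction for the spin-singlet correlation
    # at analyzer angles a and b.
    return -math.cos(a - b)

# Standard CHSH angles (radians)
a, a2 = 0.0, math.pi / 2
b, b2 = math.pi / 4, 3 * math.pi / 4

S = abs(E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2))
print(S)  # ~2.828 (= 2*sqrt(2)), exceeding the local-realist bound of 2
```

That gap between 2 and 2√2 is exactly what Bell-test experiments measure.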

This paper is showing a way to produce a visual image of a Bell inequality violation and explores ways these violations can be used for other imaging techniques.

[1] https://youtu.be/zcqZHYo7ONs

The point at which I disconnected from understanding physics is not the actual theory, but the stuff people say about locality. I mean, once you have the concept of fields, they're not local, right? So aren't people just saying "what is not nonlocal must be local", which doesn't seem like a substantive principle?

Every time I read about the struggle between realism and locality, it makes me feel unhappy because I have a choice between assuming it's me that's really dumb or everyone else. I can't understand why it should be a real problem.

Fields are neither local nor nonlocal in themselves. Locality is about how information propagates.

Classical (Newtonian) gravitation defines a force field that is nonlocal, because changes propagate instantaneously. Move a mass from point A to point B, and every other mass in the universe immediately feels the updated force.

Classical electromagnetism defines a field that only propagates changes at a finite speed. If you move a charged mass from point A to point B, the change in the electric and magnetic fields radiates outward at the speed of light. One way to see this is that the time-update equations for E&M rely only on values infinitesimally near a point: to compute the next time step on a grid, you only need a point's neighbors.
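That neighbor-only update rule is easy to sketch numerically. Below is a generic 1D FDTD-style leapfrog in normalized units (a toy of my own, not taken from any reference): each field value is advanced using only adjacent grid values, so a disturbance can move at most one cell per step — locality by construction.

```python
# 1D FDTD-style sketch in normalized units (dx = dt = c = 1). Each field
# value is updated using only its immediate neighbors, which is what
# "local" propagation means for classical electromagnetism.
N = 200
E = [0.0] * N
H = [0.0] * N

def step(t):
    # Update H from the two adjacent E values...
    for i in range(N - 1):
        H[i] += E[i + 1] - E[i]
    # ...then E from the two adjacent H values.
    for i in range(1, N):
        E[i] += H[i] - H[i - 1]
    # Hard source: a short pulse injected near the left edge
    if t < 5:
        E[1] = 1.0

for t in range(50):
    step(t)

# After 50 steps the disturbance has traveled at most ~50 cells,
# so the far right of the grid is still exactly zero.
```

Contrast with Newtonian gravity above: there, updating any one point would require summing over every mass in the grid, no matter how far away.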

Ok, but if you think of a field as existing in space at a given time, then essentially all of it is not in your particular spot.

Just saying "this field has a value elsewhere" means you are talking about something that is not near your given point. And having different values at different points is what I thought a field was.

Are you saying that electromagnetism is a nonlocal theory because it has a field? That is not the standard meaning of 'nonlocal'.

Since you have a finite speed for information to propagate, locality can always be defined as "within the lightcone", and so locality is well defined, even for fields.

This video by Veritasium covers the entanglement concept at a non-expert level:

The first link gets bonus points for the 40-year-old advertisements included in the scan.

Maybe a bit off topic, but I've always wondered why most physicists dismiss superdeterminism as a way to escape Bell's theorem. Are there any physicists here who would like to elaborate?

Not a physicist, but I've been studying QM as a hobby for thirty years. The problem with superdeterminism (SD) is that if QM is true then SD is not falsifiable. SD says that all experimental outcomes are deterministic but derive from hidden state, i.e. some sort of "Cosmic Turing Machine" (CTM) calculating the digits of pi or something like that. So now what? The CTM has to be perpetually hidden from us, otherwise we could examine its state and predict the outcomes of quantum experiments, and that would violate QM. So if QM is true, then the CTM necessarily has the same ontological status as an Invisible Pink Unicorn (IPU). In fact, the deterministic calculations underlying SD may well be carried out by a literal IPU. If QM is correct then there's no way to determine this, even in principle.

Bell-inequality experiments still leave room for hidden-variable models that fall short of full superdeterminism.

A relatively recent experiment used photons from stars in the Milky Way: "Cosmic Bell Test: Measurement Settings from Milky Way Stars" http://web.mit.edu/asf/www/Papers/Handsteiner_Friedman+2017.... It excludes local-realist models whose hidden variables originated less than about 600 years in the past (set by the light-travel time from the stars used). A similar test using the cosmic microwave background could push the limit back to the early universe.

How do you tell if your code is running on a virtual machine or on bare metal?

In our case we know we're running on "quantum bare metal" because we can do quantum mechanics experiments.

Not a physicist, but in my mind the basic obstacle to superdeterminism is that in the initial state at t=0 you have O(S) degrees of freedom, but throughout some chunk of spacetime there will be O(S * T) Bell violations (where 'S' is "amount of space" and 'T' is "amount of time"). The degrees of freedom you get are fixed, but the constraints grow with time; the system is asymptotically overconstrained. So at first glance you'd guess "no superdeterministic solution exists".

If a solution did exist, it would probably be because of some convenient symmetry or law w.r.t. how the Bell violations played out. But you can use the behavior of arbitrary computer programs to trigger Bell tests, and computer programs are not a well behaved sort of thing. So that makes it seem unlikely for a set of O(S*T) Bell violations out in the world to follow a well behaved pattern that could be compressed into O(S) bits. Like, suppose I decide to dovetail through all computer programs, running a Bell test every time one of the programs halts. This would appear to force the initial state to encode information about solutions to the Halting problem, without the benefit of encoding it into a process that executes over time. But the Halting sequence is algorithmically random; incompressible...
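Dovetailing itself is straightforward to sketch (toy Python, with bounded generators standing in for programs; the real construction would enumerate all Turing machines, which this obviously doesn't, and nothing here solves the halting problem — it just interleaves execution so no single non-halting program blocks the rest):

```python
# Dovetailing: interleave the execution of ever more programs so that
# every program eventually gets arbitrarily many steps.

def make_program(n):
    # Toy stand-in for "program n": runs for n steps, then halts.
    def gen():
        for _ in range(n):
            yield
    return gen()

halted = []    # order in which programs halt ("run a Bell test" events)
programs = {}  # live programs, keyed by id

for round_no in range(1, 20):
    # Admit one new program per round...
    programs[round_no] = make_program(round_no)
    # ...then give each live program exactly one more step.
    for pid, prog in list(programs.items()):
        try:
            next(prog)
        except StopIteration:
            halted.append(pid)  # program pid halted -> trigger a Bell test
            del programs[pid]
```

The point of the comment stands independently of the sketch: the *schedule* is simple and computable, but which programs halt (and hence when the Bell tests fire) encodes halting information, which is algorithmically incompressible.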

From Aspect’s piece, linked in another comment:

“Yet more foreign to the usual way of reasoning in physics is the “free-will loophole.” This is based on the idea that the choices of orientations we consider independent (because of relativistic causality) could in fact be correlated by an event in their common past. Since all events have a common past if we go back far enough in time—possibly to the big bang—any observed correlation could be justified by invoking such an explanation. Taken to its logical extreme, however, this argument implies that humans do not have free will, since two experimentalists, even separated by a great distance, could not be said to have independently chosen the settings of their measuring apparatuses. Upon being accused of metaphysics for his fundamental assumption that experimentalists have the liberty to freely choose their polarizer settings, Bell replied: “Disgrace indeed, to be caught in a metaphysical position! But it seems to me that in this matter I am just pursuing my profession of theoretical physics.” I would like to humbly join Bell and claim that, in rejecting such an ad hoc explanation that might be invoked for any observed correlation, “I am just pursuing my profession of experimental physics.”

Why would a physicist embrace superdeterminism? Modern science holds falsifiability as necessary for a theory's utility, and under SD every experimental result may be impossible to interpret as support for any theory.

Free will seems to be the most difficult philosophical question. Accepting SD invalidates the scientific method's approach to learning about the world.

You may be interested in other non-local hidden variable theories as a way to sidestep Bell's Theorem & the Copenhagen interpretation.

> Free will seems to be the most difficult philosophical question

Please elaborate. It seems like it’s not even possible to phrase a question about free will that makes sense.

What would it mean for an entity to have free will, such that you could ask a question about whether or not we had it?

If you could ask a meaningful question, it doesn’t seem as difficult as explaining why anything does or does not exist, or qualia.

There is an analog of free will that works even in a cellular automaton. For that you need two things:

1. the part of the cellular automaton describing a thinking entity can be separated from the rest of the world: changing the states of cells anywhere outside of it doesn't change the state of the entity itself.

2. it is not possible to replace the computation describing this entity with anything simpler.

2 means that any method of predicting what this thinking entity chooses is completely equivalent to letting the entity live and make its choice by itself. And 1 means that that part of the automaton is indeed a separate entity.

With superdeterminism, 1 cannot be true: a spin change of a single particle far away triggers a very complex change in the behavior of all thinking entities that were close to that particle in the past.
