
The Forgotten Solution: Superdeterminism - vectorbunny
https://backreaction.blogspot.com/2019/07/the-forgotten-solution-superdeterminism.html
======
whatshisface
"The Facts" basically say that among the statements, "Your experiment design
isn't predestined by the universe to make it accidentally seem like quantum
mechanics is true," "the state of the universe today is all you need to know
to predict the state of the universe tomorrow," and "an experiment only has
one outcome," there is at least one lie. If the first one is a lie that's
superdeterminism, the second one the Copenhagen interpretation and the last
one Many Worlds. What bothers me about getting philosophical is, philosophers
will attempt to choose one or more based on intellectual aesthetic criteria
that we developed from the womb onwards in the macroscopic world, while in
reality the only legitimate answer is "we don't know." I think that is broadly
speaking a problem that hampers the effectiveness of philosophy, there is not
enough willingness to say "the present information does not permit a
conclusion."

~~~
MiroF
This comment started out well but then stumbled into some odd critique of
philosophy as being incapable of dealing with unknowable things, when indeed
that seems to be perfectly within the purview of philosophy.

Hume's problem of induction, for instance, is exactly an example of
philosophical practice grappling with these unanswerables.

~~~
whatshisface
So, Hume didn't stop ethical philosophers from trying to derive morals; he
only stopped some of them. Wittgenstein and Borges didn't stop philosophers
from trying to "beat" language games using only language: they only stopped
some philosophers. I'm not saying that there aren't visionary heroes who
realize that some discussions aren't going to go anywhere for fundamental
reasons; instead, I'm highlighting the fact that even when they do, "all of
philosophy" almost never reaches a consensus about quitting the debate. In
math, when it was proved that the Axiom of Choice is independent of the other
axioms of set theory, everybody quit looking for ways to confirm or refute it.
I think it's a weakness of philosophy that similar things can't happen.

~~~
MiroF
Hume's problem of induction is not about deriving moral principles (I believe
you have it confused with the "is-ought" dichotomy). Most philosophers have
largely given up on giving a rational deductive basis for why we should
believe in induction, so if anything that seems to be a perfect example of
what you're describing.

I don't think the nature of quantum reality is anywhere near as settled. For
decades, we thought that it was impossible to test local hidden-variable
theories. Thank god some people were still working on the problem!

------
jfengel
I don't think the problem with superdeterminism is the lack of free will, but
the way it doesn't really give you anything mentally to work with. It
posits some early state from which everything could be deterministically
extrapolated... except that state is both very complicated and completely
hidden. It takes all of the probabilities and shoves them in a black box and
says, "The answers exist, and they're in there. But you can't actually look in
the box for the answers. You have to go do the experiment and wait for the
speed of light to propagate the answer to you."

Like all interpretations, it's mathematically equivalent to any other. It's
just a question of what helps you think about the problem, and I don't think
many people find it very edifying. You can replace the box with a random
number generator, which is at least small enough to fit in your pocket. The
superdeterminism box appears to have been crammed full of untold centillions
of answers... none of which are accessible beforehand.

If there were reason to think that the superdeterminism box were somehow
smaller -- if it all really came down to just one random bit, say, that had
been magnified by chaotic interactions to appear like more -- that would
attract some attention. And I suppose it would be conceptually testable, by
running Laplace's demon in reverse, except that that's not possible either
from inside the universe.

So it doesn't really come as a surprise that superdeterminism falls behind MWI
or Copenhagen or even pilot wave, because each of those hands you something
that you can use to mentally organize the world. Superdeterminism just seems
to hand you a catchphrase: "As it was foretold in the Long Ago -- but which I
just found out about."

~~~
zazaraka
What's wrong with a pseudo-random number generator? You start the universe
with a million random bits and then just iterate your function on them. How
would we detect repetition at the 2^1,000,000 level? Maybe the universe would
repeat itself after a while, but how would we know?
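
A minimal toy sketch of that picture (my own illustration; the hash-based
update rule and tiny seed are stand-ins, not physics): seed a deterministic
update function once, then iterate, and every later state is fixed by the
seed even though there's no shortcut to read it off without running the
updates.

    import hashlib

    def evolve(state: bytes) -> bytes:
        """One deterministic 'tick': the next state is a fixed function of the current one."""
        return hashlib.sha256(state).digest()

    # Seed the toy universe with a fixed block of bits (32 bytes here, not a million).
    state = b"\x00" * 32

    # Every future state is implied by the seed, but there is no shortcut to
    # reading it off other than running the updates one by one.
    for tick in range(5):
        state = evolve(state)
        print(tick, state.hex()[:16])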

Superdeterminism also plays nicely with the simulation hypothesis. You seed
the virtual machine with some randomness and the physical laws and then you
run the simulation.

~~~
jfengel
I don't believe you'd need even a million random bits. It's conceivable that
only a few random bits are actually required, with iteration taking care of
the rest.

There's nothing wrong with that. I just don't think people find it very useful
as an organizing principle, so it doesn't attract a lot of attention.

------
moomin
The thing about superdeterminism is that it's only interesting if you want to
argue philosophy. If you're dealing with hidden variables (or even measurement
errors) the only practical tool in your box for handling them is probability
distributions.

So either way, you've got a probability distribution. And at this point people
just apply Occam's Razor and get on with their lives. You can theorize an
infinite number of systems that work exactly like the real world. The question
is whether they're useful.

~~~
trevyn
Could an untestable underlying theory inspire new models and implications that
are testable?

~~~
dharmab
In the general case, yes. This happened recently with gravitational waves. The
construction of multiple gravitational wave detectors across the planet
allowed us to test a previously untestable hypothesis about the number of
dimensions that gravity can act in.

[https://www.youtube.com/watch?v=3HYw6vPR9qU](https://www.youtube.com/watch?v=3HYw6vPR9qU)

~~~
SideburnsOfDoom
> allowed us to test a previously untestable hypothesis

So it is testable now? Is everyone here using the word "untestable" in the
same way, i.e. "untestable today with the current state of the art, but it
might be testable tomorrow" vs. "untestable ever, as a matter of principle,
even with perfect tech"?

~~~
namanyayg
"Untestable" is a layman's word; everyone here should be saying "falsifiable".

Many Worlds isn't falsifiable and hence untestable; but "untestable" can also
mean not possible to test with current tech. "Falsifiable" is more accurate
here.

~~~
klodolph
"Falsifiable" is a fantastic term but the philosophy of science did not stop
with Popper, and it's not the only term we have available to discuss competing
theories.

~~~
namanyayg
What other terms would you use? "Falsifiable" is more accurate than
"untestable"?

~~~
klodolph
"More accurate" is a good way to phrase it. "More accurate" also describes GR
when you compare it to Newtonian mechanics. Strictly speaking, Newtonian
mechanics is _false_ and has been _falsified_ if you subscribe to the
Popperian view. However, if Newtonian mechanics is false, is it useless? No.

As it turns out, there are a few problems with the epistemology of
falsifiability. The main ones I can think of are:

1. Duhem-Quine problem - there is a large number of auxiliary hypotheses
under test in any experiment, in addition to the primary hypotheses. These are
very numerous, and we need a framework for deciding how to apply the results
of our experiment to the many hypotheses.

2. Statistical claims may be unfalsifiable. Consider a theory that claims
that a coin flip has a 50% probability of being heads... how does one falsify
this? You can't strictly falsify it, but you can show that the evidence is
unlikely given the hypothesis. So we need some framework that connects
statistical and probabilistic evidence to our knowledge of the world.
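
To make the coin example concrete (my own numbers, just to illustrate the
point): no finite run of flips strictly falsifies "the coin is fair", but you
can compute how improbable the observed run is under that hypothesis.

    from math import comb

    def binomial_pmf(k: int, n: int, p: float) -> float:
        """Probability of exactly k heads in n flips of a coin with heads-probability p."""
        return comb(n, k) * p**k * (1 - p)**(n - k)

    n, k = 100, 90          # suppose we observe 90 heads in 100 flips

    # Tail probability of a result at least this extreme under the fair-coin
    # hypothesis. It is tiny, but never exactly zero, so the hypothesis is never
    # strictly falsified -- only rendered very implausible.
    tail = sum(binomial_pmf(j, n, 0.5) for j in range(k, n + 1))
    print(f"P(>= {k} heads | fair coin) = {tail:.3e}")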

Or in summary, the problems with falsifiability are that falsifying a theory
doesn't give us the information that we want, and it's impossible to falsify
many theories. To abuse analogies, falsifiability is kind of like trying to
cross a river with your car, when there's no bridge and the car won't start.

One approach other than Popperian falsifiability is a Bayesian system of
belief and likelihood. This is only one direction that the philosophy of
science is exploring, but it is probably the one most familiar to HN readers.
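
As a rough sketch of what the Bayesian alternative looks like with the same
coin (the competing hypotheses and priors here are made up for illustration):
instead of trying to falsify, you update relative degrees of belief as the
evidence comes in.

    from math import comb

    def likelihood(k: int, n: int, p: float) -> float:
        """Probability of k heads in n flips if the coin's heads-probability is p."""
        return comb(n, k) * p**k * (1 - p)**(n - k)

    n, k = 100, 90                                           # same data: 90 heads in 100 flips
    hypotheses = {"fair (p=0.5)": 0.5, "biased (p=0.9)": 0.9}
    prior = {"fair (p=0.5)": 0.99, "biased (p=0.9)": 0.01}   # start strongly favouring fairness

    # Bayes' rule: posterior is proportional to prior times likelihood, then normalise.
    unnormalised = {h: prior[h] * likelihood(k, n, p) for h, p in hypotheses.items()}
    total = sum(unnormalised.values())
    posterior = {h: v / total for h, v in unnormalised.items()}
    print(posterior)   # the data overwhelm the prior in favour of the biased coin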

~~~
namanyayg
Amazing, I appreciate the depth you went in and will be reading more on these
topics. Thank you.

Any resource you could recommend for Bayesian system of belief? I understand
Bayesian probability/math but I haven't explored it as a philosophy.

~~~
klodolph
I don't have resources for Bayesian systems of belief. I do like _Error and
the Growth of Experimental Knowledge_ by Deborah Mayo, but the book is a bit
intimidating. I admit I haven’t finished reading it, either.

------
gus_massa
In superdeterminism, each time a particle has to collapse, instead of rolling
a die it looks into a secret table of hidden variables that was calculated at
the beginning of the universe. The table was calculated carefully so that the
apparent random choices follow all the laws of quantum mechanics, and the
results are equivalent to what you would expect if any of the other
interpretations were correct.

To calculate this secret table you must simulate all the interactions and
paths in the universe until it ends, because you must know which particles
will be entangled, what results the "random" generators in the experiments
will produce, and so on.
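
A toy sketch of the "secret table" idea (my own illustration, not anything
from the article): note that building the table requires already knowing, at
t = 0, every measurement setting that will ever be chosen.

    import math, random

    random.seed(0)

    # The "script" of the universe: every future pair of measurement settings is
    # known in advance. (This is exactly what makes the table possible -- and suspicious.)
    future_settings = [(random.uniform(0, math.pi), random.uniform(0, math.pi))
                       for _ in range(10_000)]

    # Precompute the secret table at t = 0: for each future pair measurement, roll
    # outcomes whose statistics match the quantum prediction for entangled photons,
    # P(same result) = cos^2(a - b).
    secret_table = []
    for a, b in future_settings:
        same = random.random() < math.cos(a - b) ** 2
        first = random.choice([+1, -1])
        secret_table.append((first, first if same else -first))

    # Later, each detector just looks up its prearranged answer, and the observed
    # correlations come out "quantum" by construction.
    close = [o1 * o2 for (a, b), (o1, o2) in zip(future_settings, secret_table)
             if abs(a - b) < 0.1]
    print("correlation at nearly equal settings:", sum(close) / len(close))   # close to +1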

So the universe is only a movie that follows the random choices made at the
beginning of the universe. But the choices are not arbitrary: they have
exactly the values needed so that, when the events really happen, they follow
the laws of physics. For example, the random choices at the beginning of the
universe make it look as if you can't transmit information faster than light.

Physics studies the laws of the real universe, but we can redefine physics as
the study of the laws that govern the random number generator. Both real-
physics and initial-RNG-physics follow special relativity. Both agree about
QM. Both agree about the Bell inequality.

So with superdeterminism we solve the problem of QM in the real world, because
everything we see is already determined. Now the problem is how the RNG at the
beginning of the universe works to simulate QM and all the other effects.
Let's call the study of that RNG physics. Now the problem is as hard as it was
before superdeterminism.

~~~
chr1
What you describe is not superdeterminism, but a replay of a non-local theory.
The important part happens in the first run, when you calculate the table.

What superdeterminism says is that there exists a local and deterministic
evaluation rule that computes consecutive states of the universe, but, simply
because of the way the rule works, experimenters far away always end up
choosing the experiments that yield the correct results.

Superdeterminism is unpopular because the existence of such an evaluation rule
seems very unlikely.

~~~
gus_massa
From the article:

> _Where do these correlations ultimately come from? Well, they come from
> where everything ultimately comes from, that is from the initial state of
> the universe. And that’s where most people walk off: They think that you
> need to precisely choose the initial conditions of the universe to arrange
> quanta in Anton Zeilinger’s brain just so that he’ll end up turning a knob
> left rather than right. Besides sounding entirely nuts, it’s also a useless
> idea, because how the hell would you ever calculate anything with it? And if
> it’s unfalsifiable but useless, then indeed it isn’t science. So, frowning
> at superdeterminism is not entirely unjustified._

~~~
chr1
Yes, despite the title, the article has better arguments against
superdeterminism than for it.

------
IX-103
I'm not fond of superdeterminism since it's not that useful for making
predictions. Any purely deterministic model has implications for free will, so
that doesn't seem to be a legitimate criticism.

Actually I would like to know more about provable violations of Bell
inequalities, as I am somewhat attached to local determinism and haven't seen
an experiment that I would consider convincing. I mean the theories behind the
experiments are sound, but I'm not sure they're actually measuring what they
think they are measuring due to limitations in the experimental setup -- in
order to prove a violation of locality your system cannot be in a
cyclostationary equilibrium.

In such an equilibrium the system state effectively becomes a standing wave,
so you risk measuring an effect that was actually a result of a previous cycle
and mistakenly interpreting it as a result of the current cycle -- implying a
violation of locality because the "cause" was outside the light cone of the
effect. Note that this is analogous to confusing the group and phase
velocities of a radio wave ([https://www.quora.com/What-is-the-difference-between-phase-v...](https://www.quora.com/What-is-the-difference-between-phase-velocity-and-group-velocity)).

~~~
pdonis
_> In such an equilibrium the system state effectively becomes a standing wave
so you risk measuring an effect that was actually a result of a previous cycle
and mistakenly interpret it as being a result of the current cycle_

I don't know where you're getting this from, but it doesn't describe quantum
systems on which Bell inequality violations have been experimentally confirmed
(such as photon pairs from parametric down conversion).

The only "loophole" that has not been completely closed at this point is that
we don't have 100% efficient detectors, but we have detectors that are well
over 90% efficient so the claim that somehow all the stuff that will "fix" the
Bell inequality violations is hiding in the small percentage of photons not
being detected isn't very compelling.

~~~
IX-103
In short, the problem I'm addressing is that the interpretation of the
experiment assumes that the system is memoryless, so that the only thing being
measured is the interaction with the particles under test.

In the experiments generating the photon pairs from parametric downconversion,
for example, does the entire system start up, send 1 photon which gets split
into the entangled photon pairs which then go to the detectors -- with no
other photons generated?

If there is a warm-up period for the equipment or other photons are emitted or
absorbed then there is the potential for memory effects that could interfere
with the measurements.

For instance if we treat light as a wave then the cosine correlation with
angle we see in the basic "two entangled photons with polarizing lenses
experiment" is exactly what we would expect to see. The difficulty is simply
resolving this with the particle nature of photons. If the experimental system
has memory then it could easily have the phase of the effective wave or some
other function of the history of photons encoded in the state of the system.

There are probably some ways to compensate for these memory effects and
demonstrate their (non)existence, but I am not a physicist.

~~~
pdonis
_> the problem I'm addressing is that the interpretation of the experiment
assumes that the system is memoryless_

That's easy to verify by testing the various components--parametric down
conversion, prisms, beam splitters, etc.--and showing that if you shine
repeated photons on them from the same source, prepared in the same state,
they all come out in the same state, or more generally give the same results.
All of the optical components involved in these experiments have been tested
in this way: if they had failed such tests, they wouldn't be used in
experiments because we wouldn't be able to be confident in their behavior.

 _> In the experiments generating the photon pairs from parametric
downconversion, for example, does the entire system start up, send 1 photon
which gets split into the entangled photon pairs which then go to the
detectors -- with no other photons generated?_

For current photon sources, it's impossible to control exactly when they emit
a photon. The sources are so inefficient (in terms of converting input energy
into photons that are useful for the experiment) that they end up emitting
photons slowly enough that only one at a time is inside the apparatus.
However, a typical experiment does not use just one photon. It has to take
data from many photons because the results are statistical, so you need enough
runs to do statistics.

 _> If the experimental system has memory then it could easily have the phase
of the effective wave or some other function of the history of photons encoded
in the state of the system._

We know how to design systems that do this: they're called "detectors" and
"computers that store data". But such systems have to be carefully designed to
do those jobs. Optical components like prisms and beam splitters are not
designed to do that: they're designed to do exactly the opposite, to act the
same way on every photon that comes into them in the same input state. As I
noted above, those components have been extensively tested to make sure they
do in fact do that; if they didn't, they wouldn't be used in experiments.

~~~
IX-103
_> if you shine repeated photons on them from the same source, prepared in the
same state, they all come out in the same state, or more generally give the
same results._

Those kinds of measurements would violate the uncertainty principle. You can't
know the complete state going into the system or the complete state going out.
You can run some tests and justify other assumptions based on accepted
theories. We generally have a good idea what happens when lots of photons pass
through these components. We have some idea of what happens to single photons
(in a statistical sense), but the fundamental question we are investigating is
whether there even _is_ a local deterministic description of what happens to
single photons passing through the component.

 _> The sources are so inefficient (in terms of converting input energy into
photons that are useful for the experiment)... a typical experiment does not
use just one photon. It has to take data from many photons_

I was aware of that, and it's part of my criticism. If the emitter were to
only emit usable photons when it's "in the right state", what stops the "right
state" for emitting photons from becoming correlated with the polarizers?

There are a bunch of "unusable" photons bouncing around interacting with
everything and transporting global state. Then there are the "usable" photons
that get reflected, absorbed and re-emitted by components of the test bed. Any
time they interact with anything they modify the state of whatever they touch.
What happens to the photons that reflect off of the polarizers and travel back
into the emitter?

If a photon bounces off of a mirror it had to have 1. transferred momentum to
whatever it hit, and 2. induced a sufficiently strong opposing electromagnetic
field to cause the photon to be reflected or re-emitted. While these are tiny
effects, they are roughly of the same order as the effects that caused the
photon to be reflected in the first place, and they all require a change in
the state of the mirror so that momentum is conserved and Maxwell's laws are
not violated (my guess would be that this could cause shifts in electron
orbitals or proton spin orientation, but that's a bit beyond me).

~~~
pdonis
_> Those kinds of measurements would violate the uncertainty principle._

No, they don't. The uncertainty principle places limits on measurements of
non-commuting observables on the same system. We are not talking about that
here. See below.

 _> You can't know the complete state going in to the system_

Sure you can: just prepare the system in a known state. For example, pass your
photon through a vertically oriented polarizing filter: if it comes through,
it must be vertically polarized, so you have complete knowledge of its
polarization state. (You might have to try multiple photons to get one that
passes through: that's why photon sources in these experiments are often
inefficient.)

 _> or the complete state going out_

Sure you can: you measure it. For example, you pass the vertically polarized
photon that just came through your vertical polarization filter through a beam
splitter, and you have detectors at each output of the beam splitter. Exactly
one detector will fire for each photon.
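
If it helps, here is a purely classical toy of that prepare-then-verify logic
(my own sketch, not a quantum model; it just uses the Malus-law cos^2 pass
probability): whatever arrives at the vertical filter, everything that makes
it through is in a known state and passes a second vertical test every time.

    import math, random

    random.seed(1)

    def passes(photon_angle: float, filter_angle: float) -> bool:
        """Malus-law toy: pass with probability cos^2 of the angle between photon and filter."""
        return random.random() < math.cos(photon_angle - filter_angle) ** 2

    # Source: photons with arbitrary, unknown polarization angles.
    raw = [random.uniform(0, math.pi) for _ in range(100_000)]

    # Preparation: keep only photons that make it through a vertical (0 rad) filter.
    # Whatever came in, what comes out is treated as vertically polarized -- a known state.
    prepared = [0.0 for angle in raw if passes(angle, 0.0)]

    # Verification: test the prepared photons against a vertical analyzer again.
    # Every one of them passes, which is the sense in which the state is "known".
    hits = sum(passes(angle, 0.0) for angle in prepared)
    print(len(prepared), "prepared,", hits, "measured vertical")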

 _> If the emitter were to only emit useable photons when it's "in the right
state", what stops the "right state" for emitting photons to become correlated
with the polarizers?_

 _> There are a bunch of "unusable" photons bouncing around interacting with
everything and transporting global state._

It looks like you don't have a good understanding of how the "emitter" works.
What you are calling the "emitter" is really a filter, like the vertical
polarizer described above: it throws away the photons coming from a source
(like a laser) that don't meet a particular requirement (like vertical
polarization). The thrown away photons are either absorbed (as in the case of
the polarizer) or they just pass through the apparatus altogether and fly away
(as in the case of parametric down conversion, for example: only a small
percentage of the laser photons will be down converted, the rest just fly away
and are gone).

In no case are the photons not used kept "bouncing around". They're gone. And
the photons in the "right" state are just the ones that make it through the
filter and are therefore in a known state when they come out, because that's
how the filter works: the filter is uncorrelated with what's inside the
experiment because, again, that's how the filter works (and it is tested to
make sure it works that way).

 _> What happens to the photons that reflect off of the polarizers and travel
back into the emitter?_

There aren't any. See above.

 _> If a photon bounces off of a mirror it had to have 1. transferred momentum
to whatever it hit, and 2. induced a sufficiently strong opposing
electromagnetic field to cause the photon to be reflected or re-emitted._

1. Yes, but in these experiments the mirror is fixed to the Earth, so the
momentum is transferred to the Earth, which means it's effectively gone. The
entire Earth is not going to have a "memory" that can become correlated with
the rest of the experiment.

2. No. You are thinking of it classically, but we are not talking about a
classical process.

------
archibaldJ
There is something very Taoistic about superdeterminism.

And there is something very Taoistic about homotopy type theory too.

Also, I feel that both superdeterminism and homotopy type theory have traces
of the holographic principle in them in a somewhat conceptual or abstract way.

Perhaps there exists a nice correspondence between superdeterminism and
homotopy type theory that can be used to extend (in a purely functional and
categorical way) the simulation hypothesis into a full-fledged theory (and
perhaps with its own nice little axiomatic system) to make sense of reality.

------
scythe
The problem is that superdeterminism contains a principle of explosion:

[http://en.wikipedia.org/wiki/Principle_of_explosion](http://en.wikipedia.org/wiki/Principle_of_explosion)

If superdeterminism explains quantum mechanics, why not cosmic inflation? Why
not matter asymmetry? Why not abiogenesis? Why not Brexit? Superdeterminism,
by construction, can explain _everything_ — and all that's left to do is pray
to God.

~~~
archibaldJ
I will have to disagree with that.

Firstly I don't think anyone has actually formalised superdeterminism in a way
that the principle of explosion can be logically introduced to formally
undermine superdeterminism. What you are doing here is akin to stretching the
conceptual relevance of Gödel's incompleteness theorems and trying to use them
to prove or disprove the existence of God.

Basically I don't see how it makes sense to say that superdeterminism contains
a principle of explosion. Perhaps my interpretation of superdeterminism is
very different from yours. Or maybe I simply don't see the picture as you do.
If that is the case please enlighten me.

Secondly I think you are missing the point of superdeterminism here.

There is something very computational (and perhaps Taoistic) about
superdeterminism. Apparently under this framework the whole notion of
"explaining things" is nullified and becomes meaningless. It occurs to me that
our everyday notion of "explaining things" exists at a lower abstraction level
and thus loses relevance in the face of superdeterminism. I believe if you
really want to undermine superdeterminism as a theory (or as a philosophy),
the more relevant question here to ask is: is there anything useful/meaningful
about reality (or the universe) that can be inferred assuming
superdeterminism? And then of course if you are a scientist you would then
ask: are they experimentally verifiable?

~~~
teilo
> I believe if you really want to undermine superdeterminism as a theory (or
> as a philosophy), the more relevant question here to ask is: is there
> anything useful/meaningful about reality (or the universe) that can be
> inferred assuming superdeterminism? And then of course if you are a
> scientist you would then ask: are they experimentally verifiable?

To even ask the question you must deny your own premise. If indeed
superdeterminism is true, then any experimental verification is nullified by
definition: the results of any and all experimentation are themselves
superdetermined regardless of any scientific framework.

~~~
archibaldJ
The last part was supposed to be taken with a grain of irony in the spirit of
Anton Zeilinger:

"We always implicitly assume the freedom of the experimentalist... This
fundamental assumption is essential to doing science. If this were not true,
then, I suggest, it would make no sense at all to ask nature questions in an
experiment, since then nature could determine what our questions are, and that
could guide our questions such that we arrive at a false picture of nature."

I guess that is the problem most people have with superdeterminism.
Intellectuals in this day and age are too scientifically trained to have any
romanticisation of reality under frameworks like synchronicity (even though it
was popularised by Pauli, before it lost mainstream appeal after actual
funding went into unfruitful statistical research in the 80s) or (in Newton's
time) alchemy and the love for God. This is why superdeterminism is so
unpopular.

I just really like superdeterminism because I think it is cute and I believe
in the Tao.

~~~
selestify
What does superdeterminism have to do with the Tao?

~~~
archibaldJ
There was something formless and perfect

before the universe was born.

It is serene. Empty.

Solitary. Unchanging.

Infinite. Eternally present.

It is the mother of the universe.

For lack of a better name,

I call it the Tao. It flows through all things,

inside and outside, and returns

to the origin of all things. The Tao is great.

The universe is great.

Earth is great.

Man is great.

These are the four great powers. Man follows the earth.

Earth follows the universe.

The universe follows the Tao.

The Tao follows only itself.

~~~
selestify
That's beautiful.

------
zwkrt
Can someone explain more clearly how being in a deterministic universe
resolves the “problem” of Bell inequalities? It seems like even if the
universe were deterministic it would not cause the classic polarizing-filters
Bell inequality to seem “reasonable”. In fact it makes it seem less reasonable
to me!

~~~
chr1
Determinism doesn't solve the problem; it makes the non-locality more visible,
because some people think that the state of two particles far away influencing
one another is somehow worse than the wave function of the whole universe
changing its value at once.

Superdeterminism "solves" the problem by claiming that there is no problem to
begin with, and that the results look non-local only because the experimenters
always pick the experiments that look non-local.

How a local deterministic theory can create behavior as complex as thinking
people, and at the same time constrain it in such a way that the time taken to
play a Mario level is correlated with a photon experiment a year later, is
left as an exercise for the reader.
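
For anyone who wants the "problem" stated concretely, here is a small sketch
(my own illustration) of the CHSH form of the Bell inequality: any local
deterministic assignment of outcomes is bounded by |S| <= 2, while the quantum
prediction for entangled photons reaches 2*sqrt(2). Superdeterminism escapes
only by letting the hidden state be correlated with the choice of settings.

    import math
    from itertools import product

    # CHSH combination: S = E(a,b) - E(a,b') + E(a',b) + E(a',b')

    # 1. Local deterministic models: each hidden state fixes the four outcomes
    #    A(a), A(a'), B(b), B(b') to +/-1 in advance. Enumerating all of them,
    #    |S| never exceeds 2.
    local_bound = max(abs(A1 * B1 - A1 * B2 + A2 * B1 + A2 * B2)
                      for A1, A2, B1, B2 in product([+1, -1], repeat=4))
    print("local deterministic bound:", local_bound)   # 2

    # 2. Quantum prediction for polarization-entangled photons: E(x, y) = cos(2(x - y)).
    #    With the standard angle choices the same combination reaches 2*sqrt(2) ~ 2.828.
    E = lambda x, y: math.cos(2 * (x - y))
    a, a2, b, b2 = 0.0, math.pi / 4, math.pi / 8, 3 * math.pi / 8
    S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
    print("quantum prediction:", round(S, 3))          # 2.828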

------
archibaldJ
There was something formless and perfect before the universe was born.

It is serene. Empty.

Solitary. Unchanging.

Infinite. Eternally present.

It is the mother of the universe.

For lack of a better name,

I call it the Tao. It flows through all things,

inside and outside, and returns

to the origin of all things. The Tao is great.

The universe is great.

Earth is great.

Man is great.

These are the four great powers. Man follows the earth.

Earth follows the universe.

The universe follows the Tao.

The Tao follows only itself.

------
eridius
If Superdeterminism means that the initial state of the universe is such that
the universe _appears_ to follow quantum mechanics... why? Why would every
single decision have its resolution set in a manner that appears to follow QM?

------
teilo
Superdeterminism is a self-defeating philosophy. In essence, it cedes
everything to random chance, and makes all scientific inquiry meaningless.
There is no longer any "why" or "how." There is merely, "That's just the way
it is." Any apparent order or structure which might be observed is exactly
that: merely apparent. Therefore any attempt to understand the universe is
vain.

It is little better than the presumption that planets move because a prime
mover moves them. It is, in essence, to give as the final answer: "Planets
move as they do because they cannot do anything else."

~~~
coldtea
> _In essence, it cedes everything to random chance, and makes all scientific
> inquiry meaningless. There is no longer any "why" or "how." There is merely,
> "That's just the way it is."_

Which doesn't make it wrong...

~~~
SubiculumCode
And indeed, it is based on the same math as the more mainstream interpretations.

