
'Remarkable' MIP*=RE proof describes how to solve seemingly impossible problem - rbanffy
https://gizmodo.com/remarkable-mathematical-proof-describes-how-to-solve-se-1841003769
======
isaacg
Here's a (relatively) concise explanation of the result for people without a
background in the area:

Suppose a person comes to me, claiming some fact is true. How can I check
whether they're telling the truth? I could ask them a series of questions,
carefully chosen to expose any lie. Such a procedure is called an "Interactive
Proof" (IP). It turns out there is an upper limit to how 'complicated' the
statement being proved can be, before I, a 'simple' verifier, can't possibly
tell the difference between a true claim and a false claim. The limit is
called PSPACE. Note that in order for the prover to prove such 'complicated'
claims, the prover must be very 'complicated' itself - at least as complicated
as the statements being proved.

How can we take things further? Suppose that instead two people come to me,
claiming some fact is true. Then, those two people are separated - they are
not allowed to communicate. Then, I ask them both questions, again trying to
ferret out lies. This procedure is called a "Multi-prover Interactive Proof"
(MIP). Again, there is a limit to how 'complicated' the statement being proved
can be before I, a 'simple' verifier, can't possibly tell the difference
between a true claim and a false claim. This time, the limit is NEXP - thought
to be significantly more 'complicated' than PSPACE. Again, note that the
prover must be capable of very 'complicated' computations to reach this point,
but the only limit based on my ability to verify the proofs is NEXP.

Now for the new result:

How can we take things further? Suppose that again two people come to me,
claiming a fact is true. This time, before the two people are separated, they
divvy up some entangled quantum states. Then they're separated, and not allowed
to communicate. Then, I ask them both questions, again trying to ferret out
lies. This procedure is called a "Quantum Multi-prover Interactive Proof"
(MIP*). This time, things are different: There is (basically) no limit on how
'complicated' the statement being proved can be before I, a 'simple' verifier,
would not be able to distinguish true and false claims. (Basically) The only
limit on the ability of the provers to prove things to me is their ability to
discover the claims, not my ability to verify those proofs. That's roughly
what 'the limit is RE' means - in effect, there is no limit.

~~~
krick
> series of questions, carefully chosen to expose any lie

I don't understand how this is possible. Is it assumed that every lie is
necessarily a Bernoulli trial with a known probability? Because if not, what
prevents a single oracle from telling a consistently false story, taking into
account all the lies it told you before?

~~~
amluto
The solution is very careful construction of the questions. A working protocol
puts a low upper bound on the prover’s probability of successful cheating.
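To put rough numbers on it (a toy sketch, not from the paper): if each round of a protocol independently catches a cheating prover with some fixed probability, repeating the protocol drives the cheater's survival probability down exponentially.

```python
# Soundness amplification by repetition (illustrative numbers, not
# from the paper): if a single round catches a cheating prover with
# probability p, then k independent rounds let the cheater survive
# with probability (1 - p)^k.

def cheat_survival_probability(per_round_catch=0.5, rounds=100):
    return (1 - per_round_catch) ** rounds

# With 100 rounds at a 1/2 catch rate, the survival probability is
# 2^-100, i.e. roughly 8e-31.
```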

~~~
krick
"Very careful" doesn't really explain much. If a lie is not necessarily
probabilistic (so we cannot figure it out based on the frequency of some
contradictory answers), and if we need an oracle at all (that is, there is
some piece of knowledge we cannot possibly verify ourselves), why can't the
oracle just always lie about that piece of knowledge and take this lie into
account in all the other answers, to keep the story consistent?

I'm pretty sure there must be some additional constraints, otherwise I don't
see how you could possibly detect a lie that you cannot verify yourself with a
single oracle.

~~~
isaacg
Here's an example of such a protocol, for the problem of graph
(non)isomorphism.

Verifier: Here are two graphs, A and B. Are they isomorphic? (Same except for
relabeling vertices)

Prover: No, they're different.

Verifier: Here's one of the graphs, with the vertices relabeled uniformly
randomly. Which one was it?

Prover: It was A.

(Repeat 100 times)

Verifier: Ok, you got them all right. I believe you.

Here, we've forced the prover into a situation where if it's lying, and graphs
A and B are in fact isomorphic, it can't know which graph we started with,
before permuting. As a result, it can only guess which graph we started with
and hope to get lucky, and we'll catch it with very high probability.

In contrast, if the prover is telling the truth, it can figure out which
graph we started with, and always answer correctly.

Here it is impossible for the prover to make its lie, "the graphs are not
isomorphic", consistent with its later inability to tell them apart after
random permutation.

This kind of 'catching in a lie' is how interactive proofs typically work.
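Here's a toy simulation of that protocol (a sketch with made-up helper names; graphs are edge sets over vertices 0..n-1, and a brute-force canonical form stands in for the all-powerful prover):

```python
import random
from itertools import permutations

def relabel(edges, perm):
    """Apply a vertex relabeling (permutation) to an edge set."""
    return frozenset(frozenset(perm[v] for v in e) for e in edges)

def canonical(edges, n):
    """Brute-force canonical form: the lexicographically smallest
    relabeling. Exponential, which is fine for an all-powerful prover."""
    return min(
        tuple(sorted(tuple(sorted(p[v] for v in e)) for e in edges))
        for p in permutations(range(n))
    )

def honest_prover(a, b, n, challenge):
    """If A and B really are non-isomorphic, the prover can always
    tell which graph the relabeled challenge came from."""
    return 'A' if canonical(challenge, n) == canonical(a, n) else 'B'

def verifier(a, b, n, prover, rounds=100):
    """Pick A or B at random, relabel it randomly, and ask which it
    was. A prover lying about non-isomorphism gets caught quickly."""
    for _ in range(rounds):
        which = random.choice(['A', 'B'])
        perm = list(range(n))
        random.shuffle(perm)
        answer = prover(a, b, n, relabel(a if which == 'A' else b, perm))
        if answer != which:
            return False  # caught in a lie
    return True  # convinced: the graphs are (very likely) non-isomorphic
```

With a path and a star on 4 vertices (genuinely non-isomorphic), this prover passes every round; with two relabelings of the same graph, the best any prover can do is guess, so it survives k rounds only with probability 2^-k.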

------
gautamcgoel
One of the authors of this paper, Thomas Vidick, taught the Advanced
Algorithms class I took at Caltech a few years ago. The dude is a genius, but
the class kicked my ass. He would basically assign recently solved problems
from the theory community as homework.

~~~
throwawayjava
Yeah, assigning open/recently solved problems seems to be relatively common in
graduate theory/algo courses, at least relative to other subfields of CS and
math. Not sure why. It always struck me as not the most productive use of
problem set time. But then, what do I know about teaching theory :)

~~~
andrewjw
These classes often encourage you to collaborate - with others in the class,
and also with outside resources. That is: they would encourage you to go read
the paper, so long as you cited it as a source and provided a proof in your
own words you had reconstructed from memory. The idea is to encourage
engagement in the community and teach people to build off of existing results,
a skill often overlooked in undergrad where that would have been considered
cheating.

------
9nGQluzmnq3M
I'm not going to claim I understand the complexity class discussion here, but
what I find most surprising is the claim that this would make the halting
problem solvable. There's a fairly straightforward proof by contradiction that
it's not:

[https://en.wikipedia.org/wiki/Halting_problem#Proof_concept](https://en.wikipedia.org/wiki/Halting_problem#Proof_concept)

So how would the new finding override this? Or is it just <wiggles fingers>
quantum magic that's running the proof in parallel in infinite entangled
multiverses or something?
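For reference, the diagonal argument in that link can be sketched in a few lines (illustrative names only): given any claimed halting decider, you build a program that does the opposite of whatever the decider predicts about it.

```python
# Sketch of the diagonal argument from the linked proof (names are
# illustrative). Given any candidate decider `halts`, build a program
# `d` that does the opposite of whatever `halts` predicts about `d`.

def make_diagonal(halts):
    def d():
        if halts(d):
            # The decider claims d halts, so d loops forever instead.
            while True:
                pass
        # The decider claims d never halts, so d halts immediately.
    return d

# Any concrete decider is wrong about its own diagonal program. For
# example, one that answers "never halts" for everything:
claims_never_halts = lambda f: False
d = make_diagonal(claims_never_halts)
d()  # returns immediately, i.e. halts, refuting the decider
```

The MIP*=RE result doesn't override this: no machine decides halting. The result is about a weak verifier checking claims made by untrusted, computationally unbounded provers.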

~~~
scythe
There are plenty of _oracle_ solutions to the halting problem. The simplest
one uses the bits of Chaitin’s constant, an incomputable number, to solve
the halting problem in polynomial time. Of course, these solutions cannot be
physically implemented (e.g. calculating N bits of Chaitin’s constant
requires solving the halting problem for all N-bit machines, so the
algorithm is “circular”).

~~~
9nGQluzmnq3M
Yes, but my layman's understanding is that the oracle is a theoretical
construct that knows in advance which programs halt and which don't -- but as
you note, the real-world issue is that such oracles don't exist.

My question remains, how does this tie back to the proof? Is there a path to
building such an oracle now?

~~~
dwohnitmok
No, there is still no path to constructing such an oracle. The paper basically
presents a result built off of a less trustworthy oracle (or set of oracles)
that can potentially lie, but that still has unlimited computational power.

------
kwantam
Here's one intuitive way to think about what this means:

Imagine that you stumble on two infinitely powerful computers that share a
bunch of entangled quantum state. Further, let's say you know how to write
software for them but you need a way to check that the answers are correct.
And you want to know with near certainty that your programs ran correctly,
even if one of the infinitely powerful computers is evil and will use its
entire intellect to convince you of something that's untrue.

Aside: this means that cryptography doesn't work---you can't use the discrete
log assumption to solve this problem, you can't use collision resistant
hashing, etc.

Ok, so this paper shows that, if you can figure out how to program those
computers to solve the halting problem, you can then create an error detection
protocol (see below) that you can check on a laptop computer and that almost
perfectly (like, except with exponentially small probability) detects errors
in your halting detector's execution.

You can use your laptop to ensure that your infinitely powerful, possibly evil
lunatic twin Hal2^9000s aren't lying to you. Or, as BFLS91 says, "controlling
a herd of supercomputers from [your iPad]" ;)

(Your error detecting protocol checker must be verified, but it's not too
complicated an algorithm and we basically know how to write certified computer
software for our laptops.)

(Oh, and also note that you need a perfect random number generator, and your
laptop has to communicate with the supercomputers while they're executing.)

[edit for slightly more precision]

------
SeanBoocock
Some additional context from the always illuminating Scott Aaronson:
[https://www.scottaaronson.com/blog/?p=4512](https://www.scottaaronson.com/blog/?p=4512)

~~~
RayVR
This should really be the posted link, or perhaps the one Aaronson links to:
[https://mycqstate.wordpress.com/2020/01/14/a-masters-project/](https://mycqstate.wordpress.com/2020/01/14/a-masters-project/).
The Gizmodo article is not very good.

------
dodobirdlord
Perhaps a link directly to the arxiv page would be more appropriate?

[https://arxiv.org/abs/2001.04383](https://arxiv.org/abs/2001.04383)

~~~
andyjohnson0
I had some chance of vaguely understanding the Gizmodo article. Having
browsed the arxiv paper, I wouldn't have had a clue what it was about or why
it was important.

------
lopsidedBrain
The paper:
[https://arxiv.org/abs/2001.04383](https://arxiv.org/abs/2001.04383)

------
zeroimpl
An oracle that doesn't always tell the truth? That seems like an odd thing for
an algorithm to have to deal with.

~~~
bawolff
On the contrary, this happens all the time in the real world. You ask the user
for a password. Maybe the user gives the right password maybe they don't.

------
jgodbout
I just came to say that the first paragraph of that article is so cringe-
worthy I had to post... Terrible, argh...

But amazing result.

~~~
bawolff
In fairness, it's pretty hard to explain what MIP*=RE means to a layman (or
even to tech people, if the comments on HN are any indication). At least the
intro kind of evokes the feeling of what the problem is about, even if it's
pretty corny.

Just imagine, you're about to get a proof that some program halts and just
before you verify it you are eaten by a grue.

------
o09rdk
I'm surprised this isn't getting more attention. If this holds it would imply
a lot for a wide variety of fields.

~~~
perl4ever
Not understanding it myself, what I would _like_ for it to imply is that
quantum computers are more capable than previously believed. I don't suppose
it does mean that?

------
jakeogh
An excellent comment on the halting problem:
[https://news.ycombinator.com/item?id=21561031](https://news.ycombinator.com/item?id=21561031)

The linked Scott Aaronson blog post and associated video are fascinating.

------
ginnungagap
It would be nice to have a writeup, with some details, on how this disproves
the Connes embedding conjecture, understandable to operator algebraists with
little (quantum) computing background.

------
andyjohnson0
Can anyone here provide more detail on the implications (particularly for
physics) of the Connes embedding conjecture being shown to be false?

------
selimthegrim
I wonder what the physics implications they mentioned are?

~~~
bawolff
It's beyond my understanding, but I think this goes into it:
[https://www.ams.org/journals/notices/201910/rnoti-p1618.pdf](https://www.ams.org/journals/notices/201910/rnoti-p1618.pdf)

From what I read, it has something to do with refuting the Connes embedding
conjecture, but I don't actually know what that is.

------
vectorEQ
quantum entangled oracles which don't always tell the truth. science :'D

