'Remarkable' MIP*=RE proof describes how to solve seemingly impossible problem (gizmodo.com)
92 points by rbanffy on Jan 19, 2020 | 44 comments



Here's a (relatively) concise explanation of the result for people without a background in the area:

Suppose a person comes to me, claiming some fact is true. How can I check whether they're telling the truth? I could ask them a series of questions, carefully chosen to expose any lie. Such a procedure is called an "Interactive Proof" (IP). It turns out there is an upper limit on how 'complicated' the statement being proved can be before I, a 'simple' verifier, can no longer tell the difference between a true claim and a false one. That limit is called PSPACE. Note that in order to prove such 'complicated' claims, the prover must be very 'complicated' itself - at least as complicated as the statements being proved.

How can we take things further? Suppose that instead two people come to me, claiming some fact is true. Those two people are then separated - they are not allowed to communicate. Then I ask them both questions, again trying to ferret out lies. This procedure is called a "Multi-prover Interactive Proof" (MIP). Again, there is a limit on how 'complicated' the statement being proved can be before I, a 'simple' verifier, can no longer tell the difference between a true claim and a false one. This time, the limit is NEXP - thought to be significantly more 'complicated' than PSPACE. Again, note that the provers must be capable of very 'complicated' computations to reach this point, but the only limit imposed by my ability to verify the proofs is NEXP.

Now for the new result:

How can we take things further still? Suppose that again two people come to me, claiming a fact is true. This time, before the two people are separated, they share some entangled quantum states. Then they're separated and not allowed to communicate. Then I ask them both questions, again trying to ferret out lies. This procedure is called a "Quantum Multi-prover Interactive Proof" (MIP*). This time, things are different: there is (basically) no limit on how 'complicated' the statement being proved can be before I, a 'simple' verifier, can no longer distinguish true claims from false ones. (Basically) the only limit on the provers' ability to prove things to me is their ability to discover the proofs, not my ability to verify them. That's roughly what it means to say the limit is RE: effectively, there is no limit.
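
To summarize the hierarchy (the first two equalities are older classic results; the third is the new paper):

  IP   = PSPACE   (Shamir, 1992)
  MIP  = NEXP     (Babai-Fortnow-Lund, 1991)
  MIP* = RE       (Ji-Natarajan-Vidick-Wright-Yuen, 2020)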


> series of questions, carefully chosen to expose any lie

I don't understand how this is possible. Is it assumed that every lie is necessarily a Bernoulli trial with a known probability? Because if not, what prevents a single oracle from telling a consistently false story, taking into account all the lies it has told you before?


The solution is very careful construction of the questions. A working protocol puts a low upper bound on the prover’s probability of successful cheating.
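
To put a number on it: assuming each round of a well-designed protocol (like the graph-isomorphism protocol in a sibling comment) independently catches a cheating prover with probability at least 1/2, then

  Pr[cheater survives all k rounds] <= (1/2)^k

so k = 100 rounds already pushes the cheating probability below 10^-30.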


"Very careful" doesn't really explain much. If lie is not necessarily probabilistic (so we cannot figure it our based on frequency of some contradictory answers), and if we need an oracle at all (that is, there is some piece of knowledge we cannot possibly verify ourselves), why cannot oracle just always lie about that piece of knowledge and take this lie into consideration in all the other answers, to make story consistent?

I'm pretty sure there must be some additional constraints, otherwise I don't see how you could possibly detect a lie that you cannot verify yourself with a single oracle.


Here's an example of such a protocol, for the problem of graph (non)isomorphism.

Verifier: Here are two graphs, A and B. Are they isomorphic? (Same except for relabeling vertices)

Prover: No, they're different.

Verifier: Here's one of the graphs, with the vertices relabeled uniformly randomly. Which one was it?

Prover: It was A.

(Repeat 100 times)

Verifier: Ok, you got them all right. I believe you.

Here, we've forced the prover into a situation where, if it's lying and graphs A and B are in fact isomorphic, it can't know which graph we started with before permuting. As a result, it can only guess which graph we started with and hope to get lucky, and we'll catch it with very high probability.

In contrast, if the prover was telling the truth, it could figure out which graph we started with, and always answer correctly.

Here it is impossible for the prover to make its lie, "the graphs are not isomorphic," consistent with its later inability to tell them apart after a random permutation.

This kind of 'catching in a lie' is how interactive proofs typically work.
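
For the curious, here's a minimal sketch of the verifier's side of that protocol in Python. Everything here is just for illustration: graphs are represented as sets of frozenset edges, and the prover callback stands in for the untrusted prover's strategy.

  import random

  def relabel(graph, perm):
      # Apply a vertex relabeling to a graph given as a set of frozenset edges.
      return {frozenset((perm[u], perm[v])) for u, v in map(tuple, graph)}

  def verify_non_isomorphic(graph_a, graph_b, n_vertices, prover, rounds=100):
      # Accept the claim "A and B are not isomorphic" only if the untrusted
      # prover names the hidden source graph correctly in every round.
      for _ in range(rounds):
          secret = random.choice("AB")
          chosen = graph_a if secret == "A" else graph_b
          perm = list(range(n_vertices))
          random.shuffle(perm)  # uniformly random relabeling
          if prover(relabel(chosen, perm)) != secret:
              return False  # caught in a lie
      # If A and B really are isomorphic, the relabeled graph is equally
      # consistent with either secret, so any prover (however powerful) passes
      # each round with probability 1/2, and all rounds with probability 2^-100.
      return True

An honest, computationally unbounded prover can answer by brute force: try every relabeling of A and answer "A" if one reproduces the received graph, otherwise "B".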


The first two intuitively make sense to me. But I don't get the intuition behind why the quantum version behaves the way it does...


Unintuitivity is the defining feature of quantum mechanics.


Perhaps I am talking out of my ass here, but structured like this, it feels like there is a violation of Gödel's incompleteness theorem somewhere..?


No, the key point is that the verifier in these systems is still polynomial.

You cannot ask a question that would require a non-polynomial computation to provide an answer and expect it to be answered correctly.

Most importantly, recursively enumerable already means semidecidable. (Because it consists of primitive recursive functions, which are bounded.) Which means, by Gödel's theorems, it is stronger than Peano arithmetic. (See Tennenbaum's theorem; Presburger arithmetic is fine.)


In what real-world situation could you be convinced that supposedly-two provers really actually can't communicate?


One of the authors of this paper, Thomas Vidick, taught the Advanced Algorithms class I took at Caltech a few years ago. The dude is a genius, but the class kicked my ass. He would basically assign recently solved problems from the theory community as homework.


Yeah, assigning open/recently solved problems seems to be relatively common in graduate theory/algo courses, at least relative to other subfields of CS and math. Not sure why. It always struck me as not the most productive use of problem set time. But then, what do I know about teaching theory :)


These classes often encourage you to collaborate - with others in the class, and also with outside resources. That is: they would encourage you to go read the paper, so long as you cited it as a source and provided a proof in your own words you had reconstructed from memory. The idea is to encourage engagement in the community and teach people to build off of existing results, a skill often overlooked in undergrad where that would have been considered cheating.


I took a class in undergrad that was similar. I think the average score on the final was 35 / 100. Of course the grades were on a curve and nobody got less than a B as long as you made an honest effort on the homework, but that class was definitely a blow to the ego for a lot of smart kids (myself included).


I'm not going to claim I understand the complexity class discussion here, but what I find most surprising is the claim that this would make the halting problem solvable. There's a fairly straightforward proof by contradiction that it's not:

https://en.wikipedia.org/wiki/Halting_problem#Proof_concept

So how would the new finding override this? Or is it just <wiggles fingers> quantum magic that's running the proof in parallel in infinite entangled multiverses or something?


What this means is that, if you are confronted by a group of people, traditionally called provers, and you limit their collaboration in a certain way (here, by entangling some qubits and only using the entangled state for "communication"), then they can nonetheless prove to you in an interactive discussion that they know how to solve some Halting-equivalent problem.

This does not give provers any new tricks for solving hard problems. This puts a ceiling on the power of this sort of prover-and-verifier proof and what it can show. There are problems harder than Halting (that is, that are hard even with oracles for Halting [0]) and MIP* doesn't contain them.

[0] https://en.wikipedia.org/wiki/Turing_jump
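
Concretely, writing HALT = { (M, x) : machine M halts on input x }, the Turing jump gives a strictly harder problem at every level, and MIP* stops at the first:

  HALT    RE-complete; exactly the ceiling of MIP*
  HALT'   halting for machines equipped with a HALT oracle; not in RE, so not in MIP*
  HALT''  and so on up the hierarchy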


There are plenty of oracle solutions to the halting problem. The simplest one uses the bits of Chaitin's constant, an uncomputable number, to decide Halting. Of course, these solutions cannot be physically implemented (e.g., computing the first N bits of Chaitin's constant requires solving the halting problem for all N-bit machines, so the algorithm is "circular").
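
For intuition, here is a rough sketch of that construction in Python. The names omega_bits (returning the first n bits of Omega as a bit string, which is exactly the uncomputable ingredient), programs_up_to, and halts_within are hypothetical helpers, and programs are assumed to use a prefix-free encoding:

  from fractions import Fraction

  def decide_halting(program, omega_bits, programs_up_to, halts_within):
      # Decide whether `program` (n bits, prefix-free encoding) halts, given
      # the first n bits of Chaitin's constant
      # Omega = sum over halting programs p of 2**-len(p).
      n = len(program)
      # The n-bit truncation satisfies omega_lo <= Omega < omega_lo + 2**-n.
      omega_lo = Fraction(int(omega_bits(n), 2), 2 ** n)

      halted, mass, t = set(), Fraction(0), 0
      # Dovetail all programs, crediting each one that halts with its
      # 2**-len(p) contribution to Omega.
      while mass < omega_lo:
          t += 1
          for p in programs_up_to(t):
              if p not in halted and halts_within(p, t):
                  halted.add(p)
                  mass += Fraction(1, 2 ** len(p))
      # Any n-bit program still running can never halt: its 2**-n
      # contribution would push Omega to at least omega_lo + 2**-n.
      return program in halted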


Yes, but my layman's understanding is that the oracle is a theoretical construct that knows in advance which programs halt and which don't -- but as you note, the real-world issue is that such oracles don't exist.

My question remains, how does this tie back to the proof? Is there a path to building such an oracle now?


No, there is still no path to constructing such an oracle. The paper basically presents a result built off of a less trustworthy oracle (or set of oracles) that can potentially lie, but that still has unlimited computational power.


So the halting problem becomes solvable if you have quantum-entangled oracles that know everything. Welcome to complexity theory :) Reminds me of the proof that time-traveling Turing machines can solve the halting problem.


Are those the Turing machines which can send an unlimited number of qubits back through a closed timelike curve? That result was mind-blowing. It only required classical computers, however, at least in the formulation I saw.

Reversible computing in general seems weird when it comes to time.


I have absolutely no knowledge of quantum mechanics beyond a pop-sci understanding, and I've only briefly skimmed the abstract and intro of the paper, but my impression is that it doesn't override this. Rather the direction of significance runs the other way; we now have strict limits on what kind of things in "quantum magic" are computable.

Also note that this result still ultimately involves oracles with unlimited computational power (it does not construct these oracles). The interesting part is that these oracles are potentially malicious and can lie and it is the verifier's job to come up with a strategy to play an interactive game with these oracles to tease out the real answer (with a high probability of correctness).


> but what I find most surprising is the claim that this would make the halting problem solvable.

I don't think this is correct. I believe it only makes the Halting Problem verifiable to a simple verifier interacting with two entangled provers. So a verifier typically has to rely on an oracle to solve the Halting Problem, but this new proof eliminates the need for that trust via a verification procedure using two entangled oracles.


I don't think the claim is that such a device with a finite amount of information could decide halting. As far as I understand, it could replace the infinite time required to classically decide halting with an infinite amount of entangled information; neither can be put to real physical use.


Gödel's Lost Letter suggests that these are not finite machines.


Here's one intuitive way to think about what this means:

Imagine that you stumble on two infinitely powerful computers that share a bunch of entangled quantum state. Further, let's say you know how to write software for them but you need a way to check that the answers are correct. And you want to know with near certainty that your programs ran correctly, even if one of the infinitely powerful computers is evil and will use its entire intellect to convince you of something that's untrue.

Aside: this means that cryptography doesn't work---you can't use the discrete log assumption to solve this problem, you can't use collision resistant hashing, etc.

Ok, so this paper shows that, if you can figure out how to program those computers to solve the halting problem, you can then create an error detection protocol (see below) that you can check on a laptop computer and that almost perfectly (like, except with exponentially small probability) detects errors in your halting detector's execution.

You can use your laptop to ensure that your infinitely powerful, possibly evil lunatic twin Hal2^9000s aren't lying to you. Or, as BFLS91 says, "controlling a herd of supercomputers from [your iPad]" ;)

(Your error detecting protocol checker must be verified, but it's not too complicated an algorithm and we basically know how to write certified computer software for our laptops.)

(Oh, and also note that you need a perfect random number generator, and your laptop has to communicate with the supercomputers while they're executing.)

[edit for slightly more precision]


Some additional context from the always illuminating Scott Aaronson: https://www.scottaaronson.com/blog/?p=4512


This should really be the posted link, or perhaps the one Aaronson links to: https://mycqstate.wordpress.com/2020/01/14/a-masters-project.... The Gizmodo article is not very good.


Perhaps a link directly to the arxiv page would be more appropriate?

https://arxiv.org/abs/2001.04383


I had some chance of vaguely understanding the Gizmodo article. Having browsed the arxiv paper, I wouldn't have had a clue what it was about or why it was important.



An oracle that doesn't always tell the truth? That seems like an odd thing for an algorithm to have to deal with.


On the contrary, this happens all the time in the real world. You ask the user for a password. Maybe the user gives the right password, maybe they don't.


Croesus, king of Lydia, has been waiting 2566 years for this.


I just came to say that the first paragraph of that article is so cringe-worthy I had to post... Terrible, argh...

But amazing result.


In fairness, it's pretty hard to explain what MIP*=RE means to a layman (or even to tech people, if the comments on HN are any indication). At least the intro kind of evokes the feeling of what the problem is about, even if it's pretty corny.

Just imagine, you're about to get a proof that some program halts and just before you verify it you are eaten by a grue.


I'm surprised this isn't getting more attention. If this holds it would imply a lot for a wide variety of fields.


Not understanding it myself, what I would like for it to imply is that quantum computers are more capable than previously believed. I don't suppose it does mean that?


An excellent comment on the halting problem: https://news.ycombinator.com/item?id=21561031

The linked Scott Aaronson blog and associated video are fascinating.


Would be nice to have a writeup, understandable to operator algebraists with little (quantum) computing background, with some details on how this disproves the Connes embedding conjecture.


Can anyone here provide more detail on the implications (particularly for physics) of the Connes embedding conjecture being shown to be false?


I wonder what the physics implications they mentioned are?


It's beyond my understanding, but I think this goes into it: https://www.ams.org/journals/notices/201910/rnoti-p1618.pdf

From what I read it has something to do with refuting the Connes embedding conjecture, but I don't actually know what that is.


Quantum-entangled oracles which don't always tell the truth. Science :'D



