Intelligence as a Halting Oracle (am-nat.org)
21 points by yters on Nov 5, 2018 | 76 comments



This appears to be crank garbage, which is unfortunately not hard to run into when reading online about computability and related topics. Anything not computable by a Turing Machine is almost certainly not physically realizable (Church-Turing thesis), and if it were it would surely involve esoteric physics like black holes, not anything readily available to human brains.


> and if it were it would surely involve esoteric physics like black holes, not anything readily available to human brains.

To be fair, until we know in detail how brains work, we can't eliminate the possibility that esoteric physics are key to their operation; we can only rule that out with an adequate model that does not invoke such physics.


This is definitely true, but there are lots of things we can't rule out -- including that intelligence is a magical property bestowed upon us by a benevolent deity. When you are faced with this kind of situation, the scientific approach is to choose the model that requires less new stuff. In other words, let's choose a model that doesn't require esoteric physics until we're sure that esoteric physics are absolutely required. Or let's choose a model that doesn't include a deity until a deity is absolutely required. In this way we are constantly working with the simplest models and only adding complexity when we find that the simpler models don't work. It may not be correct in the end, but science is more about building useful models than it is about finding the "truth". Indeed, the absolute truth is probably something that we can never really know.


> This is definitely true, but there are lots of things we can't rule out -- including that intelligence is a magical property bestowed upon us by a benevolent deity. When you are faced with this kind of situation, the scientific approach is to choose the model that requires less new stuff.

Yeah, but we don't even have models to choose from. We can choose the model with the least new stuff when we have competing models; for now, we have models for bits and pieces of the brain/mind, but no holistic model.

So, what we can really say about the brain or mind as a whole is that it's an open research question and we don't know what it will take to explain it.

We can absolutely say we have no particular reason to think esoteric physics are needed to explain the brain. What we can't say is that we have a firm basis for confidence that they aren't needed.


We have a definitive enough sense of how things work at the molecular level to know there isn't anything too crazy going on, just molecular transformations, which aren't hard to emulate. It's the more macroscopic patterns of how the brain works that we don't understand.


Until we actually observe brains solve uncomputable problems, the conservative assumption is that they can't and are not magical in any way.


> Until we actually observe brains solve uncomputable problems

Brains can obviously provide answers to uncomputable problems.

I'm not sure how we'd be able to observe that they were “solving” them even if they were to do so.


Turing machines can also provide "answers" to uncomputable problems; even regular automata can do that. It's very easy to convince yourself that a human given pen and paper can simulate Turing machines. It is completely unclear how you would convince yourself that Turing machines can't simulate humans. So until we have concrete evidence I really don't see the point of assuming that humans can compute uncomputable functions.
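To make the pen-and-paper point concrete, here's a minimal sketch (the rule format is made up for illustration) of the purely mechanical bookkeeping that simulating a TM requires; a human with paper just plays the role of this loop:

```python
# A minimal Turing machine simulator. The machine below is a toy example:
# it erases a run of 1s and then halts on the first blank.

def run_tm(rules, tape, state="start", steps=1000):
    """rules: (state, symbol) -> (new_symbol, move, new_state)."""
    tape = dict(enumerate(tape))  # sparse tape, blank = 0
    pos = 0
    for _ in range(steps):
        if state == "halt":
            break
        sym = tape.get(pos, 0)
        new_sym, move, state = rules[(state, sym)]
        tape[pos] = new_sym
        pos += 1 if move == "R" else -1
    return [tape[i] for i in sorted(tape)], state

rules = {
    ("start", 1): (0, "R", "start"),  # erase a 1, keep moving right
    ("start", 0): (0, "R", "halt"),   # hit a blank: stop
}
print(run_tm(rules, [1, 1, 1]))  # -> ([0, 0, 0, 0], 'halt')
```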


Can Turing machines invent Turing machines?


I don't think you can give a sufficiently precise definition of "invent" to answer this question without being at odds with the intuitive understanding of "invent" that people normally hold. Turing machines can definitely search through the space of Turing machines and find some specific ones. For example, that's what compilers do all day: they get a description of a Turing machine M1 written in language A and then produce a different Turing machine M2 that has the same semantics as M1, but is described in language B and is faster than M1 (that's what the optimizer does). Most people wouldn't call this "inventing", though (I would).
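The compiler example in miniature, sketched with Python's stdlib `ast` module: a program that reads a description of a program and emits a semantically equivalent, cheaper one via constant folding (the simplest optimization):

```python
# A toy optimizer: fold constant additions at compile time.
import ast

class Fold(ast.NodeTransformer):
    def visit_BinOp(self, node):
        self.generic_visit(node)  # fold children first (bottom-up)
        if (isinstance(node.left, ast.Constant) and isinstance(node.right, ast.Constant)
                and isinstance(node.op, ast.Add)):
            return ast.copy_location(ast.Constant(node.left.value + node.right.value), node)
        return node

src = "x = (1 + 2) + 3"
folded = ast.unparse(ast.fix_missing_locations(Fold().visit(ast.parse(src))))
print(folded)  # -> x = 6
```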


It isn't so much coming up with the specific operations, but the realization that TMs can emulate every other machine, i.e. the Church-Turing thesis. This seems to be beyond the capability of TMs as it ventures into the domain of the halting problem.


Solving the Halting problem requires you to decide for all TMs whether they halt or not. Nothing is preventing you from deciding it for a useful subset of TMs. For example deciding it for all TMs that have an encoding that uses fewer bits than the number of atoms in the observable universe is totally possible. There are also infinite families of TMs for which you can solve the halting problem. Again, a lot of what compilers do sounds like it would require solving the halting problem, when in fact it doesn't because they only handle a subset of all possible cases.
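As a concrete sketch of a sound-but-partial decision procedure (the names here are illustrative, not from the article): run the candidate program for a bounded number of steps, answer "halts" if it stops in time, and "unknown" otherwise. Note that this procedure is itself an ordinary computable one:

```python
# A sound partial halting decider via bounded execution: never wrong,
# but allowed to give up.

def partial_halts(step_fn, state, bound=10_000):
    """step_fn advances the state by one step and returns None when halted."""
    for _ in range(bound):
        state = step_fn(state)
        if state is None:
            return "halts"
    return "unknown"

def countdown(n):
    """Toy program state: counts down to 0, then halts (returns None)."""
    return None if n <= 0 else n - 1

print(partial_halts(countdown, 5))      # -> halts
print(partial_halts(lambda s: s, 1))    # a program that loops -> unknown
```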


The parent article discusses the idea of a partial halting oracle (PHO), which decides for an undecidable subset of problems. A TM cannot meet this requirement, by definition. It seems like humans are in the category of PHOs.


There is no proof that human brains are capable of deciding uncomputable sets and really no reason to believe that they should be able to.


That's a bit of a tangent. Here we were discussing whether the concept itself makes sense.

But, as for empirical evidence, max entropy + Bayesian hypothesis selection makes the PHO hypothesis much more plausible than the TM hypothesis.


We know the physics of how brains work (i.e. the physics of neurons). So yes, we can rule out any such exotic physics.


> We know the physics of how brains work (i.e. the physics of neurons).

We know the physics of how some bits of brains work, but we have no model that goes from those bits to human cognition, and so no basis beyond speculation and optimism for claiming that we know all the physics of how brains work.


How those bits go to human cognition is not a matter of physics. It is just a matter of neuroscience. There isn’t any room for exotic physics in the functioning of these bits and their interactions.


> How those bits go to human cognition is not a matter of physics

That the bits we know about are even everything that goes into cognition except how they hook up is somewhere between exuberant optimistic speculation and rank hubris.


No that is just physical fact. Disbelief in the supernatural is not hubris.


Whenever this issue comes up I just link to this: http://www.preposterousuniverse.com/blog/2010/09/23/the-laws...

"The Laws Underlying The Physics of Everyday Life Are Completely Understood"

Worth reading.


I've read it, and I tend to think the position it argues for is likely correct, but for complex systems that we don't have models for, the only basis we have for believing it is that we have enough holes in our knowledge of how things interact that it is possible that no esoteric physics play a role and so assuming that they do not is the simplest plausible assumption. There is a good reason for that to be a working assumption.

Until we can work out all the relevant implications of mundane physics and find them either adequate or not to model the phenomena of concern, we can't say that the physics of those phenomena are known (or say that they aren't; we're in the realm of potential unknown unknowns).


Now give a full mathematically rigorous explanation of consciousness.

Give three examples.

Do not use the word "Obviously..."


That's neither here nor there. The actions of a swarm of ants or Windows 10 can look dark, mysterious, inexplicable, or downright malicious, but there's nothing in their behavior that warrants something out of our known physical law.

That doesn't necessarily mean we understand their behavior, of course. Complex behavior can arise out of simple underlying rules. That's hardly controversial, except (for some reason) when it's about human brains.


This isn't necessary at all for GP's point. We also can't give a mathematically rigorous explanation for the path a hurricane will take, but that doesn't mean there's magic unknown physics at play.


First give a definition of consciousness that allows me to determine whether something is conscious or not.


You may want to check out the proof: https://news.ycombinator.com/item?id=18377525


That links to a comment on a different article that says "here's a great math-free explanation" and links this discussion.


The parent link at the top of the page is to the proof. I'm pointing to the HN page in the hopes of discussion.

Here's the direct link to the proof: https://www.am-nat.org/site/law-of-information-non-growth/


The blog post explicitly addresses this


I wonder if the author realizes Turing machines can be partial halting oracles. SAT/SMT solvers do that job pretty well over most of the same problem space as humans. And any technique humans invent can be added to a solver to use as well.
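For instance, a trivially sound non-termination check can be coded up directly; the pattern below (a literal `while True:` loop with no exit) is just one illustrative rule, but a solver can accumulate arbitrarily many such human-invented rules:

```python
# A partial halting oracle implemented as an ordinary program: it proves
# non-termination for one syntactic pattern and says "unknown" otherwise.
import ast

def loops_forever(source):
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, ast.While):
            test_is_true = isinstance(node.test, ast.Constant) and node.test.value is True
            has_exit = any(isinstance(n, (ast.Break, ast.Return))
                           for n in ast.walk(node))
            if test_is_true and not has_exit:
                return "does not halt"
    return "unknown"  # sound: never claims more than it can prove

print(loops_forever("while True:\n    x = 1"))        # -> does not halt
print(loops_forever("for i in range(3):\n    pass"))  # -> unknown
```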


Well, the definition given specifically says stronger than Turing Machines, what you're describing is more what I hoped the article would be about, making certain theoretical oracle machine algorithms partially realizable by putting a human in the loop, to e.g. come up with lower bounds on Chaitin's constant.


This is what I'm working on, a human in the loop approach to Solomonoff induction.


Not to endorse it, but that's the point Roger Penrose disputes, that human brains have access to some kind of Turing-uncomputable physics that gives them powers beyond Turing machines.


Do we even have examples of physics that is not computable? Quantum computers can be simulated by a TM; closed timelike curves don't give you extra power either. As far as I know, all the physics that we currently understand can be simulated to arbitrary precision on a TM. I don't really believe that brains have access to realms where quantum mechanics breaks down.


As far as I know you are correct. All physics is computable. Scott Aaronson, with expertise in quantum computing, claims all physical processes are Turing reducible.


Probably nonsense, sure, but my life is better for having had the phrase "The Facegramizon" introduced to it.


Refuting the Strong Church-Turing thesis: http://www.engr.uconn.edu/~dqg/papers/strong-cct.pdf

It's not clear that all physically realizable things are Turing computable.


Carl Hewitt has been making a similar argument for decades: https://cacm.acm.org/blogs/blog-cacm/231495-what-turing-and-...


As most readers have surmised, this is indeed a fallacious grasp for some reason to think that human beings are metaphysically special.

See also yters' post https://mindmatters.today/2018/09/meaningful-information-vs-...

The argument presented relies on the idea that humans can create 'mutual information' about the world, which has been shown to be impossible.

However, this relies on assuming that we actually create mutual information. Instead, consider the fact that Bayesian processes with enough observations can approximate mutual information arbitrarily well. Now it seems that yters' entire argument falls apart, because the most reasonable explanation for how the human brain operates comes down to Bayesian processes.
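A rough sketch of that claim: a plug-in (maximum-likelihood) estimator computes mutual information from samples alone, and approaches the true value as observations accumulate. The distribution below is made up for illustration:

```python
# Estimate I(X;Y) in bits from samples, with no access to the true
# distribution, and compare against the known analytic value.
import math, random

def plug_in_mi(samples):
    """Plug-in estimate of I(X;Y) from a list of (x, y) pairs."""
    n = len(samples)
    pxy, px, py = {}, {}, {}
    for x, y in samples:
        pxy[(x, y)] = pxy.get((x, y), 0) + 1 / n
        px[x] = px.get(x, 0) + 1 / n
        py[y] = py.get(y, 0) + 1 / n
    return sum(p * math.log2(p / (px[x] * py[y])) for (x, y), p in pxy.items())

random.seed(0)
# X is a fair bit; Y copies X 90% of the time (a binary symmetric channel).
samples = []
for _ in range(50_000):
    x = random.random() < 0.5
    y = x if random.random() < 0.9 else not x
    samples.append((x, y))

print(round(plug_in_mi(samples), 2))  # close to the true I(X;Y) ~ 0.53 bits
```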

Does it sound more reasonable to you that the human brain is a magical 'partial halting oracle' violating the very laws of physics, or that the human brain uses Bayesian reasoning to build up predictions about the world which are only well founded statistically?


This accompanying proof I put together shows halting oracles can create mutual information:

https://www.am-nat.org/site/law-of-information-non-growth/

So if humans are partial halting oracles, then they can create mutual information.

The HN discussion page: https://news.ycombinator.com/item?id=18377525


Yes but the whole point is that nobody believes that humans are partial halting oracles, and there is a simpler, more believable explanation that doesn’t require that claim.

I have a hard time seeing your argument as anything other than 'motivated reasoning' attempting to justify your religious/metaphysical beliefs.


I'd be interested in this alternate claim. You mentioned something about Bayesian updates.

Ironically, I see this the other way around. I think people are having a hard time accepting the possibility that humans are partial halting oracles because of their bias against anything that seems religious, even if it makes more sense.


To be clear, I could be convinced that humans are partial halting oracles, but it would take extraordinary evidence, as it is an extraordinary claim. So many people react so strongly to these claims because they don't feel they come with the appropriate extraordinary evidence.

If you're interested in Bayesian approximations of mutual information check out the references listed at [1] and tell me what you think, as this is far from my speciality.

[1] https://en.m.wikipedia.org/wiki/Mutual_information#Bayesian_...


Thanks for the link, I will be checking it out.

I evaluate this issue with a Bayesian + max entropy approach. We have two hypotheses:

1. humans are TMs

2. humans are PHOs

Max entropy weights these hypotheses equally. What humans do is given the highest expectation with #2. Thus, Bayesian hypothesis selection says #2 is the most likely explanation of the evidence. Inference to the best explanation says humans are PHOs.
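For what it's worth, the mechanics being invoked look like this on a generic toy problem (the likelihood numbers are invented for illustration, not an actual model of human behavior):

```python
# Bayesian hypothesis selection with a max-entropy (uniform) prior:
# posterior weights are proportional to how much probability each
# hypothesis assigns to the observed evidence.

def posterior(prior, likelihood):
    """Normalize prior * likelihood over the hypotheses."""
    joint = {h: prior[h] * likelihood[h] for h in prior}
    z = sum(joint.values())
    return {h: p / z for h, p in joint.items()}

prior = {"H1": 0.5, "H2": 0.5}        # max entropy: equal weights
likelihood = {"H1": 0.01, "H2": 0.2}  # P(evidence | hypothesis), made up
post = posterior(prior, likelihood)
best = max(post, key=post.get)
print(best, round(post[best], 2))  # -> H2 0.95
```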


> I evaluate this issue with a Bayesian + max entropy approach.

I have a similar approach to gravity. There are two hypotheses

1. The gravitational field comes about from the curvature of spacetime

2. The ghost of Einstein flies around through all of space at an impossibly high speed nudging all objects such that their trajectories align with what he would find elegant.

Max entropy weights these hypotheses equally. What h̶u̶m̶a̶n̶s̶ planets do is given the highest expectation with #2. Thus, Bayesian hypothesis selection says #2 is the most likely explanation of the evidence. Inference to the best explanation says h̶u̶m̶a̶n̶s̶ ̶a̶r̶e̶ ̶P̶H̶O̶s̶ gravity is Einstein's ghost.


The Einstein ghost approach isn't very well defined, so it is just a placeholder. It is equivalent to saying "god did it" or "FSM did it". PHO is well defined.


For those interested in the mathematical proof: https://news.ycombinator.com/item?id=18377525


Interesting cycle you've constructed, there.


Here's the direct link to the proof: https://www.am-nat.org/site/law-of-information-non-growth/


Right. The interesting thing to me about this is its application to proving theorems, especially about programs. Historically, most people interested in having machines prove theorems have looked for algorithms — "proof procedures", the mathematicians call them, and of course they like to prove theorems about these procedures, such as that they're sound (never generate an incorrect proof) and complete (can prove any true theorem). Both of these sound like desirable properties, and for restricted domains such as first-order logic, it's possible to have both; but first-order logic is inadequate for reasoning about programs. For this task, we need higher-order logic, to prove invariants by induction, and Turing's proof arguably shows that there can be no sound and complete proof procedure for higher-order logic. We have to give up something, and of course that something is completeness; we can't part with soundness, because incorrect proofs are not useful.

So we have to accept incompleteness, but obviously we want our program-property prover to be able to prove as much as possible. How do we approach that? I don't think we can make much progress approaching it mathematically; I think we have to approach it as an AI problem. That means figuring out how humans manage to be "partial oracles" and doing the same thing as much as we can. It means heuristics, it means pattern matching, it means machine learning.


Or it means constraining your domain and only accepting those programs about which the properties you care about can be proved. Or it means accepting the risk that your system might break.


Or it means accepting that your prover will need occasional hints from a human expert. I think this will always be true, when we're doing software verification. Another way to look at the problem, then, is: how do we reduce the need for manual intervention to a minimum?


This is a great idea. I'm working on this sort of thing using active learning.


I find this argument very unconvincing. The reason code has bugs is precisely because humans are NOT good halting oracles. We're not even good self-assessors of our own oracular capabilities because we not only write buggy code, but we're often surprised by the bugs in the code we write.


Could it be that 'untrained' humans are NOT good halting oracles? The trained ones are good-enough halting oracles, i.e. on the scale of a fully focused Euler.


That would indicate humans are partial halting oracles.


No. A partial oracle never gives a wrong answer, it's just allowed to say "I don't know."


All a partial oracle has to do is be able to correctly answer an undecidable set of problems and then it can no longer be replicated by a Turing machine. It can be wrong on an infinite number of problems and still fit this requirement.


> It can be wrong on an infinite number of problems and still fit this requirement.

Nope. If it's allowed to be wrong, then a TM can do it. In fact, any one-bit hash function is a partial oracle on that definition (which is the reason that definition is not used).


I don't follow your reasoning. If we take the set of problems a complete halting oracle can correctly decide, and set one problem to an incorrect answer, a normal TM still cannot correctly decide this reduced set. This example seems to contradict what you are saying.


That's true. But you originally said "wrong on an infinite number of problems".


Yes, that seems to be correct as well, right? We subtract an infinite number of problems from the undecidable set, and it is still an undecidable set and cannot be generated by a TM.


That would be true if you got to pick which problems you were removing from the set, but you don't. If you knew which problems the oracle was going to get wrong, then you would know the right answers to all of them because there are only two possible answers.

A hash function will get an infinite number of answers right and an infinite number of answers wrong. But because you don't know which is which, the result is useless.


Sorry, I'm not communicating clearly. What I am saying is there is an undecidable infinite set of correct answers, and then another infinite set of arbitrary answers. In the second set, there are an infinite number of right and wrong answers. The first set cannot be replicated by a TM and the second set can. Combining the sets into one, we get a set that a TM cannot replicate, yet it still has an infinite number of arbitrary wrong answers. This seems possible to me. Am I missing something?


> Sorry, I'm not communicating clearly.

Indeed not.

> there is an undecidable infinite set of correct answers

The term "undecidable set" has a technical meaning, which I'm pretty sure is not what you intended here. The technical meaning of "an undecidable set" is a set for which the question of whether a given thing is a member of the set is undecidable. So, for example, the set of all halting TM programs is an undecidable set.

But the set you are specifying here is not a set of programs, it is (you say) a set of answers. Answers to what? I presume you mean "answers to the halting problem". But what does that mean? There are only two "answers to the halting problem": HALT and RUN-FOREVER, and that's a finite set with two elements. Your set is "infinite" so that's obviously not what you meant.

Maybe you meant a set of ordered pairs consisting of a program P and an element of { HALT, RUN-FOREVER } corresponding to whether P halts or runs forever, where the ordered pair is a member of the "correct" set only if P halts if the second element of the pair is HALT, or if P runs forever if the second element of the pair is RUN-FOREVER. You didn't actually specify it, but I presume you want a given P to appear either in the "correct" set or the "arbitrary" set but not both, so the membership condition in "correct" has to be "only if" and not "if and only if". You also didn't specify whether a pair (P, HALT) and (P, RUN-FOREVER) can both appear in the "arbitrary" set, though I presume the answer to that is "no".

So what about the other direction? Your "arbitrary" set also includes some pairs that meet the criterion for membership in the "correct" set (an infinite number, actually, by your own stipulation). So what determines what goes into the "correct" set and what goes into the "arbitrary" set?

But all of this is still missing the main point: what makes a partial oracle interesting is not that it's allowed to be wrong -- it isn't. It's that it is allowed to give "I don't know" as an answer.

It is actually possible to define a coherent concept of an oracle that is allowed to be wrong, but these are not called "partial oracles"; they are "probabilistic oracles". There's a whole field of study of algorithms that act like probabilistic oracles, called "probably approximately correct" or PAC. There are also "random oracles" which are kind of like probabilistic oracles, and which are useful in cryptography. But that's not what is under discussion here.


Yep, your ordered pair scenario is what I'm saying. The correct set is undecidable in that it would take an infinitely large TM to correctly identify all the pairs, so it is not possible to characterize the set with a finite TM. The arbitrary set is labeled with a random oracle, or perhaps with an oracle that outputs HALT for every program, or it just says "I don't know."

This seems coherent and to capture the notion of a PHO. All of this to point out that the fact humans cannot figure out every math problem does not mean they can be reduced to some sort of finite TM, so that standard objection against the halting oracle idea fails.

Here's a proof of the idea, demonstrating a partial halting oracle can violate the law of information non-growth:

https://www.am-nat.org/site/law-of-information-non-growth/


No, a partial oracle by definition gives correct results for some subset of questions in the domain of interest, and no answer for other questions in the domain. It never gives wrong answers. (Now, if you can define the scope of questions for which a procedure gives wrong answers such that you can identify them in advance, you can convert it to a partial or full oracle by applying a process around the base procedure that flips the answer for those questions (for a binary domain, like halting) or converts them to “don't know” (for non-binary domains).)


Well, if it's a definition thing, we can define a "splurg" which can give correct answers for an undecidable subset, and arbitrary answers, some wrong some right, for everything else. A "splurg" cannot be a TM, but it is also not a complete oracle.


It's more likely to indicate that humans are irrational and easily confused.


> If we only have a partial halting oracle, we still get a speed up by eliminating the non-halting programs that can be identified by our partial oracle.

This is wrong on so many levels that the entire rest of the essay becomes nonsense.


Why is it wrong? If the oracle can only identify a certain class of nonterminating programs, you still get a speed up by doing so.


If you told me you wanted to count through all the natural numbers but were going to skip the evens, how much time would you save?


In the article's scenario there are a finite number of Turing machines to check in order to find one that matches the specification. So, eliminating some of them will lead to a speed up.
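A toy sketch of that speedup: below, "programs" are just made-up (halts, output) pairs, running a candidate counts as one unit of work (in the real setting, running a non-halting candidate never returns at all), and a partial oracle that recognizes only some of the non-halting candidates still shrinks the work:

```python
# Searching a finite pool of candidate programs for one matching a spec,
# with and without a partial non-termination oracle pruning the pool.
import random

random.seed(1)
pool = [(random.random() < 0.5, random.randrange(10)) for _ in range(1000)]

def search(candidates, target, oracle=None):
    tried = 0
    for halts, output in candidates:
        if oracle and oracle((halts, output)) == "does not halt":
            continue  # pruned without being run
        tried += 1
        if halts and output == target:
            return tried
    return tried

def oracle(prog):
    """Partial oracle: recognizes only some of the non-halting programs."""
    halts, output = prog
    return "does not halt" if (not halts and output % 2 == 0) else "unknown"

print(search(pool, 7), ">=", search(pool, 7, oracle))  # pruned search runs fewer
```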


You may want to check out the proof for further information.

https://news.ycombinator.com/item?id=18377525




