
Logical Induction - apsec112
https://intelligence.org/2016/09/12/new-paper-logical-induction/
======
josh_fyi
The article describes basic mathematical research; it is not intended to be
implemented directly, but rather to guide further applied research.

The deeper motivation for this article is to allow future Artificial General
Intelligences to prove facts about themselves; specifically, the fact that the
AGI, or a next-gen AGI that it develops, has the same (safe) utility function
that the designers originally specified.

A formal system cannot achieve this by simulating itself.

A formal system cannot in general prove that it is reliable, i.e. that if it
proves statement P then P is true:
[http://intelligence.org/files/lob-notes-IAFF.pdf](http://intelligence.org/files/lob-notes-IAFF.pdf)
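
The underlying obstacle is Löb's theorem (the subject of the note linked
above). Roughly, for any sufficiently strong formal system T with provability
predicate □:

```latex
\text{If } T \vdash (\Box P \rightarrow P) \text{, then } T \vdash P.
```

So T can only trust its own proofs of P in the cases where it can already
prove P outright; taking P to be a contradiction recovers Gödel's second
incompleteness theorem.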

But with this article, the hope is that a formal system can show that the
statement about itself is _probably_ true.

~~~
posterboy
> _A formal system cannot in general prove that it is reliable, i.e. that if
> it proves statement P then P is true_

This implies second-order logic. It is not clear to me whether the loss of
consistency is warranted. Type theories are attempts to avoid inconsistency.
However high the order of the processing logic, its only reason to exist is
to output first-order theorems, the ones we can prove decidable.

> But with this article, the hope is that a formal system can show that the
> statement about itself is probably true.

Probability is not good enough. The Bayesian Conspiracy sure is strong, but I'd
prefer to stay with first-order logic and finite state machines that are
provably correct.

> > _The short answer is that this theorem illustrates the basic kind of self-
> reference involved when an algorithm considers its own output as part of the
> universe_

Isn't that what differential equations are for? I'm tired of the liar's
paradox. Intuitively, I've settled on the presumption that paradoxes always
rest on wrong assumptions.

I'm no mathematician, but I refuse the notion that I am inherently unable to
be certain. That's why algorithms are, by my preferred definition, bound to be
deterministic. I'd like to be able to tie this in with the Chomsky Hierarchy,
albeit I am not that advanced. I am impressed by quines, though, and I guess
the halting problem implies that quines cannot always be predicted. Heuristics
help there, and that's what stochastics is all about.

Let me be clear. The paradox "this sentence is a lie" is neither a sentence
nor a lie. That's a question of definition. Of course I'm in no position to
say a similar thing about Gödel's incompleteness theorem, and I even referred
to its result in higher-order logics, but I still doubt the relevance, as
many seem to be ignorant of his earlier completeness theorem.

Higher-order logic and self-reference are related to recursively enumerable
grammars low in the Chomsky Hierarchy. Though, if ordered by expressive power,
I'd call them higher. I hope that, if the goal is natural language, we don't
need to aim that high. If you want a computer that computes computers, though,
go for it.

> _A formal system cannot achieve this by simulating itself._

The type of self-similarity used in quines or the liar's paradox seems to play
an important role in what we perceive as intelligent. Although, I stipulate,
the intelligence involved is the ability to tell the difference between a
misleading liar's paradox and constructively provable quines. Of course, a
machine that doesn't need verification from the supervising developer would be
akin to a perpetuum mobile.

It is easy to supervise the AGI's output with the less intelligent machines
that have to carry out that output. The focus is on optimizing the processes,
not on the inability to assert safety guidelines.

Edit: mixed up the order of the Chomsky Hierarchy.

~~~
AlexMennen
> This implies second order logic.

No, it does not. The second incompleteness theorem is provable in first-order
Peano Arithmetic.

> Of course I'm in no position to say a similar thing about Gödel's
> incompleteness theorem, and I even referred to its result in higher-order
> logics, but I still doubt the relevance, as many seem to be ignorant of his
> earlier completeness theorem.

I have no idea what you're trying to say, but I assure you that people who do
this kind of research are aware of the completeness theorem.

~~~
posterboy
Oh, the second one comes off wrong. I meant the parent quote

> A formal system cannot in general prove that it is reliable

I hadn't noticed when I wrote that that _formal system_ is a technical term -
one with an even more specific meaning in this context. How confusing.

------
dharma1
Slightly OT but it's well worth watching the recent film about Ramanujan, "The
Man Who Knew Infinity".

It has kept me thinking about what "intuition" really is, how it develops and
happens in the brain, and what would be needed to build AI that has intuition.

~~~
nsomaru
If you're interested in that line of thought and would like a perspective on
how Ramanujan himself might have perceived it, you may enjoy this transcript
of a great lecture given in SF in 1903 by an Indian holy man, mathematician,
and poet (Swami Rama Thirtha):
[http://www.ramatirtha.org/vol1/inspiration.htm](http://www.ramatirtha.org/vol1/inspiration.htm)

------
gjdjcjdnxnvjd
So does this mean that the era of automated theorem proving and AI-driven
mathematics is finally upon us? I'm no expert, but this seems pretty
groundbreaking.

~~~
tnecniv
My understanding from the article (I would like to read the paper, but I
don't have time at the moment) is that it assigns probabilities to conjectures
and improves these estimates over time. As such, something will only really be
proven true when the probability hits 1. The summary says that this will occur
in the limit, but that might take as long as proving things the traditional
way, and mathematicians don't like things that are probably true but not
proven.

That said, I can think of a number of uses for such an algorithm. If you load
it full of conjectures in your field that are known to be true, it might help
you home in on which problems are worth exploring by providing a guess at how
likely it is that you can prove a statement you are pondering.
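
To make that concrete, here is a deliberately naive sketch (my own toy model,
not the paper's construction, which also keeps interim beliefs coherent via a
market of polynomial-time traders): beliefs start at an uninformative 0.5 and
snap to 0 or 1 as a slow prover settles each conjecture, and the current
estimates can be used to rank open problems.

```python
# Toy model (my own simplification -- NOT the paper's construction):
# a belief table over conjectures, updated as a prover settles them.

def update_beliefs(beliefs, settled):
    """Snap each settled sentence's probability to its proven truth value."""
    for sentence, truth in settled.items():
        beliefs[sentence] = 1.0 if truth else 0.0
    return beliefs

def most_promising(beliefs):
    """Rank still-open conjectures by current estimated probability."""
    open_ones = {s: p for s, p in beliefs.items() if 0.0 < p < 1.0}
    return sorted(open_ones, key=open_ones.get, reverse=True)

beliefs = {"conjecture A": 0.5, "conjecture B": 0.7, "2 + 2 = 4": 0.5}
# the prover eventually settles the easy statement
beliefs = update_beliefs(beliefs, {"2 + 2 = 4": True})
```

After the update, `most_promising(beliefs)` ranks "conjecture B" above
"conjecture A", which is exactly the "what is worth exploring" use above.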

~~~
catnaroek
> something will only really be proven true when the probability hits 1

Careful. An event can have probability 1 even if its complement isn't empty
(for example, a uniform draw from [0, 1] is irrational with probability 1):
[https://en.wikipedia.org/wiki/Almost_surely](https://en.wikipedia.org/wiki/Almost_surely)

~~~
joshcohen
In this setting we're using discrete probabilities, so we don't have to worry :)

------
dkarapetyan
Didn't the CIA have some experiment where they pooled a bunch of folks who
were really good at guessing world events? This construction sounds awfully
similar to that experiment.

~~~
esja
IARPA ran something like that:
[https://en.wikipedia.org/wiki/Aggregative_Contingent_Estimat...](https://en.wikipedia.org/wiki/Aggregative_Contingent_Estimation)

------
m3kw9
Let it prove something first then we have something

~~~
ajkjk
This paper is a theoretical contribution, not a practical one, similar to
Solomonoff Induction. Solomonoff Induction's contribution is essentially
"there exists a formal mathematical process that inductive reasoning on
evidence corresponds to" which, among other things, pretty much solves
epistemological puzzles from philosophy, like the
[raven paradox](https://en.wikipedia.org/wiki/Raven_paradox).

Yeah, the actual performance of Solomonoff Induction is uncomputable, but to
me the useful point is that "induction can be done mathematically", and then
what we do heuristically in our brains can be thought of as a low-fidelity
analog of that. If I'm understanding the page correctly, this is the same idea
but for statements based on proofs and logical theorems. Which seems to expand
the scope somewhat.
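
As a toy illustration of the Solomonoff-style prior (my own heavily
simplified sketch: the real thing enumerates all programs for a universal
machine and is uncomputable, whereas here the "programs" are hand-written
predicates with made-up bit lengths):

```python
# Toy sketch of a Solomonoff-style prior (my own simplification):
# weight each hypothesis by 2^(-description length), zero out the ones
# inconsistent with the data, and renormalize -- i.e., ordinary Bayes
# with a description-length prior and 0/1 likelihoods.

def posterior(hypotheses, data):
    """hypotheses: list of (name, length_in_bits, predicate)."""
    weights = {
        name: (2.0 ** -length if all(pred(x) for x in data) else 0.0)
        for name, length, pred in hypotheses
    }
    total = sum(weights.values())
    return {name: w / total for name, w in weights.items()}

hyps = [
    ("all even", 8, lambda x: x % 2 == 0),
    ("all multiples of 4", 12, lambda x: x % 4 == 0),
]
p = posterior(hyps, [4, 8, 16])
```

Both hypotheses fit the data [4, 8, 16], but the 4-bit-shorter one gets 16
times the prior mass, so "all even" ends up with 16/17 of the posterior.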

(I'm really excited about this, actually, just as a person who enjoys learning
about this stuff from Wikipedia. I feel like I've vaguely thought about how
Solomonoff induction would work on statements that are derived from each other
(or when combined with type-checking, since type-checking is closely related
to theorem-proving), but had no idea how to even ask a precise question much
less make anything of it.)

~~~
joshcohen
Solomonoff induction is useful because without it there's no model of
induction with infinite computational resources. "Logical induction" is not
useful in the same way because without it we already have such a model: simply
prove/disprove the propositions.

~~~
Klaperman
And what if the propositions we want to reason about are self-referential, or
say something about the system itself in a Gödelian way? You need
probabilistic reasoning for that to actually work.

Also, "simply prove/disprove the propositions" requires unbounded
computational resources (we don't know how long the proofs will be, or
whether they exist at all). Logical induction does not.

