
Logic Theorist - molteanu
https://en.wikipedia.org/wiki/Logic_Theorist
======
rgovostes
Here is more information on Logic Theorist's improved proof of a theorem in
Principia Mathematica:

[https://books.google.com/books?id=iYHuBwAAQBAJ&pg=PA52&lpg=P...](https://books.google.com/books?id=iYHuBwAAQBAJ&pg=PA52&lpg=PA52&dq=principia+mathematica+2.85&source=bl&ots=zyu408jeG0&sig=dApPpxiPyFGM8wBkIv8wmjjGM8I&hl=en&sa=X&redir_esc=y#v=onepage&q=principia%20mathematica%202.85&f=false)

~~~
jkabrg
For people who don't understand the dots and colons, OP's proposition can be
expressed in modern notation as

    
    
      ((p V q) -> (p V r)) -> (p V (q -> r))
    

It's easy to justify this tautology by considering the cases when p is true
and when p is false.
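A quick brute-force check over all eight truth assignments confirms it's a tautology (a throwaway Python sketch, not anything from LT itself):

```python
from itertools import product

def implies(a, b):
    # Material implication: a -> b
    return (not a) or b

# ((p V q) -> (p V r)) -> (p V (q -> r)) holds under every assignment
for p, q, r in product([True, False], repeat=3):
    lhs = implies(p or q, p or r)
    rhs = p or implies(q, r)
    assert implies(lhs, rhs)

print("tautology")  # → tautology
```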

By contrast, the proofs in PM and by LT strike me as highly unintuitive. I
guess it's an example of using Hilbert-style deduction instead of Natural
Deduction, where Natural Deduction is closer to how people normally prove
things. For programmers, it's as if they programmed in Combinatory Logic [1]
instead of Lambda Calculus [2].

[1] -
[https://en.wikipedia.org/wiki/SKI_combinator_calculus](https://en.wikipedia.org/wiki/SKI_combinator_calculus)
[2] -
[https://en.wikipedia.org/wiki/Lambda_calculus](https://en.wikipedia.org/wiki/Lambda_calculus)

~~~
scscsc
Assuming p V !p (read: p or not p) as an axiom means that you are admitting
excluded middle. This axiom is part of "classical propositional logic". So a
proof using this case distinction (p true or p false) would be valid, in the
classical sense.

There is a more restrictive system, called "intuitionistic propositional
logic", where this axiom is not valid. Fewer formulae are valid
intuitionistically, but any formula that is valid intuitionistically is also
valid classically.

There are various philosophical reasons why one might prefer intuitionistic
logic, but note that in any case it is stronger to have an intuitionistic
proof, so these are preferable (when they exist).

~~~
Sniffnoy
The formula ((p V q) -> (p V r)) -> (p V (q -> r)) isn't valid
intuitionistically, though (you can get excluded middle from it by taking q=p
and r=⊥), so there's no reason to want an intuitionistic proof.

------
tokai2
Here’s a simple analogy.

Human: 1 + 1 + 1 + 1 = 4

Machine: 2 + 2 = 4

The machine ‘knows’ the result must be exactly 4. The machine is just finding
new ways to arrive at a result that it already knows to be true. But I want
the machine to arrive at a result that is hitherto unknown. That, for me, is AI.

~~~
pimmen
We humans often run into new discoveries by basing it on stuff we already
know.

We knew that combustion engines could rotate an axle. We knew that axles could
drive a wheel. We knew that wheels turning could drive a cart. Thus, the
automobile.

Just like humans, AIs can only solve problems they already understand.

~~~
smt88
> _Just like humans, AIs can only solve problems they already understand_

This isn't actually true. Problems can be solved by random chance, as long as
there are good ways to test the validity of solutions. The most obvious
example of this is the evolution of living organisms.

As long as AI can "understand" whether or not its solution solves a certain
problem, it can just assemble random solutions and test them. There's
obviously an incentive to decrease the randomness as much as possible, but an
infinitely powerful computer wouldn't need any such optimization at all.
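As a toy illustration of "assemble random solutions and test them" (a hypothetical Python sketch; the goal and the checker here are made up purely for illustration):

```python
import random

def is_solution(bits):
    # Cheap validity test: an 8-bit string with exactly five 1s.
    return sum(bits) == 5

random.seed(0)  # reproducible run
attempts = 0
candidate = None
while candidate is None or not is_solution(candidate):
    # Assemble a completely random candidate and test it
    candidate = [random.randint(0, 1) for _ in range(8)]
    attempts += 1

print(attempts, candidate)
```

The searcher never "understands" the problem; all it needs is the checker. Smarter proposal mechanisms only shrink the number of attempts.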

~~~
mockingbirdy
If you watch videos of AIs learning to run [1] or playing Mario [2], you see
that there's definitely "try random shit until your reward function gives you
positive feedback" as a method for training AIs.

The difficult part is developing the reward function. There's a lot of
intelligence in our hormone-based incentive system - that we feel pain, that
we feel sad, etc. Many of those emotions are pre-programmed. If we find a way
to design reinforcement systems with the right goals, we can do it like mother
nature did it for several billion years. But it still requires a ton of
computational power.

If we look at nature as a massively parallelized computational system that
defined "just live" as the reward (not because it "wants" it, but because it
just emerged), it shows us how much power we would need if we tried to build
a completely random process that gains intelligence.

I think your proposed method works ("just assemble random solutions and test
them") because we've already seen it in nature. But it takes a lot of time and
energy, regardless of our computational power. I think we're faster when we
find the right boundaries and reward functions instead of randomly trying
stuff.

[1]: [https://youtu.be/gn4nRCC9TwQ](https://youtu.be/gn4nRCC9TwQ) [2]:
[https://youtu.be/qv6UVOQ0F44](https://youtu.be/qv6UVOQ0F44)

~~~
s-shellfish
> If we look at nature as a massively parallelized computational system that
> defined "just live" as the reward (not because it "wants" it, but because it
> just emerged), it shows us how much power we would need if we tried to
> build a completely random process that gains intelligence.

With that definition, I can't see how artificial life, once created, would be
much different from the behavior of a computer virus.

> I think we're faster when we find the right boundaries and reward functions
> instead of randomly trying stuff.

Boundaries and reward functions in human society are part of the 'human social
organism' that allows each individual human to function with autonomy in a
fashion that is collaborative, aligned with our developed value systems, and
allows us to live with stability, security, faith (be it in some sense of
wonder, divinity, in each-other - doesn't matter), independence - and these
things are base needs, no matter what variation they manifest with. Boundaries
are redefined when there is conflict, and the less violent and destructive the
conflict, the better the chances are for these boundaries and reward functions
to continue functioning as they have been developed (rather than being
obliterated entirely, requiring them to be rebuilt from scratch).

It makes us faster, but it's also us standing on the shoulders of giants. And
I think it's important to question what things are worth applying random
solutions to, and what things require deep contemplation. It seems very
paradoxical, to try to define something that both is a function of the system
it exists in, and also something that could potentially break the whole
system. But, creation, destruction, clearly an oversimplification.

I do know that what looks random to one person is not necessarily random to
another. This goes back to how the context is defined, how society is defined,
boundaries and reward functions established a priori.

Emotions can be simple. The problem is that a computer already has them.
Computer produces wrong solution, algorithm dies. Computer produces right
solution, algorithm survives. We don't give the computer words to express
itself about this because we never taught it to do that. What would happen if
we did?

~~~
mockingbirdy
> try to define something that both is a function of the system it exists in,
> and also something that could potentially break the whole system

That is what's so damn hard about shaping societies (and the market). The
recursive feedback loop and the self-interference.

It's also on individual levels: We're able to dynamically adapt our
reinforcement system using meta-cognition (based in the prefrontal neocortex).

I understand the other parts of the answer, but I can't really see what you
try to say with them (e.g. boundaries, society and deciding if we apply
randomness or not).

~~~
s-shellfish
I tend to be of the mindset that over the long term, you are going to get a
good perspective if you approach the problem 50/50. Apply randomness half the
time, apply the knowledge you already have the other half. Divide and conquer,
sort of.

Randomness opens up the space in which you can identify errors and,
consequently, find mistakes in reasoning (Monte Carlo simulations are
traditionally used for this sort of thing). Another good thing about
randomness: it finds ways to see errors as 'not errors', e.g. a tool that can
be developed, structured, a new way to see the problem, a creative approach.
Some melding between the two seems to be something of
significance for an AI. So you don't want it to be all random, because you
need stability, structure, you need a 'language' or an 'awareness' you
understand that isn't so chaotic and constantly changing that you can't even
find a place to put your feet on the ground.

It doesn't have to be a perfect 50/50 balance, because that has its own set
of problems - divide and conquer all the divisions and you still wind up with
2^n newly defined problem spaces to interpret, possibly losing sight of the
bigger picture or maintaining independent direction in one focused lineage of
all those spaces. Just very generally, maintain balance, because the world is
chaos sometimes.

It's like a stream of information. All the analysis to all of that space is
meaningless if the context changes enough. So, adapt to a new context.

Honestly though, I don't know what I'm trying to say much of the time, aside,
'help', lol. Life is terrifying. :)

~~~
mockingbirdy
> Honestly though, I don't know what I'm trying to say much of the time,
> aside, 'help', lol. Life is terrifying. :)

from another comment made by you:

> Evolution does not care if an asteroid hits the earth and wipes out all life
> as it presently exists.

My advice: Stop worrying. Enjoy the randomness :) Maybe book a flight to Asia.
Life is short. Embrace uncertainty. Sell luxurious sanitary pads to rich women
in their 40s. Dress well and be funny. Then suddenly change your mood to sexy,
ask a stranger for a kiss. I now write random love letters to my ex-
girlfriends. Let's see where randomness will lead us to. See you on the other
side of existence.

~~~
mockingbirdy
Found the boundaries pretty early. Wearing nothing but socks and singing "Why
does it hurt when I pee?" while trying to cross a freeway is not considered
"appropriate in the public". shmocks everywhere.

I'm in jail now. I'm free now. But a little cold.

------
andrepd
I would like to see a few more details about how this actually worked.

------
goldenkey
Isn't this like an enhanced SAT solver?

~~~
jkabrg
No, not really. A SAT solver checks if there's a model in which a formula is
true, while this program checks if the formula is true in every model. It's
believed that there's no efficient reduction from one to the other, because
otherwise NP=coNP.

~~~
vilhelm_s
However, modern sat-solvers do both. That is, although they are called "sat-
solvers", they can actually output both "sat" and "unsat". In software
verification they are used for the "unsat" side: you produce a logical formula
P stating that your software is correct, and then you ask the sat-solver about
¬P. Hopefully it returns "unsat", meaning that it has proved P is true;
otherwise it will return a satisfying assignment, which is a counterexample to
your proposed theorem.

So I guess you could say that a theorem prover for propositional logic, like
Logic Theorist, is "half" of a modern solver like Z3, it can produce proofs
but not counterexamples. :)
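The workflow described above can be mimicked with a brute-force stand-in for the solver (a sketch only; real tools like Z3 use far better algorithms than enumeration):

```python
from itertools import product

def sat(formula, n_vars):
    """Brute-force 'SAT oracle': a satisfying assignment, or None if unsat."""
    for assignment in product([False, True], repeat=n_vars):
        if formula(*assignment):
            return assignment
    return None

# P: the proposition from upthread. To prove P valid, ask the "solver" about ¬P.
implies = lambda a, b: (not a) or b
P = lambda p, q, r: implies(implies(p or q, p or r), p or implies(q, r))
not_P = lambda p, q, r: not P(p, q, r)

result = sat(not_P, 3)
print("unsat" if result is None else result)  # → unsat, so P is valid
```

If ¬P were satisfiable, the returned assignment would be exactly the counterexample to the proposed theorem.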

------
codezero
Has any of the code for this survived?

~~~
Cieplak
Not the original source but pretty explanatory:

[https://history-computer.com/Library/Logic%20Theorist%20memo...](https://history-computer.com/Library/Logic%20Theorist%20memorandum.pdf)

------
darkmighty
Systems of logic should not be hard-coded into AIs (or AGIs), imo. The basic
problem with logic is that it doesn't take uncertainty into account, and
doesn't support the fact that almost every statement about the real world is
only approximately true, or a useful model; it also usually can't handle
inconsistencies very well.

It would be unable to prove, or indeed hold, any useful facts about the real
world. For example, consider the statement "The sky is blue." It is not
categorically true -- the sky becomes red near sunset, gets black at night
(depending on chrominance vs luminance interpretation), may be other colors
on other planets, etc. So the system would be stuck trying to convey all
necessary conditions, and might not be able to formulate simple thoughts or
communicate well. It would almost certainly lack abstraction capabilities.

There are some logical systems that make slight modifications to logic to
accommodate probabilities and uncertainty, but those still might be
insufficient or incomplete.

The fundamental issue is that, while logic works and is useful for us, it is
just a tool. The end goal of any real world agent is not logical correctness,
it is achieving tasks, understanding/predicting its environment,
communicating, etc. Because of the specificity of those environments and
tasks, there isn't going to be a general method that is efficient in all of
them. You need an evolutionary-like, or learning, system that can adapt to the
problem with minimal underlying assumptions. The assumption of logic that
everything should be binned according to truth values, probabilities, etc., is
ultimately not necessarily a good way to (for example) control a robot in a
complex environment, make it assemble a product, play videogames, or even do
mathematics. Perhaps logical systems can be augmented with side models (again,
tools) allowing them to formulate subtasks as logical problems (although it's
not clear those approaches would be better than just training neural networks).
Nothing can be set in stone, everything must be learned and modifiable.

So again the main problem with AGI is formulating good frameworks that 1) have
good, general goals aligned with what we expect from the AI; and 2) are able
to adapt in almost every way to achieve those goals, without degrading their
ability to recognize their rewards/goals.

This is such a hard problem that it occupies a significant chunk of even
humans' lifetimes. We spend lots of time thinking about long term goals, which
in part are dictated by society, in part dictated by biology, and which we
reflect upon to conclude what we should do -- from small daily goals like
eating or getting to work, to long term career advancing and relationships.
Then we set to achieve those goals by learning and acting on the world around
us. It does happen often that our goals degenerate, ultimately because of our
ability to modify ourselves and set our own goals: the reward achieved from a
drug is a degeneracy that overrides other long term goals; various addictions
exploit loopholes in our reward system; we suffer from despair, depression or
lack of motivation that can even override the basic biological rewards.

If we could not override basic biological rewards and set our own goals, then
we would fall prey to addictions and be unable to live in complex
environments, which require complex unnatural goals related to careers,
learning, and other abstract concepts.

And too much of this power can leave us in desperation, depression,
meaninglessness.

So there is a constant balance and search for genuine rewards, and constant
refinement of them. That is central to human existence, and to conscious
experience in general.

