
To Build Truly Intelligent Machines, Teach Them Cause and Effect (2018) - guybedo
https://www.quantamagazine.org/to-build-truly-intelligent-machines-teach-them-cause-and-effect-20180515
======
Animats
In the 1980s, when everybody was trying to do AI with some flavor of predicate
calculus, Pearl extended that to probabilistic predicate calculus. That helped.
But it didn't lead to common sense reasoning. The field is so stuck that few
people are even trying.

Working on common sense, defined as predicting what happens next from
observations of the current state, is a classic AI problem on which little
progress has been made. I used to remark that most of life is avoiding big
mistakes in the next 30 seconds. If you can't do that, life will go very
badly. Solving that problem is "common sense". It's not an abstraction.

The other classic problem where the field is stuck is robotic manipulation in
unstructured situations. In the 1960s, McCarthy thought that was a summer
project; he wanted a robot to assemble a Heathkit TV kit. No way. (The kit was
actually purchased, sat around for years, and finally somebody assembled it
and put it in a student lounge at Stanford.) Fifty years later, unstructured
manipulation still works very badly. Watch the DARPA Humanoid Challenge or the
DARPA Manipulation Challenge videos from a few years ago.

Great PhD thesis topics for really good people. High-risk; you'll probably
fail. Succeed, even partially, and you have a good career ahead.

~~~
jfengel
I worked on AI-via-predicate-calculus, as a successor to Cyc, and I think the
main thing I learned is that people are incredibly bad at predicate calculus.
Even when we behave "logically", it's an after-the-fact rationalization for a
conclusion we arrived at much faster with heuristics.

When we think-about-thinking, or talk-about-thinking, we do so in the language
of language, which quickly leads to logic. And that leads us to think that the
logic is the thinking. But in fact it's a rare, specialized mode of thought.
The primary mode of thought -- the one that keeps us from making big mistakes
for a half-minute at a time -- is that irrational one that's very easy to fool
if you put effort into it, but which actually gets it right for most of
reality (which isn't, generally, trying to trick you).

~~~
Animats
_When we think-about-thinking, or talk-about-thinking, we do so in the
language of language, which quickly leads to logic. And that leads us to think
that the logic is the thinking. But in fact it's a rare, specialized mode of
thought._

Yes. Language is not thinking. Language is I/O.

~~~
thedirt0115
What do you think about the language of thought hypothesis? Some people would
say internal monologue is how they think.

~~~
joe_the_user
Ah, what you mean by "is" gets tricky.

You might literally interpret this as "is internal monologue _a form_ of
thought" and the answer is clearly yes; at some logical level, it's brain
activity.

You might literally interpret this as "is internal monologue _the ONLY form_
of thought you have" and the answer seems just as clearly no, and the question
seems simplistic.

But I think people who say that usually mean "is internal monologue _the
primary-in-some-way form_ of thought you have", and there the debate gets
heated. But if you unpack the statement, you reveal that the confusion is
mostly in debating "what's primary in brain activity". And if you think about
it, a lot of debates about human thought are about which part is primary, in a
fashion we can intuitively feel.

------
sarosh
While the article is a nice Q&A with Pearl about his new book, _The Book of
Why_, there is a very detailed technical tutorial from 2014 at
[http://research.microsoft.com/apps/video/default.aspx?id=206...](http://research.microsoft.com/apps/video/default.aspx?id=206977)
that provides an in-depth explanation of causal calculus, counterfactuals,
etc., and how these tools should be used.

~~~
pieterk
Slides are here btw: [https://media.nips.cc/Conferences/2013/nips-dec2013-pearl-ba...](https://media.nips.cc/Conferences/2013/nips-dec2013-pearl-bareinboim-tutorial-full.pdf)

------
narag
Simulating a mind is not the same as simulating mind processes.

I doubt that you can create a mind that's similar to a human mind without the
relevant elements that are taken for granted when we think of a human being:
senses, perception, pain, pleasure, fear, volition... a body! And the
real-time feedback loop that connects us to our environment and our peers.

The same could be said about animals' minds. That's why it's still impossible
to make even a mosquito brain. It's a question of _texture_. Making a
decision, for a human, involves a complex cloud of subsystems working in
unstable equilibrium, more of a boiling cauldron than an algorithmic
checklist. When you're scared, you're not just _thinking_ that something is
dangerous and that you'd rather avoid it; you are _feeling_ something very
uncomfortable and you _want_ to stop it.

What if you want to advance in creating some kind of simpler mind _now_, when
you still don't have the means to build a complete organism? That's an
interesting problem. Would immersing programs in a virtual world be useful? Or
would it be better to make robots face the real world directly? I believe that
you need, as a minimum, a system that integrates sight with hearing and touch
sensors, and some kind of incentive system.

After some results, maybe using machine learning, the emergent organization
could be applied as a building block for more complex robots. Meanwhile,
trying to teach machines some human capabilities will not lead to generalized
AI, but to more of the same as we have now, which is very useful but doesn't
quite qualify for the label.

~~~
7373737373
>senses, perception, pain, pleasure, fear, volition... a body!

Yes! Almost all neural networks have no self-model and thus no
self-awareness, because they cannot perceive themselves. They only see the
inputs. They do not see the results of their actions.

This makes developing a self-model impossible: they cannot build an internal
model that differentiates internal from external causes, or of where their
"boundary of influence" lies.

They are trained and then used, frozen, no longer learning after training.
Even if they could perceive their outputs during training and/or evaluation,
they cannot otherwise perceive themselves, making it practically impossible
for them to deduce what they even are. They can't inspect themselves.

The causal loop needs to be closed for all of this to happen.

~~~
viuphiet
The "self-model" you're talking about is the agent in the Reinforcement
Learning framework. It moves between states in an environment and learns from
reward it earns from each action.
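
A minimal sketch of that framework in Python, using tabular Q-learning on a
made-up five-state chain environment (all names and numbers here are invented
for illustration):

    import random

    # Toy environment: states 0..4 in a chain; reaching state 4 pays a reward.
    N_STATES, ACTIONS = 5, ["left", "right"]

    def step(state, action):
        """The environment: an action in a state causes a next state and a reward."""
        nxt = max(state - 1, 0) if action == "left" else min(state + 1, N_STATES - 1)
        return nxt, (1.0 if nxt == N_STATES - 1 else 0.0)

    # The agent's knowledge is just a table of action values, learned from reward.
    Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    alpha, gamma, eps = 0.1, 0.9, 0.1  # learning rate, discount, exploration

    state = 0
    for _ in range(10_000):
        if random.random() < eps:   # occasionally explore at random
            action = random.choice(ACTIONS)
        else:                       # otherwise act greedily on current values
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        nxt, reward = step(state, action)
        best_next = max(Q[(nxt, a)] for a in ACTIONS)
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = nxt

    # Learned policy: should be "right" in every state.
    print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES)})

Note that the agent ends up with a table of action values, not any explicit
model of why "right" is good, which is roughly the objection raised elsewhere
in this thread.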

~~~
7373737373
Yes, although I wonder whether, or how well, these self-models develop in
practice compared to the world-models. Say, if a 2D agent has a rectangular
body shape, it probably won't develop a high-level representation of that fact
unless its actions allow it to perceive it accurately. Purely figuring that
out from the collisions produced by basic actions (rotate left, rotate right,
move forward, etc.) seems practically infeasible. It has neither sight
(observing its own movements) nor self-touch (which would let it observe its
boundaries and relate them to what it has seen).

------
spappletrap
"Teach them cause and effect" ... yep, that's pretty much what everyone's been
trying to do since the 60's. The problem is that nobody will touch the core
issues of consciousness because it's inherently political. It requires
confronting some of the biggest taboos in science: anthropomorphizing animals
in biology, discussing consciousness seriously in physics, and looking at how
economics and information interact with a skeptical eye toward the standard
economic narrative.

------
uoaei
Causal inference is the next big leap in AI. Once the relatively(!) low-
hanging fruit of pattern recognition are picked to exhaustion, and once we can
get more comfortable with symbolic reasoning with respect to theorem proving /
hypothesis testing / counterfactuals, "real" reasoning machines will arise.

~~~
mrfusion
I’d like to hear more about this. How do you see this coming about?

~~~
uoaei
Definitely a combination of current ML (basically fancy nonlinear regression
to MLE targets) with symbolic reasoning. Either alone is insufficient.

Symbolic reasoning is basically learning a lot of "if-then" statements and
chaining them to make inferences. Causal reasoning consists of defining
conditional dependencies of the current state on past states, then
extrapolating based on the encoded assumptions. It requires some notion of
object relations, in a literal sense as well as in subtler ways. Regression
techniques are being ham-fistedly bent to fit these roles, but the popular ML
of today is still just pattern recognition and cannot be called "reasoning"
per se.
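
To make the if-then half concrete, here is a toy forward-chaining sketch in
Python (the rules and facts are invented for illustration):

    # Each rule: if all premises hold, conclude the consequent.
    RULES = [
        ({"raining"}, "ground_wet"),
        ({"ground_wet", "freezing"}, "ground_icy"),
        ({"ground_icy"}, "slippery"),
    ]

    def forward_chain(facts, rules):
        """Repeatedly fire rules until no new facts can be derived."""
        facts = set(facts)
        changed = True
        while changed:
            changed = False
            for premises, conclusion in rules:
                if premises <= facts and conclusion not in facts:
                    facts.add(conclusion)
                    changed = True
        return facts

    print(forward_chain({"raining", "freezing"}, RULES))
    # {'raining', 'freezing', 'ground_wet', 'ground_icy', 'slippery'}

The causal half would replace these static implications with directed
dependencies of the current state on the past state, so that you can also ask
what happens under an intervention, not just what follows logically.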

I don't work directly in this space, but I see it closely following the
architecture of the human brain for a while before departing to more distilled
forms of knowledge-management structures.

~~~
viuphiet
Neural networks do exactly what you are describing as "symbolic reasoning". It
seems to be a common thing recently to dismiss modern ML techniques as curve
fitting, but these fundamental models are extremely powerful.

Neural networks are capable of approximating any system to arbitrary
precision.
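
For what it's worth, that claim (strictly: continuous functions on a compact
set, per the universal approximation theorem) is easy to demo from scratch.
Here's a sketch in Python with one hidden tanh layer fit to sin by plain
gradient descent; every hyperparameter is arbitrary:

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.uniform(-np.pi, np.pi, (256, 1))
    y = np.sin(x)

    # One hidden layer of tanh units; with enough width this can approximate
    # any continuous function on a compact set.
    W1 = rng.normal(0, 1, (1, 32)); b1 = np.zeros(32)
    W2 = rng.normal(0, 1, (32, 1)); b2 = np.zeros(1)

    lr = 0.05
    for _ in range(5000):
        h = np.tanh(x @ W1 + b1)        # hidden activations
        pred = h @ W2 + b2              # network output
        err = pred - y                  # gradient of squared error, up to a constant
        # Backpropagation of the (half) mean-squared-error gradients.
        gW2 = h.T @ err / len(x); gb2 = err.mean(0)
        dh = (err @ W2.T) * (1 - h**2)
        gW1 = x.T @ dh / len(x); gb1 = dh.mean(0)
        W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

    # Mean squared error; should be near zero after training.
    print(float(np.mean((np.tanh(x @ W1 + b1) @ W2 + b2 - y) ** 2)))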

~~~
uoaei
This is theoretically true, but it's like saying "computers can compute any
function, given enough time and resources".

There is a need to construct logically-deduced models which impose an
inductive bias so that your regression methods are efficient. That's where
reasoning comes in, and where automated reasoning methods should be useful.

------
tabtab
I'm not sure that's necessary. Early humans didn't know why a lot of things
happened, such as why rubbing sticks makes fire; they just learned such tricks
through trial and error. The physics of it was beyond them. I see it more as
goal-oriented: "I want fire, how can I get it?".

I suppose that's cause-and-effect in a loose sense, but one doesn't have to
view everything as C&E to get similar results. It seems more powerful to think
of it as relationships instead of just C&E, because then you get a more
general relationship-processing engine instead of a single-purpose thing. Make
C&E a subset of relationship processing. If the rest doesn't work out, you
still have a C&E engine by shutting off some features.

~~~
mjfl
They understood cause and effect. They didn't know the causal chain in depth,
but they did know that rubbing sticks together caused fire. They also knew
that dumping water on the ground did not cause it to rain. Thus they could
distinguish between correlation and causation.

~~~
nradov
Did they really understand cause and effect? Primitive cultures frequently
used religious ceremonies (cause) to effect changes in the natural world. It
didn't actually work, but somehow they fooled themselves into believing that
it did.

~~~
milesskorpen
I was recently reading an essay (can't find it this second) about how some
religious ceremonies actually introduced helpful randomness.

For example, if you hunt to the east and find good game, you'd keep hunting to
the east. Eventually you'd kill everything over there or get them to move, and
your hunting would get worse. The optimal approach might be to randomize the
direction you hunt, so that game doesn't learn where you're hunting.

The society couldn't say WHY the ceremony was good, but in the long term, if
they kept applying the ceremony, they'd have better outcomes than societies
that didn't.

Sometimes superstition is just that ... but I'd also bet there are
unintuitive/surprising benefits behind a lot of it.
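
A toy version of that argument in Python, with completely invented regrowth
dynamics: a hunter who always returns to yesterday's patch versus one who lets
the ceremony pick a direction at random.

    import random

    def simulate(choose, days=2000, n_patches=4, seed=1):
        """Each patch's game stock regrows logistically; a hunt takes half the stock."""
        rng = random.Random(seed)
        stock = [50.0] * n_patches
        last, total = 0, 0.0
        for _ in range(days):
            p = choose(rng, last, n_patches)
            total += 0.5 * stock[p]   # the day's catch
            stock[p] *= 0.5
            last = p
            for i in range(n_patches):  # logistic regrowth toward capacity 100
                stock[i] += 0.3 * stock[i] * (1 - stock[i] / 100.0)
        return total

    habit = lambda rng, last, n: last                 # keep hunting "east"
    ceremony = lambda rng, last, n: rng.randrange(n)  # randomize the direction

    print(simulate(habit), simulate(ceremony))

Under these made-up dynamics the habitual hunter crashes his one patch while
the others sit untouched, and the randomizer bags far more game over time,
which is the essay's point in miniature.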

~~~
imtringued
There are psychological benefits. If you believe that things like rain are
within your control then you can be confident instead of feeling helpless.

Obviously religions take advantage of this desire for the illusion of control
and convince followers that practicing the religion will keep their lives free
from external bad influences.

~~~
milesskorpen
Sure. I think the randomization concept particularly interests me because it's
NOT just psychological. There's an actual real-world benefit to doing rituals
/ reading entrails / listening to people speaking in tongues vs. not, because
our instincts and normal habits aren't always right.

------
mindcrime
There are a number of "things" that we should "teach" machines to create ones
that are "truly intelligent". Besides this kind of "cause / effect reasoning",
one could argue that an intelligent machine needs some baseline levels of what
you might call "intuitive metaphysics", and "intuitive epistemology".

You could probably argue that the cause/effect stuff is subsumed by one of
these at a certain level of abstraction, but I think it makes sense to treat
them as separate.

Related to the idea of "cause/effect", and possibly falling under the overall
rubric of "intuitive metaphysics", is some notion of the passage of time. That
is, in human experience we link things as "causal" when they happen in a
certain sequence, and within a certain degree of temporal proximity.

E.g., "I touched the hot burner and instantaneously felt excruciating pain" is
an experience that we learn from. "I walked through the door and four days
later I felt pain in my knee" probably is not.

Our machines probably also need baseline levels of some sort of intuitive
versions of temporal logic and modal logic.

[https://en.wikipedia.org/wiki/Metaphysics](https://en.wikipedia.org/wiki/Metaphysics)

[https://en.wikipedia.org/wiki/Epistemology](https://en.wikipedia.org/wiki/Epistemology)

[https://en.wikipedia.org/wiki/Temporal_logic](https://en.wikipedia.org/wiki/Temporal_logic)

[https://en.wikipedia.org/wiki/Modal_logic](https://en.wikipedia.org/wiki/Modal_logic)

~~~
Barrin92
I'd agree with that, and I think Winograd schemas make this very obvious.
Take, for example:

 _(1) John took the water bottle out of the backpack so that it would be
lighter.

(2) John took the water bottle out of the backpack so that it would be handy_

What does _it_ refer to in each sentence? It's very obvious that a machine
that solves this must understand physics, have a rudimentary ontology of
objects and of human intentions, and so on.

I think it's straight-up sad how little progress there has been on these very
fundamental problems, which articulate what common sense and intelligent
agents are about.

~~~
mLuby
Can this be a CAPTCHA instead of the goddamn crosswalks? Asking for a friend…

~~~
gambiting
As a non-native English speaker I'm struggling here a little bit. I'm
_guessing_ you mean that in the second sentence, "it" refers to the bottle,
not the backpack? But that certainly wouldn't have been my first answer to
this question. (In general, any sort of CAPTCHA based on language skills isn't
great; not everyone who consumes your content speaks your language.)

~~~
mLuby
Yep, excellent point. Oh well, back to squinting at traffic lights…

------
Pils
I recently joined a team that does a lot of causal analysis, mostly
marketing-related, and was wondering what the best resources are to get more
familiar with this subject (books, lectures, online courses, etc.). I am
picking up the author's other book, _Causality: Models, Reasoning and
Inference_, but am wondering what other sources people recommend.

~~~
mindcrime
Maybe a book or two on Structural Equation Modeling?

[https://en.m.wikipedia.org/wiki/Structural_equation_modeling](https://en.m.wikipedia.org/wiki/Structural_equation_modeling)
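
And a minimal linear structural-equation sketch in Python (all coefficients
invented) showing the basic move, recovering a causal effect by adjusting for
a confounder:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000

    # Structural equations for a toy graph: Z -> X, Z -> Y, X -> Y.
    Z = rng.normal(size=n)
    X = 0.8 * Z + rng.normal(size=n)
    Y = 1.5 * X + 2.0 * Z + rng.normal(size=n)  # true effect of X on Y: 1.5

    # Naive regression of Y on X is biased by the back-door path through Z.
    naive_slope = np.polyfit(X, Y, 1)[0]

    # Regressing Y on X *and* Z (adjusting for the confounder) recovers 1.5.
    design = np.column_stack([X, Z, np.ones(n)])
    adjusted, *_ = np.linalg.lstsq(design, Y, rcond=None)

    print(round(naive_slope, 2), round(adjusted[0], 2))  # ~2.48 vs ~1.50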

------
neaden
It's funny that I knew who this would be by (or interviewing) just from the
title. I like Judea Pearl and a lot of his ideas, but at the same time I think
he overstates their importance and hypes them up more than he should.

------
hooande
@dang, hate to be that guy, but can we add "[2018]" to the title?

~~~
AlexCoventry
This was kind of passé, even in 2018.

------
jacobwilliamroy
I can build an AI with common sense reasoning in about 9 months. The problem
has already been solved. Why do we care so much about making computers more
like people? Isn't that excessively cruel? Part of the utility of computing is
that computers don't have needs for fulfillment, companionship, or communion.
We deploy them in awful conditions to do the most horrible, tedious,
time-wasting jobs. Why do such minds need to be human?

~~~
nineteen999
I wonder how it could even be considered "cruel"? Cruel to other living human
beings, perhaps. To the machine or its simulation software? No.

Any human-like AI is still a "fake": any notion of emotion, pain, empathy,
etc. that we attribute to it is only a simulation. It simply doesn't matter.
It amazes and amuses me to think that people might actually give a damn what
the machine is "feeling". I think people who truly believe this are out of
touch with reality and, frankly, with other human beings. The machine doesn't
really care about us; it's a bunch of ones and zeroes no matter how you slice
and dice it.

Even after training them on cause and effect, they still don't care. I don't
buy the "if it looks like a human, sounds like a human, it's human" argument
at all.

~~~
JoeAltmaier
Cruelty is also in the mind of the one being cruel. People worry about being
cruel to plants, to pets, to their cars. It's natural and normal, because we
are empathetic beings. It's not something to try to unlearn or avoid; it's a
big part of our humanity.

~~~
nineteen999
A plant or an animal, yes. A car though? The only reason to worry about being
"cruel" to a car is that mistreating it will result in larger repair bills and
a need to replace it earlier. Same with a computer. But to each their own, I
guess.

~~~
jacobwilliamroy
When your car is smart enough to assemble Ikea furniture, it'll also be smart
enough to quietly resent you for making it assemble furniture all day.

~~~
imtringued
No, it would probably enjoy it. If you were making it clean the toilet all day
then it would start to resent you.

------
agumonkey
I'd say pain. But that's just me.

------
ratsimihah
Isn't reinforcement learning essentially a representation of cause and effect?

~~~
AlexCoventry
No, it's a representation of which actions lead to good outcomes given a set
of input data. There is no explicit symbolic reasoning about causal factors or
their outcomes involved in classic RL, and it's very unlikely that any such
symbolic representation evolves implicitly under the hood. A neural net in an
RL system is just a souped-up version of the tabular data used in the earliest
RL systems.

~~~
viuphiet
The reinforcement learning framework is perfect for representing cause and
effect. An agent could learn that, in a state of no fire, taking the action of
rubbing sticks together transitions it into a state of having fire. This
concept is formalized as learning the dynamics function.
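
A tiny sketch of exactly that in Python, estimating a tabular dynamics
function from counts (the fire numbers are made up):

    from collections import Counter, defaultdict

    # Estimate the dynamics function P(next_state | state, action) from experience.
    counts = defaultdict(Counter)

    def observe(state, action, next_state):
        counts[(state, action)][next_state] += 1

    def dynamics(state, action):
        c = counts[(state, action)]
        total = sum(c.values())
        return {s: k / total for s, k in c.items()}

    # Invented experience: rubbing sticks usually makes fire; waiting never does.
    for _ in range(8):
        observe("no_fire", "rub_sticks", "fire")
    observe("no_fire", "rub_sticks", "no_fire")  # sometimes it fails
    for _ in range(9):
        observe("no_fire", "wait", "no_fire")

    print(dynamics("no_fire", "rub_sticks"))  # {'fire': ~0.89, 'no_fire': ~0.11}
    print(dynamics("no_fire", "wait"))        # {'no_fire': 1.0}

Whether a table of estimated P(next state | state, action) amounts to
representing cause and effect, or is just conditional statistics, is
essentially the disagreement upthread.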

------
hans1729
Related: a DeepMind paper I found fascinating:

[https://arxiv.org/pdf/1901.08162v1.pdf](https://arxiv.org/pdf/1901.08162v1.pdf)

------
smiljo
Saw only recently that Judea Pearl was a guest on Sam Harris' podcast:
[https://samharris.org/podcasts/164-cause-effect/](https://samharris.org/podcasts/164-cause-effect/).

The preamble is depressing, since the episode aired right after a mass
shooting, but Pearl gives a brief overview of his thinking.

------
crimsonalucard
Most people don't even know how to run an experiment to verify causation. They
chant the mantra "correlation does not equal causation", then go back to
correlating everything they see in the world.
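
For the concrete version, here's a simulation in Python (all effect sizes
invented) where the observational comparison points the wrong way and only a
randomized experiment, i.e. Pearl's do-operator, recovers the true effect:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000

    # Confounded world: sicker people take the drug more often,
    # and sickness hurts recovery. The drug truly helps (+1.0).
    severity = rng.normal(size=n)
    drug_obs = (severity + rng.normal(size=n) > 0).astype(float)
    recovery = 1.0 * drug_obs - 2.0 * severity + rng.normal(size=n)

    # Observational comparison: drug-takers recover WORSE (~ -1.3).
    print(recovery[drug_obs == 1].mean() - recovery[drug_obs == 0].mean())

    # Randomized experiment: assign the drug by coin flip, i.e. do(drug).
    drug_rct = rng.integers(0, 2, n).astype(float)
    recovery_rct = 1.0 * drug_rct - 2.0 * severity + rng.normal(size=n)
    print(recovery_rct[drug_rct == 1].mean()
          - recovery_rct[drug_rct == 0].mean())  # ~ +1.0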

------
sgt101
Interesting formulation, because I think children do learn about cause and
effect.

Anyhoo...

------
bitxbit
I strongly believe there's an overemphasis on AI, artificial intelligence, as
opposed to augmented intelligence.

~~~
nightski
Do you have any indication that they are all that different? Meaning, would
the techniques or strategies used to develop augmented intelligence be that
much different from what is going on in AI?

------
RedComet
A “truly intelligent machine” is a contradiction in terms. Machines cannot
have intelligence like humans (AGI, or whatever the current buzzword is).
Humans are not solely material.

~~~
IIAOPSW
Not with that attitude.

Seriously. Extraordinary claims, etc., etc. If you want to claim humans are
not solely material, you need to give some sort of evidence of a phenomenon
beyond the physical. You can't use intelligence per se as your evidence, as
then your argument is circular.

~~~
RedComet
It has been demonstrated for millennia. And I don’t know what your strange
intelligence straw man has to do with anything. Even elementary metaphysics
covers this.

~~~
IIAOPSW
This is just neo-geocentrism.

>Of course the Earth is at the center of the solar system. We must occupy a
privileged space in this universe. It's been demonstrated for millennia.

Thinking is done with neurons. Neurons are subject to the same physical laws
as the rest of the material world. Therefore thinking can be done by a machine
(if nothing else, by a physics simulation of neurons).

To refute this logic, you must show that some thought is not being done by
neurons, or that neurons are not subject to physics.

~~~
RedComet
I don't see why you feel the need to keep setting up these straw men.

It might make for nice rhetoric, but it is more than a little disingenuous.
Not only was that geocentrism piece not an argument or claim that I've made,
but I've never seen anyone make it in that manner either. But perhaps you know
that and were intentionally misrepresenting their arguments.

Back on topic: no, your concluding claim is not true. To reach your conclusion
you would, at a minimum, need to assume that a machine can simulate arbitrary
physical phenomena, which is not a foregone conclusion. For instance, the
"thinking done by neurons" you refer to may be reliant on some facet of the
real numbers that is simply not computable. Perhaps at any level of
approximation, what we discern as AGI may not manifest. Etc.

But, finally, your premises are faulty. What most people really mean when they
reference AGI or "truly intelligent" is not intelligence, but wisdom.
Computers have been "more intelligent" than humans for a long time now, if
intelligence simply means arithmetic and recalling trivia. Now, noting that,
isn't it possible that such wisdom is dependent on the will, the soul, etc.?

So we arrive back at the true point of contention: you are a materialist. I
claim that materialism has been handily refuted for thousands of years. Then
you fell back on pretty much every freshman-level fallacy in the book. On the
other hand, I suspect some of your misrepresentation of "classical" philosophy
was not intentional, just the result of getting most of it second-hand from
the Kurzweil (AI) and Dawkins (geocentrism) type literature. It is wise not to
be so dismissive of pre-"Enlightenment" thinking.

