
The Computational Theory of Mind (2015) - DanielleMolloy
https://plato.stanford.edu/entries/computational-mind/
======
joe_the_user
It seems like basic materialism to claim that the brain could be simulated by
a large enough Turing machine. But that claim would not necessarily imply that
a Turing machine is a useful model of the brain. A Turing machine isn't even
necessarily used as the working model of a computer, though it conceivably
could be.

Edit: My comment is basically asking what saying "the mind is a Turing
machine" really means. The main implication someone would take away from that
statement is that it's reasonably practical to describe mental processes with
a computer, but the question is formulated as a binary is/is-not. The mind
"is" a Turing machine given ordinary physics, which I believe says everything
can be approximated by a large enough Turing machine. But this sense of "is"
doesn't imply the Turing-machine mind model is useful.

~~~
lidHanteyk
Nobody claims that Turing machines _in particular_ are well-suited to the
task. What we claim is something more open-ended: Turing-complete languages
can simulate each other given a universal encoding (a program we typically
call an "emulator") and the simulation is only polynomially slower and larger.

To the details: We don't want just one tape on the Turing machine. It's fine
but slow, like using unary Church numerals instead of a binary representation.
We usually assume at least two tapes. Similarly, we often ignore tapes and use
a random-access memory instead.
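
To make the emulation point concrete, here is a minimal sketch in Python (the
function names and the toy machine are my own, not anything from the article):
a short interpreter that runs any single-tape Turing machine from its rule
table, which is all an "emulator" fundamentally is.

```python
from collections import defaultdict

def run_tm(rules, tape, state="start", blank="_", max_steps=10_000):
    """Simulate a single-tape Turing machine.

    rules maps (state, symbol) -> (new_state, write_symbol, move),
    where move is -1 (left), +1 (right) or 0. Halts on state 'halt'.
    """
    # A defaultdict gives us an unbounded tape of blanks.
    cells = defaultdict(lambda: blank, enumerate(tape))
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        state, cells[head], move = rules[(state, cells[head])]
        head += move
    # Read the visited portion of the tape back in order.
    lo, hi = min(cells), max(cells)
    return "".join(cells[i] for i in range(lo, hi + 1)).strip(blank)

# Toy machine: flip every bit, halt at the first blank.
flip = {
    ("start", "0"): ("start", "1", +1),
    ("start", "1"): ("start", "0", +1),
    ("start", "_"): ("halt", "_", 0),
}
print(run_tm(flip, "0110"))  # prints 1001
```

The interpreter is itself an ordinary program, so any Turing-complete language
can play host; the polynomial-overhead claim above is about how little this
hosting costs.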

Edit: I guess that this is how we're talking today? From the article:

> It is common to summarize CCTM through the slogan “the mind is a Turing
> machine”. This slogan is also somewhat misleading, because no one regards
> Turing’s precise formalism as a plausible model of mental activity. The
> formalism seems too restrictive in several ways.

The article then lists senses, finite memory, concurrency, and determinism as
four ways in which the brain might differ from idealized Turing machines.

~~~
it
Finite memory just means a limited subset of what a Turing machine can do.

Concurrency doesn't make Turing-computable processes capable of anything
beyond what they can already do. At best it buys them speed, maybe some
expressiveness for the programmer, and bugs.

Nondeterminism is useful for some algorithms, but it also doesn't make it
possible for Turing-computable processes to go beyond Turing computability.
See for example
[https://arxiv.org/pdf/cs/0401019.pdf](https://arxiv.org/pdf/cs/0401019.pdf).
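
A hedged sketch of why, in Python (the configuration space and names here are
made up for illustration): a deterministic program can exhaustively explore
every branch a nondeterministic machine could take, so nondeterminism buys
speed at best, never new computational power.

```python
from collections import deque

def nd_accepts(step, start, accepting, max_depth=50):
    """Deterministically simulate a nondeterministic machine.

    step(config) returns the *set* of successor configurations;
    we accept iff some branch reaches an accepting configuration.
    Breadth-first search visits every branch, so a deterministic
    program decides exactly what the nondeterministic one does,
    possibly exponentially slower.
    """
    seen, frontier = {start}, deque([start])
    for _ in range(max_depth):
        next_frontier = deque()
        while frontier:
            cfg = frontier.popleft()
            if cfg in accepting:
                return True
            for nxt in step(cfg):
                if nxt not in seen:
                    seen.add(nxt)
                    next_frontier.append(nxt)
        frontier = next_frontier
    return False

# Toy example: nondeterministically add 1 or 2 starting from 0.
# Some branch reaches exactly 7, so the machine "accepts".
print(nd_accepts(lambda n: {n + 1, n + 2}, 0, {7}))  # prints True
```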

The most interesting item in that list would be the senses of organisms, as
some of those might indeed involve processes that are not computable.

Also, some aspects of the brain's activity may not be computable. See Roger
Penrose's books for details, such as p. 377 of Shadows of the Mind (a section
about possible noncomputability in some physical processes).

------
YeGoblynQueenne
>> However, the mechanisms that connectionists usually propose for
implementing memory are not plausible. Existing proposals are mainly variants
upon a single idea: a recurrent neural network that allows reverberating
activity to travel around a loop (Elman 1990). There are many reasons why the
reverberatory loop model is hopeless as a theory of long-term memory. For
example, noise in the nervous system ensures that signals would rapidly
degrade in a few minutes. Implementationist connectionists have thus far
offered no plausible model of read/write memory. [4.3 Systematicity and
Productivity]

I wonder: is this information outdated?

A Neural Turing Machine (first described in 2014 by Alex Graves) is a
recurrent neural network architecture with an external memory store. Reads and
writes from and to the memory are controlled by an attention mechanism. A
newer version is the Differentiable Neural Computer (first described in 2016,
also by Graves).

The setup is not fundamentally different from Elman networks or Long
Short-Term Memory networks, other than the mechanism by which "memory" is
manipulated and by which storage, retrieval or discarding of "memories" is
decided -- although even those mechanisms are very similar (for instance, in
LSTMs, you could say that training a network to decide when to "recall" a
stored value is essentially similar to the "attention" mechanism).
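
To illustrate the attention-controlled read described above, here is a toy
sketch in Python of content-based addressing roughly in the spirit of the NTM
papers (the numbers and the simplified dot-product scoring are mine; the real
models use cosine similarity plus learned gates and shift mechanisms):

```python
import math

def content_read(memory, key, sharpness=5.0):
    """Soft, content-based read from an external memory matrix:
    score each memory row against a key, softmax the scores into
    attention weights, and return the weight-blended row. Every
    step is differentiable, which is what lets gradient descent
    learn *where* to read and write.
    """
    scores = [sharpness * sum(k * m for k, m in zip(key, row))
              for row in memory]
    peak = max(scores)                       # for numerical stability
    exps = [math.exp(s - peak) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]
    width = len(memory[0])
    return [sum(w * row[i] for w, row in zip(weights, memory))
            for i in range(width)]

memory = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]
# A key close to the first row mostly reads back the first row.
print(content_read(memory, [1.0, 0.0]))
```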

Is there a significant difference between an LSTM-based neural architecture
with a "reverberatory" memory and one with an external storage, both
controlled by similar mechanisms?

I would say: yes.

------
omarhaneef
In a different lifetime I got to take a few classes with Jerry Fodor, and
although this publication references him extensively, one of his most succinct
arguments for the computational theory of mind is only alluded to: the lack of
alternatives.

Fodor was a snappy writer and talker. I urge you to view his videos if they
can be found on YouTube. Unfortunately he passed away recently.

The argument goes something like this:

1. The computational theory of mind is the only remotely plausible theory of
mind we have

2. A remotely plausible theory is better than none at all

~~~
mjburgess
The problem with that argument is that "the computational theory of mind"
isn't a "theory of mind" -- so I don't think we have any.

A TOM needs to explain "mental life"; at best the CTOM provides a model of a
very narrow sort of cognition (inference over propositions).

There's a gigantic (and in my view, deeply implausible) leap from "hey, this
kinda works for modelling inference in animals" to "hey, this is how The Mind
works".

Not only do all CTOM models fail for actual inference in animals where we can
be reasonably sure inference is taking place (due to the frame problem), they
clearly fail for non-inferential processes and states (e.g.,
emotions/environmental-action/...).

These non-inferential processes are regarded by CTOMists as "black boxes" that
just "plug into" the "Real Mind" (ie., inference over propositions).

I don't think Fodor's argument holds here: it is tantamount to saying, "hey,
we've explained light with waves, why don't we just explain everything with
waves!" -- the cost of that approach should be obvious.

The problem this field has is that it's being led by computer scientists, not
neurobiologists. Ask a computer scientist what the right model of _anything_
is, and they'll reply with a logic.

We do not, however, model causal reality with logic. Temperature isn't
computed from a 'logic of molecule motion'; it is computed via a causal model
which relates causal variables to one another.

~~~
omarhaneef
You’re right in the sense that the CCTM (classical computational theory of
mind) isn’t a complete theory. There are a lot of problems. Studying it is
essentially memorizing all the problems and the problems with various proposed
solutions.

As a theory, not only is it incomplete, it is only remotely plausible. He
concedes that up front!

It’s like the old joke about capitalism - the worst system, except for all
the others.

I would expect a response to give a better alternative, not to lay out
admittedly deep problems with it.

~~~
mjburgess
That's an argument for a research paradigm; it's not an argument for its
truth.

If there are fatal problems with Theory-A, Theory-A isn't true. No matter how
much Theory-A might help with some other problem you have.

The CCTM is the research paradigm for modern AI, parts of cognitive science,
etc. -- and insofar as it provides a clear set of assumptions to arrive at
useful models, so be it.

Alexa can turn the lights on, for sure. She may even be able to reason a
little (if-then-else, etc.). I doubt she will ever know what a "light" is, or
what she is doing when she turns it "on".

That would require Alexa to have lived a human life, and to have lived in a
deep and complex social/physical environment. There is no "logic" which can
specify such things in a limited set of propositions: the effect of the world
on animals is not merely to add propositions to their "set of beliefs".

Rather, animals are first traumatised by the world: their emotions, physical
memory, instincts, etc. are all unmindfully coerced by their environments.
Only with a peculiar sort of frontal lobe are those things expressible as
propositions -- but they aren't propositions, as evidenced by the infinite
number of them required to capture the effects.

What we need before understanding inference, is to understand on what
inference operates: the mental life created by the effect of the world on the
whole mind of the animal.

~~~
naasking
> I doubt she will ever know what a "light" is, or what she is doing when she
> turns it "on".

You mean Alexa may never know what _we_ mean by "light" or "turning it on".
Neither would an intelligent alien that doesn't rely on sight. That doesn't
entail that such a creature isn't intelligent, or doesn't have a mental life,
or that its operations don't operate on a model consisting of a set of
propositions.

> There is no "logic" which can specify such things in a limited set of
> propositions: the effect of the world on animals is not merely to add
> propositions to their "set of beliefs".

That's conjecture, although I think the way you've framed it is misleading.
Instincts are also "beliefs" in this model, and the operation of a mind can
have multiple layers with inconsistent sets of "beliefs" that sometimes drive
seemingly inconsistent behaviour.

~~~
QuanticSausage
> An intelligent alien that doesn't rely on sight.

You don't have to go that far. A person blind from birth is enough of an
example.
------
DanielleMolloy
Please let me apologize for the lack of understanding, maybe someone can add
to this: If biological intelligence works like a Turing machine, is it even
possible to know what or how certain brain areas compute without simulating
them bottom-up?

Question is influenced by this idea originally from the 80s:
[https://en.m.wikipedia.org/wiki/Computational_irreducibility](https://en.m.wikipedia.org/wiki/Computational_irreducibility)

~~~
mLuby
>biological intelligence works like a Turing machine

We know it doesn't. There's no metaphorical linear tape in the brain; it's a
network of neurons. But a Turing machine can simulate a network of neurons
(see Machine Learning), just as a brain can model a Turing machine (see
Programmer). There are currently things a brain can think about that a Turing
machine cannot, but the question is whether that will continue to be true
despite the steady advance of (computer) science.
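
The "Turing machine can simulate a network of neurons" direction can be shown
with a toy sketch in Python (the two-neuron network and all numbers are my
own illustration; biological neurons are far messier, but still simulable to
any fixed precision):

```python
def step(weights, thresholds, activations):
    """One synchronous update of a tiny McCulloch-Pitts-style
    threshold network: each neuron fires (1) iff its weighted
    input meets its threshold. Any conventional program can run
    this loop, which is the sense in which a Turing machine can
    simulate a network of neurons.
    """
    return [1 if sum(w * a for w, a in zip(row, activations)) >= t else 0
            for row, t in zip(weights, thresholds)]

# Two neurons wired to each other so activity oscillates between them.
weights = [[0.0, 1.0],
           [1.0, 0.0]]
thresholds = [0.5, 0.5]
state = [1, 0]
state = step(weights, thresholds, state)  # -> [0, 1]
state = step(weights, thresholds, state)  # -> [1, 0]
print(state)
```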

~~~
michaelmrose
I don't think the idea that the brain can do anything a sufficiently fast
computer can't deserves any credence. Those who argue this point always
descend into hand wavy nonsense.

~~~
0xBA5ED
If by computer, you mean a binary computer using transistors, there's no
reason to think you can make it do everything a brain can do. Alternatively, if
computer means any hypothetical hardware, it may very well end up looking like
a brain, in which case you might just call it an artificial brain rather than
a computer.

~~~
michaelmrose
Why couldn't a sufficiently fast traditional computer simulate the brain you
imagine?

------
YeGoblynQueenne
>> The first argument emphasizes learning (Bechtel and Abrahamsen 2002: 51). A
vast range of cognitive phenomena involve learning from experience. Many
connectionist models are explicitly designed to model learning, through
backpropagation or some other algorithm that modifies the weights between
nodes. By contrast, connectionists often complain that there are no good
classical models of learning [4.2 Arguments for connectionism].

There is no special need for a specific model of "learning" in a classical
setting. Given an inference procedure, such as induction, abduction or
deduction, that can derive new facts and rules in a logical language from
observations and a pre-existing theory (i.e. a pre-existing set of facts and
rules), all it takes to "learn" is to store the newly derived facts and rules
to a database.

I mean "learning" in the sense of Mitchell's definition of _machine_ learning,
as (informally) the ability of a system to improve its performance from
experience. In this sense, a system that starts with a database of logical
facts and rules and adds new facts and rules derived from new observations is
"learning".

You can find many examples of learning in a classical, logic setting in the
early ('70s and '80s) machine learning literature, particularly with
propositional logic learners such as decision list and decision tree learners,
the most famous of which are J. Ross Quinlan's ID3 and C4.5 decision tree
learners. The field of Inductive Logic Programming studies learning in First-
Order Logic languages, especially logic programming languages such as Prolog
and Answer Set Programming, and includes early systems such as Shapiro's Model
Inference System, Quinlan's FOIL (First-Order Inductive Learner, essentially a
relational version of ID3), Muggleton's Progol and Srinivasan's Aleph (based
on inverse entailment), and more recently ASP learners such as ASPAL (Mark
Law), or Statistical Relational Learning techniques, e.g. by De Raedt,
Kersting, Getoor, Taskar and others; etc etc.

Bottom line: there is a huge body of work on learning in a classical, logic
setting. There is no serious objection to answer: there _are_ good classical
models of learning. Such models are all over the place in machine learning. In
fact, they tend to be the most carefully characterised models of machine
learning.

------
logicallee
For anyone here who thinks consciousness and all thinking _aren't_ an
emergent property of "computation" in the mind, what do you think would fail
to happen if you simulated it in silicon, trying to emulate it exactly?

------
yters
How can such a theory be falsified?

~~~
taneq
Simple: Find something that the brain does that could not, in principle, be
emulated by a Turing machine or equivalent. So far we don't know of any such
thing (since quantum mechanics is computable and everything including the
brain is ultimately quantum mechanics).

~~~
tcgv
True random number generation can't be done by a deterministic computer, and
it appears that human brains can do it, although the evidence is not yet
conclusive and the precise mechanism is unclear:

\-
[https://www.ncbi.nlm.nih.gov/pubmed/15922090](https://www.ncbi.nlm.nih.gov/pubmed/15922090)

~~~
yters
Turing machines don't have to be deterministic. You can have nondeterministic
Turing machines, and probabilistic Turing machines.

~~~
marcosdumay
"Turing machines", without qualifiers, usually means deterministic ones.
Extending the term to probabilistic ones is quite OK, but nondeterministic
ones are a completely different beast.

