
Artificial intelligence is impossible - yters
https://mindmatters.today/2018/09/meaningful-information-vs-artificial-intelligence/
======
SEMW
The human brain, like everything else, ultimately runs on the laws of physics.
If the laws of physics can be simulated on a computer, then the brain is
theoretically simulatable on a computer. So any argument that a computer
cannot even in theory do something that a human mind can do must be flawed,
unless the assumption of the simulatability of physics is wrong.

(This particular instance is thinly veiled question-begging; it takes as its
assumption that a human mind is special ("A defining aspect of the human mind
is its ability to create mutual information"), and proceeds to conclude
exactly what it just assumed. See also: sloppy dualism —
[http://goodmath.scientopia.org/2013/01/17/sloppy-dualism-denies-free-will/](http://goodmath.scientopia.org/2013/01/17/sloppy-dualism-denies-free-will/) )

~~~
yters
Or, the mind is not the brain. No one has ever proven it is.

~~~
SEMW
Sure, of course. That's a philosophical position known as Dualism[0].

The trouble is that what the author has done is base their argument on an
assumption of dualism, but not made that assumption explicit. If made more
explicit, the question-begging becomes more obvious: "If we assume there is
something special and metaphysical about the human mind which is not
replicable anywhere else, then artificial intelligence is impossible". Well,
yeah. But putting it that way doesn't make for a very dramatic article...

[0]
[https://en.wikipedia.org/wiki/Mind%E2%80%93body_dualism](https://en.wikipedia.org/wiki/Mind%E2%80%93body_dualism)

~~~
yters
Author here. I would not say I am making the assumption that dualism is true,
merely that it is a possible explanation. Given that the human mind generates
a whole lot of mutual information, it seems dualism might be a good explanation in
this case. There are also other possibilities. At any rate, the possibility
the mind is a Turing machine is excluded by the premise that it creates mutual
information.

~~~
warbaker
I don't know, it looks like the conservation of mutual information is
analogous to the fact that entropy always remains constant or increases. This
is true for the overall universe, but not true for individuals.

A human isn't some magical entropy reducer -- we reduce entropy in ourselves
by increasing it elsewhere. I suspect mutual information works the same way.

~~~
yters
That is a good insight. The law does seem similar to the 2nd law of
thermodynamics. The form of the free energy formula looks a lot like mutual
information.
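
For concreteness, one way to read the resemblance, using the standard textbook
form of each (my gloss, not a claim from the article): both subtract an
entropy-like term from a total.

```latex
% Helmholtz free energy: an energy term minus a temperature-weighted entropy term
F = U - TS
% Mutual information: a total-entropy term minus a conditional-entropy term
I(X;Y) = H(X) - H(X \mid Y)
```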

So, this could go two ways. Either humans are not net entropy reducers, and
also not information creators. Or, they are information creators, and thus are
net entropy reducers. Not sure there is evidence the latter is false.

A broader point is that regardless of whether the human mind is an entropy
reducer, something in existence is an information creator/entropy reducer,
otherwise we wouldn't have information/low entropy to begin with.

------
chopin
Mmmh, that would imply that the brain is not a Turing machine (or something
equivalent to one). I'm not convinced, as the author does not say what the
brain is instead.

~~~
yters
If the mind creates mutual information, then it has to be a limited halting
oracle.
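
For readers unfamiliar with the term: a halting oracle decides whether an
arbitrary program halts on a given input, and the classic diagonalization
argument shows no Turing machine can implement one. A minimal Python sketch of
that argument (the `halts` function is hypothetical; the whole point is that
it cannot exist as a program):

```python
# Hypothetical oracle: returns True iff program(arg) eventually halts.
# The diagonalization below shows no such total computable function can
# exist, which is why a "halting oracle" is strictly beyond a Turing machine.
def halts(program, arg) -> bool:
    ...  # assumed to exist, for the sake of contradiction

def diagonal(program):
    # Do the opposite of whatever the oracle predicts about the
    # program applied to itself.
    if halts(program, program):
        while True:   # loop forever if the oracle says "halts"
            pass
    return            # halt immediately if the oracle says "loops"

# Contradiction: consider diagonal(diagonal). If halts(diagonal, diagonal)
# is True, then diagonal(diagonal) loops forever; if False, it halts.
# Either way the oracle is wrong about some input, so no program computes it.
```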

------
thedudeabides5
#relevant

[https://en.wikipedia.org/wiki/G%C3%B6del%27s_incompleteness_theorems](https://en.wikipedia.org/wiki/G%C3%B6del%27s_incompleteness_theorems)

------
eigenspace
Utter junk. Does _anybody_ find articles like this convincing? This has about
the same intellectual content as YouTube “proofs” that the earth is flat.

~~~
yters
Physics (randomness + determinism) cannot create mutual information. This has
been proven. So, there must be some other causal entity that creates the
mutual information we see around us, and this causal entity cannot be reduced
to randomness + determinism.
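
The standard formal result closest to this claim is the data processing
inequality: once you fix a source, no downstream combination of deterministic
functions and independent randomness can increase mutual information with that
source. A minimal exact computation on a toy binary Markov chain (my
illustration, not from the article):

```python
from math import log2

def mi(joint):
    """Exact mutual information in bits from a joint pmf {(a, b): p}."""
    pa, pb = {}, {}
    for (a, b), p in joint.items():
        pa[a] = pa.get(a, 0.0) + p
        pb[b] = pb.get(b, 0.0) + p
    return sum(p * log2(p / (pa[a] * pb[b]))
               for (a, b), p in joint.items() if p > 0)

def bsc(bit, flip):
    """Binary symmetric channel: output distribution given an input bit."""
    return {bit: 1 - flip, 1 - bit: flip}

# Markov chain X -> Y -> Z built from a uniform bit X and two noisy channels.
joint_xy, joint_xz = {}, {}
for x in (0, 1):
    for y, py in bsc(x, 0.1).items():
        joint_xy[(x, y)] = joint_xy.get((x, y), 0.0) + 0.5 * py
        for z, pz in bsc(y, 0.2).items():
            joint_xz[(x, z)] = joint_xz.get((x, z), 0.0) + 0.5 * py * pz

print(f"I(X;Y) = {mi(joint_xy):.4f} bits")  # ~0.5310
print(f"I(X;Z) = {mi(joint_xz):.4f} bits")  # ~0.1732: processing only loses MI
```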

~~~
eigenspace
I saw a similar (junk) article a while ago about how our minds must be special
and can't be replicated on silicon because we can implement pure functions in
our minds, but that is impossible in any stateful machine. The author's point
hinged on the idea that when I add `2+2`, I am doing the platonic ideal pure
operation of addition, which is demonstrably false, so the entire argument
falls apart.

Similarly, it hasn't been established that our minds possess mutual
information in the ideal sense. One should first note that there are
well-known Bayesian methods for approximating mutual information, which sound
much closer to what our brains do operationally.
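
As a concrete illustration of estimating rather than possessing mutual
information, here is a plug-in (histogram) estimator; a cruder method than
the Bayesian ones alluded to above, but the same spirit: it only ever
approximates I(X;Y) from finite samples.

```python
import numpy as np

def plugin_mi(xs, ys, bins=16):
    """Plug-in (histogram) estimate of I(X;Y) in bits from paired samples."""
    counts, _, _ = np.histogram2d(xs, ys, bins=bins)
    pxy = counts / counts.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of X, shape (bins, 1)
    py = pxy.sum(axis=0, keepdims=True)   # marginal of Y, shape (1, bins)
    mask = pxy > 0
    return float((pxy[mask] * np.log2(pxy[mask] / (px @ py)[mask])).sum())

rng = np.random.default_rng(0)
x = rng.normal(size=10_000)
y = x + rng.normal(size=10_000)           # a noisy view of x
print(f"estimated I(X;Y) ~ {plugin_mi(x, y):.2f} bits")
# True value for this Gaussian channel is 0.5 * log2(2) = 0.5 bits; the
# finite-sample estimate lands near it but never equals it exactly.
```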

Now consider the fact that if you actually were creating mutual information
when you looked at a sign, it would be impossible for you to be mistaken about
the implication of the sign! You would _infallibly_ understand every street
sign that you read, because your understanding of that sign is mutual
information. This is demonstrably false. People are mistaken about trains of
reasoning they make all the time; in fact, this article is a fantastic example
of that.

Given that, it sounds much more like our minds are Bayesian machines
_approximating_ mutual information but not creating it.

~~~
yters
Not sure why you infer that, because we don't have the most elegant encoding
of the information, absolute mutual information has not been created. As it
is, any approximation of absolute mutual information is predicated on the
existence of said information.

~~~
eigenspace
> As it is, any approximation of absolute mutual information is predicated on
> the existence of said information.

That's not true. I can approximate a circle on a piece of paper or with a
computer or whatever without there ever actually being a true, platonically
ideal circle in the entire universe. Similarly, I can construct a heuristic
which approximates shared information without that shared information existing
in the first place.

~~~
yters
That is true, but algorithmic information theory would say the Platonic circle
is the best explanation for those approximations, so the approximations are
instances of the absolute.

------
ThrustVectoring
So, uhh, how do _humans_ get mutual information? Big gap in the argument
there, especially if you have a determinist bent in your philosophy of physics
(in which case the article proves that _humans_ can't create mutual
information either).

~~~
yters
The point is randomness + determinism cannot create mutual information. Yet
humans create loads of it. So, by definition, the human mind must be something
other than randomness + determinism. One good candidate is a halting oracle.

~~~
ThrustVectoring
I get the point they're trying to make; I just don't think they've done
anywhere near a good enough job of showing that humans create mutual
information (rather than just transforming it).

~~~
yters
Author here. Yes, I did not formally show humans create mutual information.
That part of my argument is just an intuition pump. The article was meant to
illustrate why artificial intelligence could well be impossible based on one
formal premise and one intuitive premise. The goal was to demonstrate
artificial intelligence is not a foregone conclusion.

A broader point is that the existence of mutual information (i.e. all the
order in the world) is evidence that something not Turing reducible exists,
regardless of whether the human mind is Turing reducible or not.

