
Xenopsychology - gwern
http://www.rfreitas.com/Astro/Xenopsychology.htm
======
JoachimSchipper
This article is good. If you liked it, you may also like this narrative take
on consciousness and brains:
[http://www.rifters.com/real/Blindsight.htm](http://www.rifters.com/real/Blindsight.htm).

That said, the part about (computational) capabilities seems unwilling to
contemplate any but the most rigorously established facts, leading to
completely irrelevant estimates. It's fairly obvious that the limit on human
intelligence is not Gödel's incompleteness theorem; it should also be fairly
obvious that humans do not process 10^11 * 1000 = 10^14 bits ≈ 100 Tbit per
second (this combines Wikipedia's count of neurons in the brain with the
article's bits/second/neuron), not in any meaningful way and certainly not as
the "10^14 yes/no decisions per second" that the article gives as a
definition/explanation of "bit". Moreover, the article picks efficiency
(I/M), rather than efficiency times mass (i.e. raw processing rate I), as its
measure of intelligence; it's not at all obvious why this should be so in a
simple computational model - bigger computers do tend to be faster, after
all. (On the other hand, animal intelligence does seem to be better predicted
by brain mass per body mass.)
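
The arithmetic above is easy to check; the inputs are just the comment's
cited figures (a rough neuron count and the article's assumed per-neuron
rate), not measured values:

```python
neurons = 1e11          # Wikipedia's rough neuron count, per the comment
bits_per_neuron = 1e3   # the article's assumed bits/second/neuron

total = neurons * bits_per_neuron
print(f"{total:.0e} bit/s")          # 1e+14 bit/s
print(f"{total / 1e12:.0f} Tbit/s")  # 100 Tbit/s
```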

Unfortunately, I'm not aware of any better analysis that is still fairly
rigorous; does anyone have suggestions?

------
unoti
> The most efficient brain will have the highest information-processing rate
> I, and the lowest mass M, hence the highest ratio I/M. Since very large
> exponents are involved, for the convenience we define the Sentience Quotient
> or SQ as the logarithm of I/M, that is, its order of magnitude.

Doesn't dividing by mass give small things too much of an advantage? If we
have a construct of 3 atoms that can process some small non-zero amount of
information, wouldn't it be likely that this construct would be many orders of
magnitude more intelligent than a human, based on the very low mass?

Further, the Apple II and Cray are rated at +5 and +9. It seems like it'd be
quite straightforward to cut the mass in half (maybe multiple times) for both
of those machines and radically improve their score. Is that rating based on
the computer with case and cooling, or just the chip and package, or the
silicon without packaging, or just the transistors on the silicon? Each of
those answers would, under the current formulation, radically change the
sentience quotient, right?

I suppose I'm saying I'm concerned that mass plays too great a role in this
equation. Either that indicates some kind of misunderstanding on my part, or
perhaps the measure would be improved by dividing by something like sqrt(M)?
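
The worry can be made concrete with the article's own definition, SQ =
log10(I/M). A quick sketch (the masses and bit rates below are illustrative
guesses, not measurements):

```python
import math

def sq(bits_per_s, mass_kg):
    """Sentience Quotient as the article defines it: log10(I/M)."""
    return math.log10(bits_per_s / mass_kg)

# Human: ~1e14 bit/s (the estimate discussed above) in a ~1.5 kg brain
print(f"human:      {sq(1e14, 1.5):+.1f}")   # roughly +13.8

# Hypothetical 3-atom device: 1 bit/s at ~3 carbon-atom masses (~6e-26 kg)
print(f"3-atom toy: {sq(1.0, 6e-26):+.1f}")  # roughly +25.2
```

The toy device outscores the human by some 11 orders of magnitude purely on
account of its tiny mass, which is exactly the distortion being questioned.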

~~~
jleahy
Although it might be possible to reduce the mass of a computer by a factor of
2, that really won't affect its score much, given the presence of a
logarithm; you're certainly not going to be able to adjust its mass by an
order of magnitude. A similar problem exists for humans: should you include
the whole body or just the brain? What about some kind of power source?
Luckily, all of these factors only shift the score by a fraction of a point
(doubling or halving the mass is ±0.3).
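
The ±0.3 is just log10 of 2:

```python
import math

# Halving or doubling M shifts SQ = log10(I/M) by exactly log10(2);
# even a full order-of-magnitude error in the mass only shifts SQ by 1.
delta = math.log10(2)
print(f"{delta:.3f}")  # ~0.301
```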

A bigger source of error is whether the ability to process information can be
put to good use. Since this was written, computing power has increased by at
least 6 orders of magnitude, putting computers ahead of humans. Yet we are no
closer to having an intelligent conversation with a computer than we are to
having one with a plough or any other machine we use to leverage our minds.

~~~
PeterisP
Do you really think that "we are no closer to being able to have an
intelligent conversation with a computer"?

I mean, we aren't there yet, but we're definitely closer than before; the gap
is narrowing, partly due to that 6-order-of-magnitude increase. And do note
that even when we're 95% of the way there, it will still feel like having an
intelligent conversation with a dog; the conversation will only 'feel right'
once the capabilities match 100% or more.

~~~
jleahy
That's a good point, I guess with something like that you'll never know how
close you are until you get there.

Nonetheless, once we get there (and I see no reason why we shouldn't
eventually - even if it's via an exact simulation of a human brain), I'll be
interested to look back and see how much of what we were missing was
knowledge and how much was computational capacity.

------
Scienz
This article was written in 1984, 29 years ago. Some of the information used
in it is a bit out of date.

1) It refers to the triune brain model, which has already been debunked; see
this Scientific American blog entry:
[http://blogs.scientificamerican.com/guest-blog/2012/09/07/revenge-of-the-lizard-brain/](http://blogs.scientificamerican.com/guest-blog/2012/09/07/revenge-of-the-lizard-brain/)
So most of the speculation regarding reptilian, limbic and neocortical
intelligences isn't really valid, though this doesn't affect the chordate vs.
ganglionic argument.

2) The article states that "consciousness is an emergent of neuronal
sentience", which isn't a necessary assumption. Recent research[1] has hinted
at intelligent behavior being a thermodynamic process that occurs when a
physical system acts so as to maximize its number of possible future states.
Quick video summary:
[https://www.youtube.com/watch?v=rZB8TNaG-ik](https://www.youtube.com/watch?v=rZB8TNaG-ik)
If true, a typical neuronal structure (i.e. a brain) wouldn't necessarily be
required for intelligent or conscious behavior, maybe not even a typical
computational structure. One could imagine a cloud of gas undergoing
chemistry such that it maximizes its entropy production, and thus potentially
possessing superhuman intelligence or consciousness.

3) The article discusses the theoretical limit of intelligence as a system
holding 10^50 bits of information per kilogram. But the maximum for a
computational system should be given by the Bekenstein bound[2], which limits
the information the system can hold to a factor proportional to the surface
area of the enclosing volume (for a 1 cubic cm sphere this comes to about
10^66 bits), with the speed of light limiting the speed at which those bits
can be processed, along with the particular structure in which they are
processed (e.g. the flow of logic gates). One could imagine a (quantum?)
computer with the maximum encloseable surface area of the universe,
transmitting information optically between the different gates in the system.
There may be other limits preventing that from being remotely feasible (e.g.
heat dissipation, energy requirements, entropy production leading to the heat
death of the universe, etc.). The main point is that placing the Apple II at
+5 SQ on a scale of 0 to 50 is kind of silly when one could imagine an
intelligence the size of galaxies, or even the known universe, operating
according to a principle of maximum entropy production like the one
referenced in the previous paragraph.
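
The ~10^66 figure can be reproduced from the area form of the bound, which
caps the entropy inside a surface at A / (4 l_p^2) nats. A quick sketch (the
Planck-length constant and the spherical-enclosure reading of "1 cubic cm
sphere" are my assumptions):

```python
import math

# Area-law bound: at most A / (4 * l_p^2) nats of information can be
# associated with a closed surface of area A (l_p = Planck length).
l_p = 1.616e-35                       # Planck length, m

volume = 1e-6                         # 1 cubic cm, in m^3
radius = (3 * volume / (4 * math.pi)) ** (1 / 3)
area = 4 * math.pi * radius ** 2      # enclosing sphere's surface area

bits = area / (4 * l_p ** 2) / math.log(2)  # convert nats -> bits
print(f"{bits:.1e} bits")             # on the order of 1e66
```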

Not to knock the article; my main point is that there have been a lot more
theoretical advances in the almost-30-years since it was written.

[1]
[http://math.mit.edu/~freer/papers/PhysRevLett_110-168702.pdf](http://math.mit.edu/~freer/papers/PhysRevLett_110-168702.pdf)

[2]
[http://www.scholarpedia.org/article/Bekenstein_bound](http://www.scholarpedia.org/article/Bekenstein_bound)

~~~
vlasev
A bit of a nitpick on point 3): it's a base-10 logarithmic scale. The
difference between +10 and +5 is a factor of 10^5 = 100,000, and between +5
and +50 it's 10^45 - which should be quite doable by something as large as a
galaxy...
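
Taking the article's definition literally (SQ as an order of magnitude, i.e.
log base 10 of I/M), score gaps translate directly into powers of ten:

```python
# SQ differences are exponents of 10 in the underlying I/M ratio:
print(10 ** (10 - 5))           # +10 vs +5: a factor of 100000
print(f"{10 ** (50 - 5):.0e}")  # +50 vs +5: a factor of 1e+45
```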

------
GuiA
Wow, I read the article and ended up falling into a Wikipedia rabbit hole,
reading up on Aristotelian logic, Peano arithmetic, and communication in sea
mammals, and now I need to go eat dinner before it's 3 am. :)

Thanks for posting this amazing article.

------
bishun
Reading this article reminded me a bit of Carl Sagan's "The Dragons of
Eden"[1] (1977). Its subtitle, 'Speculations on the Evolution of Human
Intelligence', explains the content pretty well. Mr. Sagan states quite
clearly in the introduction that he is speculating, but it is very
entertaining speculation.

[1]
[http://en.wikipedia.org/wiki/The_Dragons_of_Eden](http://en.wikipedia.org/wiki/The_Dragons_of_Eden)

------
NAFV_P
This has much pertinence to Intelligent Design hypotheses: in ID the designer
is not defined; its existence is just inferred.

