
Is there a simple algorithm for intelligence? - rcoppolo
http://neuralnetworksanddeeplearning.com/sai.html
======
bohm
This ignores that genetic code is meaningless without the translation
machinery (ribosomes and the entire physical world of physics and chemistry)
into which it is translated.

A multi-cellular embodied intelligence can acquire information about the
patterns of the physical world by growing in it - the code is compact because
much of the necessary information can be acquired during embodied growth.

A disembodied intelligence such as an AI can't rely on external storage and
acquisition of information during a cellular growth stage, and thus will
likely have to be considerably more complex from the get-go.

~~~
albinofrenchy
His numbers are also based on the difference between humans and chimps, for
some reason. We can't replicate the intelligence of a chimp on a computer, so
why that is a valid basis of comparison is beyond me.

~~~
chimprich
Quite. You would suspect that if we were able to produce an AI with the
intelligence of a chimp then a human-level AI would be in reach. The harder
step is probably getting to the level of chimp intelligence; you'd have
managed to create consciousness and solved the hard problem for a start.

~~~
shmageggy
It's a much, much harder step, actually.

[https://en.wikipedia.org/wiki/Moravec%27s_paradox](https://en.wikipedia.org/wiki/Moravec%27s_paradox)

Kind of renders all of his information theoretic calculations moot.

~~~
maaku
Moravec's paradox is not an experimental result. It's just one guy's musings
on intelligence. It says a lot more about the misguided things people thought
about intelligence in the early days of AI than about the actual inherent
difficulty of achieving human-level AI.

~~~
mannykannot
According to the linked article, several notable researchers were of the same
opinion (and one might also say that relativity was just one guy's musings -
though admittedly experimental evidence wasn't long in coming.)

I do, however, agree with your second sentence, and I think the paradox is
also weakened by the fact that when 'high-level' mental tasks are computerized
(chess-playing, for example), it is often through brute-force calculation that
is not a model of how humans use their general intelligence to perform the
task.

------
cousin_it
Given unlimited computing power, AIXItl [1] is a pretty simple algorithm that
can provably solve all problems, in some sense, at least as well as any other
algorithm. The idea is to simply dovetail over all possible algorithms and
select the ones that fit observations best. That includes humans, if you
believe as I do that humans are computable.

With limited computing power, it's likely that the best algorithms for many
different problems (possibly including "intelligence" however you define it)
won't be simple, just like the best known algorithms for integer
multiplication [2] aren't simple. In particular, AIXI variants will be hard to
approximate, precisely because they dovetail over all possible algorithms.

That said, it's very likely that the best algorithms for intelligence will be
simpler and faster than humans, because humans are the stupidest possible
creatures that can build a civilization (otherwise it would've happened
earlier in our evolution).

[1] [http://www.hutter1.net/ai/paixi.htm](http://www.hutter1.net/ai/paixi.htm)
[2]
[https://en.wikipedia.org/wiki/F%C3%BCrer%27s_algorithm](https://en.wikipedia.org/wiki/F%C3%BCrer%27s_algorithm)
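
The dovetailing idea can be sketched in a few lines of Python. This is only a toy: the three generator "programs" below are hypothetical stand-ins for an enumeration of all programs, which is what AIXI actually dovetails over.

```python
# Toy sketch of dovetailing: interleave the execution of many candidate
# programs, one step each per round, discarding any whose output contradicts
# the observations so far. The three generators are illustrative stand-ins
# for an enumeration of all possible programs.

def make_programs():
    def constant():
        while True:
            yield 1

    def alternating():
        bit = 0
        while True:
            yield bit
            bit = 1 - bit

    def fib_parity():
        a, b = 0, 1
        while True:
            yield a % 2
            a, b = b, a + b

    return [("constant", constant), ("alternating", alternating),
            ("fib_parity", fib_parity)]

def dovetail(observations, programs):
    """Names of programs whose outputs match the full observation sequence."""
    live = {name: (prog(), []) for name, prog in programs}
    n = len(observations)
    while live:
        survivors, done = {}, []
        for name, (gen, out) in live.items():
            out.append(next(gen))                 # advance this program one step
            if out != observations[:len(out)]:
                continue                          # contradicts the data: discard
            if len(out) == n:
                done.append(name)                 # fits every observation
            else:
                survivors[name] = (gen, out)
        if done:
            return done
        live = survivors
    return []
```

A real Solomonoff/AIXI-style inductor would additionally weight surviving programs by their length (shorter program, higher prior); that part is omitted here.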

~~~
fizixer
I saw Shane Legg's talk recently, and I understood the main idea, but I had a
question I wish I could ask him:

He said general intelligence is 'as long as you provide a goal, the
intelligence should optimize over that goal based on experience'.

But my problem is: general intelligence is supposed to come up with its own
goals.

Take for example a couch potato who has a steady stream of monthly stipend
and sits in front of the TV all day. This person is in possession of a
functioning general intelligence, yet has no goal. One day this person decides
to assign themselves a goal of going out and getting a degree, and so on and
so forth. That's also part of that person's general intelligence.

It's not clear to me that AIXI is self-driven in that sense. It would be great
if someone could comment on that.

~~~
dorgo
I think the idea is to build something useful (learn from experience and
optimize goals) instead of something dangerous (own goals). And I don't think
that own goals are a necessary condition for general intelligence. But it
depends on the definition of general intelligence, of course.

------
garyrob
For me, the question addressed by the article isn't even the interesting one.
The author talks about "intelligence" as in how we process environmental
information and perform reasoning tasks. From that perspective, it does seem
obvious that some form of computer will one day be able to do it. I don't
really care if the algorithms for doing so are simple or complicated.

For me, the real question of interest is not about that kind of intelligence,
but of "awareness". Computers may process environmental information and
perform all sorts of logical operations based on it. But there is zero reason
to think that any computer has any awareness whatsoever. It has no experience.
There is something it is like to experience the color yellow, as opposed to
merely classifying observed light as being of a certain wavelength; no
computer has begun, in any way, to make that leap.

It seems like a lot of people, including the author, don't even notice that
this is a question. They seem to assume that computer-like information
processing is all that happens in a human mind, and that awareness itself
doesn't even exist as something to ponder. So they equate any doubt about
whether a computer can do what we do as a form of vitalism.

But with just a little bit of introspection, the fact that there is something
it is like to experience the color yellow, as opposed to classifying a visual
stimulus as being of a certain wavelength, or logically classifying it as
having the attribute of "yellowness", is absolutely clear -- and absolutely
fundamental to our existence as human beings (although many would assume that
many other forms of life also have this basic ability to experience, if not
the same ability to process information in computer-like ways).

This is why, for me, the Turing Test is orthogonal to the question of whether
a machine is actually conscious. I have no problem imagining that a software
system, even on hardware that is basically the same as what we have now but
bigger and faster, will one day be able to pass the Turing Test. But that
doesn't prove it has any experience of the color yellow; it merely means it
can mimic the output of an entity that does.

~~~
habitue
How do you know other humans can truly experience "yellowness"? What if we're
all just pretending and you're the only one?

Let's not talk about realistic AI for a moment, let's talk about
hypothetically building a human protein by protein (or whatever building block
you're comfortable with saying doesn't have qualia on its own). We're going to
write the computer program to build this human. Nanobots will carry it out,
every part of the process is understood and nothing is left for biology to do
for us. We're just putting together lego blocks and at the end is a human
indistinguishable from any other human.

Does this person experience qualia? How would you know? Apparently you can't
just ask, since this isn't a valid thing to do with a computer (it's not
_really_ experiencing qualia! It's just saying that!).

Let's just say you come up with some objective qualia test. Now we can build a
slightly more deficient human protein by protein and test it for qualia. In
fact, we can do a qualia binary search and...

Once you take apart the argument like this, it's easy to see that arguments
that computers can't experience qualia are arguments against materialism or
reductionism, or are patent attempts to sneak souls and other woo into the
discussion.

~~~
stelonix
I believe his point is that such a process wouldn't ever be feasible if we
can't "create" awareness. You'd have a sack of meat that would not interact.
Or if it would, it'd be a vegetable capable of computation but little else
(that's my random assumption).

If you forget your fear of "soul" and other terms not ratified by science
journals, you'll find there are plenty of theories in metaphysics and
spirituality on what it means _to be_ and what awareness _is_. It requires
you to at the very least _read_ those philosophical/theological concepts
instead of dismissing them as "woo". Cattle dwellers in India wrote many
things about it eons ago. Joseph Campbell's work is a nice place to start.

~~~
habitue
> you'll find out there are plenty of theories in metaphysics and spirituality
> on what it means to be and what awareness is

I have encountered these theories. In the past I have been under the sway of
them. They can be interesting to think about, but without any evidence for
them there isn't any reason to give them particular consideration. (I am
including rational, mathematical, or logical arguments based on experimental
evidence as evidence here)

Without evidence, how do we pick which of these myriad theories to believe?
How do we decide what the actual answer is? Science and materialism have
repeatedly produced predictions that pan out in experiments. Theories of souls
and spirituality have repeatedly had to retreat from testable predictions.

To take another tack, how did these spiritual theories come about? If they're
true, then the ancient theorist must have seen some phenomena in the real
world, and extrapolated correctly from that evidence. If that's the case,
these ancient theorists were unparalleled geniuses, but we don't need to worry
about ancient spiritual theories themselves, we'll rediscover them in time
through our plodding (but safe) scientific method. If we don't rediscover
them, then they were interesting theories that didn't pan out.

~~~
garyrob
"They can be interesting to think about, but without any evidence for them
there isn't any reason to give them particular consideration. (I am including
rational, mathematical, or logical arguments based on experimental evidence as
evidence here)"

I'm glad we're discussing this because you're an example of the kind of
thinking that is a mystery to me.

The fact that I have qualia when looking at, for example, the color yellow, is
directly knowable by means of just a little introspection. This is not
"rational, mathematical, or logical" or based on "experimental evidence". It
is simply direct observation. Observation is where the scientific processes
you name start. Observation does not depend on those processes. They depend on
it and build from it. Observations are the most fundamental thing.

From the direct observation of your experience of yellowness, which I think
you must agree is something more than merely classifying the wavelength as
being in a certain range, I would think you would know that qualia exist. That
is enough evidence.

"How do we decide what the actual answer is? Science and materialism have
repeatedly produced predictions that pan out in experiments. Theories of souls
and spirituality have repeatedly had to retreat from testable predictions."

So, let's not have a theory of souls or spirituality at this point. We don't
know enough. That doesn't mean that those things don't exist, it only means we
don't understand them.

Suppose a cave man-level creature on another planet encounters an iPhone
because we somehow send one there. This cave man creature knows how to build
fires and has used reasoning to make weapons, etc. In doing those things, he
probably tried different ideas about how best to make fires and rejected the
ones that didn't work well. In other words, he has a rudimentary understanding
of the scientific method.

Now he tries to figure out how an iPhone works. He constructs theories based
on what he knows. The theories are all shown, over hundreds of years, to be
wrong. Therefore, he is told, iPhones don't exist because when he constructs
theories about them, they are wrong.

But he directly observes the existence of the iPhone because he holds it in
his hand. He knows they exist. He doesn't need a scientific theory to prove
that they exist. Maybe, eventually, he will have a correct scientific theory
as to how they work; it will just take more time. Or maybe he never will.
Either way, iPhones exist.

Awareness is, in my view, exactly the same, and it is mystifying to me that
people such as yourself come to a different conclusion.

~~~
habitue
Oh, I'm not saying qualia doesn't exist. Our experiences are definitely an
actual phenomenon. And we can't currently explain exactly how it works and we
should try to find out what produces the experience of qualia.

What we shouldn't do, is say humans have qualia and jump to the conclusion
that machines can't have qualia, and can't be "aware" in the same sense a
human can be. I apologize if that wasn't your intended argument.

~~~
garyrob
OK, good, so we have in common that we both believe qualia exist. I didn't get
that from your preceding posts.

"What we shouldn't do, is say humans have qualia and jump to the conclusion
that machines can't have qualia, and can't be "aware" in the same sense a
human can be. I apologize if that wasn't your intended argument."

No, we can't make that jump. But my intuition as a programmer (and one who is
interested in A.I. and does A.I. in the sense that some of my mathematical
ideas were incorporated into a number of spam filter products) is that they
don't. There is simply nothing there that would "add" awareness to the
algorithm. All there is is information processing. It can classify something
as either having the attribute of yellowness or not. But something beyond that
functionality is required to have it experience the qualia of yellowness.

My point is that we have no idea what that something extra is. Somehow, people
have it. But we don't know how. There are hypotheses, like those of Roger
Penrose, who discusses quantum phenomena as possibly being behind it, as made
manifest in certain microscopic structures in the brain. Suppose Penrose is
right. Then maybe one day a quantum computer can incorporate the same
phenomena and be conscious for the same reasons we are.

But I do not believe that, for example, a spam filter as we know it today has
the slightest experience of qualia. There is just no "room" for it. Everything
in the spam filter has a specific function for the classification task and has
nothing to do with experiencing qualia. The same is true for the most
complicated deep learning network.

Maybe software as we know it can induce qualia as some kind of an emergent
phenomenon, and the brain also induces it through some kind of emergent
phenomenon. But I tend to hypothesize there would be something that causes
that, some mechanism that the emergent phenomenon grows out of, that is not
embodied in something like a current-day deep learning network.

My hypothesis is that to create a machine that experiences qualia, we will
first understand the mechanism behind our own experience of qualia, and then
reproduce that mechanism in a machine. I hypothesize that this would be a new
and extremely fundamental, and transforming, scientific discovery.

And if we do reproduce that mechanism in a machine, and THEN that machine
passes the Turing Test without us programming mechanisms to merely MIMIC human
behavior, then I'll tend to assume that we've created a truly conscious entity
-- though we may never be able to completely prove it.

On the other hand, there's no reason to assume we'll ever discover how it
works -- for instance, we can't rule out the idea that there is a God who
doles out souls to human bodies, and that we'll never have the means to
understand God or souls. I don't believe that's the case -- but plenty of
people do, and we can't presently prove they're wrong.

~~~
habitue
I don't disagree except that in order to produce a sufficiently human-like
intelligence that can pass the turing test, we'll most likely need to have
pulled the human experience apart enough to understand where the experience of
qualia comes from. I think it's hugely unlikely we'll be able to create an
intelligence that can carry on a conversation that can fool any human, and
still not have a single clue what qualia is or where it comes from. We are a
long way from that conversationalist, so it's not surprising we have no good
understanding of qualia.

------
300bps
_In the 16th century only a foolish optimist could have imagined that all
these objects ' motions could be explained by a simple set of principles. But
in the 17th century Newton formulated his theory of universal gravitation,
which not only explained all these motions, but also explained terrestrial
phenomena such as the tides and the behaviour of Earth-bound projectiles._

It turns out that Newton's theory of gravity is a mathematical approximation
that, among other things, ignores relativistic effects. See:
[http://physics.stackexchange.com/questions/52165/newtonian-g...](http://physics.stackexchange.com/questions/52165/newtonian-gravity-vs-general-relativity-exactly-how-wrong-is-newton)

Newton's gravity was a theory about a single fundamental force, and it turned
out to be not 100% accurate. Rather than being an example of how intelligence
might similarly be possible to explain with a simple theory, I think it offers
a perfect counter-example of why it almost certainly cannot.
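
For a sense of how "not 100% accurate" Newton is, the leading relativistic correction scales with the dimensionless ratio GM/(rc^2). A back-of-the-envelope sketch using standard constant values (this is order-of-magnitude arithmetic, not a general-relativity calculation):

```python
# Rough size of the leading relativistic correction to Newtonian gravity:
# the dimensionless ratio G*M / (r * c^2). Standard constant values.

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s

def gr_correction(mass_kg, radius_m):
    return G * mass_kg / (radius_m * c**2)

# Earth's field at its surface: about 7e-10, i.e. parts per billion.
earth_surface = gr_correction(5.972e24, 6.371e6)

# Sun's field at Mercury's orbit: about 3e-8 -- tiny, but enough to show up
# as the famous anomalous perihelion precession.
sun_at_mercury = gr_correction(1.989e30, 5.79e10)
```

So for almost everything Newton was asked to explain, the error is below any measurement of his era, which is why the theory held for two centuries.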

~~~
habitue
Comets, and the planets and the tides, and apples falling from trees were
found to be caused by the same underlying phenomenon, despite seeming very
different. Newton's insight that they were the same was not overturned by
relativity, so the point stands quite well.

~~~
300bps
_Newton 's insight that they were the same was not overturned by relativity_

I almost feel like you're replying to someone else because I never said what
you are contradicting.

I merely said that Newton's gravity is a simplification that doesn't take all
variables into account. So to point to Newton's gravity as an example of,
"Sometimes simple things can be right!" actually makes it more of a counter-
example.

~~~
habitue
I was pointing out that the complex explanation is that comets have their own
driving force, and tides have a separate driving force, and planets have their
own reason for moving as they do, etc. Compared to that kind of hodge-podge,
relativity is still a really simple theory. While relativity increased the
complexity of the explanation of gravity, it didn't bring it anywhere near the
level of complexity that might have been assumed before Newton.

The author's argument is that our intelligence is likely not produced by the
interaction of a large number of separate principles, but rather by some
simple principle that manifests itself in different ways. I think if we find
some simple principle that explains intelligence, and then later have to make
a correction to the math of that principle, the resulting explanation will
still be simpler than the idea that intelligence is made up of many different
principles, all of which have to be present.

------
gwern
Jacob Cannell has a recent blog post that goes into more depth about the
neurobiological aspects of the master-trick argument:
[http://lesswrong.com/lw/md2/the_brain_as_a_universal_learnin...](http://lesswrong.com/lw/md2/the_brain_as_a_universal_learning_machine/)
(also good is
[http://lesswrong.com/lw/meo/analogical_reasoning_and_creativ...](http://lesswrong.com/lw/meo/analogical_reasoning_and_creativity/)
)

------
norea-armozel
Just from skimming the article, it seems to me the problem with AI, and
probably with the whole matter of what people define as intelligence, is that
it assumes that the universe itself is not intelligent. Maybe the universe
itself is intelligent, but consciousness is some other phenomenon that gives
intelligence an individual characteristic? It's just a random thought; I don't
really know which way to go with it.

~~~
davesque
I can relate. While I consider myself an agnostic with a slight atheist
streak, I still like to muse on these sorts of thoughts occasionally. The
universe does seem somehow innately intelligent and alive. Everything is in
motion. Everything appears to have some kind of structure. Gotta be careful
not to fall back on the "G" word though (not that you did) :).

~~~
javajosh
"The Universe" is huge and empty and completely hostile to life. Most matter
is in stars which are not empty and completely hostile to life. (And which are
huge compared to us, but which are like specks of dust compared to "The
Universe").

Our planet is a speck of dust rotating at a fortunate distance around another
speck of (shining) dust. By mass, the majority of the planet is useless to
life. Everything alive on Earth exists in a very thin skin between lifeless
rock and lifeless, airless, freezing space.

Then, on any timescale meaningful to "The Universe", individuals are tiny
blips. Species are around longer, but not much. The vast majority of species
are killed off by cosmic events regularly (we are on something like the 6th
major extinction event, IIRC).

All of humanity's struggles with itself and with ignorance, all of its
artistic expression, have taken place within the context of that thin skin of
warm fluid surrounding Earth, over a shockingly short time frame.

The best that can be said is that we now know the score: that "The Universe"
is fascinating, vast, hostile, empty, silent, and our entire species is the
rough equivalent not to a mewling baby, but rather to, perhaps, a single
microscopic insect gestating in its egg, just beginning to break the shell and
look around.

~~~
filoeleven
And yet here we are, living and experiencing the universe.

Most matter that you interact with is not in your body, and much of it is not
in a usable form to your body, yet it still sustains you. Why should this not
hold true at a larger scale?

We cannot survive in the colds of space, yet it is the tendency of matter to
clump together, creating that space, which also gives us a sun and a planet
and an atmosphere that was necessary for our existence. We cannot survive in
the lifeless molten rock, yet we also could not have formed without its heat.
It's usually at the edges of things where the most interesting and varied
phenomena arise.

Much of the universe is hostile to human life in its current form, yet it is
also the universe that gave rise to it. Whether human newborn or microscopic
insect hatchling, the experienced world is at its most incomprehensible, and
the organism at its most vulnerable, at the beginning. Growth allows for
greater experience, and experience can lead to greater growth. Our knowledge
of what's possible for a species extends only backwards, and only on one
single speck of dust.

The universe has no inherent vastness, hostility, emptiness or silence without
a conscious being to assign it those properties. I choose to imbue it with
aspects of magnificence, intelligence, organization, perhaps even benevolence.
We are not in a cosmic battle to conquer a dumb, hostile substrate; we are
embedded in a rich, nurturing agar whose purpose it is, at least in part, to
help us to flourish.

~~~
javajosh
_> The universe has no inherent vastness, hostility, emptiness or silence
without a conscious being to assign it those properties._

Agreed.

 _> I choose to imbue it with aspects of magnificence, intelligence,
organization, perhaps even benevolence._

Well, that's nice, but it's only possible at the space scale of, at most, the
Earth. Or at least human scale - since it is _you_ who are doing the choosing.
I mean, it's fine to change the subject, but at least be aware that you are
doing so.

 _> We are not in a cosmic battle to conquer a dumb, hostile substrate; we are
embedded in a rich, nurturing agar whose purpose it is, at least in part, to
help us to flourish._

Again, I feel like you are confusing space- and time- scales. Earth's
biosphere _is_ nurturing agar, no doubt, over a short time scale and small
space scale. But the solar system, galaxy, let alone the universe at large, is
most definitely not, and nor is the Earth's biosphere on any large time scale.

We have arisen in this universe, apparently the first intelligent life on this
planet or any other. If there is intelligent life elsewhere, we
are separated by impossible vastness: to get to another star, in any form, we
must harness immense energy. More than is reasonable. And once there, the
lifeboat must somehow found a colony, a self-sustaining biosphere, which
itself is a gargantuan task.

Perhaps, though, this is a benevolent universe: maybe those great chasms of
space protect intelligences from each other. It means that if any of us manage
to make the journey, and we discover each other, it will be a meeting of great
good fortune, and the travellers will _always_ be at the mercy of the
inhabitants of the destination star.

There is hope for long-term survival of intelligence, but it is very, very
small.

~~~
filoeleven
> Well, that's nice, but it's only possible at the space scale of, at most,
> the Earth. Or at least human scale - since it is you who are doing the
> choosing.

Hmm, my intent was to expressly claim those things at large scales, or more
accurately, at all scales. It's just that it is easier for us to understand
e.g. the organization and benevolence only in the things which are of the most
immediate usefulness: we understand the biosphere as a resource because we can
use that _now_ , whereas we don't really understand how to harness the full
power of a star because it's too far beyond our current capabilities.

> Earth's biosphere is nurturing agar, no doubt, over a short time scale and
> small space scale. But the solar system, galaxy, let alone the universe at
> large, is most definitely not, and nor is the Earth's biosphere on any large
> time scale.

I agree with you about Earth's biosphere space-wise though perhaps not time-
wise. But if Earth is a cradle of intelligent life, surely we are not meant to
stay here indefinitely? Even now the wild idea of mining asteroids, once
relegated to science fiction, is reaching the point of feasibility. What was
previously useless (not to mention unknown) to us is becoming useful, and will
act as a stepping stone (or a catapult) to enable some other far-fetched ideas
to come closer to manifestation. I believe this is a pattern that does not
necessarily end.

> If there is intelligent life elsewhere, we are separated by impossible
> vastness: to get to another star, in any form, we must harness immense
> energy. More than is reasonable. And once there, the lifeboat must somehow
> found a colony, a self-sustaining biosphere, which itself is a gargantuan
> task.

There was a time, or more accurately various times for various peoples, when
crossing an ocean was almost beyond imagination. As we continue to gain in
knowledge and understanding of how the universe works, especially our
understanding of spacetime, these problems become far less intractable.
Interstellar travel is just a bigger ocean. The scale of the metaphor is
logarithmic but so is the pace of the growth of our collective intelligence.
My personal belief is that we will find a way around the immense time and
energy requirements that we currently believe are necessary for interstellar
travel. Terraforming is the harder problem: understanding ecosystems is IMO
orders of magnitude harder than understanding fundamental physics, and
building them from scratch seems like a pipe dream even for a dreamer like me.
But if we figure out how to travel smarter then it becomes more a question of
finding the most Earth-like planets that already have much or most of what we
need. Still not easy, but probably easier and faster than trying to terraform
Mars.

> Perhaps, though, this is a benevolent universe: maybe those great chasms of
> space protect intelligences from each other.

This seems plausible to me also. Seeds need a volume to grow in that is often
tens of millions of times larger than the seed itself. There could be a cosmic
ecosystem that we are still too young to see or understand, and we may not
catch a glimpse of it until we have spread through the stars.

> There is hope for long-term survival of intelligence, but it is very, very
> small.

If you mean small hope for human intelligence, then I would agree. I think
this has less to do with the harshness of the universe out there and more to
do with our own species' maladaptive behavior patterns. We are in a species-
scale moment of crisis, which is to say one that has been going on for
hundreds or thousands of years and may last a few hundred more, but there's no
question in my mind that we need to get our shit together _now_ if we want the
species (and/or its children) to survive. This is why I think it is important
to view the universe through a lens of possibility: it gives us a direction to
steer towards. A more hopeful outlook inspires activity to push towards a
better outcome.

Thanks for responding!

------
kevinalexbrown
I enjoy the planetary motion analogy immensely. However, a curious complexity
is evident in biological systems: it's physics imbued with 'meaning'. A cell
that behaves one way instead of another will die. This is true from E. coli up
to skin cells. I suspect this adds a lot of complexity to the ultimate
'algorithm' of intelligence.

In fact, it's probably prudent to stipulate whether the goal is to specify the
_developmental_ algorithm that gives rise to intelligence, or intelligence
itself. The generating process might be a lot simpler than the finished
product, in the same way that the output of a pseudorandom number generator is
very complex in terms of entropy (but not Kolmogorov complexity, I guess),
while the generating algorithm is very simple. Or the Rado graph, which is
sorta maximally complex in the sense that it contains all finite and countably
infinite graphs as induced subgraphs, yet has a simple generating scheme.
As a last example, consider No Man's Sky - relatively simple algorithm
compared to the astonishing complexity of the worlds.
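
The "simple generator, complex output" point can be made concrete with a one-line update rule: the Rule 30 cellular automaton, whose centre column is irregular enough that it has been used as a randomness source. A minimal sketch:

```python
# Rule 30: each cell becomes left XOR (centre OR right). A one-line rule,
# yet the centre column of the evolution is statistically complex -- while
# its Kolmogorov complexity stays tiny, since this whole program generates it.

def rule30(width=64, steps=32):
    row = [0] * width
    row[width // 2] = 1                         # single live cell in the middle
    centre = []
    for _ in range(steps):
        centre.append(row[width // 2])          # record the centre column
        row = [row[(i - 1) % width] ^ (row[i] | row[(i + 1) % width])
               for i in range(width)]           # apply the rule everywhere
    return centre
```

The generated sequence (1, 1, 0, 1, ...) passes many randomness tests, yet the generating algorithm is a dozen lines, much like the developmental-program-vs-finished-brain distinction above.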

I do believe there is a relatively simple high-level description of neural
development. But it's curious how it's relatively robust to genetic
manipulations. I remember early in graduate school listening to a lecture
about a particular mouse model for autism, caused by just one gene. The
lecturer excitedly told us that it had a very high rate of behavioral
manipulation - 75% or so. Not coming from a mouse-model background I was
astonished that it was 'only' 75%. What happens in the other 25%? The gene
doesn't just magically reappear; there are other compensatory mechanisms. What
drives those? I suspect that's the more abstract level of description that a
simple algorithm might capture.

[https://en.wikipedia.org/wiki/Rado_graph](https://en.wikipedia.org/wiki/Rado_graph)

------
mathgenius
> I believe it's not in serious doubt that an intelligent computer is possible
> - although it may be extremely complicated, and perhaps far beyond current
> technology - and current naysayers will one day seem much like the
> vitalists.

Physicists (and this guy is one of them) certainly do have a robust set of
taboos against considering consciousness as a quantum phenomenon (whatever
that might mean). It's unfortunate, because there is a huge resurgence going
on in the physics of quantum many-body entanglement, quantum computers,
foundations of quantum mechanics, etc. A lot of this meandering was just
sidelined in the 1940s when world affairs intervened. However, in light of
these new discoveries it's time to revisit the old vitalist arguments. Imho.

~~~
habitue
Let's say, for the sake of argument, that intelligence requires quantum
effects to work. I don't see how going back to vitalist arguments will aid us
in any way. Researching the phenomena, following evidence, making inferences
from that evidence... that's how progress on that subject would be made. Not
by going back to arguments made by people with less information than we have
now.

Specifically, if we follow the evidence, and find that somehow, beyond all
odds, the vitalists were correct (once you add "quantumness!" to the mix),
then it's clearly only by lucky guess they were right. It would be something
we could look back at and say "wow, that's an interesting coincidence" rather
than "wow, those vitalists had it right all along, we should have paid closer
attention".

~~~
JoeAltmaier
It's often by intuition that we know where to look. Let's not be arrogant and
claim "OK, you guessed it right, but it's only true if I measure it and prove
it's true". That's correct, but disingenuous.

~~~
gnaritas
That's still simply luck; unfounded intuition is meaningless, as it's wrong
more often than not. That it's occasionally correct is coincidence.

~~~
JoeAltmaier
I guess all scientific progress has been by coincidence then? That's what I
mean by disingenuous; some credit has to be given to those that are right.
There are an infinite number of wrong intuitions.

~~~
dragonwriter
> I guess all scientific progress has been by coincidence then?

Correct hypotheses are, in a sense, a product of fortune, and scientific
progress requires them to happen sometimes, so, in a sense, that's true.
Without a certain minimal degree of luck, you would have no scientific
progress.

The scientific method can be viewed as simply a method of filtering out the
intuitions without utility and promoting those (if any) that have utility. It
is a way to avoid _wasting_ the good fortune of intuitions that form useful
hypotheses, and to avoid wasting effort on demonstrably fruitless ones.

> some credit has to be given to those that are right

But vitalism is so vague as to be untestable, and justifies no useful models.
"Some (unspecified elements of) living things are not governed by physical
law" is indistinguishable in effect from "We have not yet discovered the
physical laws governing (some elements of) living things".

~~~
habitue
> Without a certain minimal degree of luck, you would have no scientific
> progress.

A minor disagreement on this point. I think our intuitions are often shaped by
experience, and lead us to conclusions that we don't have a rational line of
inference to (at least not one we can explain). Science acts as a good filter,
saying "we don't care how you came up with it; test it and we'll agree if it
turns out you're right". This is far from reasoning you could convince anyone
else with, but it's also different from a lucky guess: your brain has been
seeing a lot of the data and is picking up on patterns. In other words, your
intuition is correlated with the phenomena, and that's what leads you to the
theory you want to test.

Second, sometimes purely rational arguments lead us to theories from other
data. This isn't random chance, but it is exploiting regularities in the world
around us to predict new things. For example, Kepler's laws of planetary
motion weren't the direct result of observation alone. You can't observe the
equations that guide the motion of the planets. In between observations, the
planets might have jumped all over the place. But it was assumed that the
planets weren't doing that, and simple equations that fit the evidence were
inferred from the data available. There is a step here between "observation"
and "prediction" that doesn't involve a lucky guess.

------
paulojreis
> "I believe it's not in serious doubt that an intelligent computer is
> possible" -

Yes, of course it is possible, if we change the meaning of "intelligent" to
fit what a computer does. :)

Generally, I think the problem here - as in many other situations - starts
with the lack of rigor that computer science has when "naming" fields or, more
exactly, when appropriating concepts from other fields.

* Computer Vision: "vision" has a defined meaning, even an "organic-related" one. Signal processing and a bunch of algorithms aren't exactly what we define as "vision";

* Machine learning: does the computer really learn, as understood by the common definition of "learning"?

* Context awareness: if we consider common definitions, neither a few sensor readings are "context", nor acting according to said readings is "awareness".

This might sound like nitpicking, but I truly think it's a real problem.
Talking about vision, learning or intelligence sets the stakes at what people
or the general scientific community commonly perceive those concepts to be.

What's most frustrating to me is the contrast. Computer science research is
thorough; why are we not thorough in naming our fields of study? Why do we
appropriate concepts by "analogy", knowing that analogies aren't truly exact?

~~~
habitue
> Yes, of course it is possible, if we change the meaning of "intelligent" to
> fit what a computer does. :)

I don't think that's what he's doing. Otherwise he could just define
intelligence to mean what computers do now. Clearly he has at least roughly
the same idea of intelligence you or I do.

> "I believe it's not in serious doubt that an intelligent computer is
> possible"

Do you have an objection to this specific statement when taken to mean general
human-like intelligence being exhibited by computers? I have not heard anyone
defend the opposite viewpoint often, and I'm interested if there are any
compelling arguments why it's not possible.

~~~
paulojreis
> Do you have an objection to this specific statement when taken to mean
> general human-like intelligence being exhibited by computers? I have not
> heard anyone defend the opposite viewpoint often, and I'm interested if
> there are any compelling arguments why it's not possible.

A few aspects which relate to and shape intelligence:

1) Socialization; 2) Embodiment (the fact that I have a body, and the body has
feedback loops - it affects and is affected by cognition); 3) Emotions (we
have "hunches", and those hunches are surprisingly often adequate); 4)
Language (our sign system affects our perception of the world).

Will a computer be able to be a sociable, embodied person? I tend to think
that it won't, so that's roughly why I believe a computer won't ever have
"intelligence" (unless, again, we redefine intelligence).

~~~
nkassis
1), 2) and 3) are, I think, research goals of robotics. None are simple, but I
don't personally think they require anything that isn't possible to accomplish
with a computer plus physical objects beyond the computer itself.

------
api
I think a strong argument for 'no' comes from the No Free Lunch (NFL)
theorems:

[https://en.wikipedia.org/wiki/No_free_lunch_theorem](https://en.wikipedia.org/wiki/No_free_lunch_theorem)

The NFL theorems have been misused by advocates of 'intelligent design' --
they do not count against evolution, since evolution doesn't demand a global
maximum, only a local one. (There are other reasons, but this is the most
fundamental.) But the theorems themselves are incredibly interesting.

In short they show that _averaged over the space of all possible search
landscapes_ , no search algorithm performs better than any other.

Obviously the universe does not provide "all possible search landscapes,"
since the universe has structure. Some search landscape structures and meta-
structures are pathological and rare in nature. But nature does provide a huge
diversity of them. The NFL theorem is to me powerful evidence that something
as general as human or even animal intelligence must be a superposition of
multiple "algorithms" rather than just one.

It also, I think, explains why all single-algorithm or single-approach methods
of AI end up being domain specific. Obviously they'll be domain specific --
they only work against search landscapes with a certain structure!

The NFL theorems are a big reason I am a short term AI skeptic. I don't think
human-level or beyond AI is impossible, but I think we are quite far from
realizing it. I'll become more optimistic when AI researchers start studying
biology more closely and deeply, since biological systems are the only
existing examples of truly intelligent systems.
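A toy illustration of the averaging claim (my own sketch, not the theorems' formal statement): over landscapes whose values are drawn independently at random, any two non-repeating search strategies find equally good points on average - only structured landscapes let one strategy beat another.

```python
import random

random.seed(0)

def fixed_sweep(landscape, budget):
    # Strategy A: probe points 0, 1, 2, ... in order; keep the best seen.
    return max(landscape[:budget])

def random_probe(landscape, budget):
    # Strategy B: probe `budget` distinct points chosen at random.
    idx = random.sample(range(len(landscape)), budget)
    return max(landscape[i] for i in idx)

def average_performance(strategy, trials=20000, size=16, budget=4):
    # Average the best value found over many unstructured (i.i.d.) landscapes.
    total = 0.0
    for _ in range(trials):
        landscape = [random.random() for _ in range(size)]
        total += strategy(landscape, budget)
    return total / trials
```

Run both and the averages come out statistically indistinguishable, which is the NFL intuition in miniature.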

~~~
rsaarelm
Solomonoff induction seems to be a useful innate bias for the current
approaches to general intelligence. It's basically a formalization of Occam's
razor: it assumes that instead of being completely random, the surrounding
environment is mostly lawful and simple, and that therefore most of the
processes in it can be described by algorithms with short rather than long
source code.
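As a toy sketch of what that bias looks like in practice (my code, not Hutter's formalism): weight each hypothesis by 2^-(description length) and keep the highest-weighted one consistent with the observations.

```python
def prior(description):
    # Solomonoff-style prior: a description of length L gets weight 2^-L,
    # so shorter programs dominate a priori.
    return 2.0 ** -len(description)

def best_explanation(data, hypotheses):
    """hypotheses: dict mapping a description string to a predictor f(i) -> bit.
    Keep only predictors consistent with the data; return the description
    with the highest prior (i.e. the shortest one)."""
    consistent = [desc for desc, f in hypotheses.items()
                  if all(f(i) == bit for i, bit in enumerate(data))]
    return max(consistent, key=prior)

# Both hypotheses fit the observed bits; the prior (Occam) picks the shorter.
data = [0, 1, 0, 1, 0, 1]
hypotheses = {
    "i%2": lambda i: i % 2,
    "0 if i in {0,2,4} else 1": lambda i: 0 if i in {0, 2, 4} else 1,
}
```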

Hutter has written a bit about NFL:
[http://arxiv.org/pdf/1111.3846.pdf](http://arxiv.org/pdf/1111.3846.pdf)
[http://arxiv.org/pdf/1105.5721.pdf](http://arxiv.org/pdf/1105.5721.pdf)

~~~
api
Thanks, those look interesting. I'll give them a read when I have time. My
naive reaction, though, is that a lawful universe does not necessarily imply
well-behaved fitness landscapes. Many lawful and even very simple processes
give rise to chaotic and complex results.

------
jonmc12
The author discusses concepts around intelligence without articulating what
definition of intelligence he is adopting. When he discusses the innovations
in understanding planetary motion or chemical substances, these are examples
of scientists placing observed data in a common frame of reference (i.e., a
meaningful definition that contextualizes observations leads the way to a
theory which explains the data).

A more meaningful direction for the article, imo, would have been to adopt a
concrete definition of intelligence, and then talk about where ferrets,
chimps, babies and adults fall on this continuum. Perhaps connectomics,
molecular biology, psychology or neuroanatomy could help us approximate the
equivalent of the 'atomic number', giving us the opportunity to speculate
about the underlying equivalent of quantum mechanics.

As the article states, evidence is lacking, and we're somewhere between 1 and
100 Nobel prizes away. But the power of a concrete definition is that it
points to what we know we don't know, and prompts discussion about whether the
definition of intelligence itself needs to be re-framed to answer a question
like this.

------
RangerScience
Arguably, yes: [http://phys.org/news/2013-04-emergence-complex-behaviors-
cau...](http://phys.org/news/2013-04-emergence-complex-behaviors-causal-
entropic.html)

Basically, if you act to maximize the adjacency of possible futures, you act
intelligently, and an agent that does this can dynamically figure out the
inverted pendulum, tool use, cooperation, and cooperative tool use. (By
"dynamically" I mean "without prior instruction".)

However, in order to use this process, the agent must also have a way to
predict the future states of a system. When you stop and think about what THAT
part entails, shit starts to get even more interesting.
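A minimal sketch of that idea (my own toy code, far simpler than the paper's causal entropic force): score each action by how many distinct future states it keeps reachable within some horizon, and pick the one that keeps the most futures open.

```python
def reachable(state, actions, step, horizon):
    # All distinct states reachable within `horizon` steps.
    frontier = {state}
    seen = set(frontier)
    for _ in range(horizon):
        frontier = {step(s, a) for s in frontier for a in actions}
        seen |= frontier
    return seen

def entropic_choice(state, actions, step, horizon=3):
    # Pick the action whose successor state keeps the most futures open.
    return max(actions,
               key=lambda a: len(reachable(step(state, a), actions, step, horizon)))

# Example: a 1-D corridor with walls at 0 and 10. Standing near a wall, the
# agent "spontaneously" moves away from it - there are more futures that way.
step = lambda s, a: max(0, min(10, s + a))
```

This is the "prediction" half the comment mentions smuggled in as the `step` function: the agent needs a model of how actions map states to states before it can count futures at all.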

PS - It gets even more interesting when you pick up on how this mimics
entropy, and then this: [https://www.quantamagazine.org/20140122-a-new-
physics-theory...](https://www.quantamagazine.org/20140122-a-new-physics-
theory-of-life/)

------
compbio
Since I did not see it in the article or in the comments here: Intelligence is
compression / dimensionality reduction. Finding the essential parts of a
problem/object/concept and creating a code book for it. The better you can
compress things, the better you understand them. If you know that the left
wing of a plane is the same as the right wing of a plane, but rotated/flipped,
then you would not need to store (redundant) information about both wings. The
upper bound to an intelligent action is the Kolmogorov complexity (upper bound
to compression) of the problem it aims to solve.
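That view has a direct, if crude, operationalization in the normalized compression distance, which uses any off-the-shelf compressor as a computable stand-in for Kolmogorov complexity (a sketch using zlib):

```python
import zlib

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance: roughly, how much does knowing x
    help you compress y? Near 0 for near-identical inputs, near 1 for
    unrelated ones."""
    cx = len(zlib.compress(x))
    cy = len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

patterned = b"abcd" * 50          # highly compressible: a tiny "code book"
unrelated = bytes(range(256))     # shares no structure with `patterned`
```

The better the compressor, the closer this gets to the (uncomputable) Kolmogorov ideal the comment appeals to.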

[http://www.hutter1.net/ai/pfastprg.htm](http://www.hutter1.net/ai/pfastprg.htm)
(The Fastest and Shortest Algorithm for All Well-Defined Problems)

------
eli_gottlieb
>So 125 million base pairs is equivalent to 250 million bits of information.
That's the genetic difference between humans and chimps!

I don't mean to nitpick, but epigenetics is a thing.

Also... how up-to-date on the research literature is this essay supposed to
be?

~~~
TheOtherHobbes
The information isn't just in the base pairs. It's in the entire evolved
ecology (with optional industrial add-on pack) that makes the base pairs do
something useful.

How much of that could you take away and still get a human brain?

~~~
e_modad
Craig Venter says here[1] that Franz Och (of Google Translate fame) has done
some analysis for his new company Human Longevity Inc that shows that about
50% of the genome is responsible for the brain. So the number is quite big and
much larger than I would have thought. Excited to see more results from HLI.

Comment at 13:12 [1]
[https://youtu.be/D3nIIKwwiLc?t=11m43s](https://youtu.be/D3nIIKwwiLc?t=11m43s)

~~~
bohm
Which is interesting, considering we share about 85% of our genome with the
zebrafish.

~~~
e_modad
Source? That sounds a little high.

~~~
bohm
From memory, but maybe a slight overstatement:

[http://www.sci-news.com/genetics/article01036.html](http://www.sci-
news.com/genetics/article01036.html)

70 per cent of protein-coding human genes are related to genes found in the
zebrafish (Danio rerio), and 84 per cent of genes known to be associated with
human disease have a zebrafish counterpart.

------
randcraw
If there were a simple algorithm for intelligence, and assuming the presence
of intelligence confers a competitive advantage, then why wouldn't natural
selection have produced many examples of it by now, after roughly a billion
years of multicellular life? Yet it hasn't.

In fact, intelligence manifests rarely in nature, and only in long-lived
species with big, complex brains which spend years educating their young to
develop their cognitive skills. This implies that intelligence requires both a
nontrivial nervous system and substantial nurturing thereof.

Ergo, I conclude that intelligence itself is inherently complex and unlikely
to arise simply or spontaneously.

~~~
mpdehaan2
Every animal is intelligent, using the Google dictionary definition: "the
ability to acquire and apply knowledge and skills."

What they are intelligent about varies, and the assumption that intelligence
arises rarely is dangerous to ecological and environmental decision making.
Rather, we don't know how to communicate with animals well, and we don't think
the same ways.

[http://www.nature.com/news/2010/100909/full/news.2010.458.ht...](http://www.nature.com/news/2010/100909/full/news.2010.458.html)

[http://scienceblogs.com/mixingmemory/2007/08/17/metatool-
use...](http://scienceblogs.com/mixingmemory/2007/08/17/metatool-use-and-
analogical-re/)

[http://blogs.scientificamerican.com/running-ponies/catch-
the...](http://blogs.scientificamerican.com/running-ponies/catch-the-wave-
decoding-the-prairie-doge28099s-contagious-jump-yips/)

[http://ngm.nationalgeographic.com/2015/05/dolphin-
intelligen...](http://ngm.nationalgeographic.com/2015/05/dolphin-
intelligence/foer-text)

------
umutisik
Another imperfect analogy: say someone (maybe from an earlier time) was
looking to reverse-engineer a modern CPU but could not see very well what was
going on inside it. Surely the whole thing would be very confusing. The CPU
has many, many clever designs based on the needs of the people who buy and use
it, but the whole thing is based on how you can make AND/NOT gates using
transistors. Maybe if the person were to discover this, they could start
getting past the complexity of reconstructing some of the mechanisms of the
CPU, and they could start making their own CPU for their own needs.

~~~
seiji
Take it one step further: say a person from the past sees OS X running on a
computer. They take the CPU and try to reverse-engineer OS X just from the
CPU. Good luck.

------
kazinator
Yes, there is a simple algorithm for intelligence. It's just a few lines of
code. However, it requires a terabyte-sized table of data which has yet to be
filled in.

~~~
knodi123
There's a simple algorithm for running the latest version of Windows. It's
only a few lines of code, and a hundred gigs of compressed C++.

------
jokoon
I believe there is research to be done in hardware that resembles neural
networks. I'm sure one could build a dedicated chip with rewirable
programmable gates. I still wonder if GPUs are really efficient at simulating
NNs.

It might get very expensive, but like he said, I'm optimistic that replicating
NNs in hardware could be the way to go. I think it's a problem where classic
Turing machines are too different from networks of neurons.

------
thomasfoster96
I'd think a 'simple' algorithm for intelligence would be even more distant
than artificial general intelligence itself.

~~~
knodi123
Simplicity is always only a little way off. If I sat you next to God and told
you to pair-program an OS equivalent to Windows XP, with you driving the
keyboard, you'd be sitting there for quite a while.

But if I told the two of you to pair-program an AGI, you could probably finish
in a day or so.

A simple algorithm is a matter of finding just one or two basic discoveries.
It could happen tomorrow! It probably won't, but that's the nature of
discovery.

~~~
thomasfoster96
Well, supposing I'm not programming with God but with a bunch of programmer
friends, I still doubt that the first artificial general intelligence
(assuming we happen to be the people who make it) will be a beautifully simple
piece of software. It'll probably be horribly hacked together and hard to
maintain, and only in time will a simpler way to do things emerge.

For example, I'd reckon we'll find that a lot of things that today only make
sense to solve with a neural net are actually perfectly representable using a
simple(ish) algorithm, but it's unlikely we'll find that out until after we
create an artificial general intelligence and know more about the problem of
intelligence.

In other words, we'll solve the problem, and then realise how much easier it
could have been.

~~~
knodi123
[https://xkcd.com/224/](https://xkcd.com/224/)

------
ThomPete
Wouldn't there have to be, given that we evolved from much simpler organisms?

------
cafebeen
A k-nearest neighbor classifier could be an example of a "simple" algorithm
for intelligence, assuming of course, that you've collected a tremendous
number of accurate (input, intelligent response) pairs!
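For what it's worth, the classifier itself really is that simple - here's a complete sketch (details mine), with all the hard-won intelligence hiding in the training pairs:

```python
from collections import Counter

def knn_predict(train, query, k=3):
    """train: list of (vector, label) pairs - the 'tremendous number' of
    (input, intelligent response) examples. Classify `query` by majority
    vote among its k nearest neighbors (squared Euclidean distance)."""
    def sq_dist(u, v):
        return sum((a - b) ** 2 for a, b in zip(u, v))
    nearest = sorted(train, key=lambda pair: sq_dist(pair[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# A toy "memory" of labeled examples standing in for the collected pairs.
train = [((0, 0), "a"), ((0, 1), "a"), ((1, 0), "a"),
         ((5, 5), "b"), ((5, 6), "b")]
```

The algorithm is a dozen lines; the "intelligence" is entirely in how good and how numerous the stored pairs are, which is exactly the caveat in the comment.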

------
astazangasta
I have a question: why do we even want a general artificial intelligence?
Isn't human thought good enough? I can only think of one reason to make such a
thing: to enslave it and use it to control other humans.

------
amelius
There probably is a very simple algorithm for intelligence. It is just way too
expensive to run.

(The reason is very simple: the rules of the universe are probably very
simple.)

------
UhUhUhUh
I hold the same belief. I also think that we don't pay enough attention to the
form of processes, whereas nature seems to place a lot of emphasis on it. In
fact, the further we go, the more form appears to be somehow determinant:
molecular biology with enzymes, genetics with histones, neurobiology with
firing patterns, etc. We can see that now because we have access to
finer-grained observation. Form could be a higher-order type of content, and
could also be approximated with an algorithm. This all feeds into the previous
discussion about intuition. Top-down or bottom-up?

------
meeper16
It has to do with pattern matching. In particular, comparing vectors with one
another for similarity or dissimilarity.

~~~
knodi123
define "vector" in this context

------
giardini
tl;dr - "We don't know[but please read my blog]."

------
jtth
No.

------
esaym
There really isn't an algorithm that can make algorithms, so no.

~~~
gjm11
There are algorithms that make algorithms. (Even if we assume that the brain
doesn't run on algorithms.) Genetic programming has made algorithms, for
instance.

------
emergentcypher
Something something law of headlines something answer is "no".

------
escherplex
First define 'intelligence'. OED: 1) faculty of understanding, 2) quickness,
5) knowledge; exchange of knowledge. To abstract from Chalmers: cognitive
software supervening on a conformal neural hardware substrate. OK, but in
everyday empirically-focused psychology, how are subjects tested for
'intelligence'? My brother and his colleagues draw from Gardner's eight
'abilities': rhythmic (musical), visual (spatial), linguistic, logical
(mathematical), bodily (kinesthetic), interpersonal, intrapersonal, and
naturalistic. I.e., there are subjective elements in evaluating
'intelligence', and 'fuzzy logic' only supplies a useful fiction for
quantifying the subjective.

Now, this article appears preoccupied with mapping out neural hardware specs
in its emphasis on processing capacity, memory, clock speed and whatnot. Would
a savant mentally capable of calculating pi to a zillion places but incapable
of communicating the results be judged 'intelligent'? MRI maps of mental
processing alone would suggest so. Would a quantum MB capable of running
MATLAB at warp speed but unable to render output in an ascertainable form be
judged well-scripted? Not! Even a perusal of the current WAIS-R manual (no
particulars, since I'm not predisposed to be shot) exhibits classes of both
objective testing and tasks which would also be categorized as measures of
interpersonal engagement (Who was famous for ...).

Would the Ava character in the movie 'Ex Machina' be judged consciously
'intelligent'? Consensus probably would be 'yes', since both her subjective
and objective 3D GUI seemed correct by human standards towards the end. [I
wonder if anyone who saw the movie concluded that the affective agent of Ava's
cognitive development harkened back to John Locke's premise that percepts
function as something analogous to self-extracting .zip files which impose
real cognitive patterns on a subject's 'tabula rasa', not requiring any native
cognitive modeling.] But would Ava really 'understand' anything, or just be a
Searle-style 'Chinese room'?

~~~
escherplex
strong-AI crowd eh?

