
Building Smarter Artificial Intelligence By ... Shrinking The Body? - njrc
http://www.scientificblogging.com/mark_changizi/building_smarter_artificial_intelligence_shrinking_body
======
olalonde
I disagree with the author, and I'm not even sure what point he is trying to
make.

If a smaller brain is more intelligent than a bigger brain, I assume it is
more complex (for instance, it has more interconnections).

In computer science, though, it takes more processing power to simulate more
complex representations of reality. Therefore, we would need more processing
power to represent a small intelligent brain than a big stupid brain.

Software and our analog world are two very different beasts. Small and complex
physical things sometimes take more processing power to represent than big
simple ones.
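
A toy way to see the point (hypothetical numbers, chosen only to show the
scaling): the cost of simulating a network tracks its connection count, not
the physical size of the thing being simulated.

    # Toy cost model, hypothetical numbers: simulation cost scales with
    # the number of connections, not with the physical size of the brain.

    def synapse_ops_per_step(neurons, connections_per_neuron):
        """One update per synapse per timestep, in this crude model."""
        return neurons * connections_per_neuron

    # A small, densely wired brain vs. a big, sparsely wired one.
    small_complex = synapse_ops_per_step(10**6, 10**4)  # 10^10 ops per step
    big_simple = synapse_ops_per_step(10**8, 10)        # 10^9 ops per step

    # The small brain costs 10x more to simulate despite having 100x
    # fewer neurons, because its wiring is denser.
    print(small_complex, big_simple)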

------
indrax
If you think he's not making any sense, you're halfway to understanding. He's
not making any sense.

The author doesn't seem to understand the causal link between intelligence,
brain size, and body size. Smaller bodies don't make things smarter. Heavy
organs have a survival cost. Animals with bigger brains only do better if the
brains pay for themselves with survival-related intelligence. In species that
don't generally get a payoff for figuring things out, the lighter and faster
dummies breed better.

If you want to think clearly about AI, read LessWrong.

------
Tichy
I don't believe in this (relatively recent) trend of associating AI with
bodies. Why should AIs need bodies? Clearly AIs without bodies must be
possible.

Of course, if you want AI that resembles humans, bodies might help. But why
specialize so soon? Creating AI in general should be easier than creating AI
that is specifically human-like.

~~~
gnosis
_"I don't believe in this (relatively recent) trend of associating AI with
bodies."_

The problem is that "strong AI" (ie. an AI considered intelligent in the
deepest sense of the term, with intelligence equal to or greater than that of
a human) has pretty much run in to a dead end. Lots of approaches have been
tried over the decades, and all have failed.

Strong AI research needs new ideas and new approaches. Making a close coupling
between the "brain" (or "mind") and the body is one such relatively new
approach. As such, I applaud it.

Like the approaches that came before, it might also lead up a blind alley. But
at least they're trying something new and different, instead of banging their
heads against the same wall.

And, apart from the sheer novelty of the approach, there has been a lot of
evidence that much of the "processing" that has been assumed to be done in the
brain is actually done in the body.

Some relatively recent robotics research has been focused on creating robots
without any kind of a "brain", but just with a "nervous system". These robots
are able to move around and achieve goals. Here's some information on a
conference where these sorts of robots are demonstrated:

[http://ine-web.org/telluride-conference-2010/telluride-2010/...](http://ine-web.org/telluride-conference-2010/telluride-2010/index.html)

These robots are a far cry from achieving the goal of strong AI, but they
point to an alternative approach to AI that is not brain-centric, to coin a
term.

~~~
marshallp
"Lots of approaches have been tried over the decades, and all have failed."

I beg to differ with that.

Marvin Minsky's idea of having 5000 people create a common-sense database of
sorts wasn't tried (the closest is Cyc, which only had 20 people most of the
time).

Genetic programming has not really been tried at all.

Machine learning hasn't been seriously tried either (the closest efforts have
been Google Goggles and maybe Google Search).

AI is a serious and complicated engineering project. Billions are spent on
creating skyscrapers, tunnels, and airplanes. Yet the money spent on AI is
orders of magnitude less, even though the payoff is orders of magnitude more.

If I had to guess, I'd say AI would be easily achievable within a year using
any of the above approaches if a trillion dollars were spent on it this year.

~~~
eru
I beg to differ. All the approaches you mentioned are still under heavy
investigation. It's only the name of AI that has been tarnished.

And the progress in the tools will eventually lead to more "intelligent"
systems.

~~~
marshallp
What do you mean by 'heavy investigation'? The latest skyscrapers in Dubai
cost billions of dollars each. How much is spent on AI per year worldwide? I'd
bet it's less than 100 million. I wouldn't call that serious effort.

How has the name of AI been tarnished? If people believe you can spend a
fraction of the money spent on a building to create human-level intelligence,
then it's their 'intelligence' that is tarnished.

If your goal is to create human-level AI, then more 'intelligent' systems are
not progress; we've had 'intelligent' tools since the first loom weaving
machines were created hundreds of years ago.

~~~
coffeeaddicted
The game industry does spend some money on AI development. Not sure if it's
100 million, but it's a serious amount as most game development teams have at
least one or two AI developers. Although most people probably don't realize
how much game AI has advanced in the last decade, since most of the advances
have been to cope with increasingly complex game worlds.

~~~
marshallp
The point is, there are no new advancements to be had in AI - you either
hand-code it (good old-fashioned AI), or search for it (machine learning /
genetic programming), or use a combination of both. No 'new' algorithm will be
found that magically solves AI. Little bits of coding here and there that use
an 'AI' technique don't cut it. Automated trading uses 'AI' techniques,
billions of dollars' worth, but it won't make a dent towards creating AI.
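
To make the "search for it" option concrete, here's a toy genetic programming
sketch (purely illustrative, mutation-only for brevity; real GP adds
crossover, and none of this resembles a serious engineering effort). It
evolves a small arithmetic formula to fit data instead of hand-coding it:

    import random

    # Data the evolved formula should fit; the "secret" target is x^2 + x + 1.
    DATA = [(x, x * x + x + 1) for x in range(-5, 6)]

    def rand_tree(depth=0):
        """Random expression tree: (op, left, right), 'x', or a constant."""
        if depth >= 3 or random.random() < 0.4:
            return 'x' if random.random() < 0.5 else random.randint(-2, 2)
        return (random.choice('+-*'), rand_tree(depth + 1), rand_tree(depth + 1))

    def run(tree, x):
        """Evaluate an expression tree at a given x."""
        if tree == 'x':
            return x
        if isinstance(tree, int):
            return tree
        op, left, right = tree
        a, b = run(left, x), run(right, x)
        return a + b if op == '+' else a - b if op == '-' else a * b

    def fitness(tree):
        """Sum of squared errors over the data; lower is better."""
        return sum((run(tree, x) - y) ** 2 for x, y in DATA)

    def mutate(tree):
        """Replace a random subtree with a fresh random one."""
        if random.random() < 0.3 or not isinstance(tree, tuple):
            return rand_tree(depth=2)
        op, left, right = tree
        if random.random() < 0.5:
            return (op, mutate(left), right)
        return (op, left, mutate(right))

    # Evolve: keep the 100 fittest formulas, breed 200 mutants from them.
    pop = [rand_tree() for _ in range(300)]
    for generation in range(100):
        pop.sort(key=fitness)
        pop[100:] = [mutate(random.choice(pop[:100])) for _ in range(200)]
    print(fitness(pop[0]), pop[0])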

~~~
eru
I guess in a roundabout way those will make a difference. Also, the
'unrelated' workings of Moore's law will help.

------
coffeeaddicted
How far can you take a brain out of its environment before it stops working?
For the brain, the body size is just one external factor. I suspect changing
any such factors will make copying brains harder rather than easier.

------
geuis
What correlation is the author trying to make? To my knowledge, Blue Brain has
so far managed to simulate a single cat cortical column. Higher mammalian
brains are built up from hundreds to thousands of these columns. The approach
that Blue Brain is taking is to start with the neurons,
making sure the simulations match real-world data. Then wire the neurons into
the cortical columns and make sure the simulations match real-world data. The
idea is to build from the base components, test the hell out of them each time
using the best medical data we can get about how neurons work, then scale to
the next level. In this case, hooking together an increasing number of columns
to simulate ever more functional parts of the brain.
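
To make "start with the neurons" concrete, here's a toy sketch of a leaky
integrate-and-fire neuron (my own illustration with schematic units; Blue
Brain's actual models are far richer, multi-compartment with ion channels).
It shows the shape of the workflow: simulate one unit, compare its firing
against recorded data, then wire units together and repeat at the next level.

    # Toy leaky integrate-and-fire neuron; units and constants are schematic.

    def simulate_lif(input_current, dt=0.1, tau=10.0, v_rest=-65.0,
                     v_thresh=-50.0, v_reset=-70.0, resistance=10.0):
        """Return spike times (ms) for a current trace sampled every dt ms."""
        v, spikes = v_rest, []
        for step, i_in in enumerate(input_current):
            # Membrane potential leaks toward rest while integrating input.
            v += (dt / tau) * ((v_rest - v) + resistance * i_in)
            if v >= v_thresh:              # threshold crossed: spike and reset
                spikes.append(step * dt)
                v = v_reset
        return spikes

    # "Validate": drive the model with a constant current for one second and
    # compare its firing rate against a real neuron's patch-clamp recording.
    spikes = simulate_lif([2.0] * 10000)   # 10000 steps of 0.1 ms = 1 s
    print(len(spikes), "spikes in one second")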

This has crap-all to do with a 'body'. This is such an obvious case of someone
not defining their terms that it bugs the hell out of me. In the terms of the
Dictionary from _Anathem_, it comes across as "bullshit" in sense 2.

We are still at the point where we don't have a complete enough picture to
fully figure out the math of what makes a thing "think". We have ideas,
hypotheses, and some theories. But it's all very much in flux.

People quite often make the wrong analogy between the discovery of flight and
conscious AI.

Before engineers knew how air moving over a curved surface creates pressure
differentials that provide lift, they tried to build flying machines by
simulating wings. For decades, these people were laughed at by the common
folk, who believed human flight was impossible.

However, once it was discovered how moving air creates those pressure
differentials, and it became possible to describe them mathematically, the
approach changed. Then the first successes happened. Then there was a huge
amount of innovation over the next 40-50 years, before plane and wing design
leveled off.

How to organize matter so that it "thinks" is still being figured out. We have
some ideas that are well-supported by data. I feel the approach IBM is taking
is among the best. Since we don't know the complete mathematical theory that
allows some networks of neurons to be conscious and others not, our best
chance is to take as much real-world data as we can and build one in
simulation. On the way, we discover all kinds of things we didn't know before.

In a way, it sounds like I'm countering my earlier argument about how people
trying to build wings failed. But I'm not: it was in the process of attempting
to re-create structures that obviously could fly (birds) that we figured out
the principles of _how_ they flew.

Blue Brain probably won't become THE method of building AI in the future.
However, it's one of the first massive attempts to do the modern equivalent of
building a wing from the atom up. Along the way, we will start really figuring
out exactly what needs to happen to make one network of neurons conscious and
another not. Then we'll be able to take those mathematical algorithms and
start playing with them.

Eventually we will get to a point where we can test an entity for
consciousness. Once we can scientifically describe how consciousness works, we
can devise tests for neuronal networks of any complexity to say whether they
are actually conscious or not. This opens up the prospect of seeing whether
other animals on Earth, ranging from ants to dolphins, are conscious in the
same way that we are (or not).

It also gives us an actual, true test for AI consciousness. If we see the same
or similar results on a human as we do for an AI, it's fairly obvious that if
the AI says "I think, I am" then it probably is.

I've always had a problem with the Turing Test. It's too subjective. Just
because you can fool 8 out of 10 people, or even 10 out of 10, into thinking
the thing typing to you is conscious doesn't mean it is. You CANNOT determine
that without analyzing the structure of the mind you're communicating with. If
that mind is based on biology, you examine the network of neurons. If it's
based on software, you analyze the structure of the program and the flow of
data. Since biological systems and software systems would both be using
similar patterns of information to describe their structure and functioning,
the same tests apply to both.

Sorry to go off on a rant. A "body" has shit-all to do with "making a better
AI".

~~~
marshallp
"""Sorry to get off on a rant. A "body" has shit all nothing to do with
"making a better AI"."""

So you're saying Rodney Brooks's behavior-based AI is wrong (Brooks previously
headed the AI lab at MIT). Also, without 'sensory' data from a 'body', machine
learning approaches can't work either.

"""Eventually we will get to a point where we can test an entity for
consciousness. Once we can scientifically describe how consciousness works, we
can devise tests for neuronal networks of any complexity to say if its
actually conscious or not """

Are you sure there is such a thing as consciousness? Even if there is, I'd bet
we get to AI before we can test for consciousness.

