
When Will We Be Able to Build Brains Like Ours? - nreece
http://www.scientificamerican.com/article.cfm?id=when-build-brains-like-ours
======
AngryParsley
I almost dislocated my jaw while reading this article.

 _When we come to understand how brains function, we should become able to
build amazing devices with cognitive abilities -- such as cognitive cars that
are better at driving than we are because they communicate with other cars and
share knowledge on road conditions._

Later:

 _As this cognitive infrastructure evolves, it may someday even reach a point
where it will rival our brains in power and sophistication. Intelligence will
inherit the earth._

Uhh... talk about lack of imagination. Self-driving cars? Smart power grids?
Those are _toys_ compared to other consequences of brain emulation. And
"someday"? If you can model 1/1000th of a human brain at full speed today,
then you'll be able to model a whole brain in 20 years (assuming no software
speedups). 2 years after that, you'll have a model that runs at 2x human
speed. Then it's off to the races. Biological humans are nowhere near the
upper bound for intelligence. Just one example of our inefficient brains: Our
neurons conduct signals at 0.000001c. If these things are as smart or smarter
than us, then they'll be able to build _even smarter_ intelligences _even
faster_ than us. Lather, rinse, repeat; you have I. J. Good's intelligence
explosion.
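
The doubling arithmetic behind that 20-year figure can be made explicit (a toy
calculation; it assumes a Moore's-law-style 2-year doubling period and that
emulation capacity scales linearly with compute):

```python
# Sketch of the doubling arithmetic behind the "20 years" figure.
# Assumptions: compute doubles every 2 years, and brain emulation
# scales linearly with compute -- both loose.
import math

fraction_today = 1 / 1000          # can emulate 1/1000th of a brain now
doubling_period_years = 2

doublings_needed = math.log2(1 / fraction_today)   # ~9.97 doublings
years_to_full_brain = doublings_needed * doubling_period_years

print(round(years_to_full_brain, 1))   # ~19.9 years to a 1x real-time brain
# One more doubling period after that gives a 2x-speed emulation.
```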

A minor detail in this future would be, "Oh by the way, biological humans can
upload themselves and become practically immortal." Seriously, cars are the
best the writer can come up with?! What's the point of cars if you can
transfer consciousness at the speed of light?

~~~
kul
I don't think it's a given that our brains are "ruthlessly inefficient". In a
complex system of trade-offs, they are actually pretty damn good at what they
do. Human vision alone is incredibly complex. Anyone interested in how
difficult it is to replicate the brain should read Steven Pinker's "How the
Mind Works".

~~~
AngryParsley
Sure, the brain is impressive. It's not bad considering the limitations of
biology. Likewise, I think the human eye is impressive. I'd certainly have a
hard time designing a video camera made of jelly. But the brain, like the eye,
is highly sub-optimal compared to non-biological solutions. The laws of
physics allow for brains that run a million times faster than ours while
consuming the same amount of energy. And that's without using tricks like
quantum or reversible computing.

Evolution can only hill-climb. It can't jump to the highest points of the
solution space, and it can't explore any areas of the solution space
surrounded by valleys. This is why no animal has evolved wheels, treads, or
impeller pumps. This is also why animals haven't evolved brains made out of
materials that conduct signals at speeds close to c.
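
The hill-climbing limitation is easy to see in a toy example (purely
illustrative, not a model of biological evolution): a climber that only
accepts uphill steps stops at whichever peak it starts near.

```python
# Toy 1-D fitness landscape: a small local peak near x=2 (height 1) next
# to a much taller one near x=8 (height 5). A pure hill-climber that only
# accepts uphill moves gets trapped on whichever peak it starts near.
def fitness(x):
    return max(0, 1 - abs(x - 2)) + max(0, 5 - 2 * abs(x - 8))

def hill_climb(x, step=0.1):
    while True:
        best = max(x - step, x, x + step, key=fitness)
        if fitness(best) <= fitness(x):
            return x          # no uphill neighbor: stuck
        x = best

print(round(hill_climb(1.0), 1))  # 2.0: trapped on the local peak
print(round(hill_climb(7.0), 1))  # 8.0: finds the tall peak only if it starts nearby
```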

~~~
lkozma
"This is why no animal has evolved wheels"

Instead, animals evolved brains and hands that invent and build wheels. A nice
way to jump out of the supposed limitation of the hill-climbing algorithm.

~~~
AngryParsley
Evolution has limitations that go beyond inefficiency. For example: it is
completely indifferent to the organisms it operates on. Evolution has
absolutely no compassion. It will not select for traits that reduce suffering
in dying organisms. Evolution isn't forward-looking. It doesn't notice when it
starts on a journey to trap itself in a local maximum. Evolution is _slow_.
Life started on this planet not long after the surface solidified, but it took
4 billion years for general intelligence to evolve. 4 billion years and
countless organisms suffering and dying just to get 3lbs of gray matter. Had
it taken much longer, the Sun would have gotten farther along in its life
cycle and sterilized the Earth before evolution stumbled on general
intelligence.

Sorry for that mini-diatribe, but I don't like it when people paint evolution
in a positive light. It really is one of the dumbest possible ways to explore
a solution space.

------
psyklic
Ridiculous. Large-scale neuron simulations currently are like an algorithms
assignment where I have some code written but don't understand the algorithm
so I just tweak things all night and never get anywhere. In other words:
"let's hope it does something interesting by just connecting millions of these
things that we can kind of describe but don't understand at all."

We are VERY far from understanding the brain. Even for single neurons, we do
not understand what is important and what we should ignore. We do not
understand real-life neural networks at all. For example, we have the entire
neuronal connectivity map for C elegans (only 302 neurons), the worm's genetic
structure, and more -- and we still don't understand how its nervous system
works, and we can't even simulate it.
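
For a sense of how much even a "simple" neuron model throws away, here is a
minimal leaky integrate-and-fire sketch (a textbook toy with hand-picked
constants; real neurons have ion-channel dynamics, dendritic geometry, and
neuromodulation that this ignores entirely):

```python
# Minimal leaky integrate-and-fire neuron: membrane voltage decays toward
# rest, input current pushes it up, and crossing a threshold emits a spike.
# All of the biochemistry is collapsed into four hand-picked constants.
def simulate(input_current, steps=1000, dt=0.1):
    v_rest, v_thresh, v_reset, tau = 0.0, 1.0, 0.0, 10.0
    v, spikes = v_rest, 0
    for _ in range(steps):
        dv = (-(v - v_rest) + input_current) / tau
        v += dv * dt
        if v >= v_thresh:
            spikes += 1
            v = v_reset
    return spikes

print(simulate(0.5))   # sub-threshold input: no spikes
print(simulate(2.0))   # stronger input: a steady spike train
```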

This is a problem with this field -- why in the heck are we trying to simulate
human and cat brains -- perhaps the most complex of them all -- when we can't
even simulate a simple worm?? Whatever happened to the idea of starting
simple and then working our way up??

I am always shocked when I look on the shelves at the bookstore and see lots
of books titled "How the brain works" -- when we actually understand so
little.

------
arethuza
Having worked in AI research for a number of years in the late 80s and 90s
(very much in the symbolic side of things) - I ended up thinking that the only
way anyone was ever going to construct an artificial mind capable of real
general intelligence is by reverse engineering biological systems. So I agree
with that part of the article.

However, I wouldn't get too excited about any of this happening any time soon.
Personally, I feel the field is still thrashing around looking for an
approach that achieves some kind of traction and provides a long-term
basis for ongoing progress.

As is often said, AI is still in its pre-Newtonian phase. I would love for a
breakthrough to happen (although you have to refer to Vinge for some potential
issues with this), but, as with fusion, I really don't expect much to happen
in my lifetime.

For the foreseeable future there are only going to be intelligent entities on
one side of our screens - so far a great job is being done on augmenting our
own natural intelligences (e.g. Google) so I'm personally more interested in
that side of things.

~~~
marshallp
AI can be created today - it's just a matter of money. Put a few hundred
billion dollars into genetic programming, or Cyc, or data sets for an SVM and
you can build general AI.

~~~
arethuza
There are some things that aren't a matter of money, and I would suggest
"true" AI is one of them (i.e. human level general intelligence).

I honestly don't think, even if you had $100 billion, anyone would know where
to start to build such a beast. It isn't just a case of building sufficiently
powerful hardware - I suspect that would be the easy bit.

Spend $100 billion on CYC and you would end up with a big interesting
database, not a functioning mind.

~~~
marshallp
I don't agree with that.

1) How did the human brain come about? Evolution. So you can get there with
genetic programming. How much computer time do you need? Maybe $100 billion,
maybe $100 trillion, but theoretically it's doable.

2) What is the brain? Conventional opinion is that it's a neural network -
i.e. if you spend enough money to assemble a million neural-network engineers,
or to create billions of data sets to train against simple learning
algorithms, you can probably replicate its "interface".

3) You can also view a human, or at least a human speaking through email, as a
database. If you had enough people working on Cyc - let's say a million rather
than the 20 or so it's usually had - you could get a lot closer to, or exceed,
human-level performance.
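
A toy genetic algorithm shows the mechanism being invoked here (illustrative
only; it evolves a 32-bit string toward all-ones, a search space
astronomically smaller than anything resembling cognition):

```python
# Toy genetic algorithm (OneMax): evolve a bitstring toward all-ones using
# truncation selection, one-point crossover, and per-bit mutation.
import random

random.seed(0)
LENGTH, POP, GENERATIONS, MUTATION = 32, 50, 200, 0.02

def fitness(bits):
    return sum(bits)

pop = [[random.randint(0, 1) for _ in range(LENGTH)] for _ in range(POP)]
for _ in range(GENERATIONS):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:POP // 2]                  # keep the fitter half unchanged
    children = []
    while len(children) < POP - len(parents):
        a, b = random.sample(parents, 2)
        cut = random.randrange(1, LENGTH)     # one-point crossover
        child = a[:cut] + b[cut:]
        child = [bit ^ (random.random() < MUTATION) for bit in child]
        children.append(child)
    pop = parents + children

print(fitness(max(pop, key=fitness)))  # typically reaches 32 (all ones)
```

The catch is exactly the scaling question: search cost explodes with the size
of the space, which is why "enough money" is doing all the work in the claim
above.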

Imagine a couple of professors and a handful of grad students. Would they
be able to build a skyscraper within 50 years, from scratch, with no prior
knowledge of civil engineering, no access to construction labor, and only
raw materials to work with? Yet that is what AI has been for 50 years - a
handful of people attempting to build something much more complex than a
skyscraper.

It probably took many trillions of dollars and millions of workers to get to
the point where skyscrapers could be built, and each one individually probably
costs more than all the money ever spent on AI research.

~~~
arethuza
Well, some people think that the brain _might_ be doing something deeply weird
with quantum level behaviors to create what we call "consciousness", if that
is the case (of course a huge if) then it might _not_ be possible to replicate
it in any meaningful sense.

If you look at artificial neural networks and natural ones, there are a lot of
differences. Artificial neural networks are useful tools for some kinds of
problems, not physiological models of what happens in our brains.

I can recommend this book if you are interested:

[http://www.amazon.co.uk/Philosophy-Artificial-
Intelligence-O...](http://www.amazon.co.uk/Philosophy-Artificial-Intelligence-
Oxford-Readings/dp/0198248547)

~~~
rsaarelm
If it's not possible to replicate, how is human biology replicating it?

------
simonsquiff
This is complete fantasy unfortunately.

They make this sound like AI is just a hardware problem, waiting to be solved
by the wonder of Moore's Law. It's not. The human brain has 100 billion
neurons, and 100 trillion synapses (wikipedia) - but it is the connections
between them that make it do what it does. The possible connections are just
mind-boggling. Let's say you build hardware with those numbers (utterly non-
trivial) - how does it connect together? How do we get it to structure itself
in a way that those networks do anything? You can't expect it to wire itself
into anything useful.
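
Back-of-the-envelope numbers make the scale concrete (using the neuron and
synapse counts above, and an assumed 4 bytes per synapse, which is surely an
underestimate for a faithful model):

```python
# Scale of the wiring problem, using the figures quoted above.
neurons = 100e9          # 100 billion
synapses = 100e12        # 100 trillion
bytes_per_synapse = 4    # assumption: one 32-bit weight, no dynamics

total_bytes = synapses * bytes_per_synapse
print(total_bytes / 1e12, "TB just to store one static weight per synapse")

# And that's only storage: choosing *which* of the ~neurons^2 possible
# connections to make is the part nobody knows how to do.
possible_connections = neurons ** 2
print(possible_connections / synapses)  # actual wiring is ~1 in 1e8 of the possibilities
```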

We have no idea how the brain does anything of significance (merging of senses
into a coherent whole; memory; consciousness etc). We can't even define
consciousness let alone understand how it comes about. We can't 'program' a
neural network of any complexity. Even things that we take utterly for
granted, like object recognition, are immensely complicated, let alone things
that we know are 'difficult', like sentience. We can't expect this stuff to magically
appear from a large neural network; nor can we program it (as we don't
understand it); nor can we train it as we don't know what intermediate steps
to train for.

Then, what are the inputs into it (and how are those interpreted by the
hardware), and how do we get any output out of it? Let's say you solve both
the hardware and the wiring problem - so you have a perfect replica human
brain in a jar - how do you tell it what to do? How do you get any sense out
of it?

We're fumbling in the dark here. To think those fumbles will achieve
intelligence in any form in the foreseeable future is unfortunately fantasy.

~~~
Jach
Admitting that the problem is hard is important but not very helpful for
actually solving the problem. Are you trying to discourage people from trying
to solve it? Why, when the benefits should they succeed are astronomical? I
agree that a lot of people have rosy-eyed views of an AI future, even ignoring
the entire problem of Friendly AI, but why are you attacking them by just
stating the difficulty of the problem instead of trying to suggest solutions?
Hell, you could even donate some money to the people who do pay attention to
the problems: <http://www.singinst.org/>

I don't know how accurate all of your "we" statements are; I'd bet at least
one of them is wrong or will be wrong in the near future, considering there
are about 7 billion of us and I doubt you've polled us all.

~~~
simonsquiff
>Are you trying to discourage people from trying to solve it?

Not at all, the complete opposite. It's a fascinating area but I feel the
expectations are completely out of whack with reality. Most of the articles
like this have people saying we're only a decade or two away from proper AI,
when I'm arguing that that is just not backed up at all - and people are
underestimating what work is involved here.

When expectations are set wrong, and people eventually realise it, it can
damage research in this area as people don't take it seriously any more. Ref
the 'AI Winter' - the dramatic cut in funding when governments realised most of
the experts in the field had massively over-promised and under-delivered.

I think we need to break this down into smaller chunks of bounded problems.
There is some amazing research around using human-like neural networks and
similar processing steps for machine vision - e.g. for object recognition,
trying to mimic what happens in nature. This is greatly achievable.

Trying to model the entire human brain - with talk about how this will lead in
the near future to intelligent, reasoning, even sentient AI - is fun, but for
the reasons I outlined it needs an expectation reboot.

------
10ren
> What does it mean to model the cat brain?

They don't seem to have a way of testing it. Sounds like Dawkins' computer
models of "insect" evolution: fun, interesting and even beautiful, but not
falsifiable. Visual art rather than science.

Seems they're saying they've constructed a neural network of comparable
complexity to a cat's brain - for some value of _comparable_. And they're
arguing about that value. Meanwhile, their cat's brain performs nothing like
an actual cat's brain. The question doesn't even arise. But it's hard to tell,
as the article is very information-light.

As in code, complexity is impressive. Not necessarily useful or truthful.

------
emarcotte
Personally, I wouldn't try to build a brain like ours. Sure, we're frequently
capable of real passion and deep insight, but there is so much badness
possible with our brains too. Take, for instance, the "cognitive cars": if we
built a brain like ours, how long would it take for it to get road rage?

I would like a brain that was not like ours. I'd like something capable of
deep insight and critical thinking, but nothing like a human brain.

------
fiaz
I'm still surprised that much of the talk around future "intelligence" is
geared more towards creating it from scratch (difficult) rather than enhancing
or augmenting what already exists (less difficult). Regardless of this, if
creating intelligence out of silicon is where the research is heading, we
would have to first prove that silicon is at least an equally suitable
substrate for hosting a model of our carbon-based brain, as opposed to merely
assuming that whatever brain model we create can exist on any substrate.

I think more progress will be made in the meantime towards enhancing existing
intelligence as it will be more immediately accessible in many cases.

------
donaq
_graphics processing units (GPUs) give desktop personal computers the same
speed that supercomputers had ten years ago_

Huh? GPUs give our desktops their speed? O RLY?

~~~
frisco
See: [http://www.hardwareinsight.com/wp-
content/uploads/2009/10/gp...](http://www.hardwareinsight.com/wp-
content/uploads/2009/10/gpu_vs_cpu.PNG)

Graphics cards (streaming multiprocessors) are computational beasts.

~~~
donaq
Regardless, we don't really use them for computation on desktops, right?

[Edit] Besides graphics computing, that is.

~~~
frisco
Today we do, increasingly frequently.

<http://en.wikipedia.org/wiki/GPGPU#Applications>

Especially on the Mac, since the vast majority of new Apple machines come with
CUDA-capable NVIDIA cards now and developers can expect them.

Even so, the article was referring to the fact that you _can_ pack that kind
of power in a desktop now, regardless of whether or not the average consumer
tends to take advantage of it.

