
Brain vs. Deep Learning (2015) - akkishore
http://timdettmers.com/2015/07/27/brain-vs-deep-learning-singularity/
======
ewjordan
The key to the claim that other computational estimates are way off is
essentially that there's a lot of data crunching happening within a single
neuron, rather than it being something we can model as collecting a bunch of
inputs and either firing or not. He's arguing that each neuron does a ton of
internal computation that can itself be modeled as a (sometimes very large)
convolutional network.
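To make that idea concrete, here is a minimal toy sketch (my own illustration, not the article's actual model) contrasting a classic point-neuron with a neuron whose dendritic branches each apply their own nonlinear subunit before the soma combines them, which is the "network within a neuron" picture in miniature:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=128)          # synaptic inputs arriving at one neuron

# Point-neuron model: a single weighted sum, then one nonlinearity.
w = rng.normal(size=128)
point_out = np.tanh(w @ x)

# "Network within a neuron": split the same inputs across dendritic
# branches, give each branch its own nonlinear subunit, then combine
# the branch outputs at the soma. This is already a tiny two-layer net.
branches = x.reshape(16, 8)                    # 16 branches, 8 synapses each
w_branch = rng.normal(size=(16, 8))
subunits = np.tanh(np.sum(w_branch * branches, axis=1))  # per-branch output
w_soma = rng.normal(size=16)
dendritic_out = np.tanh(w_soma @ subunits)
```

The branch sizes and tanh nonlinearities here are arbitrary choices for illustration; the point is only that the second model has strictly more representational power per "neuron" than the first, which is where the claimed multiplier on required compute comes from.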

I think in a sense this is well-established when looking at real neurons, but
several times in the article he uses phrases like "shown to be important for
information processing", and that's where I get off the boat a bit. When
you're saying that it's _so_ important for information processing that it
warrants a 1000x or more increase in the computational power necessary to
implement an algorithm, I think it's necessary to dig into what the actual
work being done there is, not just that there's _some_ non-trivial
transformation. A lot of interesting and extremely tough-to-model fluid and
chemical dynamics are in play when I drink too much water and have to pee, but
that doesn't mean that we need to understand them to build a waste disposal
system using pipes.

In particular, does the within-neuron processing actively tune itself based on
the data it processes to an extent on-par with inter-neuron connections (in
which case the argument that it's fundamental to the learning process would
hold a _lot_ more weight), or is it mostly static? I think a lot of us
consider "important for information processing" to mean "is a meaningfully
dynamic parameter involved in a learning algorithm", rather than an accidental
shmearing of data.

I'd really love more info on what the actual processing that's happening is.

~~~
visarga
> there's a lot of data crunching happening within a single neuron

I'll just leave this here:
[https://en.wikipedia.org/wiki/Gene_regulatory_network](https://en.wikipedia.org/wiki/Gene_regulatory_network)

Inside any cell there is a system that works like a neural network - the gene
regulatory network. Each gene acts like a neuron, with chemical inputs and
outputs. That would make the processing power of any cell on par with that of
a small neural net.
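The formal analogy is easy to show: a minimal sketch (my own toy, with made-up interaction weights) of a gene regulatory network iterated as a recurrent net, where each gene's expression level is a saturating function of the weighted expression of its regulators:

```python
import numpy as np

rng = np.random.default_rng(1)
n_genes = 5

# Interaction matrix: W[i, j] > 0 means gene j activates gene i,
# W[i, j] < 0 means gene j represses gene i. Values are arbitrary here.
W = rng.normal(scale=1.5, size=(n_genes, n_genes))
expr = rng.uniform(size=n_genes)  # initial expression levels in [0, 1]

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Each update is formally identical to one step of a recurrent neural
# network: weighted sum of regulator activity, minus a basal threshold,
# squashed through a saturating nonlinearity.
for _ in range(50):
    expr = sigmoid(W @ expr - 1.0)
```

Real gene regulation is of course far messier (stochastic, spatial, multi-timescale), but the update rule above is the standard "genes as units, regulation as weights" abstraction.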

~~~
shirakawasuna
Gene regulation is very slow, on the order of minutes to hours. It's involved
in our brains, for sure, but it's just way too slow to be a dynamic part of
our brains' processing of information.

------
philipkglass
Long, and fascinating. I can see why this made the front page. I oscillated
between 20% strident disagreement and 60% strident agreement. (The rest -- no
strong feelings, or I feel that the questions are too ill-formed to answer.)

To pick one point of disagreement,

 _“We do not need as much computational power as the brain has, because our
algorithms are (will be) better than that of the brain.”_

 _I hope you can see after the descriptions in this blog post that this
statement is rather arrogant._

Machines are already better than brains at many cognitive tasks of practical
interest. Believing that we'll continue to find "tricks" to allow computers to
outperform brains on useful cognitive tasks, despite the brains' much greater
complexity, seems like a perfectly sober and conservative prediction.

If I had to advance my own pet reasons for discounting the likelihood of a
technological singularity, here are my top two:

1) It's a more challenging case of the general Fermi paradox. Show me the
Hubble images of the computronium Dyson swarms. If it takes less than a
century to go from the first transistorized computers to superintelligence,
and superintelligence is as prone to run amok as Bostrom/Yudkowsky think,
signs should already be visible from Earth.

2) You need experiments to validate scientific models. Even if a machine-
intelligence could think a billion times faster than a biological
intelligence, it couldn't complete experiments a billion times faster.
Technologies that act on the material world will improve sublinearly with
respect to thinking/computing power, for at least this reason and probably
others as well.

~~~
visarga
>> “We do not need as much computational power as the brain has, because our
algorithms are (will be) better than that of the brain.”

> I hope you can see after the descriptions in this blog post that this
> statement is rather arrogant.

The brain does more than computers: the brain builds its internal structure
all by itself. When was a computer able to evolve from a bunch of transistors
lying on the table? It also uses very little energy and stays resilient for
80-100 years; compared to the computers of today, it's apples and oranges.

> Show me the Hubble images of the computronium Dyson swarms. If it takes less
> than a century to go from the first transistorized computers to
> superintelligence, and superintelligence is as prone to run amok as
> Bostrom/Yudkowsky think, signs should already be visible from Earth.

What if the AGI will prefer to build virtual worlds and societies of virtual
agents instead of grand space domination? Even humans prefer games to reality
nowadays (or a large percentage of us do). If we're part of such a sim, then
it would explain the lack of external signals from extraterrestrial aliens or
AGIs.

The probability that AGI will appear and create amazing simulations is much
larger than that of picking up signs of life in the vastness of space. Also,
by replacing the physical with a sim we can do all sorts of things: use less
energy, recover from any accident, hack our own brains/minds, even achieve
immortality.

~~~
icc97
The $24M per year (23MW) for the electricity for the Tianhe-2 was quite
shocking.

This is against the 13W that the brain uses:
[https://www.scientificamerican.com/article/thinking-hard-calories/](https://www.scientificamerican.com/article/thinking-hard-calories/)

So the brain has a million times more processing power and uses a million
times less power.
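Just on the power side, the figures quoted above (23 MW for Tianhe-2, ~13 W for the brain) do work out to roughly a million-fold gap; a quick back-of-envelope check:

```python
# Figures taken from the comment above; both are rough estimates.
tianhe2_power_w = 23e6   # Tianhe-2 draw: 23 MW
brain_power_w = 13.0     # human brain: ~13 W

ratio = tianhe2_power_w / brain_power_w
print(f"Tianhe-2 draws ~{ratio:,.0f}x the power of a brain")  # ~1.77 million
```

The "million times more processing power" half of the claim is the blog post's own estimate, not something this arithmetic can verify.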

~~~
JacobiX
>> So the brain has a million times more the processing power and uses a
million times less power.

A commodity smartphone is millions of times more powerful than all of NASA's
combined computing in 1970 and more energy efficient.

A factor of 10^6 is perhaps achievable with a specialized quantum computer in
the medium-term future?

~~~
icc97
The post speculates 2037-2080. So yes, a similar timescale to the one from
the 70s till now.

------
Seanny123
The math this guy performs is questionable, and his idea that Deep Learning
is the pinnacle of biologically plausible cognitive modelling is incorrect. I
write about this [in a blog post](https://medium.com/@seanaubin/deep-learning-is-almost-the-brain-3aaecd924f3d).

tl;dr doing pure Deep Learning (also known as connectionist) models of the
brain limits your tools in a bad way. Using tools from Dynamicism and
Symbolicism is better. As proof, check out Spaun, the world's largest
functioning brain model.

Note: I mostly just disagree with him philosophically, in terms of his
reasoning, because it overlooks some evidence. I don't really have an opinion
on his conclusion. I probably agree with him more than I disagree with him.

------
spot
tldr: 2037-2080 for "brainlike computers". sounds like a reasonable estimate
to me. but who would say that is "nowhere near"? A few decades compared to
billions of years of evolution to create life here on earth? a few decades
compared to millions of years to go from monkeys to humanity? compared to
thousands of years to go from prehistory to contemplating this question? a
date that many of us will live to see? IMO this means we are on the verge.

~~~
sporkologist
Since we're already at the level of coding genes, we are short-circuiting the
normal process of evolution, which was a trial-and-error process. We are
living in interesting times.

------
mos_basik
>Quantum tunneling will become relevant in 2016-2017 and has to be taken into
account from there on. New materials and "insulated" circuits are required to
make everything work from here on.

He wrote this in 2015. Was he correct? I don't really know where to start
researching something like this; is there anyone here familiar with the field
who could comment on it?

~~~
ivan_ah
I think he's referring to the general idea that transistor fabrication size
shrinking cannot last forever. Since atoms have a fixed physical size, if the
shrinking continues indefinitely, at some point the "wires" in the transistors
will shrink toward zero atoms thick (in practice, a few atoms thick). See
this for explanations better than I could give:
[https://en.wikipedia.org/wiki/5_nanometer](https://en.wikipedia.org/wiki/5_nanometer)

Note the "quantum" here is not a good thing (like the theoretical "quantum
speedup" possible for certain computations on quantum computers), but a bad
thing: imagine you want to send current down a wire, but the current is
jumpy and often leaks out to a neighbouring wire, causing errors in
computations.

------
alanbernstein
This looks like a fascinating article that I will have to come back to. I'm
not sure how well it fits the blog's tagline, "Making deep learning
accessible." ...

~~~
baking
Maybe if you read the title of his blog as "Making deep learning
understandable" and this blog post as "If you understand how deep learning
really works (and not just treat it as a magic black box) and understand how
the brain works you will realize we are nowhere near the singularity" it makes
perfect sense.

------
markan
An interesting read, but the conclusion that "the singularity is nowhere near"
was reached by assuming that only neural modeling could get us there, and that
assumption wasn't defended well. (In fact it looks rather dubious, given all
the quasi-intelligent things computers have achieved _without_ copying neural
dynamics.)

------
yters
If intelligence is not computable, then by definition deep learning is not
capable of human intelligence. There are many reasons to think intelligence
is beyond computation: Gödel's incompleteness theorems, the no free lunch
theorem, the data processing inequality, the fact that Solomonoff induction
and Kolmogorov complexity are uncomputable, the halting problem, the question
of which program you are if the mind is code, the fact that all programs are
finite yet we can think about infinity, split brains but unified
consciousness, the inherent difference between third-person and first-person
descriptions, reasoning about paradoxes, the ability to know we are wrong,
the ability to write AI programs, the whole connectionist vs. modules problem
Fodor points out, and probably many more.

~~~
felippee
I think the major problem is to think of "intelligence" as a problem in
"computation".

We are so used to this framing that any other may seem foreign. Others will
argue that in principle it is a problem of computability; but in the extreme
almost every problem is, and that is not necessarily how we frame other
problems. I'd say "intelligence" is a control problem (as in controlling a
robot). This framing, though subtle, makes the entire problem quite
different. You no longer talk about computability; you talk about survival
in a "high-temperature thermal bath" (otherwise called "physical reality"),
full of unpredictability and dangerous stuff.

When you frame it like this, it is clear we have not even begun to address
the problem properly, let alone solve it.

~~~
chanakya
Aren't other problems getting framed that way, too? Car driving is not
inherently a problem in computation, but it is becoming one. Robotic control
(like other control problems) is certainly being solved by computation.

Perhaps treating AI as a derivative of the control branch of computation could
practically help speed up progress in some areas, but it shouldn't
fundamentally change its nature.

~~~
felippee
Well, this is exactly my point. A lot of problems are currently framed as
computation problems. As much as that might be useful for some of them, I'd
argue that we should be careful with this.

We get all the great marvels of deep learning, yet robots remain dumb as
bricks after 30 years of Moore's law. To me this (Moravec's paradox) is a
signal that we are doing something wrong, and typically we do things wrong
when problems are not framed properly.

