
Why Is the Human Brain So Efficient? (2018) - rcshubhadeep
http://nautil.us/issue/86/energy/why-is-the-human-brain-so-efficient-rp
======
kristopolous
They aren't the same thing. They are different classes of objects, different
tasks. This comparison is kind of silly.

I'd hate my computer to have the memory accuracy or the computational accuracy
of my brain. I'd hate to have the creativity and inspiration of a computer.

Delete being such a nontrivial operation is probably a good thing for humans.
Copy being imperfect probably has something to do with the phenomenon we call
imagination. We use computers because they are complementary, not
substitutive.

They're just so fundamentally different.

~~~
canjobear
When we say the brain has poor computational accuracy, we’re usually talking
about the conscious brain we’re aware of. But our low-level motor actions and
perceptions, coordinated by the brain, require a lot of precise computation.
These low-level brain computations are the thing to compare to AI, not our
conscious thinking. Our conscious mind is more like low-precision software
running on top of an enormously powerful computer.

~~~
TeMPOraL
> _But our low-level motor actions and perceptions, coordinated by the brain,
> require a lot of precise computation._

That accuracy is more likely achieved through fast, analog feedback loops than
precise calculation.

~~~
dTal
We don't have analog hardware. Nerves are digital. The best we can do with
them is pulse-frequency modulation.

~~~
TeMPOraL
I'm not convinced real neurons are just binary summators. But regardless of
that, there's also chemical transmission involved, which is an analog thing.

~~~
dTal
Right, but the end state of all that is "neuron fires" or "neuron doesn't
fire", or at best "neuron fires n times per second" - a binary state.

~~~
PeterisP
We have some behavior that depends on very exact timing of neuron spikes
(e.g. determining the direction of sounds by comparing signals from both
ears), so that's kind of analog - though it does get reduced to a binary
state in the end: either the "detector for an offset of x microseconds
towards the left" fires or it doesn't.
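The ear-timing idea can be sketched as finding the delay between two signals
that maximizes their cross-correlation. A toy version in Python (sample rate,
head width, and the noise source are all illustrative assumptions, not
physiology):

```python
# Toy sound localization by interaural time difference (ITD):
# find the delay between the two "ear" signals that maximizes
# their cross-correlation.
import math
import random

FS = 100_000            # sample rate: 10 microsecond resolution
SPEED_OF_SOUND = 343.0  # m/s
EAR_DISTANCE = 0.21     # m, rough human head width

def ear_signals(shift, n=2000):
    """A noise source heard by both ears, one `shift` samples later."""
    random.seed(0)
    src = [random.uniform(-1, 1) for _ in range(n + shift)]
    return src[shift:], src[:n]   # (left, right)

def estimate_shift(left, right, max_lag=80):
    """Lag (in samples) maximizing normalized cross-correlation."""
    def corr(lag):
        a = left if lag >= 0 else left[-lag:]
        b = right[lag:] if lag >= 0 else right
        return sum(x * y for x, y in zip(a, b)) / min(len(a), len(b))
    return max(range(-max_lag, max_lag + 1), key=corr)

# A source 30 degrees off-center gives an ITD of ~0.3 ms:
itd = EAR_DISTANCE * math.sin(math.radians(30)) / SPEED_OF_SOUND
true_shift = round(itd * FS)     # ~31 samples
left, right = ear_signals(true_shift)
print(estimate_shift(left, right) == true_shift)  # True
```

The microsecond-scale offsets involved are far finer than a single neuron's
firing period, which is what makes this timing behavior feel analog.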

------
dtnewman
I imagine a group of dogs sitting around and asking "How are we _so_ good at
thinking about fun ways to play with squeaky toys?".

The truth is that our ability to reason about ourselves is limited by our
ability to reason. Perhaps there are aliens out there who would laugh at our
cognitive abilities, theirs being so much better than ours.

~~~
jjoonathan
Less complicated systems successfully reason about more complicated systems
all the time. Ditto for self-reasoning. See: bootloaders, update systems, and
package managers.

In order to prove that some kind of meta-cognition is inherently beyond our
grasp, you don't just have to prove that the system we are attempting to
reason about is more complex than ourselves, you also have to prove that the
problem isn't meaningfully reducible. Otherwise we can and will eventually
figure out the mental tools we need to tackle the problem, and tackle it.

The same applies to brute physical strength. Humans have no problem building
machines vastly stronger, tougher, larger, more precise etc than ourselves
even though narrow-minded reasoning might lead you to believe that this was
impossible ("a tool can only cut something less hard/strong than itself," "a
ruler can only measure less precisely than itself" etc).

~~~
ars
> "a ruler can only measure less precisely than itself"

That's actually super interesting. How do you bootstrap (as it were) a ruler?

Like, assume you don't have any machine that is itself created by using a
ruler (so no screws or gears, except hand cut ones).

Obviously it's possible, since we did it. But how do you do it?

~~~
jjoonathan
Symmetry.

To make a very flat surface plate, you can grind three less-flat surface
plates against each other (three because if you only have two, the common
surface guaranteed by symmetry can still have curvature). To make a very
precise cylinder, you can grind a less-precise cylinder in a "V" formed by two
flat surfaces. To make a very regular lead screw, you can grind a less regular
lead screw with a less regular reversible nut. Now you can construct a
micrometer, and from there your mill/lathe and you're off to the races :)

That's how you bootstrap precision, but taking precision from a basic form and
putting it into a complex form is a whole other art. These days we use
numerical techniques, but historically geometric construction would have been
the ticket. For instance, take a string, use a "standard twig" or something to
mark 3+4+5 sections of equal length, cut & tie it into a loop, tension it into
a triangle with sides of length 3, 4, and 5 using the marks, and now you've
got a right angle.
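The rope trick works because of the converse of the Pythagorean theorem: if
the squares of two sides sum to the square of the third, the angle opposite
the long side is exactly 90 degrees. A quick check via the law of cosines:

```python
# Verify the 3-4-5 rope trick: the angle opposite the longest side
# of a 3-4-5 triangle is exactly 90 degrees.
import math

def angle_opposite_c(a, b, c):
    """Angle (degrees) opposite side c, via the law of cosines."""
    return math.degrees(math.acos((a*a + b*b - c*c) / (2*a*b)))

print(angle_opposite_c(3, 4, 5))   # 90.0
# The unit doesn't matter - any "standard twig" works:
print(angle_opposite_c(6, 8, 10))  # 90.0
```

Since only ratios matter, the "standard twig" needs no relation to any
official unit of length - which is exactly why this bootstraps precision
from nothing.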

Modern metrology looks a bit different because time symmetry started beating
spatial symmetry. Badly. Absurdly badly. Like "you get twice the digits for
the same price" badly. You get 6 digits for pennies in a quartz oscillator and
I know the metrologists have chased their clock stability out to at least 19
digits, probably more by now. Also, you can just transmit the reference over
the air to globally coordinate accuracy for dirt cheap, you don't even have to
ship blocks of platinum-iridium in inert atmosphere. Modern metrology is
basically the art of "rebasing" other types of measurement onto measurements
of time because time measurements are so ridiculously great.

------
est31
One kind of efficiency which hasn't been talked about is the energy lost to
things like state switching and holding the current state. I think that
brains are built on much more efficient primitives than the silicon
transistors computer chips use, and thus can perform far more computations
for far less energy than a desktop CPU.

Another difference between CPUs and brains is that brains are much less
general purpose. CPUs do run-time interpretation of instructions, while
brains process data in a more straightforward way, like GPUs do. Many
problems can be implemented on GPUs and will run much faster there. I'd argue
that brains excel at such tasks while being worse at tasks that require lots
of state to be kept around, as well as conditional jumps - like computing a
hash function or compiling a program. CPUs excel at those tasks.
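The contrast above can be sketched in a few lines: an elementwise map has no
dependencies between iterations and so parallelizes trivially (GPU-style),
while a hash-like loop carries state across every step and branches on the
data (CPU territory). The hash below is made up purely for illustration, not
a real hash function:

```python
# GPU-friendly: each output depends only on one input,
# so all elements could be processed in parallel.
def brighten(pixels):
    return [min(p + 10, 255) for p in pixels]

# CPU-friendly: every step depends on the previous state and
# takes a data-dependent branch, so it is inherently sequential.
# (Toy hash for illustration only - do not use for anything real.)
def toy_hash(data):
    state = 0x12345678
    for byte in data:
        state = (state * 31 + byte) & 0xFFFFFFFF
        if state & 1:
            state ^= 0x5BD1E995
    return state

print(brighten([0, 100, 250]))   # [10, 110, 255]
print(hex(toy_hash(b"hello")))
```

In `brighten`, the loop order is irrelevant; in `toy_hash`, reordering or
splitting the loop changes the answer, which is what forces serial execution.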

~~~
tegeek
Comparing the human brain with a CPU is a misconception. In the past, when we
didn't have digital computers, we used to compare the brain with other
machines; now it's a CPU. The brain, from a primitive neuron up to its higher
levels, is not comparable to any machine at all, including the CPU.

~~~
The_rationalist
_Comparing Human Brain with a CPU is misconception._ No, it is not. Yes,
architecturally they are very different, and CPUs are arguably more
programmable/general and less efficient.

What matters is whether CPUs are theoretically able to achieve everything a
brain can do (and more). And indeed CPUs, as Turing-complete, programmable
machines, are a strict superset of what brains can do. The gap between which
tasks, and at what accuracy, a brain achieves vs a CPU is shrinking each
year, as you can see on the paperswithcode.com leaderboards. The difficulty
is in software; the hardware, through clustering, arguably has orders of
magnitude more compute than a brain does.

There are four big missing pieces to match human brain performance:

1) Matching its pattern recognition abilities. I believe that current
statistical learning techniques of SOTA neural networks actually outperform
humans at learning from continuous data, but humans outperform current
software by far at zero/few-shot learning on sparse/discrete data (where
gradient descent is not applicable). I believe humans have this performance
edge because of 2), 3) and 4):

2) Humans can encode and decode meaning with great accuracy in high-level,
descriptively complete declarative languages: natural languages. These are in
many ways far superior to current GQL/Datalog/SQL database languages at
encoding and retrieving meaning (that is, an isomorphic description of a
denoted thing). The field of semantic parsing (plus question answering over
the parsed knowledge) is the key to general language understanding and
crucially lacks funding. Once machines are able to understand language and
retrieve all the knowledge of, say, Wikipedia, they will be able to transcend
human performance on many intelligence/erudition tasks.

3) Humans seem to be able to do meaningful runtime code generation.

That is, we can develop new solutions to new problems on demand, such as
[https://www.kaggle.com/c/abstraction-and-reasoning-challenge](https://www.kaggle.com/c/abstraction-and-reasoning-challenge).
The field of specification and implementation generation is also underfunded.

4) The observation that 3) is probably a necessary key to unlocking 2), and
that both 2) and 3) are needed to achieve the communication/feedback loop
between high-level semantic reasoning and statistical operations.

As we can see, humanity overfocuses funding on 1), despite it being the most
solved of all the foundations necessary to achieve AGI - and hence, as a side
effect, to empirically prove that CPUs superset brains.

~~~
coreyp_1
"And indeed CPUs as turing complete, programmable machine are a strict
superset of what brains can do."

This is a fundamental assertion that I do not believe you can make.

The brain cannot simulate a Turing machine. It does not have infinite memory,
which is a requirement for a Turing machine. It can, however, simulate a
linear bounded automaton.

It is also not immediately obvious that a Turing machine can simulate a
brain. The primary difficulty that I do not yet see a way around is the fact
that a Turing machine, which has a finite state machine as its control unit,
is bound by the finiteness of those states (finiteness of representation, not
of number). The brain has no such constraint. It is analog, and therefore
infinite in state representation.

In my opinion, this is more akin to the P versus NP problem: we know what
would need to be equivalent in order to say that P equals NP, but no one has
proved or disproved it yet. That's how I feel about the statement about
Turing machines and the brain. I do not believe we can be dogmatic on that
aspect yet, either way. We may have opinions, just as we may have opinions
about P vs NP, but we must also be careful about distinguishing what is
provable from what is opinion, and that is all I'm trying to do.

Of course, I am willing and very interested to gain more insight in this area,
so discussion is welcome!

~~~
shawnz
> The brain cannot simulate a turing machine. It does not have infinite
> memory, which is a requirement for a turing machine.

In practice we call modern computers Turing-complete even though they don't
have infinite memory. The brain can simulate such a machine.

> The brain has no such constraint. It is analog, and therefore infinite in
> State representation.

If this mattered, then it would mean analog computers are more powerful than
digital computers, and therefore that the Church-Turing thesis is wrong.

~~~
deepnotderp
Isn't the recent Google quantum "supremacy" experiment evidence against the
extended Church-Turing thesis?

~~~
anchpop
No, quantum computers as we understand them can be simulated by a Turing
machine.

~~~
deepnotderp
The _extended_ Church-Turing thesis which I specifically referred to concerns
efficient simulation, not just whether it can be simulated.

------
tehsauce
Why is the human brain so inefficient? It takes years just for it to compute
the SHA-256 of this media file.
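For scale, this is one task where silicon wins by an absurd margin: a few
lines of Python with the standard hashlib do it in seconds (the file name is
a placeholder):

```python
# SHA-256 of a file, hashed in chunks with Python's standard hashlib.
# A modern CPU hashes hundreds of megabytes per second, i.e. a whole
# feature film in a few seconds.
import hashlib

def sha256_of_file(path, chunk_size=1 << 20):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            h.update(chunk)
    return h.hexdigest()

# Well-known digest of the empty input, as a sanity check:
print(hashlib.sha256(b"").hexdigest())
# e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
```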

------
headalgorithm
See discussion from 2018:
[https://news.ycombinator.com/item?id=16895124](https://news.ycombinator.com/item?id=16895124)

------
gwern
I was wondering why this seemed so outdated and ignorant for something
published in 2018 (only 10b transistors? 'computers are serial', really?), but
I see that it's from a 2015 textbook, using citations for computing hardware
published in 2008, and presumably referencing hardware from 2007 or earlier...

------
HarHarVeryFunny
"At a global level, the architectures of the brain and the computer resemble
each other, consisting of largely separate circuits for input, output, central
processing, and memory."

This is fundamentally wrong, even at the 10 mile high summary level.

In our brain processing and memory are not in the least bit separate, and
memory distinct from processing doesn't really exist.

If you really had to make a computer analogy of how the brain works, it's
more like self-modifying code, where the only memory of the data flowing
through it is the changes made to the code as a result of that prior flow.

------
Laakeri
Leslie Valiant has done some interesting work on quantifying the efficiency of
the brain from the viewpoint of computer science, see e.g.
[https://www.youtube.com/watch?v=X9hRRh76QEA](https://www.youtube.com/watch?v=X9hRRh76QEA)
and the book Circuits of the Mind.

------
jonnypotty
The world record tennis serve is 144 miles an hour, and a human can't really
move across a court and return a ball moving at this speed. If they're lucky
they can reach it and react in time to hit it. I'm a bit confused by an
article that claims tennis players can react to and return serves up to 160
miles an hour. I think the evidence suggests that returning balls anywhere
near this fast depends on analysing factors before the ball starts moving:
the other player's body position, racket position, etc. Players have an
intuition about where the ball is going to go without having to look at and
analyse the flight of the ball.

Just did some very basic checking. Tennis court: ~23.8 m. 160 mph ≈ 72 m/s.
The ball takes approx. 0.33 s to travel the length of the court. Human
reaction time to a visual stimulus: ~0.25 s. So the idea is that they move
and hit the ball in the remaining ~0.08 s? Hmmmmm.
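Redone in consistent units, the back-of-the-envelope numbers come out in
tenths of a second (court length and reaction time are standard figures; the
160 mph is the article's claim):

```python
# Back-of-envelope timing for returning a 160 mph serve.
COURT_LENGTH_M = 23.77   # full tennis court length
MPH_TO_MS = 0.44704      # miles per hour -> meters per second

serve_speed = 160 * MPH_TO_MS        # ~71.5 m/s
travel_time = COURT_LENGTH_M / serve_speed
reaction_time = 0.25                 # typical visual reaction, seconds

print(round(travel_time, 2))                  # 0.33 s in the air
print(round(travel_time - reaction_time, 2))  # 0.08 s left to swing
```

Roughly 80 milliseconds to position and swing is indeed implausible for pure
reaction, which supports the anticipation argument above.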

~~~
Implicated
'Players have an intuition about where the ball is going to go without having
to look at and analyse the flight of the ball.'

This is pretty easily observable with baseball players as well. After playing
thousands of games while standing in the same (relative) place on the field,
I/they can anticipate where the ball is going to go based on a variety of
variables, in real time... instantly.

~~~
ajuc
There was a festival of jugglers in my city and they taught me to juggle in
like 15 minutes. I was amazed it's so easy (the basic 3-ball juggling, and
just for a minute or two; the more difficult juggling is HARD and I had to
train later to be able to keep juggling indefinitely).

There is a very easy trick - you look forward into the distance, keeping the
balls in peripheral vision, and there are 2 automatic reactions you have to
develop:

1. when the ball going up is at the top of its curve - throw another ball up

2. when a falling ball goes out of your peripheral vision - do the "oh shit
something's falling, let's catch it" routine with the hand that has fewer
balls in it.

Hands learn very quickly how to move to catch the balls that leave the
peripheral vision "by themselves", based on the trajectory you've seen.

It's actually harder to juggle when you look at the balls directly, and it's
impossible when you think about it and try to do the moves consciously because
you're too slow.

It was mindblowing to me that it's easier to catch a ball when you don't look
at it.

------
stared
I wouldn't say that the human brain is that efficient (per volume). Compare
and contrast with the brains of rats or Corvidae:
[https://www.youtube.com/watch?v=ZerUbHmuY04](https://www.youtube.com/watch?v=ZerUbHmuY04).

~~~
jcims
It's not even a good example. Humans are about the least physically agile
vertebrates on the planet.

Think of a fruit fly. It can walk, fly, forage for food, mate, etc. The
entire critter has a mass of ~0.2 mg and its brain has ~135k neurons. Making
the horrible assumption of linear power scaling, that's one microwatt.
~~~
rantwasp
but can it do math? can it paint? drink wine and muse on its own brain
efficiency?

~~~
stared
Fruit flies can drink wine.

Doing maths - well, it is a common trope that we extrapolate the skills of a
fraction of humans to the entire population. For an _average_ human, 1/3 +
1/2 can be problematic.

Abstract counting up to 5 or so - well, many birds can do that, including
pigeons.

~~~
rantwasp
the wine part was a joke. especially because fruit flies definitely appear to
be attracted to fruit, wine, etc.

i believe you are underestimating how capable humans really are. all of us
can learn to do math, and i'm talking serious math, not basic math.

~~~
jcims
My point from above is that 'humans playing tennis' isn't a great benchmark
for the efficiency of our brains.

------
mrwnmonm
The brain is weird. You can figure out how to split an atom, then forget your
keys inside your car.

------
LordHeini
Is it though?

I think having a good metric is really hard.

For example, I can have a neural net running on my smartphone doing
recognition tasks.

A task the brain is typically good at due to its neural net structure, while
the computer basically has to simulate the net.

But still, my smartphone can mark all the faces in a crowd multiple times
over in less time than it takes me to recognize even a single person.

And that with a camera far beyond the capabilities of the human eye.

Modern smartphone processors draw around 1 or 2 watts max. So is my phone
more efficient at doing this?

One could argue that my brain does other stuff at the same time, like
controlling my heartbeat and whatnot, but my phone has to keep the wifi,
clock and so on running too.

The truly impressive part is the ability of the brain to do completely
generic problem solving for basically everything, while running on 10 watts.
With the added ability to learn a few activities to a really high level.

It is not efficient at doing a singular thing; it is efficient at doing
everything at once.

~~~
JoeAltmaier
Yes, but for integrating information your brain is marvelous. Somebody in the
crowd laughs or moves a certain way or you catch a sniff - and BAM, you've
found your person.

Any automated single-skill system might be more efficient, but of course it
becomes useless outside its parameters. Put a hat on those people in the crowd
and your phone may be totally defeated.

------
sddfd
I feel uncomfortable with the ubiquitous, silent assumption that what is
marketed as AI is a computer implementation of a brain.

I see how the term neural network reinforces this belief, but we (especially
the researchers among us) should allow for the possibility that we are
missing something.

~~~
papito
Neural networks also have no ability to create new information based on their
own mistakes. What is a mistake? When does something look "off" but still very
interesting?

For example, you can feed a neural net all the recipes of burgers to create a
perfect burger. Great. But how does the same net _invent_ the burger?

The burger, like many foods or accidental art, was invented as a result of
scarcity, circumstance, experimentation, or just fortunate error. That sort of
imperfection is very hard to achieve with AI, because it is designed to be
either perfect or fail.

~~~
gambiting
>>For example, you can feed a neural net all the recipes of burgers to create
a perfect burger. Great. But how does the same net invent the burger?

Wait....but it just....did? It took the information about all possible burger
recipes and invented a new one out of them. Like, a human could only invent a
new burger if they knew something about burgers in the first place - at the
very least that it's a bun with some filling in between - otherwise you'd
have no context to invent anything.

~~~
jayjader
Not OP, but I think they're not talking about inventing a _new_ burger, but
inventing _the_ burger, as in the first one ever.

As in, the neural net in this example is able to improvise a new burger recipe
solely because it was given existing recipes to burgers as input; it did not
come up with the notion of a burger and then produce a recipe that outputs
something fulfilling that notion when followed.

Personally, I would argue that this distinction is not as clear-cut as the
tone of the original comment seems to suggest. Humans didn't invent the burger
from nothing either. We've been grilling meat and making bread for millennia,
and sandwiches have been a thing for over a century.

A 'burger' is just another iteration of our biological neural nets' attempts
to make food from ingredients already present in our physical reality. Given
that we move through time in a single direction, any food we make is in turn
added to our list of ingredients for making food "the next time". One could
argue it was only a matter of time, once meat could be ground into patties
and grains turned into bread, before burgers started being made - given the
relative benefits humans gain from consuming both.

This comes back to what others have expressed elsewhere in this thread: the
most important distinctions probably aren't software vs hardware, or organic
life vs silicon processors, but the environment and the capacity to interact
with said environment. Some sense of an "innate tendency to experiment"
(i.e. curiosity) is probably either equal in importance or a direct
runner-up.

~~~
papito
The burger was invented because a hungry traveler walked into a restaurant in
Connecticut that was closing, and the owner had nothing but some beef and
bread left. So he improvised - cooked the beef patty and squeezed it between
two bread slices.

To this day they serve their burgers between two bread slices - not buns.

If you want to look it up, it's called LOUIS’ LUNCH.

AI my ass :D

------
RivieraKid
I don't know.

------
plutonorm
This article contains inaccuracies and says almost nothing novel for your
average Hacker News reader.

~~~
MaxBarraclough
> Please don't post shallow dismissals, especially of other people's work. A
> good critical comment teaches us something.

[https://news.ycombinator.com/newsguidelines.html](https://news.ycombinator.com/newsguidelines.html)

What are the inaccuracies?

~~~
plutonorm
The same accusation could be leveled at the original article.

~~~
MaxBarraclough
Please make a specific and substantive point. Worthwhile discussions do not
follow from vague and shallow dismissals.

