
The brain exceeds the most powerful computers in efficiency - yters
https://mindmatters.ai/2019/02/the-brain-exceeds-the-most-powerful-computers-in-efficiency/
======
ben_w
My disagreement is something this article half-acknowledges:

> The brain has the potential for a petaflop per second of processing power.
> However, the measured performance of conscious processing is only about 50
> bits per second.

Why assume that either training or playing Go involves only conscious decision
making? Anything that we call “common sense”, “gut feeling”, “aesthetically
pleasing” etc. is something we can’t (or cannot _easily_ ) generate a
conscious rationale for.

Of course, that doesn’t mean humans don’t beat computers at anything; last I
checked, humans can make acceptable-quality inferences from far fewer examples
than an A.I.

~~~
gasg0d
I once played in two poker tournaments on back to back weekends.

Tournament 1 on adderall. Tournament 2 without adderall.

Tournament 1 my consciousness and subconscious seemed to be working in tandem.
I knew what action to take, and why I was taking it.

Tournament 2 my subconscious seemed to play a bigger role being off the
adderall. I was making the right decisions for the most part, same as the week
before on adderall, but I was not consciously sure of my reasoning for most
decisions until seconds, minutes, or even hours later, once I had reflected on
my decision making.

A little food for thought

~~~
skohan
I wonder if that's part of what's going on with autism-spectrum-disorders:
i.e. an autistic mind doesn't set the threshold of what's worthy of conscious
thought at as high a level as neurotypical people, so they get more caught-up
in the low-level details.

------
js8
I think there are two important factors to this.

One is that the operating frequency of the human brain is much lower. That
results in much smaller energy consumption.

The other is that current neural networks are probably very inefficient
algorithms. They are fundamentally based on linear algebra but there is some
evidence that the non-linearity is the key.

My feeling is also that the brain extensively uses probabilistic algorithms
with hash encoding, conceptually similar to Bloom filters, count-min sketches,
etc.
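
To make the analogy concrete, here is a minimal, illustrative Bloom filter in Python (the class name and parameters are made up for the example; this is a sketch of the data structure named above, not a claim about how the brain implements anything):

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: probabilistic set membership with no
    false negatives and a small, tunable false-positive rate."""

    def __init__(self, num_bits=1024, num_hashes=3):
        self.num_bits = num_bits
        self.num_hashes = num_hashes
        self.bits = [False] * num_bits

    def _positions(self, item):
        # Derive k bit positions from salted SHA-256 digests.
        for salt in range(self.num_hashes):
            digest = hashlib.sha256(f"{salt}:{item}".encode()).digest()
            yield int.from_bytes(digest, "big") % self.num_bits

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos] = True

    def __contains__(self, item):
        # True means "probably present"; False means "definitely absent".
        return all(self.bits[pos] for pos in self._positions(item))

bf = BloomFilter()
bf.add("go-pattern")
assert "go-pattern" in bf  # members always test positive
```

A count-min sketch works the same way but keeps small counters instead of bits, giving approximate frequency counts in fixed memory.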

~~~
marcyb5st
My theory is that, instead, the brain is a super-duper complex neural network
with some differences in how the weights get updated :) Let me explain:

While it's true that our brain is much slower than a modern CPU in terms of
ops/s, every neuron is connected to up to 10,000 other neurons[0]. And we have
something around 100 billion neurons in our brains. This means that the number
of synapses is between 100 and 1000 trillion.
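
The back-of-the-envelope count can be checked directly, using the comment's own figures of ~100 billion neurons and up to 10,000 connections each:

```python
neurons = 100e9           # ~100 billion neurons
max_connections = 10_000  # up to ~10,000 synapses per neuron

# Upper bound on synapse count: 1e11 * 1e4 = 1e15, i.e. 1000 trillion,
# matching the top of the "100 to 1000 trillion" range above.
max_synapses = neurons * max_connections
print(f"{max_synapses:.0e}")
```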

If we assume that every synapse can be considered like a "weight" in a neural
network, we have a model that is much much bigger than anything developed to
date. Moreover, the topology of said neural network is not a simple stack of
layers, but something much more complex, like a very intricate residual
network.

Finally, the brain is not using backpropagation to update the mentioned
weights/synapses' strength, but instead is based on how often said synapses
fire.

[0]: [http://www.human-memory.net/brain_neurons.html](http://www.human-memory.net/brain_neurons.html)

~~~
jcranmer
> My theory is that, instead, the brain is a super-duper complex neural
> network with some differences about how the update of the weights happen

Neural networks were not meant to be accurate analogues to the actual
operation of the neurons in the brain. They were known to be poor matches for
the actual biology, _even back in 1959_. The concept of the multilayer neural
network was actually inspired by our knowledge of the visual cortex, rather
than the lower-level fundamentals of the brain. The brain is most certainly
not a neural network, as the term is understood in machine learning.

~~~
skohan
> The brain is most certainly not a neural network, as the term is understood
> in machine learning.

The brain is most certainly a neural network. It is literally a network of
neurons. But you're right insofar that artificial neural networks are not very
similar to the brain.

~~~
tptacek
That is exactly what the parent comment said.

~~~
skohan
I was objecting to the statement: "The brain is most certainly not a neural
network" - as if to say that the types of artificial neural networks employed
in machine learning are somehow the central definition of what a neural
network is. I would be happy with "the brain is not an ANN", or "The brain is
very different from the types of neural networks used in Machine Learning",
but to say the brain is anything other than a neural network is just plain
wrong.

~~~
tptacek
That's not what they said. They said, "The brain is most certainly not a
neural network, _as the term is understood in machine learning_."

~~~
skohan
I understand what they said. That statement reads: "Machine learning has a
concept of a neural network, and the brain is not that". A correct statement
would be: "The field of machine learning makes use of a computing system
called an _artificial neural network_, which is loosely based on the
structure and function of a _biological neural network_."

The statement in question would be like saying: "Italians most certainly do
not make real coffee, as the term is understood at Starbucks." There's a
technically correct reading of that sentence, but the implication is offensive
to the subject matter.

------
roenxi
The core of this essay, that the comparison is apples and oranges, is
indisputable. A computer puts in more effort and gets a better result. However,
when choosing between more efficient or better results, I'd prefer better
results. The human brain's bias towards efficiency is an evolutionary handicap
on intelligence. I can provide my body with as much energy as it wants to
consume - I'd rather be able to burn energy learning things than have to put up
with a brain that keeps trying to take shortcuts at every turn to conserve
energy, and I would love to be able to think deterministically! It is a stretch
to say humans are out-thinking the machine with that frame; machine
intelligence might simply have firmer foundations.

I do have challenges to the method of calculating how much power is put into
the human training process. 10^8 might be too many orders of magnitude to
breach, but humans:

1) Use data picked up visually to provide basic concepts used in playing Go
(like "connected", "lines", "territory", "me", "opponent"). Many of these
concepts are potentially developed from visual data, and humans process a lot
more than 50 bits of visual data in a second. This observation is very hard to
quantify with respect to how much visual data can be employed to learn about
Go - how much of an advantage does the concept of territory give humans in
terms of learning? It is probably huge.

 _EDIT_ Thinking about it a bit more, it is very plausible given AlphaGo's
endgame play that it never developed a concept of 'territory' as we would
understand it. Given that humans judge the state of the game by estimating
territory for both players then assuming the winner is the one with more
territory, that is a huge deal. It might explain why the learning seems so
inefficient vs a human. A human would never make AlphaGo's endgame moves
because they are bad by the territory heuristic.

2) Learn Go in a very communal environment. Game records date back a few
centuries and all the patterns that are detected get passed on through the
generations of professional players. The practical amount of cached processing
power available to humans is higher than first inspection might suggest.
Humans don't learn mainly through self-play; it is closer to supervised
learning. A 1900s player would simply lose to modern technique because of this
accretion.

3) High level Go players are going to be strongly biased to people who 'got
lucky' in how their brains map concepts. An average human probably isn't as
good at learning Go as a peak professional.

~~~
thaumasiotes
> However when choosing between more efficient or better results, I'd prefer
> better results.

No you wouldn't. Stated without qualifications, this viewpoint is staggeringly
stupid. It's like placing a market order to sell AAPL and then being upset
that your shares sold for $20 each. That's what you said you wanted. But it
wasn't what you actually wanted.

Note also that your stated preference makes complexity theory irrelevant, as a
brute force approach to solving any problem already achieves the theoretical
optimum. Do you really want to claim that that's true?

> I can provide my body with as much energy as it wants to consume

No, you can't, except to the extent that you're assuming your body's existing
limits on the amount of energy it will consume. That assumption means you
accept that your body knows better than you what tradeoffs to make.

~~~
roenxi
> No you wouldn't.

That is an unusually juvenile response for HN. You don't get to determine what
other people want, even if they are stupid.

> It's like placing a market order to sell AAPL and then being upset that your
> shares sold for $20 each.

It is nothing alike; your analogy is incoherent. AlphaGo is claimed to be
hopelessly inefficient vs a human yet has an amazing track record at winning
Go, and whatever your analogy is meant to mean, there is no comparison between
losing money on shares and effortlessly competing with every expert in a
field.

I didn't say give up efficiency and see what happens, I said when choosing
between better results and better efficiency, I'd prefer results.

> Note also that your stated preference makes complexity theory irrelevant, as
> a brute force approach to solving any problem already achieves the
> theoretical optimum. Do you really want to claim that that's true?

It doesn't make complexity theory irrelevant, at some point problems become so
complex you can't solve them with brute force because of physical limits. Up
to that point, if I have an optimal solution and you don't, then yes I think
I'm doing better than you. Efficiency is great, but not if you don't get
results.

> No, you can't, except to the extent that you're assuming your body's
> existing limits on the amount of energy it will consume

Ok, assume that :P. Duh. A human can apparently exert about 80 watts sustained
[0] and a Gen 1 Google TPU is 40 watts [1]. Google's machine might use 4 of the
things, but AlphaGo Zero is pretty strong, and we know from LeelaZero that you
don't need a TPU to be able to crush a human professional. We aren't
approaching the limits of physics or biology here. Even if biology couldn't
handle it for some reason, the point is I'd rather use a TPU for thinking
about Go than not, because for all that people wring their hands about what it
means to think, clearly computers are better at making competitive decisions.
If you'd prefer to lose efficiently, I can't stop you, but you aren't going to
be winning any prizes if I use a TPU and you don't.

> That assumption means you accept that your body knows better than you what
> tradeoffs to make.

This is wrong. The next time you use a bicycle, train or car [2] you can
reflect that you managed to make yourself look stupid on the internet. They
make very different tradeoffs from a human and get much better results on
efficiency and speed, depending on what you prefer. If you were restricted to
the tradeoffs your body made in transport, modern society would collapse and
you'd probably starve to death or be eaten by a mob of city dwellers looking
for food. Intelligence is going down the same path, where the tradeoffs that
the body made are just not good enough for the context we operate in.

[0]
[https://en.wikipedia.org/wiki/Human_power](https://en.wikipedia.org/wiki/Human_power)

[1] [https://www.extremetech.com/extreme/269008-google-announces-...](https://www.extremetech.com/extreme/269008-google-announces-8x-faster-tpu-3-0-for-ai-machine-learning)

[2]
[https://en.wikipedia.org/wiki/Energy_efficiency_in_transport](https://en.wikipedia.org/wiki/Energy_efficiency_in_transport)

~~~
thaumasiotes
> You don't get to determine what other people want, even if they are stupid.

Sure, but in this case it's very easy to state with 100% confidence that you
said you wanted something which you don't actually want. That's stupid, but
it's a different kind of stupidity than wanting something you won't like.
Compare how I described your problem:

>> That's what you said you wanted. But it wasn't what you actually wanted.

------
feanaro
The article is comparing the number of raw machine operations (which we can
objectively quantify with a great deal of certainty) with a figure of 50
operations per second, which is the supposed number of operations a human
(mind? brain? consciousness?) can execute in a second. Furthermore, it seems
to conflate the bit rate (bits per second) with operations per second.

It seems wrong to suggest the brain itself, examined as a biological machine,
executes only 50 discrete operations per second. In fact, judging by the
footnote, it doesn't seem to be suggesting that: it's just talking about the
conscious information processing rate. But consciousness is a can of worms
itself. It's not at all clear that information has to travel through
consciousness in order for the mind to compute or decide something. In fact,
the opposite seems clear from many experiments, such as
[https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3052770/](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3052770/)

I'm not sure that the article's comparison is very illuminating.

~~~
darkerside
I don't disagree with the conclusion of the study you linked, but I don't
think the study itself deserves the conclusion that it reaches. Electrocuting
someone, albeit imperceptibly from a conscious standpoint, means that we are
"predicting volition" because they react?

------
bloak
The headline makes an amazingly weak claim.

Since people still seem to be discussing what features of nerve impulses are
significant (their frequency? precise relative timing?) it would appear that a
reasonable estimate of the information processing capacity of a single neuron
is currently lacking, let alone an estimate for an entire brain taking account
of redundancy. Does the comparison even make sense?

------
yetihehe
It's like saying "A motorboat exceeds the most powerful cars in swimming".
Yep, there are amphibious cars, but most motorboats are still better at
swimming.

Now, try to calculate 325*528. Did you do it in under a millisecond? If not,
your pocket calculator is more efficient than your brain.
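
For the curious, a quick sanity check of the quip's arithmetic (the timing loop is only an illustration of the scale involved):

```python
import timeit

# The quip's product, checked: 325 * 528 = 171600.
assert 325 * 528 == 171_600

# Even interpreted Python multiplies two ints in well under a
# millisecond; the variables in setup prevent constant folding.
per_call = timeit.timeit("a * b", setup="a, b = 325, 528",
                         number=100_000) / 100_000
print(f"{per_call * 1e9:.0f} ns per multiply")
```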

~~~
slx26
Yes. While I agree with the idea that brains are extremely efficient,
especially at certain tasks like image recognition or any kind of pattern
identification, I also thought about what you said, and I was like: when doing
mathematical operations, I'd be more efficient riding a bicycle to charge a
battery that powers a computer which then does the operations, than doing them
by myself with my brain. That's... really weird. And there are so many tasks
at which computers seem way more efficient than I could possibly be.

EDIT: I guess someone downvoted you for apparently conflating speed and
efficiency, but I still think the point stands.

~~~
trashtester
It doesn't matter if you look at speed (seconds/operation) or efficiency
(joules/operation). For most mathematical operations, a computer CPU will
outperform a human brain by many orders of magnitude.

This is both true and somewhat irrelevant. It is a bit like saying that a
CPU can run recursive, non-parallelizable algorithms faster and more
efficiently than a GPU, even if the GPU has more flops. The GPU is optimized
to run a few types of computation very quickly, and is very slow on other
tasks. This applies to the human brain as well, and to a much greater degree.

------
0xBA5ED
The brain is specialized hardware. I wouldn't expect a general purpose
computer to be as efficient at performing the same tasks.

~~~
willbw
It's actually not hardware at all. I say this not to be annoying but to make
the point that perhaps we shouldn't be computer-morphizing the human brain. (I
know that's not a word, but it was the best I could do!)

~~~
0xBA5ED
I agree with not "computer-morphizing" the brain, but I think hardware still
applies well enough. Out of curiosity, is there a specific reason you think it
doesn't apply? I mean, are you just trying to avoid the "duality of hardware
and software"?

------
axaxs
I'm -not- a hardware guy, so this may be a dumb question/thought, but here
goes - the human brain is pretty amazing, often thought of as operating at
'petaflops' or beyond, and I imagine pretty energy efficient as well, seeing
as our heads don't reach boiling temperatures. Has there been any work into
'rethinking' chips to work more like a human brain (I don't want to go as far
as to say 'organic chips'), or is it just not well enough understood? Or
unfeasible for some other reason?

~~~
slondr
The answer to that question is 50% "We don't know enough about the human
brain" and 50% "Yes."

There's been a lot of research into organic computing and how it could relate
to neural networks, but it's an insanely difficult task and will be decades
until anything viable is produced.

------
entity345
"CPU cycles" is not a very good metric, in my view.

However, the brain is indeed much more efficient when comparing energy
consumption, considering that it uses only about 20W of power.

~~~
trashtester
Actually, training AZ (5000 TPUs for 8 hours) requires 1.4e10 joules, assuming
that each TPU needs 100 watts. For comparison, the human brain training 4
hours per day, every day, for 14 years also uses 1.4e10 joules.

So the energy efficiency is quite similar.
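
A sketch of the arithmetic, taking the comment's 100 W/TPU as given; note that hitting the stated 1.4e10 J on the human side implies far more than the brain's ~20 W:

```python
HOURS = 3600            # seconds per hour
DAYS_14_YEARS = 365 * 14

# AlphaZero side: 5000 TPUs for 8 hours at an assumed 100 W each.
az_joules = 5000 * 100 * 8 * HOURS   # 1.44e10 J, i.e. ~14 GJ

# Human side: 4 h/day for 14 years. At the brain's ~20 W this comes to
# ~1.5e9 J; reaching 1.4e10 J would require roughly 190 W, closer to
# whole-body consumption than to brain-only consumption.
training_seconds = 4 * HOURS * DAYS_14_YEARS
brain_20w_joules = 20 * training_seconds
implied_watts = 1.4e10 / training_seconds

print(f"AZ:          {az_joules:.2e} J")
print(f"Brain @20 W: {brain_20w_joules:.2e} J")
print(f"Implied:     {implied_watts:.0f} W")
```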

~~~
entity345
That's comparing apples and oranges.

The brain works and learns 24/7. It needs 20W to function.

~~~
trashtester
So how do you see that changing my point, that the brain seems to need about
the same amount of energy as AZ does?

~~~
entity345
The point is that it obviously does not.

~~~
trashtester
I'm not sure what your claim is. Have you switched to the view that AZ is more
efficient (by a factor of 6, if you assume 24/7 consumption), or do you still
think that the brain is more efficient at learning Go?

Edit: In case this is the cause for your objection: in the calculation for the
energy needed for the human brain to master Go (where I ended up with
14 GJ = 1.4*10^10 J), I only counted the consumption during the average 4 hours
per day that one can expect someone to train. If you count all 24 hours, the
14 GJ budget would be spent in 2-3 years. (Obviously, nobody can master Go in
2-3 years, regardless of effort.)

------
rdruxn
Yeah, you can’t power a super computer with a sandwich

------
finnjohnsen2
TLDR: "it is comparing apples and oranges"

~~~
mattnewport
Worse than that, it's comparing apples and the color orange.

