The computational power of the human brain (frontiersin.org)
63 points by geox 9 months ago | 40 comments



> The brain has always been compared with a highly sophisticated computer.

This is not strictly true. In ancient times, Plato compared the mind to a chariot pulled by two horses, representing emotion and reason. This reflected the importance of horses for transportation at the time.

In the 1600s, Descartes saw the mind as akin to a mechanical clockwork machine, reflective of the clockwork and automatons built then.

In the late 1800s, Freud described the psyche in terms of hydraulics and steam engines - cutting-edge technology.

Each era attempts to explain the complexities of the human mind through the lens of its most advanced creations. But all these analogies fall short.


Iain McGilchrist puts the matter of our lopsided repertoire of metaphors for the brain well in The Master and His Emissary (apologies for the lengthy quote):

> This leads to a fundamental point about any attempt to understand the brain. It is a particularly acute case of the problems encountered in understanding anything. The nature of the attention one brings to bear on anything alters what one finds; what we aim to understand changes its nature with the context in which it lies; and we can only ever understand anything as a something.

> There is no way round these problems—if they are problems. To attempt to detach oneself entirely is just to bring a special kind of attention to bear which will have important consequences for what we find. Similarly we cannot see something without there being a context, even if the context appears to be that of ‘no context’, a thing ripped free of its moorings in the lived world. That is just a special, highly value-laden kind of context in itself, and it certainly alters what we find, too. Nor can we say that we do not see things as anything at all—that we just see them, full stop. There is always a model by which we are understanding, an exemplar with which we are comparing, what we see, and where it is not identified it usually means that we have tacitly adopted the model of the machine.

> <…>

> Such considerations apply to the attempt to understand anything at all. But when we come to look at what we refer to as brain functions, there is a problem of a wholly different order. We are not ‘just’ looking at things in the world—a lump of rock, or even a person—but the processes whereby the world itself, together with the rock or the person, might be brought into being for us at all, the very foundations of the fact of our experience, including any idea we might have about the nature of the world, and the nature of the brain, and even the idea that this is so. If it is true that attention changes the nature of what we find, how do we decide the most appropriate attention for that? One that tries to ignore the inwardness of experience? What possible context is there in which to place the foundations of experience of all contexts whatever? And what kind of a thing are we to see it ‘as’? The answer is far from obvious, but in the absence of an attempt to address the question we do not give no answer. We answer with the model we understand—the only kind of thing we can ever fully understand, for the simple reason that we made it: the machine.


Interesting quotes, considering that our latest and greatest achievements in the AI field are based on "attention is all you need" transformers. Yet it seems that in a way we're so close, yet so far from replicating full human attention in a machine. Maybe precisely because it can't be transformed into a purely objective/measurable/scientific category.


> > The brain has always been compared with a highly sophisticated computer.

> This is not strictly true.

It is just a small stylistic lapse. The author used "always" in the sense of "from time to time, ever since we have had digital computers". To suggest that he thinks people before the invention of the computer could have compared the brain to a computer seems quite absurd to me. And it's trivially true that they did not.


The difference is, there's no Church-Turing thesis for chariots.


True, but there was Ben-Hur's thesis that "a chariot is capable of carrying any person", which says as much about brains as the Church-Turing thesis does unless you beg the question.


Do you mean that it's begging the question to assume that the brain comes to its conclusions via a physically deterministic procedure that terminates in finite time?


> [...] to assume that the brain comes to its conclusions via a physically deterministic procedure [...]

I don't think determinism makes any difference here?

There are various flavours of non-deterministic Turing machines, and whether the brain is deterministic is more of a philosophical question than anything else.

(I would say it's convenient and proper to model the brain as using a lot of random noise, e.g. even if just from thermal noise. And that biological, cellular level is probably the 'right way' to think about the brain in most circumstances.

One could try to argue that the noise comes from deterministic chaos. And then someone else could argue that the underlying quantum processes are already non-deterministic. And yet another person can argue that quantum mechanics is completely deterministic, if you go by the many worlds interpretation and not the silly Copenhagen Interpretation of quantum mechanics. Etc)


That's not what the Church-Turing thesis is about. The Church-Turing thesis says that any "effective procedure" (i.e. algorithm) for performing computations on natural numbers is equivalent to a Turing machine. It is generally accepted but cannot be proven because "effective procedure" is not defined. Indeed it can be seen as providing a definition for the term "effective procedure".

To extend that to say anything about the physical world (i.e. some form of "Physical Church-Turing Thesis") is to make quite a different assertion and one that is not so "generally accepted". To make it say anything about brains you have to assert some form of equivalence between brains and "effective procedures" on the natural numbers, which is indeed to beg the question in this instance.

There's a different idea you may be confusing it with, which is the idea that since the known laws of physics (aside from state vector reduction in quantum mechanics) are in the form of differential equations that can be numerically integrated to arbitrary precision anything can be "simulated". However, this is not part of the Church-Turing thesis. (And personally I find the phrase "to arbitrary precision" here, and the exception for state vector reduction, quite a barrier to basing any assertions about the nature of the universe or minds on it.)


There are different levels of argument depending on the precision of the definition of the concepts in play, which is unhelpful in general but also unavoidable.

I think that the folk definition of computation is so wide and vague that almost any physical process can be characterized as a computation in that sense. So brains (being physical) must be (folk) computers. The problem is that this doesn't get us anywhere, because there are lots of equivalences in there; for practical purposes what we really need is a characterization of the brain as it operates, rather than of what that operation might be modeled as.

As Searle says, you can simulate the processes of digestion but that doesn't mean that you have an artificial stomach that can turn food to energy.


> This reflected the importance of horses

Only very marginally, next to the basic, direct image of "dominating the animal", consistent with the charioteer being the subtle part of the "chariot system".


You are very right, but I still see a technological asymptote. We say the mind is like a computer because the computer is based on logic and mathematics. The analogies may always be to the latest technology, but they have one thing in common: the abstraction of a perfect machine or system. We have little that is perfect, but our mistakes make us unique and essential.


I don’t think that holds though because logic is a high level function of the brain not something fundamental to its design.


Depends on whether you think mathematical reasoning is uncovering fundamental tenets of reality. I think it is, and the history of experimental physics in particular supports this too.


I think the modern comparison to an advanced computer is far, far more accurate, though. If we look at each of the metaphors used to describe the mind in the past, they got more sophisticated as time went on, so it stands to reason that the computer is our best comparison to date. In a sophomoric way, it makes me wonder if that's what this is all about: our advancement as a species is a culmination of us trying to build something that works as our minds do, something that can perhaps reflect our minds, since we really don't know of anything else that has achieved our (perceived?) level of consciousness.


This is going to sound really obvious, but you can only make analogies with things you know. Or better yet, something known and understood by the people around you.

I'm sure there are some subtleties in chariots being pulled by two horses that I'm not aware of, and so Plato's analogy doesn't work for me.

We won't be able to compare the brain to anything better than a computer until we either understand the brain better, or until we understand some other brain-analog better.


Clockwork actually was the best kind of computer they had in the 1600s (a clock divides time units, they had mechanical calculators, and see also the Antikythera mechanism), so that's a similar comparison.

If in the future the comparison is "quantum", then again, a quantum computer is also a kind of computer.


Insightful. And the next era will describe it in terms of quantum interactions.


> Insightful. And the next era will describe it in terms of quantum interactions.

Roger Penrose already talks about a theory of consciousness based on non-computable quantum effects in microtubules. http://www.consciousentities.com/penrose.htm


Keep in mind that this is probably (a) bollocks, and (b) doesn't preclude brains being (quantum) computers.

You can use a Turing machine to (inefficiently) simulate a quantum computer just fine.
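
To make the inefficiency concrete, here's a toy classical statevector simulation (my own illustrative sketch, not from the linked page): the program tracks all 2^n complex amplitudes explicitly, so memory and time blow up exponentially in the number of qubits.

```python
import math

# Toy classical simulation of a quantum register: track all 2^n complex
# amplitudes explicitly. Memory and time grow as 2^n, which is the sense
# in which the simulation is "inefficient".

def apply_hadamard(state, target):
    """Apply a Hadamard gate to qubit `target` of the state vector."""
    s = 1 / math.sqrt(2)
    new = [0j] * len(state)
    for i, amp in enumerate(state):
        bit = (i >> target) & 1
        flipped = i ^ (1 << target)
        if bit == 0:
            new[i] += s * amp        # |0> -> (|0> + |1>) / sqrt(2)
            new[flipped] += s * amp
        else:
            new[flipped] += s * amp  # |1> -> (|0> - |1>) / sqrt(2)
            new[i] -= s * amp
    return new

n = 2
state = [0j] * (2 ** n)
state[0] = 1 + 0j                    # start in |00>
for q in range(n):
    state = apply_hadamard(state, q)

probs = [abs(a) ** 2 for a in state]
print(probs)                         # uniform superposition: all 0.25
```

Fine for a couple of qubits; useless for a few hundred, which is exactly the point.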

The page you linked to also seems to think that Gödel's incompleteness theorem says that human brains can do things that computers can't. That conclusion isn't warranted:

Yes, there are specific statements that specific formal systems can neither prove nor disprove. And for every formal system you can find such statements.

However, there are also plenty of statements that humans can neither prove nor disprove. Humans also make plenty of mistakes. (You can also build a probabilistic formal system that's allowed to make mistakes.)


In terms of quantum computations. Most quantum interactions average out into classical mechanics. And, no, it's not guaranteed. We are already beyond vague analogies like a chariot pulled by two horses. In the next era we will probably know the equivalent (quantum) (noisy) computations for brain activity.


Yeah, "always" is always problematic when making gross comments about the world


that makes sense


Separately but related: decades ago, research was done into the "Bandwidth of Consciousness", that is: how much information we can consciously process. That research came up with the startlingly low number of about 40 bits per second _at_most_, suggesting that the huge amount of information our senses take in is massively compressed before we become 'conscious' of it.

References:

W R Garner and Harold W Lake "The Amount of Information in Absolute Judgements" - Psychological Review 58 (1951) - they attempted to measure people's ability to distinguish stimuli (such as light and sound) in bits. Result: 2.2 to 3.2 bits per second.

W E Hick "On the Rate of Gain of Information" - Quarterly Journal of Experimental Psychology 4 (1952) - this experiment measured how much information a person could pass on if they acted as a link in a communication channel. That is, faced with a series of flashing lights, subjects had to press the right keys. Result: 5.5 bits per second.

Henry Quastler "Studies of Human Channel Capacity" - Information Theory, Proceedings of the Third London Symposium (1956). Measured how many bits of information are expressed by a pianist while pressing keys on a piano. Result: 25 bits per second.

J R Pierce "Symbols, Signals and Noise" (Harper 1961) - used experiments involving letters and symbols. Result: 44 bits per second.

Mostly they are measuring bandwidth as being 'what can someone take in and then act on' and probably the 'acting on' bit is the bottleneck. I think this is an area of research that could do with being revisited, perhaps someone could devise a way of measuring bandwidth-in to consciousness vs bandwidth-out.


Funnily enough, large language models are similar. Gigabytes and gigabytes of numbers being processed, to input/output a few words per second.


I think using terms and concepts from classical Information Science when talking about brains is a non-starter. To speak HN lingo, it's like measuring a computer's performance while it is in the middle of compiling the Linux kernel and training a GPT model.

Sure, the throughput for these discrete tasks was low, but the brain is not discrete. It is a heavily optimized complex system that can be taught to predict complex vector interactions in an instant and be correct enough every time. It can spot tiny movements in a sea of leaves while searching for a green herb in a green forest and immediately switch to group preservation mode when it becomes clear the movement is a tiger.

> devise a way of measuring bandwidth-in to consciousness vs bandwidth-out.

This is a dead end; "consciousness" is just a tiny goal-setting and evaluation layer on top of a gigantic stack of "unconscious" processes. That stack is so powerful that we even use its mysterious, chaotic power to outsource formerly conscious processes after enough repetition.


> "consciousness" is just a tiny goal-setting and evaluation layer on top of a gigantic stack of "unconscious" processes

that was the point of what I wrote, for more details please re-read


Fascinating paper, though if you're looking for a quantified "computational power of the human brain" you won't find it. He concludes the brain isn't similar enough to a computer for a comparison to be made.


We will have to wait for an AI that can truly understand us.

A jest I fear may be taken too seriously by the reality we seem to be quickly heading into.

I would guess that despite all the disparate systems, the basic computations will be similar. Different parts of our generalized connectome, making different connections at different time frames.

I.e. time and multi-signal integration with decay, positive and negative acting signals, squashing or frequency limiting functions, with forward, lateral & feedback connections.
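
Those ingredients can be sketched as a toy "leaky" unit (my own illustration, with arbitrary parameters, not a biophysical model):

```python
import math

# Toy "leaky" unit: integrates signed inputs over time with exponential
# decay, then squashes the result. Illustrative only; the decay constant
# and inputs are arbitrary choices, not biological values.

def leaky_unit(inputs, decay=0.8):
    state = 0.0
    trace = []
    for x in inputs:               # x may be excitatory (+) or inhibitory (-)
        state = decay * state + x  # multi-signal integration with decay
        out = math.tanh(state)     # squashing / frequency-limiting function
        trace.append(out)
    return trace

# Two excitatory pulses, then a strong inhibitory one, then silence.
trace = leaky_unit([1.0, 1.0, -2.0, 0.0, 0.0])
print([round(v, 3) for v in trace])
```

The forward, lateral, and feedback connections would then just be how the outputs of many such units are wired back into each other's inputs.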


Not even we understand ourselves. I wonder if it's even possible for a machine to do it. And if it is possible and we manage to create this machine, there's still the problem of explaining us to ourselves.


I feel bad that this guy has to really strain the analogy here (at least in the introduction). It would probably be a lot easier to study how the brain works if we could abandon the idea that it is a computer. Marrying ourselves to the idea that the brain must be doing computation as we understand it is harming the field, imo.


Rodney Brooks has also expressed similar ideas, criticizing the computational metaphor in studies of the brain. However, I find his writings on this topic to be a bit vague.

https://www.nature.com/articles/482462a


If you know how sensory organs encode information as spike trains and the many ways neurons input, process and output signals, there's no way around the hypothesis that the brain does information processing. Denying that is just trying to sound clever by pattern-matching to the standard "it's all relative to our culture" criticism, and it's completely divorced from any actual evidence.


Except that for the most part we don't know "how sensory organs encode information as spike trains". This is the problem of "neural coding" and is very much at the stage of unproven hypotheses.

Meanwhile we have abundant evidence that continuous variables such as the precise timing of individual spikes have physiological effects.


That's just rejecting information processing by one descriptive system and pretending another isn't information processing. Analog computers are possible, just difficult to build and make reproducible in a useful way for mankind.

But then of course, any analog process can be simulated by a sufficiently high-fidelity digital one (cf. digital music encoding). So all you've really got is "it might take a considerably more powerful computer to fully reproduce the analog effects of human consciousness".
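
A quick sketch of what "to arbitrary precision" means here (my own toy example; the [-1, 1] range and bit depths are arbitrary choices): the worst-case quantization error halves with every extra bit.

```python
import math

# Quantize an "analog" signal at increasing bit depths. The worst-case
# error shrinks by half per added bit, which is the sense in which a
# digital process can approximate an analog one to arbitrary precision.

def quantize(x, bits):
    levels = 2 ** bits
    step = 2.0 / (levels - 1)  # signal range assumed to be [-1, 1]
    return round((x + 1.0) / step) * step - 1.0

signal = [math.sin(2 * math.pi * t / 100) for t in range(100)]
for bits in (4, 8, 16):
    err = max(abs(x - quantize(x, bits)) for x in signal)
    print(f"{bits:2d} bits: max error {err:.2e}")
```

Whether a finite-but-arbitrarily-small error is good enough for consciousness is, of course, exactly the point under dispute.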


This kind of question is meaningless, and I’m pretty sure it comes from the old view of neurons being somewhat akin to a 1-1 or many-1 digital switch (it isn’t).

In fact, we currently don’t even know the full extent of the types of neurons in the brain or the way they function and interact [1]

There are neurons that only fire when a certain percentage of their inputs are stimulated, some that have a temporal component to their firing and more.

In all the computer neural net research I read, I never saw an implementation that truly explored more than a few (like 3-4) types of neuronal input, and only one type of output.

[1] https://qbi.uq.edu.au/brain/brain-anatomy/types-neurons


I don't think this is a problem. Computer neural networks are closer to a single biological neuron than to a biological neural network. That provides tighter coupling. Long-distance generalized "connections" can materialize as weight chains, and recurrence can provide timing.


I think there's gonna be a new wave of ideas; I see more and more concepts related to distributed / massive parts related through their topologies / tissue-like structures.


Yes, Walter Pitts's proposal, which he disproved. But we know the prefrontal cortex is capable of simulating a Turing machine, or any machine for that matter.


Computers (ICs) are both analog and digital. Same goes for the brain.

Imo all the "analogies" are the author's imagination and will only distract him from understanding how the brain really works.



