The brain exceeds the most powerful computers in efficiency (mindmatters.ai)
71 points by yters 15 days ago | 69 comments



My disagreement is something this article half-acknowledges:

> The brain has the potential for a petaflop per second of processing power. However, the measured performance of conscious processing is only about 50 bits per second.

Why assume that either training or playing Go involves only conscious decision making? Anything that we call “common sense”, “gut feeling”, “aesthetically pleasing” etc. is something we can’t (or cannot easily) generate a conscious rationale for.

Of course, that doesn’t mean humans don’t beat computers at all; last I checked, humans can make acceptable quality inferences from far fewer examples than an A.I.


Exactly this. Conscious information processing constitutes only a tiny part of the processing within the brain. The real computational power of the brain is on the order of petaflops to exaflops 1).

On the other hand, AZ used a lot more than 1E18 calculations of training to beat human-level play, so that part is also wrong. In fact, the total processing power of the 5000 TPUs used is on the order of 1E17 operations per second.

Still, since 1E17 flops is comparable (within 1-2 orders of magnitude) to the human brain's raw capacity, and AZ could reach superhuman level at Go in less than 10 hours, AZ can be seen as more efficient than the human brain at Go (in terms of learning rate per calculation, not necessarily per watt).
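Back-of-the-envelope version of that, in Python (the 1E17 ops/s aggregate throughput and the under-10-hour training time are the figures assumed in this comment, not exact published numbers):

    aggregate_throughput_ops = 1e17   # ops/s across the ~5000 TPUs (assumed above)
    training_time_s = 10 * 3600       # under 10 hours, in seconds

    total_ops = aggregate_throughput_ops * training_time_s
    print(f"{total_ops:.1e}")         # ~3.6e+21 operations, i.e. well above 1E18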

This should also not be surprising. After all, Go (like chess) is a very abstract game, far removed from the tasks that the biological brain has evolved to do well.

For tasks that the brain does well, such as pattern recognition in the environments that humans live in, the human brain is still much more efficient. A human can learn to drive a car in tens of hours, while AIs still require millions of hours of training, and they still end up inferior.

So, in the end, the title is correct (and also common knowledge for anyone following this development), even if almost all of the content in the article is way off.

Was this a high school essay?

1) http://hplusmagazine.com/2009/04/07/brain-chip/


> A human can learn to drive a car in tens of hours, while AIs still require millions of hours of training, and they still end up inferior.

I think it's a bit unrealistic to compare these numbers. A human learning to drive has usually started with 16-18 years of learning how the world works before the 10 hours of transfer learning.


1 million hours ~= 114 years. So even counting those 16-18 years of prior learning, humans are still a factor of 10 better in terms of necessary training time.


The brain does however have the advantage of evolution optimising the hardware for specific tasks. That's been going on for a while.


Agreed. I think the next two steps in this thread would be:

* Driving isn’t natural: it involves mostly fine motor skills combined with situational awareness of unnaturally shaped objects that are 20 times our mass, moving 5 times as fast as we normally do, in an unnatural environment (roads, traffic, street furniture)

* Yes, but (1) evolution still helped us get the basics, and (2) our designs are based on what our minds can cope with, the ones which we can’t cope with are either unpopular or get banned


But that's still an advantage humans have over AI isn't it? Humans have the capacity to transfer learned knowledge and behaviors to relatively unrelated domains with immediate utility. Show me an ANN which can do that.


You are right, and there is a whole area of AI research called Transfer Learning attempting to replicate this with machines.

https://en.m.wikipedia.org/wiki/Transfer_learning


I once played in two poker tournaments on back to back weekends.

Tournament 1 on adderall. Tournament 2 without adderall.

In Tournament 1 my consciousness and subconscious seemed to be working in tandem. I knew what action to take, and why I was taking that action.

In Tournament 2, off the adderall, my subconscious seemed to play a bigger role. For the most part I was making the right decisions, same as the week before, but I was not consciously sure of my reasoning for most decisions until seconds, minutes, or even hours later, after I had reflected on my decision making.

A little food for thought


I wonder if that's part of what's going on with autism spectrum disorders: i.e. an autistic mind doesn't set the threshold of what's worthy of conscious thought at as high a level as neurotypical people do, so they get more caught up in the low-level details.


So which version worked for you better financially?


>Why assume that either training or playing Go involves only conscious decision making?

The author doesn't assume that. They just state that "the measured performance of conscious processing is only about 50 bits per second".

Not only do they not make any explicit assumption that conscious processing is the only processing going on, but by spelling out that 50 bits is just the "conscious" part, they already imply that there is (or might be) subconscious processing going on as well...


> humans can make acceptable quality inferences from far fewer examples than an A.I.

Perhaps the fact that we think we can do this isn't such a great strength. While a lot can be said about the flaws of "Thinking, Fast and Slow", one thing that is certain is that many people reach an inference point and stop, but these inferences are often wrong.

The fact that an AI will be able to develop inferences with so much data in so little time is a real strength that we aren't going to be able to match.

I don't think that it's necessarily useful to compare the human brain with computers, because the brain has to filter out distractions in a way a computer doesn't need to worry about much, and I'll bet that takes up tons of processing. I would argue that any computing system is an extension of your own mind, and with the data that it can compute and display, it can be used to improve a broken decision-making process.


>many people reach an inference point and stop, but these inferences are often wrong.

Still good enough for us to be top dogs on the planet and even reach space. Where's the AI doing that?


And soon we'll be top dogs on a smoldering rock. We're doing really well.


Every dog has its day.


Well sure, but that’s a completely different measure of performance, and sometimes ‘number of examples needed’ is the important one — we don’t always have the luxury of Big Data, as some experiments are just too expensive.


> humans can make acceptable quality inferences from far fewer examples than an A.I.

Is this actually true? If I show you a bunch of shapes and ask you to make an inference, you're unavoidably drawing from an entire lifetime of experience.


I think there are two important factors to this.

One is that the operating frequency of the human brain is much lower. That results in much smaller energy consumption.

The other is that current neural networks are probably very inefficient algorithms. They are fundamentally based on linear algebra but there is some evidence that the non-linearity is the key.

My feeling is also that the brain extensively uses probabilistic algorithms with hash encoding, conceptually similar to Bloom filters, count-min sketches, etc.
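To make that concrete, here's a toy sketch of the kind of hash-based probabilistic structure I have in mind (a minimal Bloom filter; the brain analogy is speculation on my part, not an established model):

    import hashlib

    class BloomFilter:
        # Toy Bloom filter: fast set membership with false positives,
        # but no false negatives.
        def __init__(self, size=1024, num_hashes=3):
            self.size = size
            self.num_hashes = num_hashes
            self.bits = [False] * size

        def _positions(self, item):
            # Derive k bit positions from salted SHA-256 digests of the item.
            for i in range(self.num_hashes):
                digest = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
                yield int(digest, 16) % self.size

        def add(self, item):
            for pos in self._positions(item):
                self.bits[pos] = True

        def might_contain(self, item):
            return all(self.bits[pos] for pos in self._positions(item))

    bf = BloomFilter()
    bf.add("tiger")
    print(bf.might_contain("tiger"))   # True
    print(bf.might_contain("teacup"))  # almost certainly False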


My theory is that, instead, the brain is a super-duper complex neural network with some differences in how the weights get updated :) Let me explain:

While it is true that our brain is much slower than a modern CPU in terms of ops/s, every neuron is connected to up to 10,000 other neurons [0], and we have around 100 billion neurons in our brains. This means that the number of synapses is somewhere between 100 and 1,000 trillion.

If we assume that every synapse can be considered like a "weight" in a neural network, we have a model that is much much bigger than anything developed to date. Moreover, the topology of this network is not a simple stack of layers, but something far more complex, like a wildly intricate residual network.

Finally, the brain does not use backpropagation to update these weights (the synapses' strengths); instead, the updates are based on how often the synapses fire.

[0]: http://www.human-memory.net/brain_neurons.html
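A caricature of that last difference in code: a plain Hebbian "fire together, wire together" update instead of backpropagation. This is only an illustration of the idea, not a model of real synaptic plasticity, and the sizes and learning rate are arbitrary:

    import numpy as np

    rng = np.random.default_rng(0)
    n_pre, n_post = 50, 20
    weights = rng.normal(scale=0.01, size=(n_post, n_pre))  # "synapse strengths"
    learning_rate = 0.1

    def hebbian_step(pre_activity, weights):
        # Pre-synaptic activity drives post-synaptic activity...
        post_activity = np.tanh(weights @ pre_activity)
        # ...and each synapse strengthens in proportion to how active the
        # neurons on both of its ends were together. No gradients, no error
        # signal propagated backwards through the network.
        weights += learning_rate * np.outer(post_activity, pre_activity)
        return post_activity, weights

    pre = rng.random(n_pre)
    for _ in range(10):
        post, weights = hebbian_step(pre, weights)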


> If we assume that every synapse can be considered like a "weight" in a neural network, we have a model that is much much bigger than anything developed to date.

This is a very bad assumption. Each synapse is a highly complex chemical computer, and its "weight" with regard to its contribution to the postsynaptic neuron's activation at any point in time depends on a multitude of factors.

Those factors include dozens of interacting metabolic pathways, the interactions of various neurotransmitter systems which each have distinct behavior, and things like where the individual synapses sit relative to one another on the dendritic arbor. There's even evidence that at least in some neurons the activation pattern may influence RNA production in the neuron, creating a kind of recording which influences the behavior of the cell.

Current ANN models should be considered a very low dimensional approximation of the real thing.


> My theory is that, instead, the brain is a super-duper complex neural network with some differences in how the weights get updated

Neural networks were not meant to be accurate analogues of the actual operation of the neurons in the brain. They were known to be poor matches for the actual biology, even back in 1959. The concept of the multilayer neural network was actually inspired by our knowledge of the visual cortex, rather than the lower-level fundamentals of the brain. The brain is most certainly not a neural network, as the term is understood in machine learning.


> The brain is most certainly not a neural network, as the term is understood in machine learning.

The brain is most certainly a neural network. It is literally a network of neurons. But you're right insofar as artificial neural networks are not very similar to the brain.


That is exactly what the parent comment said.


I was objecting to the statement: "The brain is most certainly not a neural network" - as if to say that the types of artificial neural networks employed in machine learning are somehow the central definition of what a neural network is. I would be happy with "the brain is not an ANN", or "The brain is very different from the types of neural networks used in Machine Learning", but to say the brain is anything other than a neural network is just plain wrong.


That's not what they said. They said, "The brain is most certainly not a neural network, as the term is understood in machine learning.".


I understand what they said. That statement reads: "Machine learning has a concept of a neural network, and the brain is not that". A correct statement would be: "The field of machine learning makes use of a computing system called an artificial neural network, which is loosely based on the structure and function of a biological neural network."

The statement in question would be like saying: "Italians most certainly do not make real coffee, as the term is understood at Starbucks." There's a technically correct reading of that sentence, but the implication is offensive to the subject matter.


People also gloss over the fact that there are multiple neurotransmitters involved.


The brain has other differences that probably have a major influence on its efficiency:

1) It's incredibly parallel, few operations happen linearly. Even modern GPUs don't really match that.

2) It doesn't run everything all the time. Parts of the brain not in use are shut down. Modern computers do this to some extent, but CPU cores don't switch off parts of the ALU not in use. The brain has much better energy-saving options; only 10-30% of the brain is active at a given time.


Building on 2): this is likely not a result of saving energy but rather of hyper-specialization in specific regions of the brain. Like if modern CPUs had 10-20 cores, each tuned to handle a specific task. The result is energy efficiency, but in contrast to current energy-saving options, the brain can't simply "speed up" and use more power and activate more areas of the brain.


> It's incredibly parallel, few operations happen linearly. Even modern GPUs don't really match that.

Modern GPUs aren't even close, neither in degree nor in kind. A GPU does a lot of linear operations in parallel, but in the brain all the "compute units" are interacting with each other in real time.


> The other is that current neural networks are probably very inefficient algorithms. They are fundamentally based on linear algebra but there is some evidence that the non-linearity is the key.

Deep neural networks, though they use linear algebra, are definitely non-linear, as a result of both their architecture and their use of non-linear activation functions.

Their non-linearity is part of the reason they can classify phenomena that linear techniques like simple regression cannot.
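A quick numpy illustration of why the activation function matters: without it, stacking layers buys nothing, because a composition of linear maps is itself a single linear map (toy example with made-up numbers, not tied to any real architecture):

    import numpy as np

    x = np.array([1.0, -2.0, 3.0])
    W1 = np.array([[ 1.0, 0.5,  1.0],
                   [-2.0, 1.0,  0.5]])
    W2 = np.array([[ 1.0, -1.0],
                   [ 0.5,  2.0]])

    # Two "layers" without an activation collapse into one matrix.
    deep_linear = W2 @ (W1 @ x)
    single_layer = (W2 @ W1) @ x
    print(np.allclose(deep_linear, single_layer))     # True

    # With a non-linearity (ReLU) between the layers, no such collapse.
    hidden = np.maximum(W1 @ x, 0)                    # ReLU zeroes out negatives
    deep_nonlinear = W2 @ hidden
    print(np.allclose(deep_nonlinear, single_layer))  # False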


What about the idea that the brain is analog?


The brain is analog.


It's digital and analog. At the level of the action potential, the brain certainly does seem to operate with discrete units of information. Everywhere else, though, it is certainly analog.


So whether a neuron fires or not (I guess that's what you mean by action potential) can simply be translated to 1 or 0, hence it's digital?


Basically yes. That's only one small piece of the information processing of the brain, but it would in principle be possible to represent the "firing state" of a biological neural network in binary terms.
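In that narrow sense only (a toy snapshot, ignoring spike timing, firing rates, and all the analog machinery behind each spike):

    # One bit per neuron: did it fire in this time window or not?
    firing = [True, False, False, True, True]           # hypothetical 5-neuron snapshot
    print("".join("1" if f else "0" for f in firing))   # "10011"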


The indisputable core of this essay is that the comparison is apples and oranges. A computer puts in more effort and gets a better result. However when choosing between more efficient or better results, I'd prefer better results. The human brain's bias towards efficiency is an evolutionary handicap on intelligence. I can provide my body with as much energy as it wants to consume - I'd rather be able to burn energy learning things than have to put up with a brain that keeps trying to take shortcuts at every turn to conserve energy, and I would love to be able to think deterministically! It is a stretch to say humans are out-thinking the machine with that frame; machine intelligence might simply have firmer foundations.

I do have challenges to the method of calculating how much power is put into the human training process. 10^8 might be too many orders of magnitude to breach, but humans:

1) Use data picked up visually to provide basic concepts used in playing Go (like "connected", "lines", "territory", "me", "opponent"). Many of these concepts are potentially developed from visual data, and humans process a lot more than 50 bits of visual data in a second. It is very hard to quantify how much visual data can be employed to learn about Go - how much of an advantage does the concept of territory give humans in terms of learning? It is probably huge.

EDIT Thinking about it a bit more, it is very plausible given AlphaGo's endgame play that it never developed a concept of 'territory' as we would understand it. Given that humans judge the state of the game by estimating territory for both players then assuming the winner is the one with more territory, that is a huge deal. It might explain why the learning seems so inefficient vs a human. A human would never make AlphaGo's endgame moves because they are bad by the territory heuristic. (A rough sketch of that heuristic is below, after this list.)

2) Learn Go in a very communal environment. Game records date back a few centuries, and all the patterns that are detected get passed on through the generations of professional players. The practical amount of cached processing power available to humans is higher than first inspection might suggest. Humans don't learn mainly through self-play; it is closer to supervised learning. A 1900s player would simply lose to modern technique because of this accretion.

3) High level Go players are going to be strongly biased to people who 'got lucky' in how their brains map concepts. An average human probably isn't as good at learning Go as a peak professional.
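To be concrete about what I mean by the territory heuristic, here is a naive sketch: an empty region counts for a colour only if it touches stones of that colour alone. Real counting also requires judging life and death, which this ignores entirely, and the board representation is made up for the example:

    def territory_score(board, size=9):
        # board: dict {(row, col): 'B' or 'W'} for stones; empty points are absent.
        # Returns (black_territory, white_territory) by flood-filling empty regions.
        empties = {(r, c) for r in range(size) for c in range(size)} - board.keys()
        seen, black, white = set(), 0, 0
        for start in empties:
            if start in seen:
                continue
            region, borders, frontier = set(), set(), [start]
            while frontier:
                r, c = frontier.pop()
                if (r, c) in region:
                    continue
                region.add((r, c))
                for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
                    if 0 <= nr < size and 0 <= nc < size:
                        if (nr, nc) in board:
                            borders.add(board[(nr, nc)])   # stone colour bordering region
                        elif (nr, nc) not in region:
                            frontier.append((nr, nc))
            seen |= region
            if borders == {'B'}:
                black += len(region)
            elif borders == {'W'}:
                white += len(region)
        return black, white

    # Naive by design: a lone black stone "owns" the whole empty board.
    print(territory_score({(4, 4): 'B'}))   # (80, 0)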


Forget humans. Get a computer to do what the ant brain does with a speck-of-dust-sized power supply.


> However when choosing between more efficient or better results, I'd prefer better results.

No you wouldn't. Stated without qualifications, this viewpoint is staggeringly stupid. It's like placing a market order to sell AAPL and then being upset that your shares sold for $20 each. That's what you said you wanted. But it wasn't what you actually wanted.

Note also that your stated preference makes complexity theory irrelevant, as a brute force approach to solving any problem already achieves the theoretical optimum. Do you really want to claim that that's true?

> I can provide my body with as much energy as it wants to consume

No, you can't, except to the extent that you're assuming your body's existing limits on the amount of energy it will consume. That assumption means you accept that your body knows better than you what tradeoffs to make.


> No you wouldn't.

That is an unusually juvenile response for HN. You don't get to determine what other people want, even if they are stupid.

> It's like placing a market order to sell AAPL and then being upset that your shares sold for $20 each.

It is nothing like that; your analogy is incoherent. AlphaGo is claimed to be hopelessly inefficient vs a human yet has an amazing track record at winning Go, and whatever your analogy is meant to mean, there is no comparison between losing money on shares and effortlessly competing with every expert in a field.

I didn't say give up efficiency and see what happens, I said when choosing between better results and better efficiency, I'd prefer results.

> Note also that your stated preference makes complexity theory irrelevant, as a brute force approach to solving any problem already achieves the theoretical optimum. Do you really want to claim that that's true?

It doesn't make complexity theory irrelevant; at some point problems become so complex you can't solve them with brute force because of physical limits. Up to that point, if I have an optimal solution and you don't, then yes, I think I'm doing better than you. Efficiency is great, but not if you don't get results.

> No, you can't, except to the extent that you're assuming your body's existing limits on the amount of energy it will consume

Ok, assume that :P. Duh. A human can apparently exert about 80 Watts sustained [0] and a Gen 1 Google TPU is 40 Watts [1]. Google's laptop might use 4 of the things, but AlphaGo Zero is pretty strong, and we know from LeelaZero that you don't need a TPU to be able to crush a human professional. We aren't approaching the limits of physics or biology here. Even if biology couldn't handle it for some reason, the point is I'd rather use a TPU for thinking about Go than not, because for all that people wring their hands about what it means to think, clearly computers are better at making competitive decisions. If you'd prefer to lose efficiently, I can't stop you, but you aren't going to be winning any prizes if I use a TPU and you don't.

> That assumption means you accept that your body knows better than you what tradeoffs to make.

This is wrong. The next time you use a bicycle, train or car [2] you can reflect that you managed to make yourself look stupid on the internet. They make very different tradeoffs from a human and get much better results on efficiency and speed, depending on what you prefer. If you were restricted to the tradeoffs your body made in transport, modern society would collapse and you'd probably starve to death or be eaten by a mob of city dwellers looking for food. Intelligence is going down the same path, where the tradeoffs that the body made are just not good enough for the context we operate in.

[0] https://en.wikipedia.org/wiki/Human_power

[1] https://www.extremetech.com/extreme/269008-google-announces-...

[2] https://en.wikipedia.org/wiki/Energy_efficiency_in_transport


> You don't get to determine what other people want, even if they are stupid.

Sure, but in this case it's very easy to state with 100% confidence that you said you wanted something which you don't actually want. That's stupid, but it's a different kind of stupidity than wanting something you won't like. Compare how I described your problem:

>> That's what you said you wanted. But it wasn't what you actually wanted.


> I can provide my body with as much energy as it wants to consume

No, you can't. All energy that your body uses is transformed into heat. The amount of heat your body can handle is fairly limited. Bigger bodies perform substantially worse since they have a smaller surface-to-volume ratio. Even heatsinks such as elephant ears provide only limited help.


>The indisputable core of this essay is that the comparison is apples and oranges. A computer puts in more effort and gets a better result. However when choosing between more efficient or better results, I'd prefer better results.

Not really. If we waited for better results humanity wouldn't exist.


Efficiency, or rather the ability to dissipate waste heat, is a constraint on computational power, though.


The article is comparing the number of raw machine operations (which we can objectively quantify with a great deal of certainty) with a figure of 50 operations per second, which is the supposed number of operations a human (mind? brain? consciousness?) can execute in a second. Furthermore, it seems to conflate the bit rate (bits per second) with operations per second.

It seems wrong to suggest the brain itself, examined as a biological machine, executes only 50 discrete operations per second. In fact, judging by the footnote, it doesn't seem to be suggesting that: it's just talking about the conscious information processing rate. But consciousness is a can of worms itself. It's not at all clear that information has to travel through consciousness in order for the mind to compute or decide something. In fact, the opposite seems clear from many experiments, such as https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3052770/

I'm not sure that the article's comparison is very illuminating.


I don't disagree with the conclusion of the study you linked, but I don't think the study itself supports the conclusion it reaches. Electrocuting someone, albeit imperceptibly from a conscious standpoint, means that we are "predicting volition" because they react?


The headline makes an amazingly weak claim.

Since people still seem to be discussing what features of nerve impulses are significant (their frequency? precise relative timing?) it would appear that a reasonable estimate of the information processing capacity of a single neuron is currently lacking, let alone an estimate for an entire brain taking account of redundancy. Does the comparison even make sense?


It's like saying "A motorboat exceeds the most powerful cars in swimming". Yep, there are amphibious vehicles, but most motorboats are still better at swimming.

Now, try to calculate 325*528. Did you do it in under a millisecond? If not, your pocket calculator is more efficient than your brain.


Yes. While I agree with the idea that brains are extremely efficient, especially at certain tasks like image recognition or any kind of pattern identification, I also thought about what you said, and I was like: when doing mathematical operations, I'd be more efficient riding a bicycle to charge a battery that powers a computer which then does the operations, than doing them by myself with my brain. That's... really weird. And there are so many tasks at which computers seem way more efficient than I could possibly be.

EDIT: I guess someone downvoted you for apparently conflating speed and efficiency, but I still think the point stands.


It doesn't matter if you look at speed (seconds/operation) or efficiency (joules/operation). For most mathematical operations, a computer CPU will outperform a human brain by many orders of magnitude.

This is both true and somewhat irrelevant. It is a bit like saying that a CPU can run recursive, non-parallelizable algorithms faster and more efficiently than a GPU, even if the GPU has more flops. The GPU is optimized to run a few types of computation very quickly, and is very slow on other tasks. This applies to the human brain as well, and to a much greater degree.


I implemented stereoscopic vision once (one angled camera and a projector, not two cameras, but details...). It worked, and the quality of the results could reasonably be scaled with additional computing power (and hardware in general).

While I wouldn't claim that the setup was very sophisticated, it at least worked reliably. But it doesn't even come close to the little nub in the back of your skull.

Matching minuscule differences between images and quantifying them probably uses a lot of processing power, regardless of the algorithms involved. It doesn't only eat your CPU; it does the same to your memory.

So yes, the brain is pretty good with images. But I expect it to be good at other tasks too, ones that don't involve the visual cortex.

You might not remember what you ate this morning, but there must be at least some form of reliable memory in your brain. And you do perform complex, high-speed mathematical calculations, including basic algebra and calculus. Just maybe not the ones you want.


The brain is specialized hardware. I wouldn't expect a general purpose computer to be as efficient at performing the same tasks.


It's actually not hardware at all. I say this not to be annoying but to make the point that perhaps we shouldn't be computer-morphizing the human brain. (I know that's not a word, but it was the best I could do!)


I agree with not "computer-morphizing" the brain, but I think hardware still applies well enough. Is there a specific reason you think it doesn't apply out of curiosity? I mean, are you just trying to avoid the "duality of hardware and software"?


It is a substrate. Not quite the same as hardware but close enough for government work.


technomorphizing maybe? it's definitely something we do.


The difference between the deterministic substrate in an ASIC and non-deterministic substrate in a brain means we should resist simple comparisons even with any sort of computer hardware, specialized or not.

Synapses sit, from an electronics perspective, in a weird in-between space between linear analog computation and digital computation. There are lab prototype computational devices that might make good comparisons but it's just too different from, say, the sort of specialized decoders you'd find on your phone SoC for that to be a good analogy.


I'm -not- a hardware guy, so this may be a dumb question/thought, but here goes: The human brain is pretty amazing, often thought of as 'petaflops' or beyond, and I imagine pretty energy efficient as well, seeing as our heads don't reach boiling temperatures. Has there been any work into 'rethinking' chips to work more like a human brain (I don't want to go as far as to say 'organic chips'), or is it just not well enough understood? Or unfeasible for some other reason?


The answer to that question is 50% "We don't know enough about the human brain" and 50% "Yes."

There's been a lot of research into organic computing and how it could relate to neural networks, but it's an insanely difficult task, and it will be decades until anything viable is produced.


"CPU cycles" is not a very good metric, in my view.

However, the brain is indeed much more efficient when comparing energy consumption, considering that it uses only about 20 W of power.


Actually, training AZ (5000 TPUs for 8 hours) requires 1.4e10 joules, assuming that each TPU needs 100 watts. For comparison, the human brain of someone training 4 hours per day, every day, for 14 years also uses about 1.4e10 joules.

So the energy efficiency is quite similar.
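The TPU side of that estimate, spelled out (the 100 watts per TPU is the assumption stated above, not a published figure):

    tpus = 5000
    power_per_tpu_w = 100        # watts, assumed above
    training_time_s = 8 * 3600   # 8 hours in seconds

    energy_j = tpus * power_per_tpu_w * training_time_s
    print(f"{energy_j:.1e} J")   # ~1.4e10 joules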


That's comparing apples and oranges.

The brain works and learns 24/7. It needs 20W to function.


So how do you see that changing my point, that the brain seems to need about the same amount of energy as AZ does?


The point is that it obviously does not.


I'm not sure what your claim is. Have you switched to the view that AZ is more efficient (by a factor of 6, if you assume 24/7 consumption), or do you still think that the brain is more efficient at learning Go?

Edit: In case this is the cause of your objection: in the calculation of the energy needed for the human brain to master Go (where I ended up with 14 GJ = 1.4*10^10 J), I only counted the consumption during the average 4 hours per day that one can expect someone to train. If you count all 24 hours, the 14 GJ budget would be spent in 2-3 years. (Obviously, nobody can master Go in 2-3 years, regardless of effort.)


Yeah, you can't power a supercomputer with a sandwich.


TLDR: "it is comparing apples and oranges"


Worse than that, it's comparing apples and the color orange.



