> The brain has the potential for a petaflop per second of processing power. However, the measured performance of conscious processing is only about 50 bits per second.
Why assume that either training or playing Go involves only conscious decision making? Anything that we call “common sense”, “gut feeling”, “aesthetically pleasing” etc. is something we can’t (or cannot easily) generate a conscious rationale for.
Of course, that doesn’t mean humans don’t beat computers at anything; last I checked, humans can make acceptable-quality inferences from far fewer examples than an A.I.
On the other hand, AZ used a lot more than 1E18 calculations' worth of training to reach human-level play, so that part is also wrong. In fact, the total processing power of the 5000 TPUs used is on the order of 1E17 operations per second.
Still, since 1E17 ops/s is comparable (within 1-2 orders of magnitude) to the human brain's raw capacity, and AZ could reach superhuman level in Go in less than 10 hours, AZ can be seen as more efficient than a human brain at Go (in terms of learning per calculation, not necessarily per watt).
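A back-of-envelope version of that comparison. The TPU throughput and training time are the figures from this thread; the petaflop-scale brain estimate and the "10 years at 4 h/day" human study budget are rough assumptions of mine, just to get an order of magnitude:

```python
# Rough totals for the efficiency comparison above. All figures are the
# thread's assumptions or my own round numbers, not measured values.

tpu_ops_per_sec = 1e17                # assumed aggregate throughput of ~5000 TPUs
training_seconds = 10 * 3600          # "less than 10 hours" of training
total_ops = tpu_ops_per_sec * training_seconds

brain_ops_per_sec = 1e15              # one common petaflop-scale brain estimate
human_training_seconds = 10 * 365 * 4 * 3600  # assumed ~10 years at 4 h/day

print(f"AZ total ops:       {total_ops:.1e}")                               # ~3.6e21
print(f"Human 'ops' budget: {brain_ops_per_sec * human_training_seconds:.1e}")  # ~5.3e22
```

Under these (very loose) assumptions, AZ's total operation count during training comes out roughly an order of magnitude below the human budget, which is what the "more efficient per calculation" claim amounts to.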
This should also not be surprising. After all, Go (like chess) is a very abstract game, far removed from the tasks the biological brain evolved to do well.
For tasks the brain does well, such as pattern recognition in the environments humans live in, the human brain is still much more efficient. A human can learn to drive a car in tens of hours, while AIs still require millions of hours of training, and they still end up inferior.
So, in the end, the title is correct (and also common knowledge for anyone following this development), even if almost all of the content in the article is way off.
Was this a high school essay?
I think it's a bit unrealistic to compare these numbers. A human learning to drive has usually started with 16-18 years of learning how the world works before the 10 hours of transfer learning.
* Driving isn’t natural: it mostly involves fine motor skills combined with situational awareness of unnaturally shaped objects, 20 times our mass, moving 5 times as fast as we normally do, in an unnatural environment (roads, traffic, street furniture)
* Yes, but (1) evolution still helped us get the basics, and (2) our designs are based on what our minds can cope with, the ones which we can’t cope with are either unpopular or get banned
Tournament 1 on adderall. Tournament 2 without adderall.
Tournament 1, my conscious and subconscious minds seemed to be working in tandem. I knew what action to take, and why I was taking that action.
Tournament 2, my subconscious seemed to play a bigger role with the adderall gone. I was making the right decisions for the most part, same as the week before on adderall, but I was not consciously sure of my reasoning for most decisions until seconds, minutes, or even hours later, after I had reflected on my decision making.
A little food for thought
The author doesn't assume that. They just state that "the measured performance of conscious processing is only about 50 bits per second".
Not only do they not make any explicit assumption that that (conscious processing) is the only processing going on, but by spelling out that the 50 bits is just the "conscious" part, they already imply that there is (or might be) subconscious processing going on as well...
Perhaps the fact that we think we can do this isn't the greatest strength. While a lot can be said about the flaws of "Thinking Fast and Slow", one of the things that is certain is that many people reach an inference point and stop, but these inferences are often wrong.
The fact that an AI will be able to develop inferences with so much data in so little time is a real strength that we aren't going to be able to match.
I don't think it's necessarily useful to compare the human brain with computers, because we have to apply distraction filtering that a computer doesn't need to worry about much, and I'll bet that takes up tons of processing. I would argue that any computing system is an extension of your own mind: with the data it can compute and display, it can be used to shore up a broken decision-making process.
Still good enough for us to be top dogs on the planet and even reach space. Where's the AI doing that?
Is this actually true? If I show you a bunch of shapes and ask you to make an inference, you're unavoidably drawing from an entire lifetime of experience.
One is that the operating frequency of the human brain is much lower. That results in much smaller energy consumption.
The other is that current neural networks are probably very inefficient algorithms. They are fundamentally based on linear algebra but there is some evidence that the non-linearity is the key.
My feeling is also that the brain extensively uses probabilistic algorithms with hash encoding, conceptually similar to Bloom filters, count-min sketches, etc.
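For anyone unfamiliar with the structures I mean: here is a minimal count-min sketch in plain Python. The class and its parameters are just a toy illustration of the idea (approximate counts in sublinear space, tolerating hash collisions), not a claim about how neurons implement anything:

```python
# A minimal count-min sketch: approximate frequency counts in a small,
# fixed-size table, where hash collisions can only inflate estimates.
import hashlib

class CountMinSketch:
    def __init__(self, width=64, depth=4):
        self.width, self.depth = width, depth
        self.table = [[0] * width for _ in range(depth)]

    def _index(self, item, row):
        # One independent-ish hash per row, derived from sha256
        h = hashlib.sha256(f"{row}:{item}".encode()).hexdigest()
        return int(h, 16) % self.width

    def add(self, item):
        for row in range(self.depth):
            self.table[row][self._index(item, row)] += 1

    def estimate(self, item):
        # Never undercounts; taking the min across rows limits inflation
        return min(self.table[row][self._index(item, row)]
                   for row in range(self.depth))

cms = CountMinSketch()
for word in ["go", "go", "go", "chess"]:
    cms.add(word)
print(cms.estimate("go"))  # 3 (possibly more if collisions occur)
```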
While it's true that our brain is much slower than a modern CPU in terms of ops/s, every neuron is connected to up to 10,000 other neurons, and we have around 100 billion neurons in our brains. That puts the number of synapses between 100 trillion and 1,000 trillion.
If we assume that every synapse can be considered a "weight" in a neural network, we have a model that is much, much bigger than anything developed to date. Moreover, the topology of this neural network is not a simple stack of layers, but something far more complex, like a highly intricate residual network.
Finally, the brain does not use backpropagation to update these weights (synaptic strengths); instead, updates are driven by how often the synapses fire.
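That fire-together-wire-together idea can be sketched as a plain Hebbian update. This is a cartoon, not a brain model; the network size and learning rate are arbitrary toy values:

```python
# Toy Hebbian update: synapses between co-firing neurons get strengthened,
# with no backpropagated error signal anywhere.
n = 5
weights = [[0.0] * n for _ in range(n)]
learning_rate = 0.25

def hebbian_step(activity):
    # activity[i] is 1 if neuron i fired in this time step, else 0
    for i in range(n):
        for j in range(n):
            if i != j:
                # strengthen the i->j synapse only when both neurons fire
                weights[i][j] += learning_rate * activity[i] * activity[j]

# Present a pattern where neurons 0 and 1 repeatedly co-fire
for _ in range(10):
    hebbian_step([1, 1, 0, 0, 0])

print(weights[0][1])  # 2.5: the co-active pair is strongly coupled
print(weights[0][2])  # 0.0: these two never co-fired, synapse unchanged
```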
This is a very bad assumption. Each synapse is a highly complex chemical computer, and its "weight" with regard to its contribution to the postsynaptic neuron's activation at any point in time depends on a multitude of factors.
Those factors include dozens of interacting metabolic pathways, the interactions of various neurotransmitter systems which each have distinct behavior, and things like where the individual synapses sit relative to one another on the dendritic arbor. There's even evidence that at least in some neurons the activation pattern may influence RNA production in the neuron, creating a kind of recording which influences the behavior of the cell.
Current ANN models should be considered a very low dimensional approximation of the real thing.
Neural networks were never meant to be accurate analogues of the actual operation of neurons in the brain. They were known to be poor matches for the actual biology even back in 1959. The concept of the multilayer neural network was inspired by our knowledge of the visual cortex, rather than by the lower-level fundamentals of the brain. The brain is most certainly not a neural network, as the term is understood in machine learning.
The brain is most certainly a neural network. It is literally a network of neurons. But you're right insofar that artificial neural networks are not very similar to the brain.
The statement in question would be like saying: "Italians most certainly do not make real coffee, as the term is understood at Starbucks." There's a technically correct reading of that sentence, but the implication is offensive to the subject matter.
1) It's incredibly parallel; few operations happen sequentially. Even modern GPUs don't really match that.
2) It doesn't run everything all the time. Parts of the brain not in use are shut down. Modern computers do this to some extent, but CPU cores don't switch off the parts of the ALU not in use. The brain has much better energy-saving options; only 10-30% of the brain is active at a given time.
Modern GPUs aren't even close, neither in degree nor in kind. A GPU does a lot of linear operations in parallel, but in the brain all the "compute units" interact with each other in real time.
Deep neural networks, though they use linear algebra, are definitely non-linear, as a result of both their architecture and their use of non-linear activation functions.
Their non-linearity is part of the reason they can classify phenomena that linear techniques like simple regression cannot.
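A tiny concrete illustration of that point: without the nonlinearity, stacked layers collapse to a single linear map, which provably cannot represent XOR; with one ReLU layer, two hidden units suffice. The weights here are hand-picked by me for illustration, not learned:

```python
# Why the nonlinearity matters: a two-hidden-unit ReLU net computes XOR,
# something no purely linear model (however many layers) can do.

def relu(x):
    return max(0.0, x)

def xor_net(x1, x2):
    h1 = relu(x1 + x2)        # fires if either input is on
    h2 = relu(x1 + x2 - 1)    # fires only if both inputs are on
    return h1 - 2 * h2        # subtract the "both on" case twice

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor_net(a, b))
# 0 0 -> 0.0, 0 1 -> 1.0, 1 0 -> 1.0, 1 1 -> 0.0
```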
I do have some objections to the method of calculating how much power goes into the human training process. 10^8 might be too many orders of magnitude to bridge, but humans:
1) Use data picked up visually to provide basic concepts used in playing Go (like "connected", "lines", "territory", "me", "opponent"). Many of these concepts are plausibly developed from visual data, and humans process a lot more than 50 bits of visual data per second. It is very hard to quantify how much visual data can be employed in learning Go: how much of a learning advantage does the concept of territory give humans? It is probably huge.
EDIT: Thinking about it a bit more, it is very plausible, given AlphaGo's endgame play, that it never developed a concept of 'territory' as we would understand it. Given that humans judge the state of the game by estimating territory for both players and then assuming the winner is the one with more territory, that is a huge deal. It might explain why the learning seems so inefficient vs a human. A human would never make AlphaGo's endgame moves, because they are bad by the territory heuristic.
2) Learn Go in a very communal environment. Game records date back a few centuries, and all the patterns that get detected are passed on through generations of professional players. The practical amount of cached processing power available to humans is higher than first inspection might suggest. Humans don't learn mainly through self-play; it is closer to supervised learning. A 1900s player would simply lose to modern technique because of this accretion.
3) High level Go players are going to be strongly biased to people who 'got lucky' in how their brains map concepts. An average human probably isn't as good at learning Go as a peak professional.
No you wouldn't. Stated without qualifications, this viewpoint is staggeringly stupid. It's like placing a market order to sell AAPL and then being upset that your shares sold for $20 each. That's what you said you wanted. But it wasn't what you actually wanted.
Note also that your stated preference makes complexity theory irrelevant, as a brute force approach to solving any problem already achieves the theoretical optimum. Do you really want to claim that that's true?
> I can provide my body with as much energy as it wants to consume
No, you can't, except to the extent that you're assuming your body's existing limits on the amount of energy it will consume. That assumption means you accept that your body knows better than you what tradeoffs to make.
That is an unusually juvenile response for HN. You don't get to determine what other people want, even if they are stupid.
> It's like placing a market order to sell AAPL and then being upset that your shares sold for $20 each.
The two are nothing alike; your analogy is incoherent. AlphaGo is claimed to be hopelessly inefficient vs a human, yet has an amazing track record at winning Go, and whatever your analogy is meant to mean, there is no comparison between losing money on shares and effortlessly outcompeting every expert in a field.
I didn't say give up efficiency and see what happens, I said when choosing between better results and better efficiency, I'd prefer results.
> Note also that your stated preference makes complexity theory irrelevant, as a brute force approach to solving any problem already achieves the theoretical optimum. Do you really want to claim that that's true?
It doesn't make complexity theory irrelevant, at some point problems become so complex you can't solve them with brute force because of physical limits. Up to that point, if I have an optimal solution and you don't, then yes I think I'm doing better than you. Efficiency is great, but not if you don't get results.
> No, you can't, except to the extent that you're assuming your body's existing limits on the amount of energy it will consume
Ok, assume that :P. Duh. A human can apparently exert about 80 watts sustained, and a Gen 1 Google TPU draws 40 watts. Google's machine might use 4 of the things, but AlphaGo Zero is pretty strong, and we know from LeelaZero that you don't even need a TPU to crush a human professional. We aren't approaching the limits of physics or biology here. Even if biology couldn't handle it for some reason, the point is I'd rather use a TPU for thinking about Go than not, because for all that people wring their hands about what it means to think, clearly computers are better at making competitive decisions. If you'd prefer to lose efficiently, I can't stop you, but you aren't going to be winning any prizes if I use a TPU and you don't.
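For what it's worth, those power figures (all taken from this comment, none measured by me) put the machine and the human within a factor of a few of each other:

```python
# Rough power comparison using the figures above.
human_watts = 80                      # sustained human power output
tpu_watts = 40                        # Gen 1 Google TPU
alphago_zero_machine = 4 * tpu_watts  # a single machine with 4 TPUs
print(alphago_zero_machine / human_watts)  # 2.0: the same order of magnitude
```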
> That assumption means you accept that your body knows better than you what tradeoffs to make.
This is wrong. The next time you use a bicycle, train, or car, you can reflect that you managed to make yourself look stupid on the internet. They make very different tradeoffs than a human body and get much better results on efficiency or speed, depending on what you prefer. If you were restricted to the tradeoffs your body made in transport, modern society would collapse and you'd probably starve to death or be eaten by a mob of city dwellers looking for food. Intelligence is going down the same path, where the tradeoffs the body made are just not good enough for the context we operate in.
Sure, but in this case it's very easy to state with 100% confidence that you said you wanted something which you don't actually want. That's stupid, but it's a different kind of stupidity than wanting something you won't like. Compare how I described your problem:
>> That's what you said you wanted. But it wasn't what you actually wanted.
No, you can't. All the energy your body uses ends up as heat, and the amount of heat your body can shed is fairly limited. Bigger bodies perform substantially worse since they have a smaller surface-to-volume ratio. Even heat sinks such as elephant ears provide only modest help.
Not really. If we waited for better results humanity wouldn't exist.
It seems wrong to suggest the brain itself, examined as a biological machine, executes only 50 discrete operations per second. In fact, judging by the footnote, it doesn't seem to be suggesting that: it's just talking about the conscious information processing rate. But consciousness is a can of worms itself. It's not at all clear that information has to travel through consciousness in order for the mind to compute or decide something. In fact, the opposite seems clear from many experiments, such as https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3052770/
I'm not sure that the article's comparison is very illuminating.
Since people still seem to be discussing what features of nerve impulses are significant (their frequency? precise relative timing?) it would appear that a reasonable estimate of the information processing capacity of a single neuron is currently lacking, let alone an estimate for an entire brain taking account of redundancy. Does the comparison even make sense?
Now, try to calculate 325*528. Did you manage it in under a millisecond? If not, your pocket calculator is more efficient than your brain.
EDIT: I guess someone downvoted you for apparently conflating speed and efficiency, but I still think the point stands.
This is both true and somewhat irrelevant. It is a bit like saying that a CPU can run recursive, non-parallelizable algorithms faster and more efficiently than a GPU, even if the GPU has more flops. The GPU is optimized to run a few types of computation very quickly, and is very slow on other tasks. This applies to the human brain as well, and to a much greater degree.
While I wouldn't claim that the setup was very sophisticated, it worked reliably at least. But it doesn't even come close to the little nub in the back of your skull.
Matching minuscule differences in images and quantifying them probably takes a lot of processing power, regardless of the algorithms involved. It doesn't only eat your CPU; it would do the same with your memory.
So yes, the brain is pretty good with images. But I expect it to be good at other tasks too, ones that don't involve the visual cortex.
You might not remember what you ate this morning, but there must be at least some form of reliable memory in your brain. And you do complex, high-speed mathematical calculations that amount to basic algebra and calculus. Just maybe not the ones you want.
Synapses sit, from an electronics perspective, in a weird in-between space between linear analog computation and digital computation. There are lab prototype computational devices that might make good comparisons but it's just too different from, say, the sort of specialized decoders you'd find on your phone SoC for that to be a good analogy.
There's been a lot of research into organic computing and how it could relate to neural networks, but it's an insanely difficult task, and it will be decades before anything viable is produced.
However, the brain is indeed much more efficient when comparing energy consumption, considering that it uses only about 20W of power.
So the energy efficiency is quite similar.
The brain works and learns 24/7. It needs 20W to function.
Edit: In case this is the cause of your objection: in the calculation of the energy the human brain needs to master Go (where I ended up with 1.4 GJ = 1.4*10^9 J), I only counted consumption during the average 4 hours per day that one can expect someone to train. If you count all 24 hours, the 1.4 GJ budget would be spent in 2-3 years. (Obviously, nobody can master Go in 2-3 years, regardless of effort.)
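Taking the 20 W figure and a round 13 years of 4 h/day study (my own guess at a mastery timeline, purely for illustration), the budget arithmetic works out like this:

```python
# Checking the brain-energy arithmetic from the 20 W figure.
brain_watts = 20

# Training 4 h/day for ~13 years (assumed mastery timeline):
training_seconds = 13 * 365 * 4 * 3600
energy_joules = brain_watts * training_seconds
print(f"{energy_joules:.2e} J")   # ~1.4e9 J, i.e. ~1.4 GJ

# Running 24 h/day instead, the same budget lasts:
years = energy_joules / (brain_watts * 86400 * 365)
print(f"{years:.1f} years")       # ~2.2 years
```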