The Brain vs. Deep Learning Part I: Computational Complexity (timdettmers.wordpress.com)
106 points by mbeissinger on July 27, 2015 | 16 comments




Quote of the day for me so far: "According to Bitcoin Watch, the aggregate power of the bitcoin network is about 4.8 x 10^21 FLOPS. Using your own estimates, this would be enough to simulate 4 or more brains."
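To put rough numbers on that division (the per-brain figure below is just an assumed order of magnitude matching the article's estimate, not a quote from it):

    # Back-of-the-envelope check of the "4 or more brains" claim.
    bitcoin_network_flops = 4.8e21    # aggregate figure quoted above (Bitcoin Watch)
    brain_flops_assumed = 1.0e21      # assumed order of magnitude per brain
    print(bitcoin_network_flops / brain_flops_assumed)   # ~4.8 "brains"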

I have been wondering recently if there is some way of combining cryptocurrencies with deep learning. I know the Bitcoin hash calculations are used to secure the transactions, but if there was some way of either re-purposing these calculations or adding some additional ones, then that could unleash a massive amount of computing power. Plus if you could mention both machine learning and the blockchain in the same sentence you'd be sure to get a lot of interest from VCs.


Consider that every single cell in the body is essentially a teeny tiny distributed system with on the order of 10 million moving parts counting proteins alone (RNA is probably just as important for storing and encoding the state of various cellular processes), drawn from a bare minimum of 23k distinct types, not counting splice variants. We have barely begun to understand the computations that occur within a single cell for tuning, learning, transmission of information, and metabolic load balancing.

I suspect that there is some additional computational overhead needed to run the requisite biological processes in neurons, but ANNs are astoundingly far from capturing even a single neuron, let alone a network of neurons (perceptrons are sometimes compared to dendritic branches, and even that is a stretch).

The real question to me is whether we actually need to replicate all the biology underneath to get some of the higher-level abstractions that we recognize as intelligence. I also have to point out that it took nature on the order of 2 billion years to develop the set of rules that are used to run cells and coordinate multicellular systems, and it may very well be the case that some of them are purely empirical. I know the AI guys gave up on rule-based systems long ago, but even if you aren't going to fill them all in by hand, you need a way to find the rules that work, and the search space is monstrously large. Keep in mind that even a hyper-intelligent being remains bound by the laws of physics: it would still have to do a whole bunch of experiments in order to develop a model that might let it predict which rules it would need to operate more effectively.

edit: The assumptions that go into the calculation of the computational complexity are gross simplifications. His average firing rates are also about an order of magnitude too high (though this apparently is Kurzweil's fault).


As several people pointed out, the arguments made in this article are severely weakened by the fact that we do not know what fraction of this complexity is essential for brain function and what fraction is unnecessary complexity caused by evolutionary history and biological constraints.

In addition, we should also keep in mind that a digital AI can have many 'unfair advantages' compared to human brains. Besides lacking the obvious biological constraints, digital AIs are not necessarily limited to pure neural networks. AIs can be built as hybrid systems in which neural networks have access to web-scale knowledge bases, extremely fast and precise databases for writing and reading arbitrary data and storing it indefinitely, APIs for running computations that would be difficult to implement in neural networks, and so on. The neural network part of the AI could evolve and learn while having zero-latency, high-bandwidth access to several such resources, and learn how to make optimal use of them.
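To make the 'hybrid system' point concrete, here is a minimal sketch (all names are made up for illustration; the lambda stands in for a trained network, and nothing here is a real API):

    # A toy hybrid agent: a learned policy decides when to defer to exact,
    # external resources instead of memorizing everything in its weights.
    class HybridAgent:
        def __init__(self, policy, knowledge_base):
            self.policy = policy        # stand-in for a trained neural network
            self.kb = knowledge_base    # exact, persistent key-value storage

        def answer(self, query):
            tool, payload = self.policy(query)
            if tool == "lookup":
                return self.kb.get(payload)   # precise recall, never forgets
            if tool == "compute":
                return sum(payload)           # offload exact computation
            return payload                    # direct answer from the network

    # Toy usage with a hand-written "policy" in place of a real network:
    toy_policy = lambda q: ("lookup", q) if q.endswith("?") else ("compute", [1, 2, 3])
    agent = HybridAgent(toy_policy, {"capital of France?": "Paris"})
    print(agent.answer("capital of France?"))   # -> Paris
    print(agent.answer("add something"))        # -> 6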


The article mentioned an example of a feral child. I looked it up, and it was probably the saddest story I've read in a while: https://en.wikipedia.org/wiki/Genie_%28feral_child%29


After coming across this, I wonder just how much of the brain is required to exhibit intelligence. It seems like some people can get away with a lot less than the 'normal' amount.

http://www.rifters.com/crawl/?p=6116


Claiming we're not going to be able to simulate the human brain this century is a little bold. That is 85 years away. Eighty-five years is a long time in technology. Compare, for example, where we are now with where we were 85 years ago. In 1930 electronic computers did not exist, there was no sharing of information and research via the internet, the global population was around one third of what it is today so there were fewer people to work on technology research, and so on.

And forecasts about what won't happen which turn out wrong (false negatives if you like) don't tend to be viewed in the same way as forecasts about what will happen which turn out wrong (false positives). Compare how we view "there is a world market for maybe five computers" and "640Kb ought to be enough for anybody" (we tend to laugh) with all the false predictions about flying cars and so on for example (we tend to be more rueful). I was just thinking this morning about Clifford Stoll's 1995 book Silicon Snake Oil about the over-hyping of the internet, and wondering why someone would commit to a negative position in such a way.


People with a dualist view, whose fundamental belief treats the human mind as a supernatural force, will make illogical arguments like this.

Basically he is saying "we can't get AGI via deep learning because brain simulation is really hard."

Deep learning, brain simulation, and AGI are all related, but different concepts.

Take something like deep learning and combine that know-how with agent-based developmental AGI approaches that have been in progress for some time. We don't need any more computing power than we have. We need a few more years of solid research and engineering and then you will see surprisingly general and human-like capabilities.

Take a look at the AGI-15 research.


Thank you for posting this; I was beginning to go insane with all the deep learning hysteria.

I would encourage everybody here to purchase a neuroscience textbook. Then, if you see an AI researcher, bash them over the head with it.


>I would encourage everybody here to purchase a neuroscience textbook. Then, if you see an AI researcher, bash them over the head with it.

While I fully support doing so, it must be said: the active cabal of researchers behind "deep learning" don't go around saying their theories are neurologically accurate, only that their ANN models perform well on supervised classification and unsupervised feature-learning tasks.

I would indeed say that if you want to create "AI" or a "singularity", you need to know a lot more about the inner mechanism of what you're actually doing, what inference problem you're solving and how, than current deep-learning theories allow for.


There is some reasonable scientific middle ground between being an unreflective Deep Learning fanatic at one end of the spectrum and being an AGI denier at the other end. In typical internet fashion, we're mostly exposed to extreme and exaggerated viewpoints that are not above distorting a few things in order to get their message across, and this article is no exception.

Just to grab one strategy used in these articles (again, on both ends of the opinion spectrum): comparing apples to oranges. In this case, it's the neuronal firing rate. The biological brain uses firing rate to encode values, but that's rarely used in silico outside biochemical research because we have a better way of encoding values in computers.
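As a toy illustration of that point (a sketch, not a claim about how any particular simulator works), the same scalar that a rate code conveys with many spikes over a time window is just a single float in silico:

    import random

    # Rate coding: a neuron signalling the value 0.3 by firing with
    # probability 0.3 per 1 ms time step over a 1 second window.
    value = 0.3
    steps = 1000
    spikes = sum(1 for _ in range(steps) if random.random() < value)
    rate_coded = spikes / steps    # noisy estimate of 0.3, costs ~1000 time steps

    # The functional equivalent in a computer: just store the number.
    direct = 0.3                   # one float, no time window, no noise
    print(rate_coded, direct)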

One side (in my opinion, reasonably) asserts that this is an implementation detail where it makes sense to model a functional equivalent instead of strictly emulating nature. This view has gained credence from the fact that ANNs do work in practice, and even more importantly, several different ANN algorithms seem to be fit for the job. A lot of people believe this bodes well for the "functional equivalence" paradigm, not only as it pertains to AI, but also as it relates to the likelihood of intelligent life elsewhere in the universe.

The other side asserts that implementation details such as the neuronal firing rate are absolutely crucial and cannot be deviated from without invalidating the whole endeavor. They believe (and I'm trying to represent this view as fairly as I can here) that these are essential architectural details which must be preserved in order to preserve overall function. And since it's not feasible to go this route in large-scale AI, the conclusion must be that AGI is impossible. A lot of influential people believe this, including Daniel Dennett if I recall correctly.

The article is very close to the latter opinion, but it goes one step further in riding the firing rate example by not even acknowledging the underlying assumption and jumping straight to attacking the feasibility of replicating the mechanism.


Well...

ANNs may turn out to have enduring usefulness, but more likely is that better (more accurate, more efficient) tools will be found. I see no evidence that ANNs are optimal for the problem space they tackle, and non-optimal techniques tend to be quickly forgotten once bettered. Such is progress.

The only reason anybody seems to think ANNs have some kind of assured longevity is because of the magical word "neural" in the name.

So I agree. It would be wrong to conclude that "AGI cannot exist" on the basis of differences between ANNs and the human nervous system. On the other hand, if and when AGI does happen, ANNs may not have a major role.


Awesome post. I'm not sure I understand the estimates of the computational power of various brain parts (does that even make sense?), but overall I think the author has the right idea about how far away we are from simulating the brain.

Not only does planet Earth train trillions of neural nets in parallel, but it's running a very elegant evolutionary algorithm for hyperparameter selection among all possible units.

Some interesting (and somewhat depressing) tidbits of neuroscience that further corroborate the futility of full brain simulation:

- Collision avoidance in locusts is entirely implemented in a single neuron [1]. All the computation is performed by the complex nonlinear integration of action potentials across the dendritic tree. The geometry of the dendritic tree is really important.

- Neurons with mechanosensory receptors can actually fire in response to the dilation of a blood vessel pressing up against them [2]. The vascular system innervates throughout the brain like a secondary connectome, and is implicated in information processing as well. Good luck simulating blood flow in 400 miles of elastic tubing.

- Every voltage-gated channel or patch of cellular membrane basically acts as a leaky integrator (see the toy sketch after this list), and digital systems are pretty bad at this kind of operation. Analog circuits/neuromorphics are useful for this (as well as the asynchronous dynamics), but good luck fabricating a chip that operates in 3D and integrates an arbitrary nonlinear equation.

- Simulations often involve injecting random stimuli into the network, or showing it images through some approximation of the retinal ganglion cells + V1 cortex. However, the brain has evolved to operate in a closed sensorimotor loop, so the brain's activity ought to influence subsequent perception (and brain state). The inputs one feeds in through the eyeballs and thalamus play a large role in the dynamical state. This is one of the main arguments for embodied cognition approaches.
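To make the 'leaky integrator' point above concrete, here is a toy Euler-step sketch of a single leaky compartment (illustrative constants, not fitted to any real membrane or channel):

    # dV/dt = -(V - V_rest) / tau + I / C, integrated with a small fixed step.
    V_rest, tau, C = -70.0, 20.0, 1.0      # mV, ms, arbitrary capacitance units
    dt, V, peak = 0.01, -70.0, -70.0       # tiny step means many updates per ms
    for step in range(100_000):            # 1 second of simulated time
        t = step * dt
        I = 1.5 if 200.0 <= t <= 800.0 else 0.0    # injected current pulse
        V += dt * (-(V - V_rest) / tau + I / C)
        peak = max(peak, V)
    print(peak)   # approaches V_rest + I * tau / C = -40 mV during the pulse

That is one compartment of one neuron; multiply by thousands of compartments per cell and ~10^11 neurons to see why digital hardware struggles.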

Not all is hopeless, though:

- I think neuroscience and deep learning models complement each other well. The success of techniques like dropout-based regularization and ReLU in practical AI tasks has prompted neuroscientists to actively look for how biology solves rectification and un-learning (a tiny sketch of both follows after this list).

- If they can get their act together and fire their middle management, I could see Cisco making a huge contribution to deep learning by building faster switches. Nvidia has done a good job with pushing GPU Flops, and the bottleneck right now is on the network side.

- DNNs and other data-driven generative models are really cool because they "replay" the human condition back to us. Images generated by DeepDream have surprising amounts of "ordered randomness" in comparison to fractal-based images. Perhaps instead of trying to build this generative process from inside-out (i.e. from neurons to minds), it might be interesting to see what happens if we train a DNN to mimic human behavior, and see what internal states self-organize as a result. The film "Ex Machina" mentions Jackson Pollock and the use of Search Engine Data to capture how people think, which I thought was brilliant.
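For reference on the dropout/ReLU point above, a minimal sketch of both operations (plain Python, no framework assumed):

    import random

    def relu(x):
        # Rectification: pass positive input through, silence the rest.
        return max(0.0, x)

    def dropout(activations, p=0.5):
        # Randomly silence units during training, rescaling the survivors
        # so the expected activation stays the same (inverted dropout).
        return [0.0 if random.random() < p else a / (1.0 - p) for a in activations]

    print([relu(x) for x in (-1.0, 0.5, 2.0)])   # -> [0.0, 0.5, 2.0]
    print(dropout([1.0, 1.0, 1.0, 1.0]))         # e.g. [2.0, 0.0, 2.0, 0.0]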

Citations: [1] http://www.frontiersin.org/10.3389/conf.fphys.2013.25.00090/...

[2] http://www.ncbi.nlm.nih.gov/pubmed/17913979

[3] http://www.cl.cam.ac.uk/~jgd1000/metaphors.pdf

[4] http://hub.jhu.edu/2015/04/02/surprise-babies-learning


Jesus. This post exhibits classic hype.


I spent a nontrivial amount of time researching my comment. What do you find hyped?


We have no idea how far we are from simulating a brain. Because we have no idea how exactly computation is performed in a brain. Neither in a locust's brain, nor in a human brain.

On the other hand, we have already built systems (ANN based) which can do non-trivial things: play Atari games, tell a cat from a dog, convert speech to text, translate from one language to another, etc.

This points to the strong possibility that all that biological complexity in neurons is completely irrelevant to the principles of intelligence, just as the fact that a modern transistor needs 500 parameters and a ton of complicated equations to describe its physical operation is irrelevant to its main function: a simple on/off switch. If we want to simulate a computer, using more complicated transistor models gains us nothing.
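As a toy version of that abstraction argument (a sketch, not any real circuit simulator): to simulate digital logic, a transistor reduced to an on/off switch is all you need, and the 500 device parameters buy you nothing at this level.

    # A NAND gate with transistors modeled as plain switches.
    def nmos_conducts(gate: bool) -> bool:
        # Switch-level model: the transistor is simply ON when its gate is high.
        return gate

    def nand(a: bool, b: bool) -> bool:
        # Two NMOS switches in series pull the output low only when both conduct.
        return not (nmos_conducts(a) and nmos_conducts(b))

    for a in (False, True):
        for b in (False, True):
            print(int(a), int(b), int(nand(a, b)))   # reproduces the NAND truth table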




