Ask HN: Is neuroscience-inspired machine learning the next big thing?
81 points by hsikka 23 days ago | 45 comments
In their article and review paper from last year (https://deepmind.com/blog/ai-and-neuroscience-virtuous-circle/), the team at DeepMind indicated that while neuroscience inspired the first generation of artificial neural networks, the two fields aren't collaborating much anymore.

I think formalizing computational paradigms in the brain and then building new models and topologies could be huge. What do you think?




No. Or it depends on what you mean by inspired. There is no need to build airplanes with flapping wings, but you could still say that human flight is inspired by bird flight. When we look at the brain, we have no way of disambiguating which properties are implementation details and which properties play an important role in learning. We have made much more progress understanding learning with the bottom-up approach, where we work out from first principles what kinds of computations enable us to create certain behaviors. Connections to neuroscience are mostly interesting parallels that are found post hoc. We don't even know if the human brain is actually good at what it's doing.


>> There is no need to build airplanes with flapping wings,

As I understand it, birds don't need to flap their wings to fly. Many birds can glide for long distances. They flap their wings to give themselves a push, to get off the ground, and so on, but not to stay aloft. In other words, airplanes do work on the same principles as birds; they just employ them in a different manner.

Similarly, the whole idea that we can reproduce human intelligence using computers is based on an understanding of human intelligence as computation, and of the brain as a computational device [1]. Without this assumption, AI would have been very difficult to justify, and I do mean AI in all its forms, from its beginnings with the Dartmouth conference and what can be called "McCarthy's project", to modern days.

For example, for most of the history of AI, the main thrust of research was on propositional and first order logic as models of human reasoning. The current wave of deep learning itself is predicated on the idea that the human brain is a kind of computer and so it can be simulated by a digital computer. The connectionists are just a little more literal in that sense, than most other AI people.

But, yes, absolutely, we are totally trying to make artificial minds that behave just like human minds, that "flap their glia like brains" or whatever. The only problem is that we don't actually have a very good idea how human brains work, let alone the minds they produce.

_______________

[1] These are the main ideas behind cognitive science. See the wikipedia article:

https://en.wikipedia.org/wiki/Cognitive_science


Hummingbirds flap their wings at 60 Hz or they drop like a rock. Some flying birds cannot glide at all. Flap frequency is directly related to the size of the bird, so that there's optimal matching of Reynolds number effects.


I don't know what Reynolds number effects are.

Hummingbirds are a special case, if I understand correctly; they fly like insects, most of which can't glide.

So maybe the analogy about planes should be with insects, not birds? "Planes don't flap their wings like insects".


This is a cop-out, to be honest.

Just because you can make an airplane that works bereft of the working principles of a bird does not mean that the same fundamental principles of the biological system cannot lead you to further breakthroughs.

In fact, all your analogy really suggests, taken to its logical conclusion, is that we shouldn't look at an already-created system to solve new problems. You could just as easily say "There is no reason to build a helicopter or hovercraft like a plane." At the end of the day, the same working principles are at work, and there is value to be found in the process by which the essential characteristics of one design are distilled and modified to give birth to another.

A hovercraft can be inspired by a turbofan. A helicopter or a bird can lead one to the idea of fixed-wing flight, just as the fixed wing can lead you to rotary flight.

In short, it is a shallow person who stops looking because airplanes don't flap.


> There is no need to build airplanes with flapping wings, but you could still say that human flight is inspired by bird flight.

This is a super interesting analogy and it's changed my perspective a bit. But it all depends on your end goal. You say:

> We don't even know if the human brain is actually good at what it's doing.

What is it doing? Do we want our AIs to maximize learning or just human-like behavior? If it's the latter, I believe you absolutely want to look at neuroscience and emulate the human brain. Of course, airplanes are super good at flying, but they look nothing like birds.


I've come to think of machine learning as an engineering approach for building more scalable statistical systems. Computational neuroscience seems like it does more math modeling for understanding of brain function, but still might use machine learning methods as part of its research.

Also, good is subjective, and we don't have wetware in our engineering toolchain anyway. Loosely coupled metaphors seem to be popular and effective due to ambiguities like this.


https://www.youtube.com/watch?v=Fg_JcKSHUtQ

Nature almost always does it better; it's the OG.


Moreover, we don't know if we can get the brain's cool features without the (possible) problems that humans have.


I went to the GOTO Copenhagen conference a few days ago and was quite taken aback by a keynote from Oshi Agabi. His company Koniku is developing what they call "neurotech": chipsets composed of actual biological neurons that can be programmed.

He was showing various prototypes of a product they are developing that is used to detect smells. It's a little box that keeps a certain stable temperature. Inside the box, they connected cells with various smell receptors to this synthetic "brain", also modifying the DNA in the smell receptor cells to make them hyperefficient in the process.

It seemed like they could train these networks in a similar way to machine learning models, but the networks had trouble remembering over time, so now they were experimenting with using neurotransmitters (emulating feelings) to persist the changes.

He said their customers were various American 3-letter agencies.

Their website is at https://koniku.com/ if you're curious.


Very interesting. I’ve been thinking about doing an open-source implementation of essentially this; I think wetware computing could be really big.


The old Dijkstra quote about submarines swimming comes to mind. Getting hints from nature is sometimes (although not always) helpful when we're just starting out in a field, and it's great when we've mastered that field and are just looking for the last 5%, but it doesn't help very much in between the two.

I don't think we understand what the human brain's doing, on a semantic level, well enough to really get hints from it yet. Last I heard we understand (on a functional can-reproduce-in-silico level) most of how flies and rats can see, how snails figure out whether to munch or not, and a bit of how rats navigate the world. We've mapped a worm's connectome but don't really understand it. Unless I'm wrong (and I'd love links to any research to the contrary) we're miles and miles away from understanding most of what the human brain does.


That doesn't mean we're incapable of creating whole new concepts that are merely inspired by the architecture of the brain, or by other natural systems we don't understand, even if the result ends up bearing little relation to what inspired it.


“The question of whether a computer can think is no more interesting than the question of whether a submarine can swim.”

― Edsger W. Dijkstra


That's the one! Although if you see fish swimming, it might give you some ideas about how a submarine might be constructed.


One key "architectural" difference seems to be the asynchronous, event-driven nature of the brain vs. the "full computational" approach of most DNN architectures currently en vogue. There is some research into spiking neural networks; however, they don't yet seem relevant to most current problems (e.g. NLP, vision). Regarding the AGI use case (assuming we would want to model a complete brain to do that), my quick back-of-the-envelope calculation is:

Given that the brain has 100 billion neurons with about 5000 connections each, and each connection holds state (ignoring the various neurotransmitter side effects), at half precision we require roughly 1 PB of memory. Add some factor <10 if you want to include the routing information to make the connections dynamic.

Regarding the computational capacity of an event-driven architecture for AGI based on the brain: assuming each neuron fires on average at 100 Hz on each connection, that would amount to 50 PetaFlops. Effectively, this number could be lower if the average firing rate and connection utilization go down. So looking at the current supercomputer list, there should be some machines around that would be able to do such calculations, assuming we have tools to model the architecture.
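A minimal sketch of that arithmetic in Python (the neuron count, fan-out, firing rate, and bytes per synapse are the rough assumptions from this comment, not measured values):

    # Back-of-the-envelope estimate for memory and compute to model a
    # whole brain at the level sketched above; all inputs are rough guesses.
    NEURONS = 100e9          # ~100 billion neurons
    FANOUT = 5000            # ~5000 connections per neuron
    BYTES_PER_SYNAPSE = 2    # half-precision state per connection
    RATE_HZ = 100            # assumed average firing rate

    synapses = NEURONS * FANOUT                   # 5e14 connections
    memory_pb = synapses * BYTES_PER_SYNAPSE / 1e15
    petaflops = synapses * RATE_HZ / 1e15         # one op per connection per spike

    print(f"memory:  ~{memory_pb:.0f} PB")        # ~1 PB
    print(f"compute: ~{petaflops:.0f} PFLOPS")    # ~50 PFLOPS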


Are you sure about the 100 Hz? I always thought it was more like 1?


What makes you think a neuron firing is equivalent to a single Flop?


I assumed one neuron firing on 5000 connections to be 5000 Flops.


That's an assumption you have to defend.


I always like to use flight as an analogy. When we tried to copy nature, we failed pretty hard, but when we came up with our own way, it worked out best. Computers are not like human brains. We are in the process of inventing a new kind of intelligence for machines. Nature may guide us or give us ideas, but it's not going to solve the problem for us.


I like the flight analogy too. Extending it further, we failed to build flying machines by blindly copying bird design when we had no idea how bird flight actually worked, and we successfully built heavier-than-air powered flying machines when we understood how bird flight worked and realised the power-to-weight ratio etc. was such that we'd need a completely different design to get humans into the air. I believe it'll be a similar case with AI and neuroscience - we're unlikely to get general AI until we have a pretty solid understanding of the way human intelligence works, but when we do we'll probably find that the artificial form of intelligence will have to be designed differently due to inherent constraints.


I'd say unlikely, mainly because analyzing the computational paradigms of the brain is impossibly difficult. Basically it's like trying to reverse engineer a program using a thermometer: this part of the hardware gets warm when x.


I tend to resist arguments that rely on the term "impossibly difficult". At one time it was thought that a chess program playing at Grandmaster level would be impossible because of the combinatorial explosion, but we now have chess programs playing at Grandmaster levels. When chess programs started approaching Grandmaster level, people said programs playing Go at Grandmaster level would be impossibly difficult. But there they are.


I agree with the sentiment, but more rigorously: the current analytical techniques (fMRI, dissection, etc.) simply do not yield sufficient information, from a Shannon entropy POV, to decode the biological 1's and 0's. When/if the physics gets to the point where biological computation becomes decipherable, I have no doubt brains will be accurately modeled in silicon and that these models will contribute greatly to AI development.


There is plenty of work going on along these lines:

1. Numenta.com: founded by Jeff Hawkins of Palm fame.

2. Vicarious.com: founded by Dileep George, a Numenta alumnus.

3. Joshua Tenenbaum's work at MIT.

4. Eric Horvitz at MSR.

Also check out Pentti Kanerva's work on sparse models of the human brain.


Also Chris Eliasmith's team, especially their work on Intel's Loihi.


There was a rather interesting talk at the Forbes Under 30 summit last month on this, "How The Brain is Inspiring AI" [1], but I cannot find the video. Lavin argued that neuro- and cognitive-science inspiration, not derivation, is useful at varying levels of abstraction, i.e. Marr's levels [2]. He showcased three works at the respective levels: Tenenbaum et al. at MIT [3], some Vicarious research [4], and Blake Richards' work towards "DL with segregated dendrites" [5].

[1] https://www.forbes.com/forbes-live/event/a-i-machine-learnin...

[2] http://blog.shakirm.com/2013/04/marrs-levels-of-analysis/

[3] https://cbmm.mit.edu/about/people/tenenbaum

[4] https://www.vicarious.com/2017/10/26/common-sense-cortex-and...

[5] https://arxiv.org/abs/1610.00161


Great ICML presentations by:

Tenenbaum: https://youtu.be/RB78vRUO6X8

Richards: https://youtu.be/C_2Q7uKtgNs


Joscha Bach [0] has some really interesting CCC talks about the reverse, computational theories of the mind. Really good stuff.

[0] http://bach.ai/


I think it’s important to understand what our objective is.

Reading these comments comparing birds and aeroplanes is nonsense, since the two have very different flight behaviours and objectives. Birds' wings flap for the agility to avoid predators. Birds have brains, half of which is reflex, that let them regulate their bodies. Planes don't have predators and don't need cognition.

If our objective is to solve a business problem, machine learning is great for specific tasks and can achieve superhuman results in some cases. We don’t need much neuroscience here.

But if our objective is AGI, it gets interesting because it is very far from current machine learning / deep learning / reinforcement learning. It’s hard to put a definition on AGI at all. What do we want to achieve? To replicate the human brain, of course we need neuroscience. To replicate intelligence without designing the components for bodily function will need an approach which looks at brain circuitry and function but is implemented with a good level of abstraction.

I believe we know a lot more about the brain than the public thinks. Read the journals Cell, Neuron, and Nature Neuroscience, and clinical encyclopaedias, to get an understanding. I don't think we should be replicating things at the neuron level but at a more abstract level of neuronal dynamics, neuronal populations, and networks, with a focus on understanding the developmental biology of the first few years of human life, where learning really happens.


As someone who took this approach to AI, I don’t see another way to get there. We should be reverse engineering the salient aspects. But neuroscience won’t be sufficient; we should be looking at neurogenesis. My bet is that the implementation details won’t matter that much and that the main driver is the architecture and the ability to construct those architectures. Right now we assume fully connected ANNs and then optimize those connections. Neurogenesis certainly doesn’t work that way.


Our understanding of the brain's neural tissue, as well as of what it's actually doing, is so poor, and the needs of what we're doing so different, that there's a greater chance our understanding of neural tissue is improved by developing our understanding of deep learning than the other way around.

To put it in the bird-flight analogy: by constructing airplanes and making flight an exact science, we're able to get a much finer understanding of the principal problems in flight, and come to appreciate the difference between the flight of hummingbirds and bumblebees (which can't glide but must beat their wings frenetically) and that of larger birds (which can glide, more closely resembling plane flight).


Your underlying assumption here is that the brain's material implementation details are a dominant driver of its computational function. I’ve studied networks that suggest the function falls out of the topology (circuitry), not the material implementation details.


How neurons do addressing could be interesting to know. Is it like DHCP? Also, is there data in the signals, like a CAN bus? Trying to draw comparisons may be useful in building some kind of architecture. We know language is the tool or technology of understanding, and that developing it creates context for further understanding, so in some respects language modelling itself is self-learning. You'd then maybe only have to make it goal-oriented. How that mental model works and grows seems to operate at a metaphysical level, but it is represented in that physical pink lump. But as people often point out, you don't need to be a bird to fly. Aren't we going for something beyond our own constraints when we try to make thinking machines? In which case the brain could be a limiting architecture.

Harder to model is how we 'feel'. Pain is felt beyond just heat, pressure sensors, or something unwanted demanding our attention. It's simple to conceive a model for reasoning using symbols, in which case how it's done physically may be irrelevant? The intolerable ache of a tooth, however, is a hard thing to comprehend. Psychologically, pleasure and pain are just multiplication or division, a success rate against our desired state. But understanding and feeling sensors, and incorporating them into our condition, brings in the whole needing-a-body argument. It's probably inaccurate to just call the CNS sensors and the PNS actuators. How the limbic system or the gut interacts with our intelligence needs factoring in too, for the creation of values.

We identified many of the salient aspects of the mind, brain and body a long time ago. I wonder whether it's enough to simulate bodies, or whether sensory robots are more important? I don't feel I would know much if I had sensory deprivation of more than 3 senses. We encode many things onto memories: time, geospatial information, emotional information, sensory information. Our correlation of all that stuff is our subjective understanding of it. It feels naive to assume one small group of people in one small domain will get us there, given all the touchpoints.


I heard a funny saying about neuroscience (by a well-known AGI expert) that went something like this: neuroscience research and its ability to explain AGI is like an engineer taking a microscope to study birds in order to explain the physical laws of flight.


Well, a lot of folks are working on it. I’m not sure if anyone has done anything strikingly successful with e.g. spiking neural networks or something similar.

These guys built a more biologically plausible model and got it to do very simple tasks:

https://pdfs.semanticscholar.org/a5c4/19fcd6ea6f33be067b665e...

Don’t know if their lab has had more success or not on practical tasks but that’s not really the point I guess
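As a rough illustration of what "spiking" means here, a minimal leaky integrate-and-fire neuron in Python (a textbook-style sketch with made-up constants, not the model from the linked paper):

    import numpy as np

    def simulate_lif(input_current, dt=1e-3, tau=0.02,
                     v_rest=0.0, v_thresh=1.0, v_reset=0.0):
        # Membrane potential v leaks toward v_rest while integrating the
        # input; crossing v_thresh emits a spike (an event) and resets.
        # Downstream work only happens at spikes, which is the
        # event-driven property discussed upthread.
        v = v_rest
        spike_times = []
        for step, i_in in enumerate(input_current):
            v += dt * (-(v - v_rest) / tau + i_in)
            if v >= v_thresh:
                spike_times.append(step * dt)
                v = v_reset
        return spike_times

    # Constant drive for 1 simulated second yields a regular spike train.
    print(simulate_lif(np.full(1000, 60.0)))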


I don't know anything about this question, but "neuromorphic photonics" sounds so cyber and I guess it's already kind of real:

https://www.crcpress.com/Neuromorphic-Photonics/Prucnal-Shas...


We don't necessarily need to believe the hype. Neuroscience has not recently discovered anything that can be used reliably in connectionist networks. The prevailing model for learning/plasticity (Hebbian, "fire together, wire together") is an obviously dumb model, and plasticity has proved hard to crack. A lot of people do not believe that the artifacts observed in STDP experiments are fundamental [1]. So the key question that has to be answered before neuroscience can influence deep learning is how plasticity works. With that in mind, all the neuromorphic computing platforms that have been proposed so far seem like premature optimizations.

1. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3059684/
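For concreteness, here is what the plain Hebbian rule mentioned above looks like as code; a minimal sketch in Python/NumPy where the learning rate, decay term, and toy activity vectors are all illustrative assumptions, not taken from any particular neuroscience model:

    import numpy as np

    def hebbian_update(W, pre, post, lr=0.01, decay=0.001):
        # "Fire together, wire together": strengthen W[i, j] in
        # proportion to the coactivity of postsynaptic unit i and
        # presynaptic unit j; a small decay keeps weights bounded.
        W += lr * np.outer(post, pre)
        W -= decay * W
        return W

    # Toy usage: 3 postsynaptic units, 4 presynaptic units.
    rng = np.random.default_rng(0)
    W = rng.normal(scale=0.1, size=(3, 4))
    pre = np.array([1.0, 0.0, 1.0, 0.0])   # presynaptic activity
    post = np.array([0.0, 1.0, 1.0])       # postsynaptic activity
    W = hebbian_update(W, pre, post)

Note that this rule alone has no notion of timing; the STDP work discussed in [1] is about making the update depend on the relative timing of pre- and postsynaptic spikes.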


Neuroscience-inspired machine learning is what's happening now. Other approaches are logic-based and evolutionary inspirations. A nice piece commenting on some of that: https://neurovenge.antonomase.fr/


CPU and memory design have always been intrinsically inspired by neurology. A fatal flaw of humanity is to design new things by drawing from what we know already works.


Yes, but we will have to come to grips with the superior hardware structure of the brain, and with scale, in the process.


They absolutely do collaborate on methods and algorithms.

In terms of theories, the goals are rather different for now.


No, because achieving animal-like, or even insect-like, performance is way too complex, and neuroscience itself is still poorly understood. How do bees know how to dance? There are no supervised learning practices for bees, no schools. How does a bird know how to make a nest? How does a newborn goat know how to walk?


I am surprised Karl Friston is not mentioned. He seems to be the man of the hour regarding AI and neuroscience: https://www.wired.com/story/karl-friston-free-energy-princip...



