How brains are built: Principles of computational neuroscience (arxiv.org)
144 points by blopeur on Apr 23, 2017 | 31 comments



Here is my problem with the article: it has the title of a magnum opus, a title that suggests expansive, authoritative content containing both rigorous theory and seminal empirical research. The article is not at all deserving of such a title, and I cannot think of a single neuroscientist I know who would put such a title on even their most ambitious work, much less this loosely thought-out treatment.

It is, frankly, embarrassing. Even in 2011.


It was an introductory article written for a private foundation that gives out research grants.

The title was fine. Why it was posted to HN is a different question.


If the context of the title is to capture the goals of the private funding foundation, and the article's contents are meant to describe some potential avenues of investigation or outstanding problems, then that does make me reconsider the opinion I expressed in the grandparent. Thanks for sharing.


Dave Touretzky's course Computational Models of Neural Systems should be checked out by anyone with an interest in the topic.

Lectures, assignments, and MATLAB code are all available online: http://www.cs.cmu.edu/afs/cs/academic/class/15883-f15/

The readings page alone is a treasure trove of background texts in computational neuroscience theory, starting from the 1970s.


In the 1900s, people used to think the mind (and, in a way, the body) worked like a steam engine. The steam engine was used as the analogy in part because it was the nearest and most technologically advanced closed input/output system available (and, importantly, one that most people could grasp and talk about).

Hence colloquialisms like I need to "let off steam" or "I am under so much pressure".

It turned out to be an analogy so far removed from reality that it was useless.

I wonder if we are making the same mistake with computers as we know them today?

"I really just need to reset and reboot, y'know."


From the Abstract: "...computational science 'is no more about computers than astronomy is about telescopes.'"

So when they say 'computational' neuroscience, they're not referring to using computers per se, but to analyzing neurological systems using computational analytical techniques.


A salient difference from astronomy is that computational neuroscience is typically concerned with describing neural systems in terms of information processing, and our principal technological example of an information-processing system is the computer. So while there's a distinction between "computational" as (a) a tool used for analysis, (b) a methodology or model employed to describe a system, and (c) a statement about a property of the system under study, computational neuroscience refers to at least (a) and (b), and often all three. This isn't the case with astronomy, because we aren't typically using telescopes to study how stars bend and collect light like telescopes (although of course we sometimes do).


It's just an analogy.


Are you looking for the perfect metaphor?

"Stress", "strain", and "tension" were all taken from mechanical physics.

Would I be burnt at the stake if I were to suggest that these concepts, as we use them in psychology, are more than just isomorphic to the way they're used in physics? That perhaps we are structures, and the stress occurring in our abstract social realm often manifests in the physical realm as creases on the forehead and chewing of the fingernails.

I mean, we're made of matter just like the living tree is. Shouldn't we go through the same physical stresses at every level of our being?

We're the Rube Goldberg machines of structures here. Really impressive skyscrapers that haven't quite noticed yet that they can be anything and everything, given the metaphor for it.

And so what's so different between a steam engine "letting off steam" and a load-bearing structure "letting off tension"? Well, look up the Newtonian-age formulas for calculating pressure and tension and you tell me the difference.

Not much of one, is there?

But we're talking about electricity here, right? Tooootally different substance! Oh wait, there's voltage, though. How does that definition go again?

> One volt is the amount of pressure required to cause one ampere of current to flow against one ohm of resistance.

Oh my... back in pressure land. Or was that psychology land?
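
Since I'm daring you to look the formulas up, here they are side by side (my addition; just the textbook forms, plus the standard hydraulic analogy for Ohm's law):

    P = F / A        pressure: force per unit area
    sigma = F / A    mechanical stress: same form, same units (pascals)
    V = I * R        Ohm's law
    dP = Q * R_h     hydraulic analogue: pressure drop = flow times resistance

Same algebraic shape all the way down; only the substance being pushed changes.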

I'm under a lot of voltage attempting to convey this vast homology to you.

Anyway, my point is that yes, we're not computers, but also yes, we are computers.

I leave an open question for the one smarter than me: What is the "pressure" of data science?

It must be a ratio between a metaphorical force applied and a metaphorical surface area on which to act.

I'm excited to hear the answer.

* http://www.humanstress.ca/stress/what-is-stress/history-of-s...


Well, metaphors are like perspectives. The steam engine perspective has many useful aspects to it, even today. But it has more limits than the computer metaphor.

But if you want to use the metaphors to capture the core of what the brain does, then no, I don't think either are much good.

I would put much more emphasis on learning and surprise. Not the big kind of learning, like a new language, but learning what to expect in situational patterns: making predictions of what might happen, and being surprised when what really happened did not fit anything.

But that does not have a good metaphor from ordinary life.
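
It does have simple mechanics, though. Something like this toy delta-rule update (my own illustration; the learning rate and the event stream are made up):

    # Toy surprise-driven learner: keep an expectation, measure surprise
    # as prediction error, and update in proportion to it.
    alpha = 0.1          # learning rate (arbitrary)
    expectation = 0.0    # current belief about what will happen

    for observed in [1.0, 1.0, 1.0, 0.0, 1.0]:    # made-up stream of events
        surprise = observed - expectation          # prediction error
        expectation += alpha * surprise            # learn only from the mismatch
        print(f"saw {observed}, surprise {surprise:+.2f}, expect {expectation:.2f}")

The "surprise" here is just the prediction error, and learning happens only in proportion to it.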


The article says nothing about how brains are built, nor does it mention any principles of computational neuroscience.

The paper largely consists of smug statements such as:

> Despite huge efforts and large budgets, we have no artificial systems that rival humans at recognizing faces, nor understanding natural languages, nor learning from experience

Progress in these areas is very rapid; I hope the author won't be too disappointed by the outcome.


Artificial systems currently cannot rival humans at recognizing images, understanding natural languages, or learning from experience, despite all the great progress that has been made. Great progress doesn't mean we're there yet.


I had to check if you were the author of the paper. You seem to think exactly the same way.


If you actually try current vision systems out on real raw video data as opposed to clean datasets of "good" photos pre-selected by humans, you'll see that they are terribly far from human performance.

Same goes for translation systems.

Most current systems, by their very design, lack the dynamical representation capability necessary for modeling interactions in and of the world. I hypothesize this is important for AI that actually gets what it's dealing with.
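
By "dynamical representation" I mean roughly this: an internal state that evolves with the input stream, so the representation reflects history rather than a single frame. A minimal sketch (my own toy illustration with random weights, not any actual system):

    import numpy as np

    rng = np.random.default_rng(0)
    W_in = 0.5 * rng.normal(size=(8, 3))    # input weights (arbitrary)
    W_rec = 0.3 * rng.normal(size=(8, 8))   # recurrent weights (arbitrary)

    state = np.zeros(8)                      # internal state carried across frames
    for frame in rng.normal(size=(5, 3)):    # made-up stream of observations
        # the new state depends on the old state, not just the current frame
        state = np.tanh(W_in @ frame + W_rec @ state)

    print(state)   # encodes the history of frames, not one snapshot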


I know it was published in 2011, but to extend the logic in this article:

To build a machine that can fly, we need to build a machine that can flap its wings.

To build a car that moves, we must build a machine that can lift its two feet in alternating motion.

To build a camera that sees, we need to build a lens that can flex itself to change focus.


If you are talking about building AI, maybe you have a point. But it's pretty clear the domain of discourse here is computationally understanding the brain, in which case I think it is prudent to actually understand the brain.


What you really want to understand is the abstract model the brain uses to think; then you can go build that using other tools that may or may not look like a brain.

Sure, the biology gives us some clues, but it may not be the most useful way to view what is going on.


Your examples have a point in them:

To really understand how insects fly, it is helpful to build and analyze various machines that can flap their wings, since aerodynamics is complex and the equations we use to model airplanes don't really work at that scale.

To understand exactly how humans walk, it really helps to build a machine that can lift its two feet in alternating motion and analyze how all the minor forces interact to make it work. That's immensely useful when we're building, e.g., powered ankle prosthetics; and since bipedal movement has advantages in some terrain, we also want machines to be able to do it (e.g. https://www.youtube.com/watch?v=rVlhMGQgDkY).

For understanding human eyes: experimenting with lens systems that change focus is how we got to "augmented vision", e.g. humans with spectacles.

The same goes for analyzing and understanding how human brains work. It's also valuable to think about minds in general, but for many purposes we care about a particular mind, and all the individuals I currently care about are Homo sapiens, not machines; so we need to understand their brains.


Does the author claim that AI should be neuromorphic?


I haven't read the article, but I wanted to comment specifically on your question. As a person working on neuromorphic computing, I would like to make something clear:

Most frontier neuromorphic research today neither focuses on creating a "general artificial intelligence" by copying the human brain, nor holds that neuromorphic computing is the discipline most likely to achieve it. Instead, the focus is on optimizing hardware for neural networks. If we want to achieve a high number of weights (>10^14), low energy consumption, and spatial shrinkage, we would like to give up running 1000-GPU clusters, which only a handful of companies have. Neuromorphic computing only suggests that we need better hardware (which may or may not require working with spiking neural nets as a consequence) in order to make AI hardware scalable, nothing more.
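
For anyone unfamiliar with spiking neural nets: the basic unit can be as simple as a leaky integrate-and-fire neuron. A toy sketch (all constants are arbitrary illustration values, not anything from real neuromorphic hardware):

    # Leaky integrate-and-fire neuron, Euler-integrated.
    tau, v_rest, v_thresh, v_reset = 20.0, 0.0, 1.0, 0.0   # arbitrary constants
    dt, v = 1.0, 0.0
    spikes = []

    for t in range(100):
        current = 0.06                               # constant input drive (made up)
        v += dt * (-(v - v_rest) / tau + current)    # leak toward rest, plus input
        if v >= v_thresh:                            # threshold crossing -> spike
            spikes.append(t)
            v = v_reset                              # reset membrane after spiking
    print(spikes)                                    # spike times

Spikes are sparse events in time, which is the property that lets neuromorphic hardware trade dense matrix multiplies for event-driven updates.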


On the mind side, there is also the Computational Theory of Mind (CTM): https://plato.stanford.edu/entries/computational-mind/


And Memory Evolutive Neural Systems (MENS) https://www.ncbi.nlm.nih.gov/labs/articles/26193173/


I like the mix of philosophy, biology, and computation: philosophy tries to describe what thinking is, biology tries to understand how it happens, and in computation we try to replicate it.


Psychology is what tries to describe thought and emotions. Philosophy tries to describe what to think about.


I said it right: psychology is not involved. Thoughts and emotions are emergent properties. But you need to define the basics, such as what intelligence is and what thinking is, and you only do that with philosophy. Philosophy is not only about what to think about.

One of the biggest challenges in creating an intelligent system is to define what intelligence is. If you can define it in a way that lets you measure it, you can track progress. Nowadays there is no clear definition.


> One of the biggest challenges in creating an intelligent system is to define what intelligence is

That would be paradoxical: the challenge can't be to define the challenge. Likewise, philosophy presupposes a notion of philos and sophia. Psychology is a related field that can help refine this notion, isn't it?


The paper is not about computational neuroscience, but about the brain in general. For those interested, there is a great book with nearly that title, Principles of Computational Modelling in Neuroscience [1]. Also, the free Book of GENESIS [2] has an excellent short introduction to computational neuroscience.

1. https://www.amazon.com/Principles-Computational-Modelling-Ne...

2. http://www.genesis-sim.org/iBoG/iBoGpdf/index.html


It would be nice if the article's title mentioned its publication date. I wrote a comment criticizing the article for ignoring a number of important papers published since 2010 (the most recent year it cites), and then had to delete it.


If you have the references handy, those recent papers would still be interesting to folks (like myself) who aren't in the field or up to date, but are interested in the topic!


Hey guys, I found the non-crappy papers:

https://arxiv.org/abs/1604.00289 -- Building Machines that Learn and Think like People

http://rsif.royalsocietypublishing.org/content/13/122/201606... -- Active Inference and Robot Control: a case-study


The origin of the quote in the abstract is disputed: many people attribute it to Dijkstra, but its true origin is unclear.




