It is, frankly, embarrassing. Even in 2011.
Title was fine. Why it was posted to HN is a different question.
Lectures, assignments, and MATLAB code are all available online: http://www.cs.cmu.edu/afs/cs/academic/class/15883-f15/
The readings page alone is a treasure trove of background texts in computational neuroscience theory going back to the 1970s.
Hence colloquialisms like "I need to let off steam" or "I am under so much pressure".
It turned out to be an analogy so far removed from reality that it was useless.
I wonder if we are making the same mistake with computers as we know them today?
"I really just need to reset and reboot, y'know."
So when they say 'Computational' Neuroscience, they're not particularly referring to using computers, but to analyzing neurological systems with computational analytical techniques.
"Stress", "strain", and "tension" were all taken from mechanical physics.
Would I be burnt at the stake if I were to suggest that these concepts, as we use them in psychology, are more than just isomorphic to the way they're used in physics? That perhaps we are structures, and the stress occurring in our abstract social realm often manifests in the physical realm as creases on the forehead and chewing of the fingernails.
I mean, we're made of matter just like the living tree is. Shouldn't we go through the same physical stresses at every level of our being?
We're the Rube Goldberg machines of structures here. Really impressive skyscrapers that haven't quite yet noticed they can be anything and everything, given the metaphor for it.
And so what's so different between a steam engine "letting off steam" and a load-bearing structure "letting off tension"? Well, look up the Newtonian-age formulas for calculating pressure and tension and you tell me the difference.
Not much of one, is there?
But we're talking about electricity here, right? Tooootally different substance! Oh wait, there is voltage. How does that definition go again?
> One volt is the amount of pressure required to cause one ampere of current to flow against one ohm of resistance.
Oh my... back in pressure land. Or was that psychology land?
I'm under a lot of voltage attempting to convey this vast homology to you.
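Since I'm the one who invoked the formulas, here they are side by side (standard textbook forms; the hydraulic-electric analogy is an old teaching device, not my invention):

    P  = F / A      (mechanical pressure: force spread over an area)
    ΔP = Q * R_h    (fluid flow: pressure drop = flow rate * hydraulic resistance)
    V  = I * R      (Ohm's law: voltage = current * electrical resistance)

Same form, different substance: swap pressure drop for voltage and flow for current, and the equations don't even notice.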
Anyway, my point is that yes, we're not computers, but also yes, we are computers.
I leave an open question for someone smarter than me: what is the "pressure" of data science?
It must be a ratio between a metaphorical force applied and a metaphorical surface area on which to act.
I'm excited to hear the answer.
But if you want to use the metaphors to capture the core of what the brain does, then no, I don't think either is much good.
I would put much more emphasis on learning and surprise. Not the big kind of learning, like a new language, but learning what to expect in situational patterns: making predictions of what might happen, and registering surprise when what really happened doesn't fit any of them.
But that does not have a good metaphor from ordinary life.
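If you want the idea without a household metaphor, here's a minimal sketch (my own illustration, not anything from the article) of surprise in the information-theoretic sense: the surprisal of an event is the negative log of the probability your internal model assigned to it.

    import math

    def surprisal(predicted_probs, observed):
        # predicted_probs maps possible outcomes to the probabilities the
        # model assigned them before the fact; observed is what happened.
        # The near-zero fallback covers "did not fit anything the model had".
        p = predicted_probs.get(observed, 1e-9)
        return -math.log2(p)  # surprise, in bits

    model = {"door opens": 0.70, "phone rings": 0.29, "dog talks": 0.01}
    print(surprisal(model, "door opens"))  # ~0.51 bits: expected, barely surprising
    print(surprisal(model, "dog talks"))   # ~6.64 bits: very surprising

Learning, in this picture, is just nudging those probabilities so that tomorrow's surprisal is lower.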
The paper largely consists of smug statements such as:
> Despite huge efforts and large budgets, we have no artificial systems that rival humans at recognizing faces, nor understanding natural languages, nor learning from experience
Progress in these areas is very rapid; I hope the author won't be too disappointed in the outcome.
Same goes for translation systems.
Most current systems (by their very design) lack the dynamical representation capability necessary for modeling interactions in and of the world. I hypothesize this is important for AI that actually gets what it's dealing with.
To build a machine that can fly, we need to build a machine that can flap its wings.
To build a car that moves, we must build a machine that can lift its two feet in alternating motion.
To build a camera that sees, we need to build a lens that can flex itself to change focus.
Sure, the biology gives us some clues, but it may not be the most useful way to view what is going on.
To really understand how insects fly, it is helpful to build and analyze various machines that can flap their wings, since aerodynamics is complex and the equations we use to model airplanes don't really work at that scale.
To understand exactly how humans walk, it really helps to build a machine that can lift its two feet in alternating motion and analyze how all the minor forces interact to make it work. It's immensely useful when we're building e.g. powered ankle prosthetics, and bipedal movement has advantages in some terrain, so we also want machines to be able to do that (e.g. https://www.youtube.com/watch?v=rVlhMGQgDkY)
For understanding human eyes, experimenting with lens systems that change focus was how we got to "augmented vision", e.g. humans with spectacles.
The same goes for analyzing and understanding how human brains work. It's also valuable to think about minds in general, but for many purposes we care about a particular mind, and all the individuals I currently care about are Homo sapiens, not machines; so we need to understand their brains.
Most frontier neuromorphic research today neither focuses on creating a "general artificial intelligence" by copying the human brain nor assumes that neuromorphic computing is the discipline most likely to achieve it. Instead, the focus is on optimizing hardware for neural networks. If we want a high number of weights (>10^14), low energy consumption, and a small spatial footprint, we would like to give up running 1000-GPU clusters, which only a handful of companies have. Neuromorphic computing only suggests that we need better hardware (which may or may not require working with spiking neural nets as a consequence) in order to make AI hardware scalable, nothing more.
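For anyone who hasn't met the term: a spiking neural net is built from units roughly like the toy leaky integrate-and-fire neuron below (my own illustrative sketch, not any particular chip's design; neuromorphic hardware implements dynamics like these directly in silicon):

    def lif_neuron(input_current, dt=1.0, tau=20.0,
                   v_rest=0.0, v_threshold=1.0, v_reset=0.0):
        # Toy leaky integrate-and-fire neuron: the membrane voltage leaks
        # toward rest, integrates input current, and emits a spike (then
        # resets) whenever it crosses threshold.
        v = v_rest
        spike_times = []
        for step, current in enumerate(input_current):
            v += dt * (-(v - v_rest) + current) / tau  # leak + integrate
            if v >= v_threshold:                       # fire and reset
                spike_times.append(step * dt)
                v = v_reset
        return spike_times

    # Constant supra-threshold drive yields a regular spike train.
    print(lif_neuron([1.5] * 200))

The attraction for hardware is that information lives in sparse spike timings rather than dense matrix multiplications, which is where the hoped-for energy savings come from.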
One of the biggest challenges in creating an intelligent system is defining what intelligence is. If you can define it in a way that you can measure, you can track progress. Today there is no clear definition.
That would be paradoxical: the challenge can't be to define the challenge. Likewise, philosophy presupposes a notion of philos and sophia. Psychology is a related field that can help refine this notion, isn't it?
https://arxiv.org/abs/1604.00289 -- Building Machines that Learn and Think like People
http://rsif.royalsocietypublishing.org/content/13/122/201606... -- Active Inference and Robot Control: a case-study