Algorithms of the Mind (medium.com/deep-learning-101)
47 points by lebinh on June 7, 2015 | 18 comments



I think this is an incredibly bad way to try to study the mind. An artificial neural net bears only a passing resemblance to a biological one (both have graph connectivity), but a real neuron is a biological structure with complex biochemical inputs and outputs. In addition, it took us twenty or so years to proceed from simple feed-forward neural networks to so-called "deep learning" networks. How shallow such networks are when measured against the complexity of an actual neural system is unknown. We may be standing at the shore of a great ocean with one foot in the water, congratulating ourselves on our understanding.


Yet more and more studies of learning (granted, very basic forms of learning like fear conditioning) provide evidence in support of the connectionist approach to learning, on which machine learning is based. In fact, no other theory has emerged as a popular candidate to replace it, despite the huge strides experimental neuroscience has made since the 60s. If machine learning works so well, it's a valid question whether real brains work similarly.


Nguyen claims that the renaissance in neural networks will provide us with concepts to understand the human brain in an analogous way to how the steam engine allowed us to conceive of entropy.

I'm not convinced it goes in that direction yet, though. Neural networks are loosely biologically inspired to begin with, and the idea of the primate visual system as a deep feedforward network predates the recent machine learning advances by many years.

If that's true, it undermines his entire thesis. What's missing: how a concept in machine learning allowed us to conceptualize something new in neuroscience, rather than just describe a process we have a vague intuition for (still obviously useful).

FWIW, and I'm a little biased here, I would argue that it's (high-level, vague) concepts in neuroscience that have been driving machine learning. There are ways we behave and learn that we've been trying to emulate in machines. Someday it will swing back the other way, but not yet.


While intriguing, it's important to remember that humankind has always compared the mind to whichever recent technology was available - the catapult, the mill, the steam engine, and eventually, computers. While Deep Neural Networks -- unlike mills -- are of course inspired by what seems to be the actual biology of our brains, and the results are fascinating, it's humbling to keep the above in mind.


I can see how one might talk in parallels between the mind and a mill, or steam engine, or a computer. I don't see how it would work with a catapult, even in a historical context. Can you elaborate? Or even better, if you could show a reference to that.


The relevant quote is from Philosopher of Mind John Searle (of "Chinese Room" argument fame):

_Because we do not understand the brain very well we are constantly tempted to use the latest technology as a model for trying to understand it. In my childhood we were always assured that the brain was a telephone switchboard. (‘What else could it be?’) I was amused to see that Sherrington, the great British neuroscientist, thought that the brain worked like a telegraph system. Freud often compared the brain to hydraulic and electro-magnetic systems. Leibniz compared it to a mill, and I am told some of the ancient Greeks thought the brain functions like a catapult. At present, obviously, the metaphor is the digital computer._ (John Searle, Minds, Brains and Science, 44)


The subtitle of the article is "What Machine Learning Teaches Us About Ourselves"; this is backwards. Brain sciences inform ML (in fact, ML techniques are often named after their biological counterparts). A result or finding in ML does not necessarily, or at all, imply anything for neuroscience.

Artificial neural networks do not teach us about biological neural networks, or 'Neuronal Networks', a term a neuroscientist close to me reluctantly uses for contradistinction. We don't need Google's cat research; we need Hubel and Wiesel's cat research.

Let's see: Cheap reference to Kant, check. Vague parallel to the Sapir-Whorf hypothesis, check.

The 'intriguing' mapping that involves 3 ML terms is desperate.

This article appearing on the front page of HN shows how delusional some of today's ML lovers are with respect to neuroscience, the discipline that actually studies human brains.


I wouldn't be so dismissive. The last time neuroscience informed neural networks was in the 1940s.


Frederick Jelinek, a researcher in natural language processing, has a funny quote, "Every time I fire a linguist, the performance of the speech recognizer goes up."

In general, I think a neuroscientist would be a distraction to any ML team. I don't mean to say that neuroscience is what drives ML insight, but if asked to pick which field influences the other most, my choice is clear.


Interesting overview of the recognition/imagination duality, but I dislike the tendency to play the game of "oh that's totally what [famous person] must have meant with his dense prose hundreds of years ago."


The author of this article fails to incorporate two relevant prior explorations of this topic: (1) from the Buddhist perspective and (2) from Wilfrid Sellars' work, in particular "Empiricism And The Philosophy Of Mind". The remarks below pertain to the first; the second is beyond my philosophy-fu to say anything meaningful.

Take the idea "We see with our brains, not with our eyes" as a criticism of the "naive view" that sense data / fabrications are neutral, that they are just "out there", and that it is only when they come into contact with the mind that the mind infuses the raw sense data with desire and aversion; in other words, the view that we are just passive observers of phenomena.

Thanissaro Bhikkhu critiques this idea from the Buddhist perspective:

"040920 Disenchantment & Dispassion \ \ Thanissaro Bhikkhu \ \ Dhamma Talks" https://www.youtube.com/watch?v=k8M-_Msav1Q

He says that on the contrary, desire and aversion are involved a priori in the formation of the fabrications (sense data).

So this is not a new idea. It is a very old idea. The idea that the technology of ML can confirm this particular critique of the naive view is novel (although I'm not convinced it is wise to draw conclusions about the mind in this way, just as I'm not convinced it is wise to draw conclusions about the way evolution operates based on artificial life simulations).


There are two contradictory claims:

1. The brain is like a neural network (which is purely logical; see the sketch below) in the sense of ML.

2. Human brains cannot be explained by purely logical things.
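
To make "purely logical" concrete, here is a minimal sketch in NumPy (random placeholder weights, not any trained model) showing that a feedforward net is nothing but deterministic arithmetic, which is exactly what claim 2 says a brain cannot be reduced to:

    import numpy as np

    def relu(x):
        return np.maximum(x, 0.0)

    def feedforward(x, weights, biases):
        # Each layer is an affine map followed by a fixed nonlinearity;
        # the whole net is a deterministic function of its input.
        for W, b in zip(weights, biases):
            x = relu(W @ x + b)
        return x

    rng = np.random.default_rng(0)
    weights = [rng.standard_normal((4, 3)), rng.standard_normal((2, 4))]
    biases = [np.zeros(4), np.zeros(2)]
    print(feedforward(np.array([1.0, 2.0, 3.0]), weights, biases))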

The author also uses "concept," which is a technical term in computational learning theory with a specific meaning, as if it meant "intuition." How would you present an "intuition" to a neural network? This distinction is swept under the rug. Not to mention all the recent work showing how easily neural networks can be fooled by slightly perturbed, adversarially chosen inputs.
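
For anyone who hasn't seen that work: the effect is easy to reproduce even on a toy model. A minimal sketch (plain NumPy, hand-picked weights, loosely in the spirit of the fast-gradient-sign idea; none of this is from the article):

    import numpy as np

    # Toy logistic classifier with hand-picked (assumed) weights.
    w = np.array([1.5, -2.0, 0.5])
    b = 0.1

    def predict(x):
        return 1.0 / (1.0 + np.exp(-(w @ x + b)))  # P(class 1)

    x = np.array([2.0, 0.5, 1.0])
    print("clean:", predict(x))          # ~0.93, confidently class 1

    # The gradient of the logit w.r.t. the input is just w, so stepping
    # against sign(w) flips the decision. In this 3-d toy the step must
    # be sizable; on high-dimensional inputs like images the same trick
    # works with imperceptibly small per-pixel steps.
    eps = 0.9
    x_adv = x - eps * np.sign(w)
    print("perturbed:", predict(x_adv))  # ~0.27, pushed toward class 0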

There are many grains of salt required for a useful discussion on neural networks. Instead of taking something we have no understanding of and making grand philosophical claims, we should be using the tools we have to understand that thing.


Very much agreed. For one thing, comparing the human brain to a deep neural network leaves out the fact that the human brain mostly performs unsupervised perceptual learning, unsupervised causal induction, and reinforcement learning. None of these resembles the supervised backpropagation used to train most deep ML models.
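
To make the contrast concrete, a stripped-down sketch (NumPy, a single linear unit, illustrative only): a Hebbian-style unsupervised update uses only locally available activity, while a backprop-style update (here the one-unit delta rule) needs an externally supplied target:

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.standard_normal(5)   # presynaptic activity
    w = rng.standard_normal(5)   # synaptic weights
    lr = 0.01
    y = w @ x                    # unit's output

    # Hebbian rule: unsupervised and local --
    # "cells that fire together wire together."
    w_hebb = w + lr * y * x

    # Delta rule (gradient descent on squared error): requires a
    # teacher-provided target, i.e. a supervised error signal.
    target = 1.0
    w_delta = w - lr * (y - target) * x

    print("hebbian:", w_hebb)
    print("delta:  ", w_delta)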


The field of computational cognitive neuroscience studies the mind as a machine learning algorithm.

http://grey.colorado.edu/CompCogNeuro


I was immediately put off when the author trotted out Sapir-Whorf, and not even apologetically: in its strong form! Everything in the article became suspect. S-W is not correct, end of story.


He later mentions the argument against the strong version of S-W.

As for "S-W is not correct": that's interesting; I'm not aware of any arguments countering it.


I'm not going to refer you to Wikipedia. Instead, I'll take the top hit off of Google Scholar for the search term "Sapir Whorf" [1]. The conclusion is in the abstract:

    """
    These findings suggest that the mastery of the English subjunctive is probably quite tangential to counterfactual reasoning in Chinese. In short, the present research yielded no support for the Sapir-Whorf hypothesis.
    """
Every serious study of S-W yields the same result: no evidence.

Now, there is *minute* evidence that languages with very short number words allow students to memorize number sequences more easily: the students literally have less information (in terms of phonemes) to memorize. This sort of thing is actually pretty prevalent, but it is not really what most people are thinking of when they discuss S-W.

Also, the Himba "study" about green is pretty much debunked. If you get a high-quality monitor with good ambient lighting and ask some colleagues to find the differently-colored green square, they'll do so just fine, and quite quickly!

[1] http://www.sciencedirect.com/science/article/pii/00100277839...


Oh god, please stop it with the medium.com, quantamagazine, and other dumbed down TED talk crowd 'news' sites. Can't you please link to the original articles?



