

Natural Language Processing and the human brain  - ohadfrankfurt
http://thetokenizer.com/2012/12/25/understand/

======
dschiptsov
Just two systems is too simple, too naive.

The Society of Mind by Marvin Minsky is a must read for anyone who is trying
to speak about AI.

The notion that there are quick and slow systems is an old one - reflexes vs.
recognition is the simplest example, since recognition is far slower.

But there are not two or three systems; there are... I cannot say how many, but
if we believe that we have distinct "subsystems" to recognize eyes, mouths,
and faces by matching cues, there must be vast hierarchies of such
sub-systems (agencies, in Minsky's terminology).

So, some processes (agencies) are "slow" and some are "fast"; some work with
quick-and-dirty data - a sudden motion catches our attention before
we're able to recognize what's going on.

The notion that each word we see does some "priming" - pre-fetching, in CS
terms - leads to a vastly complex view of how even seemingly simple tasks are
performed.

So, all we can do is recognize familiar (known-in-advance) patterns and
look at their weights, as modern machine translation services do.
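A minimal sketch of that last idea - recognition as summing the learned weights of known patterns, loosely in the spirit of phrase-based translation scoring. All patterns and weights here are invented for illustration:

```python
# Hypothetical pattern weights, as if learned from data.
pattern_weights = {
    "sudden motion": 2.5,  # a fast, quick-and-dirty cue
    "face": 1.8,
    "eyes": 1.2,
}

def score(observation: str) -> float:
    """Sum the weights of every known pattern found in the input."""
    return sum(w for pattern, w in pattern_weights.items()
               if pattern in observation)

print(score("a sudden motion near a face"))  # 2.5 + 1.8 = 4.3
```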

------
pbateman
Two points:

1) Kahneman's model is probably too high level to apply to solving NLP
problems. It's an interesting abstraction, not a road map to human reasoning.

2) System 2 is _Effortful Reasoning_, not just doing math. For example, if
you were engaging System 2 to plan how long a project was going to take,
System 2 would think about what kinds of questions were relevant to generating
an answer (how long have similar projects taken, are there any differences
between this project and the others, etc.) and then think about how to combine
the answers. Computers do not perform this kind of reasoning well at all.

~~~
beering
I think this is quite similar to Moravec's paradox: tasks that are
"natural" to the brain are the most difficult to simulate on a computer,
while seemingly difficult tasks are easier. It makes sense if you
consider that the mechanisms of each are fundamentally different, and each
exploits its own strengths.

Planning how long a project takes given data about similar projects could
readily be done on a computer (e.g. case-based reasoning or other machine
learning). Having seen how bad many people are at planning, there's a good
chance a computer could do just as well as an average human.
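A toy case-based-reasoning sketch of the project estimate: average the durations of the k most similar past projects. The project data, features, and distance metric are all invented for illustration:

```python
past_projects = [
    # (size in features, team size, duration in weeks) - made-up data
    (10, 2, 8),
    (25, 3, 20),
    (12, 2, 10),
    (40, 5, 30),
]

def estimate_duration(size, team, k=2):
    """Average the durations of the k nearest past projects."""
    by_distance = sorted(
        past_projects,
        key=lambda p: (p[0] - size) ** 2 + (p[1] - team) ** 2)
    nearest = by_distance[:k]
    return sum(p[2] for p in nearest) / k

print(estimate_duration(11, 2))  # nearest cases: (10,2,8) and (12,2,10) -> 9.0
```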

------
woodchuck64
It kind of makes intuitive sense that "understanding" is a graph of nodes
representing arbitrary (meaningless) concepts with edges doing all the work of
imbuing meaning.

So how do we test this idea with a graph large enough and fast enough to
approach the complexity of the brain? All computer architectures seem to
deeply resist graph structure by forcing the edges/nodes of large graphs to
be accessed sequentially over a limited set of memory bus lines. Large
computer networks force edges/nodes to be accessed sequentially through
network access points, hubs, and switches. Everywhere we look, graph traversal
and transformation is unavoidably serialized at critical moments on today's
hardware. What's the solution?
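The serialization shows up even in the simplest traversal: a breadth-first search must dereference each neighbor list through the same memory path, one pointer-chase at a time. A small sketch, with a toy graph invented for illustration:

```python
from collections import deque

graph = {  # adjacency list: node -> neighbors
    "A": ["B", "C"],
    "B": ["D"],
    "C": ["D"],
    "D": [],
}

def bfs_order(start):
    """Visit nodes breadth-first; every edge is touched one at a time."""
    seen, order, queue = {start}, [], deque([start])
    while queue:
        node = queue.popleft()
        order.append(node)
        for neighbor in graph[node]:  # sequential edge access
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append(neighbor)
    return order

print(bfs_order("A"))  # ['A', 'B', 'C', 'D']
```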

~~~
redwood
While I certainly can't answer your question, you did make me think of
something interesting:

Since the internet as a whole acts more and more like a slow brain, and since
the internet influences how we act as people more and more (e.g. everything
from groupon deals to the Arab Spring), perhaps humanity as a whole is
beginning to act like a decentralized network brain!

I recognize there's nothing particularly novel or useful about what I've just
said. It's just an interesting visualization.

~~~
woodchuck64
Reminds me of Eric Schwitzgebel's question: is the United States conscious?
[http://schwitzsplinters.blogspot.com/2011/10/is-united-states-conscious.html](http://schwitzsplinters.blogspot.com/2011/10/is-united-states-conscious.html)
(and <http://www.youtube.com/watch?v=EbQi0tVNsdo>)

------
cristianpascu
I wish there were a mathematical theory of what 'understanding' is. For
instance, I too suspect that whatever we call a function of a system -
something that the system does - cannot simply occur by random parts coming
together.

To be clear, I don't think a layer of sand has the function of filtering
water. It does filter water, but it does so in virtue of a static property -
the geometry of the system, not its mechanics.

There is something about systems that do something that I can't name, having
only limited knowledge of the subject.

Could anyone give me a hint on research done on this?

~~~
pirateking
I believe the phenomenon you are referring to is emergence[1][2].

Most of the research in this area I have come across is related to education
and epistemology. I would recommend reading works of Seymour Papert and Jean
Piaget.

I think the mathematical theory you are looking for to explain "what
understanding is", is actually the philosophical motivation for the field of
mathematics itself. At least it is why I am really interested in studying it
these days.

[1] <http://en.wikipedia.org/wiki/Emergence>

[2] <http://plato.stanford.edu/entries/properties-emergent/>

~~~
cristianpascu
Emergence sounds right. But I think there's something more. The properties of
a car are emergent from the properties of its parts, but the function of the
car is not necessarily so. There is intent - something that a pile of rocks
does not have, although it too has emergent properties.

~~~
pirateking
It might be easier if you warp your viewpoint across multiple frames of
reference. The intent of the car (to travel) is a human mental construct,
which _can be_ independent of its physical properties.

There are the physical properties of parts, driving higher level mechanical
systems, all working together to create an even higher level model with its
own emergent properties (speed, handling, fuel economy, etc.).

The ability to discern the intent of a car is an emergent property of our
culture. We grow up seeing wheels in motion all around us, houses have
garages, roads and parking lots exist. A car can just as easily be viewed as a
mobile shelter or a pizza transport machine.

Similarly, a pile of rocks may actually have intent in some cultures. A pair
of sticks can map to an eating utensil, after all.

~~~
cristianpascu
Then my question is: how can function occur in the natural world without
intent?

Natural selection explains why that function is kept once it occurs.

Think of it like this: a space of N dimensions, for N properties, with an
infinite number of values for each.

Small variations (genetic mutations) of each of these property values will
move the position of the system in this N-dimensional space. Initially, most
of the parameters are null. But there's a clear trajectory, or a set of
possible trajectories, from the initial position to the position where, by a
certain combination of parameters, a function occurs.

When there's intent, it's easy to see the path from A to B. But when there's
no intent, and no foreknowledge of the possibility of a particular function (a
special position in the N-space), finding that particular spot by random
variations sounds enormously improbable.

Does this low probability have a name or something?
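The intuition can be made rough and numerical with a Monte Carlo sketch: the chance that an unguided uniform sample lands in a small "functional" region of an N-dimensional parameter space shrinks geometrically with N. The dimensions, trial counts, and target size below are all invented for illustration:

```python
import random

def hit_probability(n_dims, trials=20000, target_half_width=0.25):
    """Fraction of uniform random points in [-1, 1]^N that land inside a
    small cube around the origin (the 'functional' combination).
    Expected value is roughly (0.25)**n_dims for these settings."""
    hits = 0
    for _ in range(trials):
        point = [random.uniform(-1, 1) for _ in range(n_dims)]
        if all(abs(x) < target_half_width for x in point):
            hits += 1
    return hits / trials

for n in (1, 2, 5, 10):
    print(n, hit_probability(n))  # shrinks fast as n grows
```

Nothing deep here, just the "curse of dimensionality" in miniature: each extra independent property multiplies the improbability.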

