
Pursuing the Next Level of Artificial Intelligence - rockstar9
http://www.nytimes.com/2008/05/03/technology/03koller.html?_r=1&partner=rssnyt&emc=rss&oref=slogin
======
aswanson
Article content is pretty empty, other than saying she uses statistics. Any
pointers to her papers?

~~~
plinkplonk
"Article content is pretty empty, other than saying she uses statistics. Any
pointers to her papers?"

<http://ai.stanford.edu/~koller/papers.cgi>

enjoy!

~~~
aswanson
Thanks.

------
pchristensen
This is a dupe: <http://news.ycombinator.com/item?id=180047>

------
rms
>After three decades of disappointments, artificial intelligence researchers
are making progress.

Do you think AI progress is going to continue accelerating for a while?

~~~
cia_plant
Old-style AI solved a few of the "smart" things that people do, such as
playing chess and proving (simple) mathematical theorems. I predict that new-
style AI will solve a few of the "dumb" things that people do, such as keeping
our balance and getting a visual sense of our 3d location and orientation.

The difficult part will continue to be those areas where we're not just doing
combinatorics and we're not just doing statistics. The meaning of a sentence
is formed both by the grammar, which is nearly combinatorial, and by the
meanings and connotations of the words used, which are hardly combinatorial at
all. I doubt that this sort of thing will just fall out from Bayesian
networks, or from any simple development based on Bayesian networks.
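For readers who haven't met the Bayesian networks mentioned above, here is a minimal sketch of what inference in one looks like: a two-node network (Rain causes WetGrass) with invented probabilities, answering a query by Bayes' rule.

```python
# Minimal two-node Bayesian network: Rain -> WetGrass.
# All probabilities here are made up purely for illustration.
p_rain = 0.2                 # prior P(Rain)
p_wet_given_rain = 0.9       # P(WetGrass | Rain)
p_wet_given_dry = 0.1        # P(WetGrass | not Rain)

# Marginalize: P(WetGrass) = sum over both Rain states.
p_wet = p_wet_given_rain * p_rain + p_wet_given_dry * (1 - p_rain)

# Bayes' rule: P(Rain | WetGrass).
p_rain_given_wet = p_wet_given_rain * p_rain / p_wet
print(round(p_rain_given_wet, 3))  # -> 0.692
```

Real networks chain many such conditional tables together; the point of the comment above is that word meaning doesn't obviously decompose into tables like these.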

~~~
angstrom
Language will be difficult for quite some time due to its links with culture
and the fact that it is always evolving and cross-pollinating between
languages.

I don't believe there's any way to solve that without an AI that can actually
learn and drive its own understanding by observing and interacting with its
environment, both physically and through mimicry.

~~~
byrneseyeview
"Language will be difficult for quite some time due to its links with culture
and the fact that it is always evolving and cross-pollinating between
languages."

How much trouble do you have understanding things written in English 100 years
ago? 400? 800? Older works require footnotes, but they don't introduce new
verbal _concepts_ -- you can footnote Chaucer in modern English, and I
suspect that a Chaucer scholar would be able to describe modern concepts by
building on old ideas -- there isn't a discontinuity where you have some
concept that simply cannot be linked to the words Chaucer would have used.

So the problem is not that language changes too fast.

~~~
aswanson
I think what he is saying is that the generation and extraction of concepts
requires context that language may or may not provide. The AI approach to
language is almost the opposite of how humans learn: grammar and language are
presented first and concepts are extracted from them, whereas humans learn
basic concepts of reality before speaking, and language is a partial
representation of what they already know.

~~~
byrneseyeview
That's true, but it's not at all what he said. He argued that language is a
hard problem because it's linked with culture, and always changing. I think
this is incorrect, because it doesn't stop humans from picking up second
languages or from understanding older texts in their native languages. The
hard part is whatever loop humans have that translates sentences into concepts
-- not the loop that uses one sentence-concept chunk to link a new sentence to
a new concept. That's in the library, not the interpreter.

~~~
aswanson
_That's true, but it's not at all what he said_

What he implied from culture, I took to mean a polymorphic mapping of
"sentences" or "units of communication" in a given understood context that can
be learned only through experience.

 _it doesn't stop humans from picking up second languages or from
understanding older texts in their native languages_

It doesn't stop them, but it is far from trivial to learn a new language or
even to resolve syntactic, lexical, or semantic ambiguity without context,
even in a speaker's native language: <http://en.wikipedia.org/wiki/Ambiguity>.
The ill-posed nature of this mapping is not static, and for people not
speaking their native language, or for young children, it becomes the basis
for some forms of comedy.

As far as I know we have no generic grammar that can generate the rules for
all languages; there is no universal "interpreter", and it is not certain that
"concept" can be decoupled from "phrase" or "sentence" without context, so I
don't think your separation of "library" and "interpreter" holds.

~~~
angstrom
That is in fact much closer to what I was thinking. The cultural context is
important when it comes to understanding the less formal common grammar we
humans tend to use more frequently and which is littered throughout the
Internet.

We can understand 400-, 800-, etc. year-old texts, but one of the first things
you'll notice is that the language has terminology and informal phrases that
may die off, live on, or evolve over the years. Toss in ambiguity, metaphors,
double entendres, etc., and it's not so simple. A perfect example would be
Shakespeare. His style is rife with creative linguistic twists.

------
tokipin
is it true most AI was based on logic stuff? no wonder it failed. it's like
making a building out of iron legos and wondering why it isn't flexible

~~~
angstrom
For a good read on the subject of AI and how humans think, check out the book
On Intelligence by Jeff Hawkins. It covers some of the previous attempts at AI
and why they failed.

<http://www.amazon.com/Intelligence-Jeff-Hawkins/dp/0805074562>

~~~
pchristensen
That book is in my top 5 favorite books ever!

~~~
9oliYQjP
I keep hearing about this book. Can anybody tell me if it is a pseudo-
scientific look at the topic? I don't mean this in a particularly negative
way. Rather, what I mean is does it put forth a lot of as-yet unsubstantiated
claims perhaps backed up with some hand-waving analysis?

~~~
andreyf
The book is great. His criticisms are spot-on.

His proposed solution is bold but as yet unproven. It's probably wrong, and in
a way he admits that in the book (he says we'll need a lot of changes).

If you get bored on chapter 6, skip it.

~~~
lacker
I second the recommendation of skipping chapter 6 -- when I read this comment
I went to check where my bookmark was, because I had gotten bored while
re-reading, and it was near the start of chapter 6 ;-)

------
andreyf

      "My husband still berates me for not 
      having jumped on the Google bandwagon at the beginning,"
      she said. Still, she insists she does not regret her 
      decision to stay in academia. "I like the freedom to 
      explore the things I care about," she said.
    

That seems a bit silly. Given the computational infrastructure available at
Google, it would seem that it's the only place where one would be free to try
all kinds of ideas unavailable elsewhere. If she's as brilliant as they make
her seem, I'm sure they'll let her work on whatever she wants.

------
schtog
Did she invent Bayesian spam filters? I know there were ideas about it before
Paul Graham did it.

~~~
byrneseyeview
<http://www.paulgraham.com/better.html> The footnotes here cite earlier work,
as does the referenced Slashdot article.
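The Bayesian filtering approach Graham popularized can be sketched roughly as follows: combine per-word spam probabilities assuming independence and equal priors. This is a toy in the spirit of his essay, not his exact formula; the word probabilities are invented, where a real filter estimates them from spam/ham corpora.

```python
from functools import reduce

# Invented per-word spam probabilities (a real filter learns these
# by counting word frequencies in spam and non-spam mail).
word_spam_prob = {"viagra": 0.99, "meeting": 0.05, "free": 0.80}

def spam_score(words, unknown=0.4):
    """Combine per-word spam probabilities, naive-Bayes style."""
    probs = [word_spam_prob.get(w, unknown) for w in words]
    p_spam = reduce(lambda a, b: a * b, probs)          # prod of p_i
    p_ham = reduce(lambda a, b: a * b, [1 - p for p in probs])
    return p_spam / (p_spam + p_ham)

print(spam_score(["free", "viagra"]))       # high score, near 1
print(spam_score(["meeting", "tomorrow"]))  # low score, near 0
```

Graham's actual filter adds refinements on top of this skeleton (biasing non-spam words, using only the most extreme tokens in a message, and so on), which the linked essay describes.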

