
Rethinking artificial intelligence - jacek
http://web.mit.edu/newsoffice/2009/ai-overview.html
======
coffeemug
I always welcome ambitious goals - the fact that a problem has not been solved
for half a century is absolutely no reason to call it quits. However, in the
interest of spending money wisely, it's worth it to at least sit back for a
moment and think about what went wrong before, and what we can do to steer
ourselves towards the right path.

The article mentions revisiting fundamental assumptions, but doesn't mention a
single specific thing that the team will do differently. I've studied AI on my
own for some time, and the biggest problem (as far as I can tell) is that
researchers can't agree on what "intelligence" means. AI research for the past
fifty years hasn't been about building an artificial intelligence machine,
it's been about precisely defining what "intelligence" means. Every time there
was a breakthrough, after a bit of hype people realized that the program was
actually pretty dumb, that it was a testament to the intelligence of the
programmer rather than the machine, and that the bar for "intelligence" had
simply shifted a bit higher.

So, what exactly is this team doing differently? How do they define
"intelligence", and what do they intend to build?

~~~
Confusion
For one, to do things differently, I'd start without trying to define
'intelligence'. Nobody can define what a game is, and still we learn,
understand and say plenty of interesting things about games. Nobody can
define 'water' in a way that captures all the mental images people have when
they hear that word, and still we can say lots of relevant things about water.
Trying to capture something like 'intelligence' with words is a foolish
endeavour.

~~~
horseass
A page on how dictionaries circularly define words using other words, and how
humans might learn by progressively expanding analogies, starting with simple
'axioms' of bodily sensations like up vs. down (more on other pages):
<http://members.cox.net/deleyd/politics/cogsci5.htm>

~~~
yafujifide
Wow, these three comments have just mirrored my own recent thoughts to the
greatest extent I can remember. I present to you the Infinite Curiosity Loop:

<http://funnylogic.com/times/txt/2009-11-infinite-curiosity-loop-and-curiosity-halters/>

And for what it's worth, the definition of intelligence has been on my mind a
lot.

~~~
Confusion
If you'd like to explore this subject in a structured way, then there's a
large library of philosophical writings on the subject :). I think Hilary
Putnam's essay on "Brains in a vat"[1] may be a nice starting point, from
which you can explore both earlier and later work.

[1] <http://evans-experientialism.freewebspace.com/putnam05.htm>

------
motters
Rethinking your fundamental assumptions is always a good idea, especially if
progress seems to be stagnating.

Some of the assumptions might be:

\- That thought occurs inside the head. To what extent is thought a process
distributed across multiple individuals?

\- That you can make a sharp delineation between reasoning and perception.
Many AI systems assume that tasks like object recognition can be completely
separated from the rest of the system as its own module. Neuroscience, on the
other hand, suggests that perception, memory and reasoning are all tightly
integrated.

\- That the brain can be modeled as an electrochemical system. The mainstream
view is that quantum effects play no significant role, and that Penrose &
Hameroff are wrong.

~~~
Retric
IMO, one of the fundamental problems with AI research is the idea that
_thought_ is important. When a chess program keeps track of the best moves,
nobody really cares how that _thought_ is stored or retrieved. What's
interesting is how to measure which thoughts are useful (for a given problem)
and how to generate useful thoughts efficiently (better than random). But when
people take _thought_ to imply something that people do but earthworms don't,
they get into a tailspin.

Language as a portable method of conveying thought is a great subject of
research. But attempting to attack a human language in all its glory is a
huge pitfall. Nouns, verbs, adjectives, adverbs, intonation, tense, and 29
years of knowledge, and I frequently still have no idea what someone is
saying. But linking the word ball with the concept ball and with the sensory
perception of ball might be possible today. Link that to some simple verbs
like roll, toss, catch, etc., and we can start a meaningful integration of
language and actions.
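
To make that last point concrete, here's a minimal sketch of what "link the
word ball with the concept ball with the percept of ball" could look like in
code. The Concept class, the feature dictionaries, and the verb lists are all
invented for illustration; a real system would tie the percept slot to actual
sensor data rather than hand-written features:

    # Toy grounding: a word bound to a concept, which is bound to crude
    # perceptual features and the actions (verbs) that apply to it.
    from dataclasses import dataclass, field

    @dataclass
    class Concept:
        word: str                                    # lexical label, e.g. "ball"
        percept: dict = field(default_factory=dict)  # stand-in for sensory features
        actions: list = field(default_factory=list)  # verbs that apply to the concept

    lexicon = {}

    def learn(word, percept, actions):
        """Bind a word to a perceptual description and the actions it affords."""
        lexicon[word] = Concept(word, percept, actions)

    learn("ball", {"shape": "round", "size": "small"}, ["roll", "toss", "catch"])

    def interpret(utterance):
        """Map the words of an utterance onto grounded concepts, where we have them."""
        return [lexicon[w] for w in utterance.split() if w in lexicon]

    print(interpret("toss the ball"))  # -> [Concept(word='ball', ...)]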

------
liuliu
They may be on the right track when they say:

"Part of this difficulty comes from the very nature of the human mind, evolved
over billions of years as a complex mix of different functions and systems.
“The pieces are very disparate; they’re not necessarily built in a compatible
way,” Gershenfeld says. “There’s a similar pattern in AI research. There are
lots of pieces that work well to solve some particular problem, and people
have tried to fit everything into one of these.” Instead, he says, what’s
needed are ways to “make systems made up of lots of pieces” that work together
like the different elements of the mind. “Instead of searching for silver
bullets, we’re looking at a range of models, trying to integrate them and
aggregate them,” he says."

It makes sense to create a single intelligent entity by combining different
types of technology, each of which works well on a specific problem. We have
many techniques that already do things better than human beings; why try to
mimic ourselves in general (the neuron way) rather than use the best solution
to each problem?
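
As a rough illustration of the "lots of pieces" idea, here's a toy router that
hands each task to whichever narrow specialist claims it. The module names and
the routing rule are invented to show the shape of the architecture, not any
actual design from the MIT project:

    # Narrow specialist solvers behind one interface, with a trivial router.
    class Specialist:
        def can_handle(self, task): raise NotImplementedError
        def solve(self, task): raise NotImplementedError

    class Arithmetic(Specialist):
        def can_handle(self, task):
            return task.get("kind") == "arithmetic"
        def solve(self, task):
            return eval(task["expr"], {"__builtins__": {}})  # toy only, not safe in general

    class Lookup(Specialist):
        def __init__(self, facts):
            self.facts = facts
        def can_handle(self, task):
            return task.get("kind") == "lookup" and task["key"] in self.facts
        def solve(self, task):
            return self.facts[task["key"]]

    def route(task, specialists):
        for s in specialists:
            if s.can_handle(task):
                return s.solve(task)
        return None  # the real research problem: what to do when no piece fits

    modules = [Arithmetic(), Lookup({"capital_of_france": "Paris"})]
    print(route({"kind": "arithmetic", "expr": "2 * 21"}, modules))        # 42
    print(route({"kind": "lookup", "key": "capital_of_france"}, modules))  # Paris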

~~~
amackera
The problem is that all of our successes in AI combined (even including neural
networks) don't add up to anything even remotely resembling general
intelligence. It's not simply a matter of connecting the output of a DSP to
the input of an edge detector, etc.

------
Eliezer
<http://lesswrong.com/lw/vs/selling_nonapples/>

<http://wiki.lesswrong.com/wiki/Futility_of_chaos>

------
chrishan
The group's approach of building a cognitive assistant might be on the right
track. Rather than building a standalone AI, the assistive approach encourages
interfacing with the human brain. Through collaboration between the human and
the assistant, each will gain a better understanding of the other. We humans
don't need a computer that works with emotion, but one that works by
understanding our needs and satisfying them just in time.

------
asciilifeform
Yes, let's throw a few more $millions at well-credentialed plodders.

MIT is a zombie: a corpse with delusions of youthful vigor.

~~~
Xichekolas
As someone who is early in the process of considering Ph.D. programs, I'd be
interested if you could expound on that. Also, do any schools seem like the
opposite of a zombie?

(This question also goes out to anyone else who has something to add.)

~~~
asciilifeform
AFAIK the entire field is dead:

<http://en.wikipedia.org/wiki/AI_winter>

There are various explanations as to the cause of death: the end of the Cold
War; the humbling of the mega-monopolies which funded "blue sky" research
(mainly AT&T); a general loss of faith resulting from a decades-long lack of
progress. Take your pick.

In fact, the entire field of computer science has been stagnant for a while,
shiny gadgets to please people with 5-minute attention spans notwithstanding:

<http://www.eng.uwaterloo.ca/~ejones/writing/systemsresearch.html>

Bureaucrats have replaced thinkers:

<http://unqualified-reservations.blogspot.com/2007/08/whats-wrong-with-cs-research.html>

My advice: study physics or chemistry.

~~~
yummyfajitas
While the CS _academy_ has many issues, the field itself is alive and well.
Google and MS both do systems research, as do several financial firms and
software companies serving the financial industry.

As a person moving from physics to CS, I personally find computing to be a
very interesting place right now.

~~~
asciilifeform
> Google and MS both do systems research, as do several financial firms and
> software companies servicing the financial industry

Where, then, is the desktop operating system not built of recycled crud? Where
can I see a _conceptually original_ system created after the 1980s?

> the field itself is alive and well

I disagree entirely. It is a zombie, maintaining the illusion of life where
there is none.

Hell, UNIX still lives, and this _proves_ that systems research is dead:

<http://www.art.net/~hopkins/Don/unix-haters/handbook.html>

Why is my desktop computer running software crippled by the conceptual
limitations of 1970s hardware? Why are there "files" on my disk? Where is my
single, orthogonally-persistent address space? Why is my data locked up in
"applications"? Why must I write programs in ASCII text files, and plod
through core dumps and stack traces? Why can't I repair and resume a crashed
program?

~~~
potatolicious
> Where, then, is the desktop operating system not built of recycled crud?

You're willing to look past our massive advances in optimization, control
systems, search technology, computer vision, etc etc, and pretend they don't
exist...

... because desktop OSes still suck?

~~~
asciilifeform
> our massive advances in optimization, control systems, search technology,
> computer vision, etc

Ok, I'll bite. _What_ advances? I'm talking about real change, not incremental
bug-stomping by plodders.

~~~
Xichekolas
Computer Vision, for one, is making massive advances.

I just spent three weeks (class project) implementing a new algorithm to find
the minimum cut of a directed planar graph in O(n lg n) time. The algorithm is
actually quite elegant:

<http://www-cvpr.iai.uni-bonn.de/pub/pub/schmidt_et_al_cvpr09.pdf>

This came out of a Ph.D. thesis written in 2008, and was applied to some
computer vision problems in the paper I linked above. This isn't a minor
speedup or optimization... it yields _asymptotically_ faster results.
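
For anyone who wants to play with the idea, here's the textbook s-t min-cut
formulation that the paper's planar algorithm speeds up, run on a toy
four-pixel "image" using networkx's general-purpose max-flow (so this is not
the O(n lg n) method itself, and the capacities are made-up numbers):

    import networkx as nx

    G = nx.DiGraph()
    # Source "s" and sink "t" model foreground/background seeds; pixel-to-pixel
    # edges model similarity (high capacity = expensive to cut).
    G.add_edge("s", "p0", capacity=10); G.add_edge("s", "p1", capacity=6)
    G.add_edge("p2", "t", capacity=10); G.add_edge("p3", "t", capacity=6)
    G.add_edge("p0", "p1", capacity=4); G.add_edge("p1", "p0", capacity=4)
    G.add_edge("p1", "p2", capacity=1); G.add_edge("p2", "p1", capacity=1)
    G.add_edge("p2", "p3", capacity=4); G.add_edge("p3", "p2", capacity=4)
    G.add_edge("p0", "p3", capacity=1); G.add_edge("p3", "p0", capacity=1)

    cut_value, (fg, bg) = nx.minimum_cut(G, "s", "t")
    print(cut_value, sorted(fg), sorted(bg))
    # The cut separates {p0, p1} from {p2, p3} along the weak capacity-1 edges.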

My vision professor is fairly young, and recently did his own Ph.D. work on
Shape From Shading. This is the problem of recovering 3D shape from a single
image (no stereo or video). His solution used Loopy Belief Propagation and
some clever probability priors to achieve solutions that were orders of
magnitude better than previous work. In fact, his solution is so good that
rendering the resulting 3D estimate is identical (to the naked eye) to the
original (although the actual underlying shape varies, since there are
multiple shapes that can all appear the same given the lighting conditions
and viewing angles).

There is also a ton of interesting progress in the last two decades making
functional languages practical in terms of speed (and hence useful). My
advisor did his Ph.D. in this area.

The current state of operating systems is not evidence about the entirety of
CS. In fact, I'd argue that OS research at this point has less to do with
computation than with human-computer interaction, which seems to require more
research about humans than about computers.

~~~
yters
That sounds like clever engineering to me, not science.

------
horseass
I think evolution is the best approach, given its proven power. Maybe we need
replicators (like genes, only artificial/computer-based) that actually code
to build hardware as the 'phenotype'. Then evolution would have something
'real' to select from, instead of just a bunch of software. Only real stuff
can interact with the real world, obviously (opposable thumbs, eyes, ears).
With DNA, the phenotype is physical like that: molding form as
hair/muscle/bone/etc. And it seems to me that a sophisticated sense, like eyes
(that can see into the real world, not just 'seeing' inside an isolated
Second Life-style software world model or something), is needed.
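
For contrast, here's the "just a bunch of software" version: a toy genetic
algorithm over bitstrings, with an arbitrary fitness function (count of
1-bits) standing in for the physical phenotype the comment above says is
missing:

    import random

    GENOME_LEN, POP_SIZE, GENERATIONS, MUT_RATE = 32, 50, 100, 0.02

    def fitness(genome):
        return sum(genome)  # stand-in for "how well the phenotype performs"

    def mutate(genome):
        return [1 - g if random.random() < MUT_RATE else g for g in genome]

    def crossover(a, b):
        point = random.randrange(1, GENOME_LEN)  # single-point crossover
        return a[:point] + b[point:]

    pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP_SIZE)]
    for gen in range(GENERATIONS):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:POP_SIZE // 2]  # truncation selection: keep the best half
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(POP_SIZE - len(parents))]
        pop = parents + children

    print(max(fitness(g) for g in pop))  # approaches GENOME_LEN over generations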

~~~
DrJokepu
Evolution has its pitfalls. There are things that evolution cannot create
because there is no chain of gradual improvements that would lead to them. A
great example is the wheel; no living organism features wheels of any kind,
no matter how useful they would be. The human heart would be a lot more
efficient than it is if it were "implemented" as a circular pump. I think a
combination of "evolution" and "intelligent design" in AI research would be a
more promising path.

~~~
forensic
Evolution created the wheel by evolving humans who would create the wheel.

