
A Better Lesson - pchiusano
https://rodneybrooks.com/a-better-lesson/
======
lisper
I did my Ph.D. 28 years ago and Rod was on my thesis committee. At the time
the big debate was over whether robots should plan or react (with Rod, of
course, being at the vanguard of the "react" contingent). My thesis basically
said: they should do both. Nearly 30 years on, it feels like things have come full
circle and we are once again arguing over what AI architectures should look
like. I think the answer now is the same as it was then: all extreme positions
are wrong. We are not going to build AI simply by throwing more computing
power and larger training sets at the One True Algorithm, nor are we going
to do it by hand-coding everything. Human brains are the messy result of
millions of years of evolution. I see no reason to believe that successful AI
will be any less messy.

~~~
randallsquared
But the reason people keep expecting a specific algorithm to take off is that
this is how it was in so many other areas: figuring out the principles needed
for duplicating human or animal capabilities has always allowed us to exceed
human or animal limits almost immediately. Flying, swimming, running, and
lifting heavy things all require Rube Goldberg complexity for evolved systems,
and stunning simplicity (by comparison) for designed systems.

~~~
lisper
I actually believe that there could be a "one true algorithm" for learning and
creativity. The problem is that in humans that algorithm interacts with a ton
of other stuff that has evolved and influences our worldviews. For example,
humans don't learn how to recognize cats. We have cat detectors hard-wired
into our brains because having cat detectors provided a reproductive advantage
in the ancestral environment.

------
marcosdumay
Well, what use is there for AI solutions that require humans to do everything?
Of course, procedures that only require computer time will add more
value.

But that doesn't mean that they'll solve more problems, or solve them better.
It just means that if some expensive developers are competing on the market
with a cheap GPU cluster, it's obvious who will win. At some point I imagine
they will stop competing to solve the same problems; things are just not
that mature yet.

~~~
ohduran
This is a very good point: the compelling value proposition of AI isn't that
it's accurate; it's that it needs no human tweaking.

I mean, it HAS to be accurate. But once it can replicate the accuracy of
humans (which isn't 100% in many use cases), what I demand from an AI is that
it removes the need for day-to-day maintenance.

This post focuses entirely on where AI research is going, and indeed we need
the bread and butter of human intelligence to get us there; but from a
business perspective, what I'm trying to do is outsource my non-core business
units (and some core ones) to someplace called AI, which upgrades what
companies used to do when they outsourced to India: a good-enough alternative
that keeps getting cheaper.

------
mturmon
It's a useful debate to be having -- although it is somewhat endless and
undecidable in absolute terms, in the way that fundamental philosophical
debates are.

I think about robotics as an example of a problem domain that seems resistant
to pure-learning approaches. I'm thinking about the whole perception-planning-
execution problem, in which a map of the world, and a notion of a goal, are
key modeling objects.

It's really hard to think about how a robotic system that doesn't maintain a
map could work intelligently to accomplish navigation goals, for instance. A
map seems to imply some kind of model. And if the world (the map) isn't
static, the modeling problem seems to get even more urgent.

~~~
dbcurtis
> It's really hard to think about how a robotic system that doesn't maintain a
> map could work intelligently to accomplish navigation goals,

For navigation with less map input, see the work of Oussama Khatib. A single
nav goal is not a problem. A sequence of nav goals is where the planner adds
value.
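
Khatib's best-known idea in this area is the artificial potential field: the
goal exerts an attractive force, obstacles exert repulsive ones, and the robot
simply descends the combined gradient, so a single nav goal needs no map
search at all. A minimal sketch (gains, radii, and step sizes here are
illustrative, not taken from any particular paper):

```python
import numpy as np

def potential_field_step(pos, goal, obstacles, k_att=1.0, k_rep=1.0,
                         d0=1.0, step=0.05):
    """One gradient-descent step on an artificial potential field."""
    pos, goal = np.asarray(pos, float), np.asarray(goal, float)
    # Attractive force: pulls straight toward the goal.
    force = k_att * (goal - pos)
    # Repulsive forces: push away from obstacles within influence radius d0.
    for obs in obstacles:
        diff = pos - np.asarray(obs, float)
        d = np.linalg.norm(diff)
        if 0 < d < d0:
            force += k_rep * (1.0 / d - 1.0 / d0) / d**3 * diff
    # Cap the step for numerical stability close to obstacles.
    n = np.linalg.norm(force)
    if n > 1.0:
        force = force / n
    return pos + step * force

pos, goal = np.array([0.0, 0.0]), np.array([5.0, 0.0])
obstacles = [(2.5, 0.4)]
for _ in range(500):
    pos = potential_field_step(pos, goal, obstacles)
# pos ends up near the goal, having curved around the obstacle
```

The classic failure mode is a local minimum where attraction and repulsion
cancel before the goal is reached, which is exactly why a sequence of
planner-supplied sub-goals adds value on top of this.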

Pure Sense-Plan-Act has long since been abandoned. Brooks' paper "Elephants
Don't Play Chess" is a fun and easy read, and even though old, is still on the
must-read list if you want to think about these problems.

These days the "three layer architecture" is common in some form: classic
controllers close to the hardware, a reactive layer, and a deliberative layer.
If you squint your eyes the typical ROS-based robot is more-or-less three-
layer. The default nav stack has a reactive Khatib-style layer executing nav
sub-goals dropped into it from a planner that does map searches.

Pure world-modelling is too slow and error-prone in the short term. Pure
reactive is unwieldy with complex sequences of sub-goals.

