

Super Mario AI Competition - fogus
http://julian.togelius.com/mariocompetition2009/

======
naveensundar
AI contests like this and the DARPA Grand Challenge encourage solutions
in a single domain. AFAIK in the DARPA Challenge there was overfitting, and I
doubt the generalization capability of the systems involved. We need more
contests in the spirit of "General Game Playing" contests where the problem
domain is unknown to the programmers. <http://games.stanford.edu/>. You only
get a formal description before the game and a small amount of "thinking" time,
which the system has to use to understand the game and formulate a strategy.

~~~
apu
There is a vast difference between AI for games (which are purely theoretical
exercises) and AI for working in the real world. The fundamental
characteristic of the real world is that there is uncertainty and ambiguity in
every measurement and even your actions (did my car actually move when I
pressed the accelerator?).

The kinds of systems you build to tackle these latter types of problems are
_very different_ from the kinds you need for the first type. Both have their
place, of course, although IMHO, the latter type is more interesting, as it's
a much messier area to work in.

~~~
naveensundar
Well, messiness is just one part of the problem. I usually see AI as a layered
system that solves a general problem. The layers are broadly: lower level
sensory systems (vision, speech recognition) and higher level symbolic
reasoning systems. Recently, there has been a lot of success in the lower
layers due to Machine Learning and probabilistic systems. The messiness comes
in the lower layers. When dealing with the higher layers you face much harder
problems, hard in the sense of being Turing-uncomputable. The only formal
solution proposed for general AI, Hutter's AIXI, is uncomputable
(<http://www.hutter1.net/ai/uaibook.htm>).

Also there is nothing which restricts real world problem solvers to a single
domain. Most messy problems have a formal solution. The engineering part is
what remains (but the engineering part is nevertheless hard!). Some non-"messy"
problems have no computable solution, e.g. deciding whether an arbitrary
argument in first-order logic is valid. They challenge the bounds on
human thinking.

Also "General Game Playing" is not only about games. (Like Game Theory is not
only about games).

~~~
apu
I don't agree that the higher level symbolic reasoning is necessarily harder,
nor that the lower level is simply engineering. The problems at the low level
are still very tough, and require scientific (not just hacky or
engineering) approaches.

As for the "single domain" issue, this was a big red herring in old-style AI
(before the so-called AI winter) -- basically, it turns out that the different
branches of what used to be called AI (e.g., vision, machine learning, NLP,
speech, etc.) have to use quite different techniques.

"General" AI systems such as logic, game players (and yes, I know it's not
just about games), etc., can be used in various domains, but in general
they're not very applicable to most real-world problems in the different
domains.

For example, I work in computer vision, and while we certainly use a lot of
probabilistic analysis that can be broadly applicable to other domains, a lot
of the progress in recent years has been using techniques that aren't really
transferable (e.g., SIFT or SLAM).

~~~
naveensundar
General AI is not applicable in real life yet. Probabilistic reasoning can be
subsumed into a logic system. For example, a human (David Lowe) originated
SIFT. A reasoning system should in principle be able to derive SIFT. We are
not there yet but that should be the goal.

The problem with single domain systems is that none are even capable of robust
inductive transfer (<http://en.wikipedia.org/wiki/Inductive_transfer>). Some
reasoning problems are mathematically hard problems
(<http://en.wikipedia.org/wiki/List_of_undecidable_problems>). There is no
algorithm which even in principle can solve them. Among other things, you need
infinite processing speed to do so. No amount of science can help here as it
is in the logical domain.

~~~
apu
Your statement about inductive transfer is false. There are many vision
systems, for example, that use transfer learning (as it is known in our
community) to use knowledge gained from one task in another. Moreover, many of
these approaches are specific to vision -- they can't easily be applied in
other domains.

The rest of your comments about reasoning systems are either part of
theoretical computer science (e.g., most of that list of undecidable
problems), or in the realm of philosophy/metaphysics/80s-style AI. The former
is an interesting area, but much closer to math than most other areas of CS. I
call the latter philosophy/metaphysics because these are often questions that
are interesting from a purely academic viewpoint, but are utterly useless for
trying to make progress on real tasks. This was the big problem with 80s style
AI: people hoped that they could build general systems which could "reason"
about how to derive algorithms to do particular tasks. Knowledge bases were
supposed to be steps in this direction.

What the community learned is that this is not a valid approach, in part
because it doesn't take into account the enormous amount of data people have
access to from birth to adulthood (not to mention the fact that we can
interact with our environment and see the results of our actions). There
doesn't seem to be a good way to provide a computer system this kind of
information.

There are also strong biological arguments against this idea. For example, a
large fraction of human brain cells are exclusively devoted to processing
visual information. This is in addition to the significant amount of visual
processing done by our eyes and the optic nerve. There are similar systems in
the brain devoted to speech processing and language, etc.

All of this suggests that a general reasoning system _cannot_ hope to solve
the challenging problems in these different domains.

------
dkarl
Oy, such a short timeline for a task like this! Submissions are due by August
14th or September 3rd.

------
apgwoz
Programming video game AI is fun. I took a class at UPenn where we had to
build a Pac-Man-playing agent with reinforcement learning. The gameplay wasn't
standard Pac-Man, but with just a few general features and correct training it
was easy to get pretty decent results. And it's not that hard to implement.
Not sure how it'd work on Mario though. Anyone have any thoughts?
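
A feature-based (approximate) Q-learning agent of the kind described might look
roughly like this. Everything here is a hypothetical toy: a 1-D corridor stands
in for the Pac-Man maze, and two hand-picked features (a bias term and a scaled
distance-to-food) stand in for the "few general features":

```python
import random

ACTIONS = (-1, +1)   # move left / move right along the corridor
LENGTH, FOOD = 6, 5  # corridor cells 0..5, food sits at cell 5

def features(pos, action):
    """Generic features of the state reached by taking `action` from `pos`."""
    nxt = min(max(pos + action, 0), LENGTH - 1)
    return (1.0, -abs(FOOD - nxt) / LENGTH)  # bias, negated scaled distance

def q_value(weights, pos, action):
    # Q(s, a) approximated as a linear combination of the features.
    return sum(w * f for w, f in zip(weights, features(pos, action)))

def train(episodes=300, alpha=0.1, gamma=0.9, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    weights = [0.0, 0.0]
    for _ in range(episodes):
        pos = 0
        while pos != FOOD:
            # Epsilon-greedy action selection.
            if rng.random() < epsilon:
                action = rng.choice(ACTIONS)
            else:
                action = max(ACTIONS, key=lambda a: q_value(weights, pos, a))
            nxt = min(max(pos + action, 0), LENGTH - 1)
            reward = 10.0 if nxt == FOOD else -0.1
            # TD target: bootstrap from the best next-state action value.
            target = reward
            if nxt != FOOD:
                target += gamma * max(q_value(weights, nxt, a) for a in ACTIONS)
            # Gradient step on the per-feature weights.
            error = target - q_value(weights, pos, action)
            for i, f in enumerate(features(pos, action)):
                weights[i] += alpha * error * f
            pos = nxt
    return weights

weights = train()
```

Because the weights are tied to features rather than to individual states, the
agent generalizes across positions it has rarely visited, which is presumably
what made "just a few general features" work so well in the class project.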

------
SwellJoe
On a related note, one of the more interesting lightning talks at YAPC this
year was about the Tactical Amulet Extraction Bot, a framework for
programmatically playing NetHack. Here's a blog about it:
<http://taeb-blog.sartak.org/>

And the code: <http://search.cpan.org/~sartak/TAEB-0.03/lib/TAEB.pm>

I think it's been discussed here at HN once or twice, as well.

------
nazgulnarsil
I've long wondered why the DARPA challenge for automated cars is necessary
when the code could be tested using video games for orders of magnitude less
money.

~~~
CGamesPlay
In theory there is no difference between theory and practice. In practice,
however...

~~~
nazgulnarsil
Crude simulations don't invalidate theories. I hate anti-scientific quips like
that.

~~~
jacquesm
In theory, there is no difference between theory and practice, but in practice
there is.

I'm quite sure that if there is one set of people that know this by heart then
it is the military. Real world experience will bring out the trouble much
faster than any amount of simulation.

Use simulations to help you design, validate your design in the real world.
Then update your simulation to reflect the lessons you learned during your
real world tests and iterate.

Have a look at Armadillo Aerospace: John Carmack is as versed with simulations
as anybody in that field, and STILL they find out on almost every trial that
the real world is not as clean as the simulated one.

~~~
jwhitlark
Indeed. Just take a look at Murphy's laws of combat:
<http://www.murphys-laws.com/murphy/murphy-war.html>

~~~
marvin
Those are great, thanks :)

------
rawr
I like how a hefty fraction of the algorithms use neural networks so that they
get better as they watch humans play.

The reason for this, of course, is that the developers just want a reason to
play Mario :-)
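
Learning from watching humans play is usually some flavor of imitation
learning. A minimal sketch, with everything made up for illustration: states
are two hypothetical binary features ("enemy ahead?", "gap ahead?"), the
"human" demonstrations are synthetic, and a single softmax layer stands in for
the neural networks the entries actually use:

```python
import math
import random

ACTIONS = ["run", "jump"]

# Fake human demonstrations: (features, chosen action index).
# This imagined human jumps whenever an enemy or a gap is ahead.
demos = [((enemy, gap), 1 if enemy or gap else 0)
         for enemy in (0, 1) for gap in (0, 1) for _ in range(25)]

def softmax(zs):
    m = max(zs)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in zs]
    total = sum(exps)
    return [e / total for e in exps]

def predict(weights, x):
    # One linear unit per action: score = w_enemy*x0 + w_gap*x1 + bias.
    return softmax([w[0] * x[0] + w[1] * x[1] + w[2] for w in weights])

def train(demos, epochs=200, lr=0.5, seed=0):
    rng = random.Random(seed)
    weights = [[0.0, 0.0, 0.0] for _ in ACTIONS]
    for _ in range(epochs):
        rng.shuffle(demos)
        for x, y in demos:
            probs = predict(weights, x)
            for a in range(len(ACTIONS)):
                # Softmax cross-entropy gradient: p_a - 1[a == y].
                grad = probs[a] - (1.0 if a == y else 0.0)
                weights[a][0] -= lr * grad * x[0]
                weights[a][1] -= lr * grad * x[1]
                weights[a][2] -= lr * grad
    return weights

weights = train(demos)

def policy(x):
    probs = predict(weights, x)
    return ACTIONS[probs.index(max(probs))]
```

After training, the learned policy simply mimics the recorded behavior, which
is exactly why gathering demonstrations doubles as an excuse to play Mario.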

~~~
rw
Fine, yes; but you don't have to use neural networks to get that behavior.

